Recently in my philosophy class we have been discussing machine intelligence. There have been several arguments about why machines can't be intelligent, and debate about how to define intelligence. Hugo de Garis has been attending our class for a while, and he gave a lecture on machine intelligence. He claims that if Moore's law holds, there will be god-like machines by the end of this century. IIRC, Hugo predicts that by 2020 one byte of information will be the size of an electron, so that something the size of a mouse could hold trillions of bytes of information.
One of the main arguments against machine intelligence is that machines have to be programmed. I tend to agree with this, but I don't understand computer programming. My professor says that humans are programmed too: nature gave us the hardware, and culture, education, and our parents supply the software. I said in class that you don't have to rewire a human brain to teach a person how to do something, but you do have to type in information to program a computer. He said heuristic programming isn't that simple. Hugo pointed out that he encountered a talking computer when he called Dell support. I have experienced the same thing when calling a credit card company. I would like to try an experiment of using a random word such as "doorknob" to see whether the computer accepts it or not.
Alan Turing came up with a test to see if you could tell the difference between machine answers and human answers. A computer is in one room and a human is in another, and an interrogator asks any variety of questions to try to determine which room holds the machine. If the interrogator can't tell the difference, then the machine passes the test. Turing concluded that a machine passing this test should be considered intelligent. My professor calls this a question-switcheroo fallacy, where you take the answer to one question and use it to answer another question.
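Since I admitted I don't understand programming, here is a toy sketch of the imitation-game setup described above, just to make the structure concrete. All the names here (`imitation_game`, the canned respondents) are made up for illustration; a real test would use a much richer machine and a probing human interrogator, not canned replies.

```python
# Toy sketch of Turing's imitation game: an interrogator sees transcripts
# from two hidden rooms and must guess which room holds the machine.
# Everything here is a hypothetical illustration, not a real system.

def human_respondent(question):
    # Stand-in for a human's answer.
    return "I'd have to think about that."

def machine_respondent(question):
    # A trivial canned-reply "machine" -- obviously far short of intelligence.
    canned = {"What is 2 + 2?": "4"}
    return canned.get(question, "I'd have to think about that.")

def imitation_game(questions, interrogator):
    # Hide which room is which, collect transcripts, then ask for a verdict.
    rooms = {"A": machine_respondent, "B": human_respondent}
    transcripts = {room: [(q, answer(q)) for q in questions]
                   for room, answer in rooms.items()}
    guess = interrogator(transcripts)  # interrogator names the machine's room
    return guess == "A"                # True if the machine was caught

# Example: an interrogator who always accuses room "A" catches this machine.
print(imitation_game(["Do you dream?"], lambda transcripts: "A"))
```

Note that an interrogator guessing at random would catch the machine only half the time, which is why the test only means something with a skilled, skeptical interrogator.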
Imagine a machine such as Bicentennial Man, Data, or the kid in A.I. Would you consider them intelligent? We watched a Star Trek episode where Data wants to resign from Starfleet to avoid being experimented on, and so there is a tribunal to determine whether he has rights. The question of intelligence is really fuzzy, and so is the question of what is human. We accept disabled people as still being human, so to be fair should you also apply a lenient rule to machine intelligence? We can consider all people to be human, but is there a line for intelligence and self-awareness? Terri Schiavo is human, but it's probably debatable whether she is self-aware and still intelligent. I see a problem with the least common denominator when addressing machines: should I consider the elevator and my television to be intelligent? Should they be given rights?
So what is your opinion on this issue? Do you have a boundary for what you would consider an intelligent human or machine?
Hopefully this is appropriate material.