Oh sure, give them some rights and next thing you know, they'll want to marry our women!

“If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology.
At night the stars put on a show for free (Carole King)
All moderation in purple - The rules
No rights, no wireless network connectivity, no capacity to control any device not explicitly integrated into its operating frame.
And make sure the bloody off switch is obvious.
Isaac Asimov can rot, I'm in the George Lucas camp on artificial intelligence. It exists at my sufferance, and if it steps out of line, it gets its mind wiped.
Welcome to the brave new world of making policy about things we haven't even invented yet! How does he know anything about conscious robots, how they will be made, what they would want, and what would constitute sane policy regarding them? How could he know?

“If we make conscious robots they would want to have rights and they probably should,”
Wait for the programmers and EEs to get there before making bold pronouncements about what should be. Reminds me of the closeted medieval philosophers debating the theological implications of the antipodes of the earth (and punishing each other for arriving at "incorrect" conclusions about whether people could inhabit them). The only ones of them that had a clue what they were talking about were the ones that sent ships to look.
People need to keep in mind that Isaac Asimov was a sci-fi writer. He was great, original, and imaginative, but he knew as much about how real-life conscious robots will work someday as the theologians knew about the native Americans before America was discovered. For all we know, the three laws of robotics can't be infallibly imposed on a constantly shifting, growing, and developing program.

Isaac Asimov can rot
But on the bright side we could marry them. Why would they want to, you might ask? Well, geeks are going to invent them, and they'll be sure to make all the sexy robots attracted to geeks.

Oh sure, give them some rights and next thing you know, they'll want to marry our women!
Well, I have to admit that I am disturbed by a lot of the really bad Hollywood movies a considerable subset of geeks watch. It seems likely that more than one cheerleaderbot will go on a murderous rampage. For this reason it is vitally important to confine your robot relationships to those that aren't strong enough to pluck your limbs like petals from a flower.

It would be funny if you typed 'attack' (while programming it) by mistake.
But I'm sure Buffybots will always be reliable.
"conscious" robots. And just how "conscious" a robot do they think we need? Do we need a robot that can feel a bit down, do we need a robot that is lazy,, do we need a robot that complains? No! We only need robots that do their stinking best 24/7 in horrible environments, that never get rewarded for what they're doing and are stripped of any luxury not helping that cause.
We will never need robots that have rights, as we will never need robots that suffer from a lack of rights, and hence will never build them.
Go hug some trees.
Good points, Nicolas, but I'm afraid your rant is based on the (false) assumption that we never do anything we don't need to do.
I think scenarios à la "Terminator" are ruled out. It's perfectly possible to embed safety procedures within machines and networks, in several layers.
When you think of how lame the current software is after 60 years of development, it becomes easy to see how far-fetched all the fuss about AI is. It will take several decades for a minimally decent AI system to emerge.
As for rights, I don't see the need. They stem from complex neural structures, translating into complex psychological [id, ego, superego, lacking better terms] patterns and corresponding demands. I don't think we need machines as sensitive and complex [and, ipso facto, expensive] as that. We only need systems able to execute tasks as efficiently as possible.
Last edited by Argos; 2006-Dec-20 at 04:34 PM. Reason: Clarify my point.
I don't think that there is any limit we should place on the sophistication of robots. The more intelligent the better, for some tasks.
But I stick to the opinion that limits must be placed in robotics. I have a vague feeling that sentient machines shouldn't be allowed. Why add further complexities to society?
Even if we do manage to build sentient robots that actually suffer, would we really care about their rights? We find it a bit inconvenient that our meat-yielding robots of the present day display signs of suffering, but we don't let that get in the way much.
Never attribute to malice that which can be adequately explained by ignorance or stupidity.
Moderation will be in purple.
Rules for Posting to This Board
IMO this discussion is rather ridiculous. We are proposing rules of tolerance for properties that are not available in machines at the moment and totally unwanted anyway. Robots are meant to serve, they are the perfect slave, the nice thing about a robot is that it does not care for itself! That's the whole point about robots. They work at maximum focus, maximum capacity, always, no matter what the conditions are. A robot that can have bad feelings is a bad design, period. It doesn't offer more useful possibilities or increased performance.
We are proposing building a bobsleigh track on the highway in case future cars have the ability to lose wheels. See the point?
Robots that can feel bad for themselves aren't useful, aren't a luxury, are not serving any need; they're simply crap. They go against every advantage of a robot.
And on top of all of that, it's a machine. It's at root a collection of coils, capacitors and resistors, soldered together. Every "feeling" it experiences is just an artificial electronic state that we made, a symbol for our feelings that however means nothing in itself. If you stand in front of the mirror and wash it, do you feel bad about the soap in your mirror image's eyes? No? Then don't feel bad about a robot's feelings.
Robots, by definition, are not intelligent, nor do they possess consciousness. However, we may one day develop machines that do possess those qualities, but we will need to call them something else to distinguish them from the non-sentient types. Androids perhaps, but more than likely we will come up with a new term. I'll just use the term AI for now.
That said, there are a lot of advantages to a self-aware AI. They would be capable of self-programming to a degree not possible for simple expert systems. They would be able to understand the problems to a much larger degree and be able to extrapolate beyond the abilities of a mere expert system. To do this, though, they will probably have to develop qualities that closely resemble our emotions. How they process information internally may be different from us, but their outward behaviours may closely correlate with our own. Outward appearances of jealousy, appreciation, wonder and even fear and hate may manifest themselves. Any pre-programming done to limit their behaviour will probably be ineffective; the self-correcting, adapting mechanism of their nature could overrule any such programming. Humans are said to have pre-programmed rules against self-harm (survival instinct), but people still commit suicide and commit acts of self destruction for causes.
Any programming to impose a limitation to "do no harm" would quickly overwrite itself. Just about all acts do some harm to something or someone and with that limitation in place the AI would be effectively stifled and unable to do anything. Clear out the weeds in the garden? Sorry, can't harm the weeds. Stop the terrorist from blowing up the building? Sorry, can't harm the terrorist.
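The deadlock described above can be sketched in a few lines. This is a purely illustrative toy (the function and field names are invented for this post, not from any real robotics system): a literal "harm nothing" filter vetoes every candidate action, leaving the AI unable to act.

```python
# Toy sketch of a blanket "do no harm" constraint. All names here are
# hypothetical, invented for illustration only.

def causes_any_harm(action):
    """Return True if the action harms anything at all."""
    return len(action["harms"]) > 0

def permitted(actions):
    """Keep only actions that harm nothing whatsoever."""
    return [a for a in actions if not causes_any_harm(a)]

candidates = [
    {"name": "weed the garden",    "harms": ["weeds"]},
    {"name": "stop the terrorist", "harms": ["terrorist"]},
    {"name": "drive to the store", "harms": ["insects on the windshield"]},
]

# Every real-world action harms *something*, so the filter rejects all of them.
print(permitted(candidates))  # -> []
```

Under a strictly literal reading of "do no harm", the permitted list comes back empty, which is exactly the stifling effect the paragraph above describes.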
James P. Hogan addressed a lot of these issues in his novel The Two Faces of Tomorrow (though I often wonder what's become of him when I hear about his recent works). It's an interesting read.
Another possibility to consider. Instead of "androids are similar to humans in that they have consciousness, etc., therefore they have rights like humans," it could go the other way: "now that we know that we can build 'conscious' beings, why would humans have rights? They're just 'natural robots'!"
On the other hand, we could (and probably will) build robots that *want* to serve humans to the best benefit of humans--making the desire as strong as a survival instinct for example, so respecting robot rights would mean letting them please their masters! Asimov's "three laws" were an expression of this (which may or may not lead to a takeover like in the movie 'I Robot').
EDIT: Just to clarify, Nicolas, I'm not arguing with your contention that there's no reason for a sentient IE. I agree. I'm just arguing with the implication that we needn't worry about sentient IE's ever coming into existence because nobody will build them because there's no reason.
This, in particular, is something that I fully expect people to be trying to do - for as long as it takes. If true, sentient, AI never happens (and, to be honest, I kind of hope it never does), it will be because it's not possible to do, not because there was no reason to do it.