originally posted by: galien8
originally posted by: enlightenedservant
So we're just talking about science fiction stories and not a potential reality? If 2 or 3 people are fighting (as humans constantly do), how can a robot stop humans from hurting each other without hurting one of the humans? How would it even know which human was in the "right" to see who to help? I mean realistically. Would it tase everyone who is fighting, even though that would cause harm to them? What methods would it use to stop a human conflict without harming either human?
The Laws can be interpreted to mean that robots do not interfere in human affairs at all, staying passive if humans are fighting among themselves. Consequently, under these Asimov Laws, robots also cannot fight alongside humans against other humans (with their robots) in wars as soldier robots, or intervene in gangs fighting gangs as police robots. We could let sentient robots fight other sentient robots like gladiators, though; wouldn't that be funny?
originally posted by: enlightenedservant
a reply to: galien8
I'm not trying to sound mean, but that doesn't answer the questions.
I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?
originally posted by: schuyler
originally posted by: enlightenedservant
a reply to: galien8
I'm not trying to sound mean, but that doesn't answer the questions.
I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?
I don't think you're quite into the spirit of the thing. Why should companies design robots that conform to the law? For the same reason auto companies are required to install air bags: it's the law. Asimov, writing in the 1940s and '50s when he proposed these laws, was trying to get us all to think about the implications of AI. That we're still talking about the Three Laws of Robotics shows that he was successful. "I, Robot" is a perfect example of what happens when things go astray.
originally posted by: galien8
originally posted by: enlightenedservant
a reply to: galien8
I'm not trying to sound mean, but that doesn't answer the questions.
I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?
OK, now you're clear! Security firms, border patrol, the military, and the police should not even think of using robocops.
You made me think about it again. OK, maybe we should not give Asimov's Laws a canonical, religious status or regard them as God-given or something; I need to run the scenarios in my head.
originally posted by: schuyler
Manufactured humans like those in "Blade Runner."
originally posted by: intrptr
a reply to: FamCore
Yeah, the three laws are a utopian version of robotics. Like you said, drones violate them; in fact, every single weapons guidance system is directed to kill without question. There is no "should I or shouldn't I" programming included in the software of a warhead.
Morals of war and rules of engagement aside, once released they are designed to hit their target, period.
There's an amusing dilemma in the film Dark Star: arguing with a smart bomb.
I wonder if there will ever come a time when one can disarm a bomb with philosophy.
originally posted by: enlightenedservant
And even my form of pacifism includes the right to self-defense. So theoretically, I'd be OK with artificial intelligence being able to protect itself from human attacks, just as I theoretically agree that all animals have the right to self-defense. And by extension, I'd reluctantly agree with the idea of robot "guardians" using non-lethal force to protect a homeowner's home, the children they're babysitting, or the clients they're guarding (like human bodyguards do).
originally posted by: Maxatoria
There were occasions where robots were produced without the full 3 Laws, such as when humans needed to enter a dangerous radioactive environment: the robots would see the human in there and, obeying the 1st Law, would run in and kill themselves. It's been many a year since I read the books, but the rules were mathematical and thus could be adjusted if needed, and some of the stories covered the problems when a robot would go awry due to the change in its programming.
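To picture what "mathematical, adjustable" law strengths might look like, here is a minimal toy sketch in Python. Everything in it is a hypothetical illustration invented for this post (the LawWeights class, the acts_to_intervene function, and all the numbers); Asimov's positronic potentials are fictional, and no real robot is programmed this way.

```python
# Toy illustration only: the weights and thresholds below are made up,
# not Asimov's (fictional) positronic mathematics.

from dataclasses import dataclass

@dataclass
class LawWeights:
    first: float = 1.0   # protect humans from harm
    second: float = 0.6  # obey human orders
    third: float = 0.3   # preserve own existence

# A standard robot: the First Law dominates, so it would rush into the
# radioactive chamber to "rescue" the human working there and destroy itself.
standard = LawWeights()

# A special-build robot (as in the stories): its First Law response to
# indirect danger is dialed down, so it can wait outside during hazardous work.
modified = LawWeights(first=0.4)

def acts_to_intervene(weights: LawWeights, perceived_danger: float) -> bool:
    """Return True if the drive to protect a human outweighs self-preservation."""
    return weights.first * perceived_danger > weights.third

print(acts_to_intervene(standard, perceived_danger=0.5))  # True  -> runs in
print(acts_to_intervene(modified, perceived_danger=0.5))  # False -> stays put
```

The only point of the sketch is that if the laws are numeric strengths rather than absolutes, a manufacturer can tune them, which is exactly the premise of the stories where adjusted robots go awry.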
originally posted by: enlightenedservant
a reply to: galien8
If the robots are sentient/true AI, I think they should have the same rights as humans...
...So maybe we should just be limited to making robots that have limited functions.
originally posted by: LadyGreenEyes
a reply to: galien8
Every time there is a tale of killer robots or computers, I wonder why they didn't use those laws. In reality, they'd likely be ignored. The best ideas usually are!
originally posted by: Maxatoria
It should be said that the rules are not absolute. A polite "go jump off a cliff" or "go play in the fast lane" said to a robot would be overridden by the 3rd Law, as it would understand the language use and act accordingly; however, a strong, authoritative command to kill oneself would probably override the 3rd Law. Generally it always seemed to be a balance, like a set of scales, and when the robot couldn't work it out it would normally just shut off and basically die.
The 3 Laws are a great starting point for robotics research because they bring ethics into the mix. We consider ourselves above the robots in some ways, almost like slave masters, so how in real life would we regard a sentient robot?
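As a rough picture of that "set of scales" idea, here is a small hypothetical Python sketch: it weighs how authoritatively a self-destructive order was given against a self-preservation drive, and freezes the robot when the two are too close to call. The function name, thresholds, and numbers are all invented for illustration and do not come from Asimov or any real robotics system.

```python
# Hypothetical sketch of the "set of scales" balance described above.

def resolve_self_harm_order(order_authority: float,
                            self_preservation: float = 0.5,
                            deadlock_margin: float = 0.05) -> str:
    """Decide what a robot does with an order that would destroy it.

    order_authority   - how forcefully/authoritatively the order was given (0..1)
    self_preservation - Third Law drive to stay intact (0..1)
    deadlock_margin   - if the two drives are this close, the robot locks up
    """
    if abs(order_authority - self_preservation) < deadlock_margin:
        return "freeze"   # scales balance: the robot shuts off, "basically dies"
    if order_authority > self_preservation:
        return "obey"     # a strong command (Second Law) outweighs the Third
    return "ignore"       # a casual remark loses to self-preservation

print(resolve_self_harm_order(0.1))   # polite "go jump off a cliff" -> ignore
print(resolve_self_harm_order(0.9))   # forceful, authoritative command -> obey
print(resolve_self_harm_order(0.52))  # evenly matched conflict -> freeze
```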
originally posted by: andy06shake
a reply to: galien8
The best we can really hope for is that we teach our creations benevolence, but with Man for a God, I really don't see that happening.