originally posted by: FamCore
a reply to: galien8
Regulating/governing powers have already made drones and robots that violate many of these rules. State governments have approved the use of weaponized drones against civilians, and as we saw in Dallas, robots are now being used to "blow up perpetrators".
I agree, the laws Asimov described are logical and I would advocate for their implementation; however, I don't see it happening.
At the VERY least, robots should have a kill switch.
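A minimal sketch of what such a kill switch could look like in software, assuming a hypothetical control loop; the `Robot` class and the `estop_pressed` callable are made up for illustration and don't come from any real robotics API:

```python
import time

class Robot:
    """Toy stand-in for a real robot controller (illustrative only)."""
    def act(self):
        print("working...")

    def halt_all_motion(self):
        print("actuators disabled")

def control_loop(robot, estop_pressed):
    """Run the robot until the kill switch fires.

    `estop_pressed` is any zero-argument callable that returns True when
    the emergency stop is engaged (hardware pin, radio message, big red
    button). The loop checks it before every action, not after.
    """
    while not estop_pressed():
        robot.act()
        time.sleep(0.01)          # poll the switch ~100 times a second
    robot.halt_all_motion()       # stop first, ask questions later

# Demo: trip the switch after three iterations.
count = [0]
def fake_estop():
    count[0] += 1
    return count[0] > 3

control_loop(Robot(), fake_estop)
```

In a real system the stop would also have to be enforced in hardware (cutting actuator power), since software alone can hang or be subverted.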
originally posted by: Peeple
a reply to: galien8
Even the deadliest killing robot is less to fear than a human who enjoys killing.
I always felt Asimov is greatly overrated. An emotionless being programmed to do its job is always predictable, no matter what the job is.
originally posted by: Peeple
a reply to: galien8
You know that's just an illogical stance. That's like saying a Roomba is evil for sucking up dirt. The one who programmed it is responsible; there's no way around it.
What do you mean by "sentient" in relation to a robot? That's a purely biological way of thinking. Even if you create an AI, you won't be able to teach it feelings.
The decision of whom to kill and whom to spare will be an algorithm (a sketch of the idea follows this post).
Humans are much more dangerous than a machine could ever be.
Must kill all humans. Bender Bending Rodriguez hehe
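To make Peeple's point concrete: once the decision is an algorithm, it is a deterministic function of its inputs. The features and thresholds below are invented purely for illustration and don't correspond to any real system:

```python
def engage(target: dict) -> bool:
    """Fixed rule set: same inputs, same answer, every time."""
    return target["is_armed"] and target["aggression_score"] > 0.8

# Identical inputs can never produce different outcomes:
t = {"is_armed": True, "aggression_score": 0.9}
assert engage(t) == engage(t)  # no mood, no mercy, no surprises
```

That predictability cuts both ways: the machine never kills out of enjoyment, but it also applies whatever rule its programmer chose, without appeal.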
originally posted by: Maxatoria
such as when humans needed to enter a dangerous radioactive environment: the robots would see the human inside and, obeying the 1st Law, would run in and kill themselves (see the sketch after this post).
They'll be reasonably safe for us. As the stories go on, you come to understand that absolute freedom comes at a price: the Spacers adjust their systems to be able to reproduce with no other input and live in an environment where they only "skype" other Spacers, while everything they could desire is provided by robots. The normal Earth-based humans, after a large battle, colonize the galaxy, and it's only at the end that we see robots again.
Robots don't become a part of general society, just the same as nukes don't: it's mentioned that a general who thought to nuke a planet was hanged by his own men, as the fact that Earth was reduced to a nuclear wasteland somehow still resonated in people's thoughts.
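A sketch of the Three Laws as a strict priority ordering, assuming each candidate action can be scored for harm to humans, obedience, and harm to the robot (the scoring scheme is invented for illustration). It shows why the radioactive-room robot destroys itself: First-Law harm to a human outweighs any Third-Law cost to the robot:

```python
def violates_first_law(action):   # would a human come to harm?
    return action["human_harm"] > 0

def violates_second_law(action):  # does it disobey a human order?
    return action["disobeys_order"]

def third_law_cost(action):       # damage to the robot itself
    return action["self_harm"]

def choose(actions):
    # Law 1 outranks Law 2, which outranks Law 3: first drop anything
    # that harms a human, then anything that disobeys an order, then
    # pick the option that damages the robot least.
    safe = [a for a in actions if not violates_first_law(a)]
    obedient = [a for a in safe if not violates_second_law(a)] or safe
    return min(obedient, key=third_law_cost)

actions = [
    {"name": "stay outside", "human_harm": 1,
     "disobeys_order": False, "self_harm": 0},
    {"name": "run in and pull the human out", "human_harm": 0,
     "disobeys_order": False, "self_harm": 10},
]
print(choose(actions)["name"])  # -> run in and pull the human out
```

Strict ordering means self-preservation never enters the picture once a human is at risk, which is exactly the behaviour Maxatoria describes.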
originally posted by: Maxatoria
possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm, and then decide not to do so.
originally posted by: enlightenedservant
a reply to: galien8
Just for the sake of debate, why should robotics manufacturers, research labs, or individual inventors follow those 4 "laws"? Or to be even more blunt, why should people/organizations blindly follow rules they had no say in creating or developing?
originally posted by: enlightenedservant
So we're just talking about science fiction stories and not a potential reality? If 2 or 3 people are fighting (as humans constantly do), how can a robot stop them from hurting each other without hurting one of them? How would it even know which human was in the "right", so it could decide whom to help? I mean realistically. Would it tase everyone who is fighting, even though that would cause harm to them? What methods would it use to stop a human conflict without harming either human?