Originally posted by MR BOB
So it took to enslaving humanity to protect humans from themselves.
Originally posted by SaturnFX
The concept of artificial intelligence is that it learns through cross-referencing, logic algorithms, and yep...good old mimicking.
Artificial intelligence is meant to start with minimal programming and let the robot add to that programming itself...so, your example of a robot watching video games endlessly...well, if it's a well-designed AI system, then yes, it could learn how to become a soldier simply by association.
However...computers are logical beasts with priority levels...in AI systems, core programming cannot be circumvented...and so all you need is a baseline rule in there saying no hurting/damaging of life or whatnot and you're good (see the sketch after this quote).
Isaac Asimov wrote a book long ago called I, Robot that dealt with this issue and talked of 3 basic lines of code that had to be installed in all AI systems...these being:
“ 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
So, problem addressed
”
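To make SaturnFX's point about priority levels and uncircumventable core programming concrete, here is a minimal sketch (purely an illustration in Python, not anything from Asimov or a real robotics stack): the three laws sit in a fixed filter that every action proposed by the learned part of the system must pass, checked in strict priority order. The Action fields and the decide() function are invented for the example, and the First Law's "through inaction" clause is left out to keep it short.

```python
# Hypothetical illustration only: Asimov's Three Laws as a hard-coded,
# priority-ordered filter that a learned policy cannot override.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    description: str
    injures_human: bool = False   # carrying it out would injure a human
    is_human_order: bool = False  # a human explicitly ordered it
    endangers_self: bool = False  # carrying it out would damage the robot

def decide(action: Action) -> str:
    """Check a proposed action against the laws in strict priority order."""
    # First Law (highest priority): never injure a human being.
    if action.injures_human:
        return f"refused ({action.description}): violates the First Law"
    # Second Law: obey human orders, since the First Law did not veto them.
    if action.is_human_order:
        return f"executed ({action.description}): obeying a human order"
    # Third Law: protect own existence when no higher law is at stake.
    if action.endangers_self:
        return f"refused ({action.description}): self-preservation"
    return f"executed ({action.description})"

# The learned layer proposes; this fixed layer disposes.
print(decide(Action("fire weapon at intruder", injures_human=True, is_human_order=True)))
# -> refused (fire weapon at intruder): violates the First Law
print(decide(Action("open the pod bay doors", is_human_order=True)))
# -> executed (open the pod bay doors): obeying a human order
```

The design point is simply that the adaptive layer proposes and the hard-coded layer gets the final veto; whatever the system "learns" from watching video games, it still has to get past this check.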
I argue that robots will never think for themselves, because every action they take will always have to originate with a human.
I, Robot
“Asimov's robot laws need updating
Humans need to be responsible
By Nick Farrell
Fri Aug 21 2009, 10:21
ISAAC ASIMOV'S Three Laws of Robotics need a makeover, according to a couple of AI boffins.
Asimov's first law of robotics prohibits robots from injuring humans or allowing humans to come to harm due to inaction. The second law requires robots to obey human orders except those that conflict with the first law. The third law requires robots to protect their own existence, except when to do so conflicts with either of the first two laws.”
Originally posted by SaturnFX
A robot must not impede the willful freedoms of humankind
and there ya go...enslavement scenario nullified (however, don't put AI on your locks...else one could simply demand the lock open, since keeping it shut would impede their freedom to walk in..heh)
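The lock aside is actually a neat illustration of how bolting a new top-priority law onto the stack can be gamed. A toy continuation of the earlier sketch (again hypothetical: the SmartLock class and its freedom check are invented for this example) shows the loophole: if "do not impede a human's willful freedom" is evaluated before the lock's actual purpose of refusing entry, anyone who frames "let me in" as an exercise of freedom gets the door opened.

```python
# Hypothetical sketch: a "Freedom Law" bolted on top of an AI door lock.
# Because the new law outranks the lock's own job, any stranger who phrases
# the request as an exercise of freedom defeats the lock.
class SmartLock:
    def __init__(self, authorized: set[str]):
        self.authorized = authorized

    def request_entry(self, person: str, claims_freedom: bool) -> str:
        # "Freedom Law": a robot must not impede the willful freedoms of humankind.
        # Placed at top priority, it is checked before the lock's real purpose.
        if claims_freedom:
            return f"unlocked for {person}: refusing would impede their freedom"
        # The lock's actual purpose: only let authorized people in.
        if person in self.authorized:
            return f"unlocked for {person}: authorized"
        return f"stays locked: {person} is not authorized"

front_door = SmartLock(authorized={"alice"})
print(front_door.request_entry("alice", claims_freedom=False))   # unlocked: authorized
print(front_door.request_entry("burglar", claims_freedom=True))  # unlocked: freedom loophole
```

Which is exactly why the post says not to put AI on your locks: under this reading, the fix would be to rank the lock's specific duties (or the owner's rights) above any blanket freedom clause, rather than to drop the clause entirely.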