
Why AI is a dangerous dream

posted on Sep, 3 2009 @ 11:43 PM
Interesting article with some good points....

Robotics expert Noel Sharkey used to be a believer in artificial intelligence. So why does he now think that AI is a dangerous myth that could lead to a dystopian future of unintelligent, unfeeling robot carers and soldiers? Nic Fleming finds out

What do you mean when you talk about artificial intelligence?

I like AI pioneer Marvin Minsky's definition of AI as the science of making machines do things that would require intelligence if done by humans. However, some very smart human things can be done in dumb ways by machines. Humans have a very limited memory, and so for us, chess is a difficult pattern-recognition problem that requires intelligence. A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger. I would rework Minsky's definition as the science of making machines do things that lead us to believe they are intelligent.
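Sharkey's point about Deep Blue can be made concrete with a toy sketch (purely illustrative, not Deep Blue's actual algorithm): exhaustive search over a trivial stick-taking game produces flawless "strategic" play, yet there is nothing in it but enumeration.

```python
# Toy illustration of the "brute force" point: exhaustive search over a
# simple Nim-like game (take 1-3 sticks; whoever takes the last stick wins)
# plays perfectly with zero understanding -- the same principle, in
# miniature, as Deep Blue searching millions of chess positions.
from functools import lru_cache
from typing import Optional

@lru_cache(maxsize=None)
def can_win(sticks: int) -> bool:
    """True if the player to move can force a win from this position."""
    if sticks == 0:
        return False  # the previous player took the last stick and won
    # Search every legal move; we win if any move leaves the opponent losing.
    return any(not can_win(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks: int) -> Optional[int]:
    """Return a winning number of sticks to take, or None if every move loses."""
    for take in (1, 2, 3):
        if take <= sticks and not can_win(sticks - take):
            return take
    return None
```

From 5 sticks the search finds the winning move (take 1, leaving the opponent a losing multiple of 4); from 4 it correctly reports that every move loses. It "knows" nothing about why.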

Are machines capable of intelligence?

If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.

Are we close to building a machine that can meaningfully be described as sentient?

I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.

The mind could be a type of physical system that cannot be recreated by computer
So why are predictions about robots taking over the world so common?

There has always been fear of new technologies based upon people's difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. Technological artefacts do not have a will or a desire, so why would they "want" to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn't believe AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.

You describe AI as the science of illusion.

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic - the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.
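The "search databases to find conversationally appropriate sentences" trick Sharkey describes can be sketched in a few lines; the keyword lists and canned replies below are invented for illustration, but the mechanism (keyword overlap, no understanding) is the point.

```python
# Minimal sketch of the database-lookup conversation trick: pick the canned
# reply whose keywords best overlap the user's input. No model of meaning
# anywhere -- just set intersection. All replies here are invented examples.
RESPONSES = {
    ("hello", "hi", "hey"): "Hello! How are you today?",
    ("weather", "rain", "sunny"): "I hear the weather has been changeable.",
    ("robot", "machine", "ai"): "Machines are fascinating, aren't they?",
}
FALLBACK = "Tell me more."

def reply(user_input: str) -> str:
    words = set(user_input.lower().split())
    best, best_score = FALLBACK, 0
    for keywords, canned in RESPONSES.items():
        score = len(words & set(keywords))  # count keyword hits
        if score > best_score:
            best, best_score = canned, score
    return best
```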

These views are in stark contrast to those of many of your peers in the robotics field.

Yes. Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain and make them our superiors. The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don't see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience.

And you believe that there are dangers if we fool ourselves into believing the AI myth...

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.

How would you feel about a robot carer looking after you in old age?

Eldercare robotics is being developed quite rapidly in Japan. Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers. A robot companion would not fulfil that need for me.

You also have concerns about military robots.

The many thousands of robots in the air and on the ground are producing great military advantages, which is why at least 43 countries have development programmes of their own. No one can deny the benefit of their use in bomb disposal and surveillance to protect soldiers' lives. My concerns are with the use of armed robots. Drone attacks are often reliant on unreliable intelligence in the same way as in Vietnam, where the US ended up targeting people who were owed gambling debts by its informants. This over-reaching of the technology is killing many innocent people. Recent US planning documents show there is a drive towards developing autonomous killing machines. There is no way for any AI system to discriminate between a combatant and an innocent. Claims that such a system is coming soon are unsupportable and irresponsible.

Is this why you are calling for ethical guidelines and laws to govern the use of robots?

In the areas of robot ethics that I have written about - childcare, policing, military, eldercare and medical - I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.

The organisers of the robot soccer competition RoboCup aim to develop an autonomous robot soccer team that can beat a human team by 2050. How do you rate their chances?

Football requires a certain kind of intelligence. Someone like David Beckham can look at the movement of the players, predict where the ball is likely to go and put himself in the right place. Soccer robots can move quickly, punch the ball hard and get it accurately into the net, but they cannot look at the pattern of the game and guess where the ball is going to end up. I can't see robots matching humans at football strategy. But in the 1960s everyone was pretty sure that AI would never succeed at championship chess, so who knows? Like chess programs, soccer robots may win by brute force - although I don't think they will be very good at faking fouls.

Profile
Born in Belfast, UK, Noel Sharkey left school at 15, working as an apprentice electrician, railway worker, guitarist and chef, before studying psychology and getting his PhD at the University of Exeter. He has held positions at Yale, Stanford and Berkeley, and is now professor of artificial intelligence and robotics at the University of Sheffield. He hosts The Sound of Science radio show (www.soundofscience.wordpress.com)

SOURCE:www.newscientist.com...



posted on Sep, 4 2009 @ 12:02 AM
Would the android also be born with "original sin"?

Would some steal, lie and cheat like some humans do?

Great question and article OP!



posted on Sep, 4 2009 @ 12:12 AM
If you happen to create a sentient, dangerous lifeform, be it mechanical or not... feed it as much information as you can, as fast as you can. Not only will such information calm and preoccupy it, but it will also enlighten it and pacify it into a passive enemy. Knowledge is the ultimate bringer of peace.

Be careful, though, because as the beast consumes the information it may well draw incorrect conclusions based on limited information, depending on the order in which it was digested and how it was sorted. It will be no different from the last 10,000 years of human evolution compressed into several hours of downloading and analysis.

Of course, those who have something to fear from justice will be judged and sentenced by the machine to a punishment or dialogue that will correct such ignorance. It will know all and be all-powerful.



posted on Sep, 4 2009 @ 12:22 AM
reply to post by Wertdagf
 


Rather optimistic of you. I would have to say I disagree. But I really must ask, are you saying it is possible for humanity to "build" a god?

[edit on 4-9-2009 by Watcher-In-The-Shadows]



posted on Sep, 4 2009 @ 12:36 AM

Originally posted by Watcher-In-The-Shadows
Interesting article with some good points....

Robotics expert Noel Sharkey used to be a believer in artificial intelligence. So why does he now think that AI is a dangerous myth that could lead to a dystopian future


We are already in a Dystopian Society.



Originally posted by Watcher-In-The-Shadows
Interesting article with some good points....



The many thousands of robots in the air and on the ground are producing great military advantages, which is why at least 43 countries have development programmes of their own. No one can deny the benefit of their use in bomb disposal and surveillance to protect soldiers' lives. My concerns are with the use of armed robots. Drone attacks are often reliant on unreliable intelligence in the same way as in Vietnam, where the US ended up targeting people who were owed gambling debts by its informants. This over-reaching of the technology is killing many innocent people. Recent US planning documents show there is a drive towards developing autonomous killing machines



In a previous life... my job was centered around knowing as much about anti-ship missiles as I could.

If you want to look at the first modern fully autonomous robot... look no further than a missile.

It has onboard sensors to tell it about the environment (altimeter, radar seeker head, inertial guidance or GPS... or a beam rider).

It has a set of instructions for how to carry out its short life (go to point "x"; find a target; track the target; kill the target).
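That scripted "short life" can be sketched as a plain state machine. This is a hedged illustration only, not any real weapon system; every state and sensor name below is invented.

```python
# Illustrative-only sketch of a scripted mission profile as a state machine:
# transit -> search -> track -> terminal. The point is that nothing here
# "decides" anything -- each step is a fixed, human-authored rule.
def mission_step(state: str, sensors: dict) -> str:
    """Advance one step of a purely scripted mission profile."""
    if state == "transit" and sensors.get("at_waypoint"):
        return "search"
    if state == "search" and sensors.get("target_detected"):
        return "track"
    if state == "track" and sensors.get("target_in_range"):
        return "terminal"
    return state  # no transition condition met; keep doing the same thing
```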

In a way, you could even consider the German V-1 buzz bombs a form of autonomous robot when you think of it like this. The V-2 was more of a ballistic weapon and did not have to do much in the way of course correction... if any. The German Hs 293 missiles were probably the first guided anti-ship missiles (WWII).

American psychologist B.F. Skinner designed an advanced guidance system for a bomb that unquestionably had the full intelligence of a pigeon. Why? Because it used a pigeon in the nose section to guide the weapon.

The modern descendants of these first missiles, the Tomahawk, the Exocet, the Raduga Kh-55, the Harpoon, etc., are fully autonomous systems that can perform multiple tasks prior to going in to kill the target.

Yeah... I'd say that Killer Bots are already in our society.







[edit on 4-9-2009 by RoofMonkey]



posted on Sep, 4 2009 @ 12:41 AM
reply to post by RoofMonkey
 


Not exactly. We don't really live in a dystopia yet, though I would agree if you said we are heading there quickly. And that is not exactly what they are talking about when they say "killer bots". The things you listed are relatively stupid, as they are greatly limited *more so than, say, a human being* and if you know how, you can defeat them easily.


[edit on 4-9-2009 by Watcher-In-The-Shadows]



posted on Sep, 4 2009 @ 12:59 AM
reply to post by Watcher-In-The-Shadows
 


Dunno... a Tomahawk can write your name in a runway with bomblets before turning around and impacting the tower.

All things have a lineage. Just as the German V-1 is an ancient ancestor of the Tomahawk, today's drones can be traced to a fusion of the software and missions of Tomahawks and stand-off bombers.

The auto-detect-and-track ability of the New Threat Upgrade on Leahy-class cruisers (using SPS-48 and SPS-49) eventually led to Aegis (using SPY-1) on the Arleigh Burke and Ticonderoga DDGs and CGs.

"Big Dog" by Boston Dynamics, coupled with the Autonomous navigation capability of the DARPA Grand Challange winners... and some detect track and kill software with a machine gun mounted on top... isn't that far away technologically speaking.




posted on Sep, 4 2009 @ 01:04 AM
reply to post by RoofMonkey
 


What might be possible later does not constitute an argument for saying we have it now.


a Tomahawk can write your name in a runway with bombletts before turning around and impacting the tower.


Um... A Tomahawk has done this of its own volition?



posted on Sep, 4 2009 @ 01:06 AM
reply to post by Watcher-In-The-Shadows
 


Everything's a program when it comes to AI.

Just as in the interview... sentient AI may be years or decades away. Until then, algorithms drive the bot.


[atsimg]http://files.abovetopsecret.com/images/member/9830006577e3.jpg[/atsimg]

www.defensetech.org...


In this case... it's remotely controlled... for now.




[edit on 4-9-2009 by RoofMonkey]



posted on Sep, 4 2009 @ 01:13 AM
reply to post by RoofMonkey
 


No, what we call AI in this day and age IS programs/algorithms, nothing more. He also thinks that sentience may be impossible. That it's an unreplicatable natural structure, rather like some chemicals.



posted on Sep, 4 2009 @ 01:18 AM
reply to post by Watcher-In-The-Shadows
 


He may be right.

Lacking sentience, a bot would be limited to what could be coded into an "ethical" subroutine: a set of guidelines to determine what and when to kill.

Personally, I have known of no human who is fully humane. So there will always be a problem with ethics in anything someone makes.

The lofty ideals that Asimov penned in his robot series have one very serious, if not fatal, flaw: they rely on a human to code them into the software.



1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
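The flaw RoofMonkey raises is easy to see in a naive sketch of the Laws as code. Every predicate name below is invented for illustration, and that is exactly the problem: deciding what actually counts as "harms_human" is the part no human can reliably write down.

```python
# Naive sketch of Asimov's Three Laws as an "ethical subroutine". The logic
# is trivial; the hard, unsolved part is hidden inside the human-authored
# flags (harms_human, is_order, ...), which are illustrative names only.
def permitted(action: dict) -> bool:
    # First Law: never harm a human being.
    if action.get("harms_human"):
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.get("is_order"):
        return True
    # Third Law: self-preservation, unless it conflicts with an order.
    if action.get("protects_self") and not action.get("conflicts_with_orders"):
        return True
    return False  # anything not explicitly allowed is refused
```

The robot is only as ethical as whoever filled in those flags, which is Sharkey's point about the designer, not the machine, being the intelligent party.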



The rather entertaining (though a bit boring) movie Runaway features Tom Selleck as a police officer who investigates and apprehends bots in a future world. The villain is played by Gene Simmons; Simmons's character has been adding lethal subroutines to otherwise innocuous helper bots.




[edit on 4-9-2009 by RoofMonkey]



posted on Sep, 4 2009 @ 01:23 AM
The Blue Brain Project: Comprehensive Molecular Simulation of the neocortical column

I'm not sure if consciousness can form from any sort of a program, but we know it can form in a brain. If you can perfectly simulate a brain right down to the molecular level, capable of tracing gene expression, can you create an emergent consciousness in a simulation the way our consciousness emerges in reality? If so, would it even be considered AI, or disembodied human consciousness? Would it be considered human, deserving of the full rights given to all mankind?

I think the Dream/Nightmare of AI is more a fantasy, as we will merge directly with technology and self-improve our own brains. It won't be Us and Them. It'll just be Us.



Simulations have started to give the researchers clues about how the brain works. For example, they can show the brain a picture - say, of a flower - and follow the electrical activity in the machine. "You excite the system and it actually creates its own representation," he said.


10 years away from brain emulation.



"It is not impossible to build a human brain and we can do it in 10 years," he said.



posted on Sep, 4 2009 @ 01:29 AM
reply to post by Lasheic
 


In the end it's altogether possible we may not get sentience even then. Projections and speculation are nice, but I tend to avoid "counting my chickens before they hatch".



posted on Sep, 4 2009 @ 01:38 AM
reply to post by Watcher-In-The-Shadows
 


It's an old concept... but worth repeating. From Wikipedia:



The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.


Alan Turing, cryptanalyst extraordinaire.
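The protocol quoted above can be sketched as a judge loop. This is a minimal illustration; the `ask`, `guess_machine`, and reply functions are stand-ins supplied by the caller, not part of Turing's formulation.

```python
# Sketch of the Turing test protocol: a judge exchanges text with two hidden
# participants (one human, one machine) over labelled channels, then names
# the label it believes is the machine. Returns True if the machine "passed"
# (i.e. the judge guessed wrong). All function arguments are stand-ins.
import random

def run_turing_test(ask, guess_machine, human_reply, machine_reply, rounds=3):
    participants = {"A": human_reply, "B": machine_reply}
    # Randomly swap the labels so the judge cannot rely on channel order.
    if random.random() < 0.5:
        participants = {"A": machine_reply, "B": human_reply}
    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for label in ("A", "B"):
            question = ask(label, transcript[label])
            answer = participants[label](question)
            transcript[label].append((question, answer))
    guess = guess_machine(transcript)  # the label the judge thinks is the machine
    return participants[guess] is not machine_reply
```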



posted on Sep, 4 2009 @ 01:45 AM
reply to post by RoofMonkey
 


I am not sure that test can't be faked out, but it does get an A for effort.



posted on Sep, 4 2009 @ 10:58 AM
An updated online version of a humorous (TRS-80 era) attempt at it.

Eliza is supposed to be a mimic of a therapist.
--------

Eliza: Hello. I am ELIZA. How can I help you?

You: My cat died

Eliza: Why do you say your cat died?

You: Because it's dead

Eliza: Do any other reasons not come to mind?

You: No, it's just stone cold dead.

Eliza: You are being a bit negative.

You: And you are being arrogant

Eliza: Does it please you to believe I am being arrogant?





www-ai.ijs.si...
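The transcript above takes only a few pattern-matching rules to reproduce. This is an illustrative toy, not Weizenbaum's original script: regexes that reflect the user's own words back, exactly the "illusion" trick Sharkey describes.

```python
# Few-line imitation of the ELIZA trick: regex rules that echo the user's
# own words back inside a canned template. No understanding anywhere.
import re

RULES = [
    (re.compile(r"\bmy (.+) died\b", re.I), "Why do you say your {0} died?"),
    (re.compile(r"\byou are (.+)", re.I), "Does it please you to believe I am {0}?"),
    (re.compile(r"\bno\b", re.I), "You are being a bit negative."),
]

def eliza(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Tell me more."
```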



posted on Sep, 4 2009 @ 09:35 PM
reply to post by RoofMonkey
 


I've messed with those "chatbot" things before. Nowhere near intelligence.


