Robotics expert Noel Sharkey used to be a believer in artificial intelligence. So why does he now think that AI is a dangerous myth that could lead to a dystopian future of unintelligent, unfeeling robot carers and soldiers? Nic Fleming finds out
What do you mean when you talk about artificial intelligence?
I like AI pioneer Marvin Minsky's definition of AI as the science of making machines do things that would require intelligence if done by humans. However, some very smart human things can be done in dumb ways by machines. Humans have a very limited memory, and so for us, chess is a difficult pattern-recognition problem that requires intelligence. A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger. I would rework Minsky's definition as the science of making machines do things that lead us to believe they are intelligent.
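To make the brute-force point concrete, here is a minimal sketch of exhaustive game-tree search (minimax) in Python. It uses a toy Nim-like game so the whole tree can actually be searched to the end; Deep Blue's real search added alpha-beta pruning, handcrafted evaluation functions and custom hardware, so treat this as an illustration of the principle, not its implementation.

```python
# A toy illustration of brute-force game-tree search: score every line of
# play to the end, then pick the move with the best guaranteed outcome.
# The game is Nim-like (take 1-3 stones, taking the last stone wins),
# chosen so the full tree fits in memory; chess engines prune and evaluate
# heuristically instead of searching to the end.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    results = [minimax(stones - take, not maximizing)
               for take in (1, 2, 3) if take <= stones]
    return max(results) if maximizing else min(results)

def best_move(stones):
    """Brute-force every legal move and keep the best one."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

if __name__ == "__main__":
    print(best_move(10))  # prints 2: taking 2 leaves a losing pile of 8
```

No pattern recognition or "intelligence" is involved anywhere: the program simply enumerates outcomes, which is exactly the arm-wrestling-with-a-digger point.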
Are machines capable of intelligence?
If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.
Are we close to building a machine that can meaningfully be described as sentient?
I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.
So why are predictions about robots taking over the world so common?
There has always been fear of new technologies based upon people's difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. Technological artefacts do not have a will or a desire, so why would they "want" to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn't believe AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.
You describe AI as the science of illusion.
It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic - the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.
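As a rough sketch of the conversational trick Sharkey describes, here is a toy retrieval "chatbot" in Python. The canned replies and keyword matching are invented for illustration; the real language programs he mentions searched much larger databases with better matching, but the principle is the same.

```python
# A toy version of the illusion: no understanding, just canned sentences
# retrieved by keyword overlap with the user's input.

CANNED_REPLIES = {
    "hello": "Hello! Lovely to talk to you.",
    "weather": "I hear the weather has been unusual lately.",
    "robot": "Robots are fascinating, aren't they?",
    "sad": "I'm sorry to hear that. Tell me more.",
}

def reply(user_text):
    """Return the first canned reply whose keyword appears in the input."""
    words = set(user_text.lower().split())
    for keyword, sentence in CANNED_REPLIES.items():
        if keyword in words:
            return sentence
    return "How interesting. Go on."  # fallback keeps the illusion going

print(reply("I saw a robot on TV"))  # -> "Robots are fascinating, aren't they?"
```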
These views are in stark contrast to those of many of your peers in the robotics field.
Yes. Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain and make machines our superiors. The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don't see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience.
And you believe that there are dangers if we fool ourselves into believing the AI myth...
It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.
How would you feel about a robot carer looking after you in old age?
Eldercare robotics is being developed quite rapidly in Japan. Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers. A robot companion would not fulfil that need for me.
You also have concerns about military robots.
The many thousands of robots in the air and on the ground are producing great military advantages, which is why at least 43 countries have development programmes of their own. No one can deny the benefit of their use in bomb disposal and surveillance to protect soldiers' lives. My concerns are with the use of armed robots. Drone attacks are often reliant on unreliable intelligence in the same way as in Vietnam, where the US ended up targeting people who were owed gambling debts by its informants. This over-reaching of the technology is killing many innocent people. Recent US planning documents show there is a drive towards developing autonomous killing machines. There is no way for any AI system to discriminate between a combatant and an innocent. Claims that such a system is coming soon are unsupportable and irresponsible.
Is this why you are calling for ethical guidelines and laws to govern the use of robots?
In the areas of robot ethics that I have written about - childcare, policing, military, eldercare and medical - I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.
The organisers of the robot soccer competition RoboCup aim to develop an autonomous robot soccer team that can beat a human team by 2050. How do you rate their chances?
Football requires a certain kind of intelligence. Someone like David Beckham can look at the movement of the players, predict where the ball is likely to go and put himself in the right place. Soccer robots can move quickly, punch the ball hard and get it accurately into the net, but they cannot look at the pattern of the game and guess where the ball is going to end up. I can't see robots matching humans at football strategy. But in the 1960s everyone was pretty sure that AI would never succeed at championship chess, so who knows? Like chess programs, soccer robots may win by brute force - although I don't think they will be very good at faking fouls.
Profile
Born in Belfast, UK, Noel Sharkey left school at 15, working as an apprentice electrician, railway worker, guitarist and chef, before studying psychology and getting his PhD at the University of Exeter. He has held positions at Yale, Stanford and Berkeley, and is now professor of artificial intelligence and robotics at the University of Sheffield. He hosts The Sound of Science radio show (www.soundofscience.wordpress.com)
Originally posted by Watcher-In-The-Shadows
Interesting article with some good points....
My concerns are with the use of armed robots. ... Recent US planning documents show there is a drive towards developing autonomous killing machines.
A Tomahawk can write your name on a runway with bomblets before turning around and impacting the tower.
Asimov's Three Laws of Robotics (a toy encoding follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
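Read as an engineering spec rather than fiction, the Laws describe a strict priority ordering: a higher law always vetoes a lower one. A minimal Python sketch of that precedence follows, with invented boolean attributes; deciding in the real world whether an act "harms a human" is exactly the judgment Sharkey argues machines cannot make.

```python
# Hypothetical encoding of the Laws as ordered vetoes. The boolean fields
# are stand-ins, and the "through inaction" clauses are elided for brevity;
# judging these conditions in the real world is the unsolved problem.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool     # would carrying this out injure a human?
    human_order: bool     # was it ordered by a human?
    endangers_self: bool  # does it risk destroying the robot?

def permitted(action):
    """Check an action against the Three Laws, highest law first."""
    if action.harms_human:
        return False      # First Law always wins
    if action.human_order:
        return True       # Second Law: obey, since the First Law is satisfied
    if action.endangers_self:
        return False      # Third Law: self-preservation, lowest priority
    return True

print(permitted(Action(harms_human=False, human_order=True, endangers_self=True)))
# True: a human order overrides self-preservation (Second Law beats Third)
```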
Simulations have started to give the researchers clues about how the brain works. For example, they can show the brain a picture - say, of a flower - and follow the electrical activity in the machine. "You excite the system and it actually creates its own representation," said Henry Markram, director of the Blue Brain Project.
"It is not impossible to build a human brain and we can do it in 10 years," he said.
The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.
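For concreteness, here is a skeletal Python harness for the text-only setup the test describes. The `human_respond` and `machine_respond` callables are hypothetical stand-ins supplied by the experimenter, and real administrations of the test involve many judges and timed sessions; this only shows the blinding and the channel structure.

```python
# A skeletal text-only Turing test trial. Participants are plain functions
# from a question string to an answer string, hidden behind channel labels.

import random

def turing_trial(questions, human_respond, machine_respond):
    """Run one trial and return the transcript plus the machine's channel."""
    channels = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:  # hide who is on which channel
        channels = {"A": machine_respond, "B": human_respond}
    transcript = [(q, channels["A"](q), channels["B"](q)) for q in questions]
    machine_channel = "A" if channels["A"] is machine_respond else "B"
    return transcript, machine_channel

# Example with stand-in participants: the judge reads the transcript and
# guesses; the machine "passes" if the judge cannot reliably identify
# machine_channel across repeated trials.
transcript, answer = turing_trial(
    ["What is 2+2?"],
    lambda q: "four, I think",  # hypothetical human
    lambda q: "4",              # hypothetical machine
)
print(transcript, answer)
```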