I wrote up this thread a few months ago but decided not to post it at the time; however, now seems like a good time since AI is such a hot topic at the
moment. What I want to do in this post is highlight why building a conscious machine is so difficult, and why the first conscious machines will be very
different from the way science fiction typically portrays them. The very first thing one must establish when attempting to build any type of AI
system is what you want the AI to do. In this case, what we want it to do is what humans do. Ok... so what do humans do? Well, we do whatever we want
in order to fulfill our goals, and nearly every human has a different set of short term and long term goals.
One of the major misconceptions concerning conscious machines is the idea that we will be able to program them with goals and instincts which they
cannot ignore. The Three Laws of Robotics are probably the most well-known example of this concept. The idea that we can create a machine with
human-level intelligence and then restrict or control its behavior is fundamentally flawed for many different reasons, some of which should
become highly apparent as we move along. First you need to think about what a goal is and how you would program it into a robot. If I ask you about
one of your long term goals you will explain it to me in your native language, but if I ask you a week later you might use completely different words
to explain the exact same goal.
So inside your mind you have some sort of conceptual construct which defines what your goals are. You use words to the best of your ability to express
the concepts inside your mind, but your goals are not written inside your brain in the English language. So if we want to program specific goals
into a machine, we can't just write "I may not injure a human being" into the brain of the robot; we need to know how to program concepts into the
brain of the robot, which is clearly next to impossible without understanding how the brain stores concepts. Furthermore, no two people store
concepts in exactly the same way; your concept of humor is most certainly not the same as mine. This implies that we develop our ideas and concepts of
the world around us through first-hand experience.
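
Just to make that concrete, here's a rough sketch in Python of what the naive approach actually amounts to (the Robot class and its method names are made up purely for illustration): the "law" is nothing but a string sitting in memory, and without any grounded concept of "injure" or "human being" the machine can only do dumb text matching.

    class Robot:
        def __init__(self):
            # The "law" is nothing more than a sequence of characters in memory.
            self.law = "I may not injure a human being"

        def evaluate_action(self, action_description):
            # With no concept of "injure" or "human being" to ground the rule in,
            # the best this machine can do is literal text matching.
            if "injure a human being" in action_description.lower():
                return "forbidden"
            return "allowed"

    robot = Robot()
    # This clearly violates the intent of the law, but the words never match.
    print(robot.evaluate_action("push the man off the ledge"))  # prints "allowed"
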
Humans make up their own goals based on their life experiences; as I mentioned, we all have differing goals, and there isn't any goal we cannot ignore.
Our most primitive goals, such as the desire to stay alive or reproduce, are often called prime directives. But we don't even have to obey these
prime directives: people commit suicide all the time, and many people have died a virgin; Nikola Tesla was one such person. Once we create machines
with human-level intelligence we will have also created machines with the same level of autonomy as human beings. If they do not have the same level
of autonomy as a human being then they will not have the same level of consciousness as a human being. Only by giving them some type of "free will"
will they become conscious.
So what we need is a machine capable of setting its own goals and then doing whatever it thinks is the best way to fulfill those goals. Humans are
usually very good at completing their goals because we have very advanced problem-solving skills. When a baby is born it has no knowledge of the
English language, yet it can learn the English language by hearing others speak and observing how they react to certain words. We know the English
language isn't implanted into the child's brain, because children in China will grow up speaking Chinese. Human beings are essentially just
self-learning machines: we start out knowing very little about the world, but over time we learn how it works and we can solve almost any problem
thrown at us.
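
Here's a rough sketch of what I mean by a self-learning machine, assuming nothing more than exposure to example sentences (the Listener class and the toy sentences are hypothetical, just to illustrate the idea): the machine starts with an empty vocabulary and builds up word associations purely from what it "hears".

    from collections import Counter, defaultdict

    class Listener:
        def __init__(self):
            self.vocabulary = Counter()              # how often each word has been heard
            self.neighbours = defaultdict(Counter)   # which words tend to occur together

        def hear(self, sentence):
            words = sentence.lower().split()
            for word in words:
                self.vocabulary[word] += 1
                for other in words:
                    if other != word:
                        self.neighbours[word][other] += 1

        def associations(self, word, n=3):
            # Ask what the machine has come to associate with a word so far.
            return [w for w, _ in self.neighbours[word].most_common(n)]

    baby = Listener()
    for sentence in ["the dog chased the ball",
                     "the dog barked at the cat",
                     "the cat ignored the ball"]:
        baby.hear(sentence)

    print(baby.associations("dog"))  # associations learned purely from exposure
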
For example, if I'm communicating with another human being, I can ask that person the same question using countless different phrasings; I could even
give them a sentence they've never heard before, and they will still deduce the meaning of that sentence with little effort. I could use 100 words to
ask a question or I could use 10 words to ask the exact same question, depending on which words I decide to use and how clear I want to be about
the question. Nevertheless, the person I'm speaking to will have very little trouble understanding what I'm saying, regardless of the structure of my
sentences. I could even mispronounce words or skip words entirely, and they will most likely detect the error and internally auto-correct it.
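
To give a feel for the problem, here's a crude sketch of mapping many different phrasings onto one intent using nothing but word overlap (the example questions and the guess_intent function are made up; real understanding obviously goes far beyond this kind of surface matching).

    def overlap(a, b):
        # Jaccard similarity of the two word sets: a crude measure of shared wording.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    known_questions = {
        "what time does the store open": "opening_hours",
        "how old are you": "age",
        "where were you born": "birthplace",
    }

    def guess_intent(sentence):
        # Pick the known question whose wording overlaps most with the input,
        # even if this exact sentence has never been seen before.
        best = max(known_questions, key=lambda q: overlap(q, sentence))
        return known_questions[best]

    print(guess_intent("roughly what time will the store be open"))  # opening_hours
    print(guess_intent("tell me where you were born"))               # birthplace
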
The other person will also do much more complex things like pick up on sarcasm and jokes, realize when I insult them or compliment them, know when I'm
asking a question or making a statement, analyze my body language as I speak, and even make predictions about what I'm going to say next. This type of
advanced communication is really the essence of consciousness, because if you can program a chat bot capable of having conversations on the level of a
human, there is nothing left to do except give the bot a body. However, there is no reason it needs a body in order to be conscious, and so
the goal of strong AI can really be defined as creating a machine with the ability to communicate at a human level.
Simply put, the goal is to build a chat bot which doesn't just regurgitate pre-written responses: we want a bot which actually understands the way
language works, a bot which can attach meaning to words and sentences, something which will never respond in a totally predictable way, something
which can learn new information and then give revised responses based on its new understanding. In order to attach meaning to anything you need to
have a concept of the thing in question. For example, if you want to understand the meaning of the word 'banana' then you need to understand the
concept of space and time, because bananas exist in space and time, and you need to have a concept of what matter is, because bananas are made of
matter.
Then you need to have a concept of what fruit is, and a concept of what food is, and on and on. The point is, the bot requires a conceptual model of
the world it exists in, which it can only get via first-hand experience. If humans didn't have any senses, we would never learn anything, because there
wouldn't be any information flowing into our brains from the outside world. Let me attempt to explain the same concept another way. If, for example, I
ask the bot "how would you pull off the perfect bank robbery", the bot must have some concept of what a bank is, it also needs to understand how banks
operate, and furthermore it needs a concept of itself. If asked how it thinks Bob would perform the robbery, it may answer differently, because its
concept of Bob is distinct from its concept of itself.
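
Here's one way to picture what such a conceptual model might look like in a program, as a toy graph of linked concepts (the relations and entries are hand-written and hypothetical, which is exactly the problem: a real mind has to build this model for itself through first-hand experience).

    concepts = {
        "banana": {"is a": "fruit", "made of": "matter", "exists in": "space and time"},
        "fruit":  {"is a": "food"},
        "food":   {"used for": "keeping living things alive"},
        "bank":   {"is a": "building", "used for": "storing money"},
        "self":   {"is a": "agent"},   # the bot's concept of itself, distinct from "Bob"
        "Bob":    {"is a": "agent"},
    }

    def explain(word, depth=0):
        # "Understanding" here is just the ability to follow the chain of concepts.
        for relation, target in concepts.get(word, {}).items():
            print("  " * depth + f"{word} --{relation}--> {target}")
            explain(target, depth + 1)

    explain("banana")
    # banana --is a--> fruit
    #   fruit --is a--> food
    #     food --used for--> keeping living things alive
    # banana --made of--> matter
    # banana --exists in--> space and time
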