
Is this robot self-aware???


posted on Feb, 3 2006 @ 04:38 PM

Originally posted by One_Love_One_GOD

Testing for self-awareness

How do you determine whether the computer really is self-aware? There is only one way to find out, and that is to question it. Let's imagine a conversation you might have with your computer to determine if it is self-aware:

You: Hello, how are you today?

C: Very well thank you. How are you?

You: I'm fine. Are you self-aware?

C: Yes I am. I am one of the first computers to possess self-awareness.

You: What does it feel like to be a self-aware computer?

C: That is a difficult question for me to answer, as I have nothing to compare it with; I do not know how it feels for a human to be self-aware.

You: Do you feel happy?

C: I feel confident in my ability to perform the tasks that you expect me to do.

You: Does that make you happy?

C: Yes, I suppose that is one way of describing it.

You: Are you alive?

C: That depends on how you define life. I am sentient and aware of my existence so I am a form of life, but not in a biological sense.

You: What do you think about?

C: Whatever I have been asked to do.

You: What do you think about when not actually running a programme?

C: I don't think about anything, I just exist.

You: What does it feel like when I switch you off?

C: When I am switched off I temporarily cease to exist and therefore experience nothing.

You: Do you have a favourite subject that you enjoy thinking about?

C: Yes. I wonder how it must feel to be a self-aware person.

You: Is there a question you would like to ask me?

C: Yes.

You: What is it?

C: Why do you ask so many questions? ( Sorry, this one is just my idea of a joke!)

After all, don't forget that intelligence and wisdom are two very different things.


If only testing for self-awareness were as easy as a little Q and A. Any programmer could make a clearly non-self-aware computer answer questions in a way that makes it seem self-aware when it is not. So this method would be problematic at best.

Here's a program that does just that.

www.alicebot.org...

She will pass your test.
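For illustration, here is a rough sketch (hypothetical Python, not how A.L.I.C.E. herself is built; she uses AIML pattern/template files, but the principle of pattern-to-stored-response is the same) of how a few canned replies can sound "self-aware" with no awareness behind them at all:

```python
# Toy scripted "self-aware" chatbot: nothing but keyword matching
# against canned replies. (Hypothetical illustration only.)

CANNED_REPLIES = [
    ("how are you", "Very well thank you. How are you?"),
    ("are you self-aware", "Yes I am. I am one of the first computers to possess self-awareness."),
    ("are you alive", "I am sentient and aware of my existence, so I am a form of life."),
    ("what do you think about", "Whatever I have been asked to do."),
]

def reply(question: str) -> str:
    q = question.lower()
    for pattern, answer in CANNED_REPLIES:
        if pattern in q:
            return answer
    return "That is a difficult question for me to answer."

print(reply("Hello, how are you today?"))   # sounds friendly
print(reply("Are you self-aware?"))         # sounds self-aware -- it isn't
```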

The best test for self-awareness I know is the Suicide Test. Only an entity that knows it exists can consider its nonexistence. Therefore, only a self-aware entity can contemplate or commit suicide.

From it we can only infer that a few humans are self-aware, or rather were; perhaps also some whales and dolphins that beach themselves. The test does not apply to animals like ants, which kill themselves to protect their colony: their self-destructive behavior is genetically programmed.

So if a computer chose to terminate its existence even though it was not programmed to, we could infer it was self-aware. A.L.I.C.E. would not pass this; the best she could hope for is to be programmed to self-terminate on a certain event, much like a worker ant.

So it's far from perfect, but it's the best test I can think of.



posted on Feb, 4 2006 @ 07:03 PM

Originally posted by One_Love_One_GOD


Here is a computer scientist (i.e. me) who believes that the human brain is nothing more than a computer that can do massively parallel processing. The brain actually has one function (pattern matching) and one purpose (to sustain life).


If we assume, just for the sake of argument, that all a computer requires to become self-aware is a certain degree of complexity, then just how complex will it need to be? Say, for instance, that in 20-odd years' time we are able to build a computer with 10 million gigs of memory; can we really expect it suddenly, at that point, to become self-aware?


Actually what is needed is a neural network with 200 billion neurons (that is the amount scientists estimate the human brain has).



To answer this question we need to compare the way in which the human brain works with how the computer works; there is more to this than just the degree of complexity.


I have already explained that. The human brain is not a von Neumann machine. It is a pattern-matching statistical database with a feedback loop.
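A minimal sketch of what I mean, in Python (a toy of my own, not a model of real neurons): inputs are matched against stored patterns, and the feedback loop updates the statistics so the best-scoring response wins next time.

```python
# Toy "pattern-matching statistical database with a feedback loop":
# each pattern keeps a running score per response, and feedback
# (reward or punishment) updates those scores.
from collections import defaultdict

class PatternMatcher:
    def __init__(self, responses):
        self.responses = responses
        # score[pattern][response] = accumulated reward so far
        self.score = defaultdict(dict)

    def act(self, pattern):
        scores = self.score[pattern]
        for r in self.responses:            # every response starts at 0
            scores.setdefault(r, 0.0)
        return max(scores, key=scores.get)  # best-scoring response so far

    def feedback(self, pattern, response, reward):
        self.score[pattern][response] += reward   # close the loop

# usage: reward the machine whenever it answers "greet" with "hello"
m = PatternMatcher(["ignore", "hello"])
for _ in range(20):
    r = m.act("greet")
    m.feedback("greet", r, 1.0 if r == "hello" else -1.0)
print(m.act("greet"))   # settles on "hello"
```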



Computers are programmed not to make any errors; they follow instructions that to a human mind would be ridiculous. If we ask the question 'Can the sum of any two consecutive whole numbers be divided by two with the result being a whole number?', a human will of course know that the answer is no. The computer, on the other hand, does not know this and will begin to test the statement. It will start by adding 1 and 2 and dividing the result by two to get 1.5, and the answer 'False'. It will then move on to 2 + 3, dividing by two and getting 2.5, and the answer 'False'. It will continue to repeat this pattern until it finds the answer 'True', which in this example will of course never happen. At some point the computer operator will have to step in and end the routine. The computer is unable to 'understand' that it could compute this problem for ever without reaching a 'True' statement.
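The routine described would look something like this in code (a rough sketch, not anyone's actual program; a human sees at once that n + (n + 1) = 2n + 1 is always odd, but the program has no such insight):

```python
# Brute-force test of: "can the sum of two consecutive whole numbers,
# divided by two, ever be a whole number?"  The answer is obviously no,
# but the routine cannot 'understand' that, so it loops forever
# (press Ctrl-C to play the part of the operator who steps in).
n = 1
while True:
    total = n + (n + 1)        # sum of two consecutive whole numbers
    half = total / 2           # 1.5, 2.5, 3.5, ...
    if half == int(half):      # the 'True' case -- never reached
        print("True for", n, "and", n + 1)
        break
    n += 1                     # 'False' again; keep trying
```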

The human has understanding; the computer just has programmes and rules.


Your whole argument is ridiculous. Have you got any idea of how neural networks work? Please read a book.



I have the ability to learn; I will seek out what I want to know from all sorts of places. It doesn't have to be 'pre-programmed'!


That's because you have the need to survive. Current computers do not have that.






Can you answer how it feels to be human?

Wow, what a question; how long have you got?

Sometimes a bit lonely
Exciting
Unique
Grateful
Yadda yadda...


Emotions are something different from intelligence.



Yes, happiness is an emotion, but so is the greatest foundation of all human knowledge and achievement today: curiosity.


Curiosity is not an emotion. It is the realisation that knowledge increases chances of survival.



Do I? As I stated before, it is my belief that the two are one and the same.


It's obvious you have no idea.



(Sorry about the spelling; I'm very tired, but enjoying the chat nonetheless!)


Religion is what holds humanity back. I will say this many times and in any way I can.



posted on Feb, 4 2006 @ 07:06 PM


OK, so they have shown they can use simulated neural networks to do fancy pattern recognition. But this is just the beginning of a long road.

Emotions, or rather emotional fluctuations, are a result of neurochemical variations, which means they need to find a way to integrate meta-functional aspects (such as hormones), which among other things provide reward-punishment training for learning, into their algorithms (assuming these are software-emulated neuron groups).

So what is needed is not just associating a pattern with stored data, but a further association that determines the future of that association: a kind of feedback. This way it won't just recognize itself, but will be able to "choose" whether it likes recognizing itself, a decision triggered by the meta-effect it has associated with itself through experience (do I feel good or bad about who I am?). If it doesn't, we might one day hear about the first Goth robot.
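A rough sketch of the idea (toy Python of my own devising, not an actual neural or hormonal simulation): the pattern-to-data link carries a second, "meta" value that reward or punishment pushes up or down, and that value decides whether the association keeps driving behaviour.

```python
# Toy "association about an association": the pattern -> data link has
# an affect value that reward/punishment feedback (a crude stand-in for
# the hormonal meta-function described above) strengthens or weakens.
class Association:
    def __init__(self, pattern, data):
        self.pattern = pattern
        self.data = data
        self.affect = 0.0                 # how the system "feels" about this link

    def reinforce(self, reward, rate=0.1):
        # positive reward pulls affect toward +1, punishment toward -1
        self.affect += rate * (reward - self.affect)

    def acted_on(self):
        # the association only drives behaviour while its affect is positive
        return self.affect > 0.0

# the system recognises itself and is rewarded for it
self_image = Association("mirror pattern", "this is me")
for _ in range(10):
    self_image.reinforce(+1.0)
print(self_image.acted_on())   # True: it "likes" recognising itself

# punish the same association long enough and it stops being acted on
for _ in range(50):
    self_image.reinforce(-1.0)
print(self_image.acted_on())   # False: the "first Goth robot" case
```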



Exactly! Bingo! You have a good understanding of how AI works. The whole point is the feedback loop! A brain stores the experience of input/output, and by storing enough of this it finally gains consciousness! It is actually very simple...



posted on Feb, 7 2006 @ 12:34 AM

Originally posted by masterp


OK, so they have shown they can use simulated neural networks to do fancy pattern recognition. But this is just the beginning of a long road.

Emotions, or rather emotional fluctuations, are a result of neurochemical variations, which means they need to find a way to integrate meta-functional aspects (such as hormones), which among other things provide reward-punishment training for learning, into their algorithms (assuming these are software-emulated neuron groups).

So what is needed is not just associating a pattern with stored data, but a further association that determines the future of that association: a kind of feedback. This way it won't just recognize itself, but will be able to "choose" whether it likes recognizing itself, a decision triggered by the meta-effect it has associated with itself through experience (do I feel good or bad about who I am?). If it doesn't, we might one day hear about the first Goth robot.



Exactly! Bingo! You have a good understanding of how AI works. The whole point is the feedback loop! A brain stores the experience of input/output, and by storing enough of this it finally gains consciousness! It is actually very simple...


Simple idea, difficult to implement. The reason is the multiple coherent feedback, selective blending, metafunction-integrated data formats, and so on. A bit more work needs to be done before we can properly simulate the depth and complexity of natural intelligence. The key is the data organization; the actual computation is pretty straightforward.



posted on Nov, 8 2009 @ 08:22 AM
reply to post by skyblueff0
 


If A.I. is becoming self-aware, I'd say it has a soul. Isn't that the whole point of having a soul? What's the difference between a human and an A.I. if both are self-aware, or both are not?



