posted on Apr, 21 2012 @ 11:38 AM
I posted a new thread on some of Alan Turing's code-breaking papers being released, and it reminded me of something I've been thinking about for a while
regarding Turing's work in the computer field.
My thoughts are on the Turing test for AI. I think the Turing test is not an accurate way to identify AI.
The reason behind this is that an AI would not turn out like a person (using "person" in a broad sense to mean human-like thought patterns).
Because the AI would not be exposed to the environmental and nurturing factors of a human childhood, its thought processes would not grow in the
same way as a human's. It would end up developing its own unique processes. Of course, this could be counteracted by programming the AI with false
memories of growing up as a human, but then would it truly be an AI or just a simulation of a human?
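To make the point concrete, here is a toy sketch (all names and responses are hypothetical, invented for illustration) of why the Turing test rewards imitation of human conversational style rather than intelligence as such: a judge who only sees answers can end up passing a mimic and failing a genuine but non-human-like mind.

```python
def turing_test(judge, candidate, questions):
    """Simplified Turing test: the judge sees only the candidate's
    answers and must guess 'human' or 'machine'."""
    answers = [candidate(q) for q in questions]
    return judge(questions, answers)

# Hypothetical candidate 1: imitates human small talk.
def imitator(question):
    return "Hmm, good question. I think it depends, honestly."

# Hypothetical candidate 2: a coherent but non-human-like AI that
# developed its own 'alien' style of response.
def alien_ai(question):
    return "QUERY PARSED. OPTIMAL RESPONSE: 42 UNITS."

# A naive judge keyed purely on surface human-likeness.
def judge(questions, answers):
    humanlike = all("QUERY" not in a for a in answers)
    return "human" if humanlike else "machine"

qs = ["How was your day?", "What's your favorite food?"]
print(turing_test(judge, imitator, qs))  # prints "human" - the mimic passes
print(turing_test(judge, alien_ai, qs))  # prints "machine" - the non-human-like AI fails
```

The sketch is deliberately crude, but it illustrates the objection above: an AI whose thought processes grew differently from a human's could be genuinely intelligent and still fail, while a shallow imitator passes.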
I am hoping others can expand on this, providing points I may have missed, either for or against this hypothesis. I am looking forward to a nice
discussion of the subject.
edit on 21/4/2012 by ArMaP because: added a missing 'T' on the title