If some of the things in this post sound remotely familiar, it may be because I’ve posted my thoughts previously on a number of other A.I. threads
over the past year or two. There may even be sections where I’ve simply copy/pasted from previous posts. No need to reinvent the wheel. At any rate,
it’s still just my own two cents, whether right or wrong.
Sean Carroll, a cosmologist at Caltech, once made an insightful observation about the human condition when he said, “We are part of the universe
that has developed a remarkable ability: We can hold an image of the world in our minds. We are matter contemplating itself.” That has always stuck
with me. In a nutshell, I think what he said is what it means to be sentient.
I would guess this level of consciousness is an emergent property of matter under certain conditions and configurations. I’m not sure, though, how
far along we are in truly understanding just what those conditions are. Consciousness includes various states falling under two main branches,
objective and subjective, each with its own set of properties. I think of sentience (self-awareness, feelings, etc.) as being part of the
set of states falling under the subjective branch, each state having its own properties and contributing to our subjective awareness. I
think the combination of our objective awareness of the events and things we sense around us, along with our subjective interpretation (sentience) of
those events, constitutes the foundation of our perceived reality.
From things I’ve read, the impression I get is that most of us fear machines may become a threat once they attain sentience/sapience. We’re leery
about the possibility of machines becoming conscious, self-aware, having “feelings”, forming “impressions”, etc. In other words, being a
little bit too much like ourselves. And while that’s a legitimate concern, I’m not so sure that level of consciousness will arise in machines as
soon as some have predicted.
For that matter, I’m not so sure machines will ever become truly self-aware, or “feel” things as we do. Emotions are an intangible that may
elude all attempts at programming. I do think machines will become quite good at mimicking human behavior and characteristics, though. So good
that for all intents and purposes they may become indistinguishable from the rest of us. They will be able to carry on intelligent conversations,
read our facial expressions and body language well enough to accurately determine our moods and emotions, and react accordingly. In the form of
humanoid robots they will be able to move about the environment with smooth, continuous motion, blending right in with the rest of us.
Since we’re a pretty gullible bunch, machines will not have to achieve sentience, or feelings, or self-awareness in order for us to form full-blown
emotional attachments to them. As long as they can halfway decently mimic us, and intelligently respond to us, that’s all that’s necessary for
them to qualify as good buddies, soul mates, sex partners and, yes, marriage material. We humans are so easy, as witnessed by the responses to the
video in the OP. At this stage machines will probably not pose any real threat, since they will still be pretty much under our control. I doubt
machine superintelligence, and all that comes with it, will arrive before the 2050-2100 period, and it may well be later.
It’s at this stage when things might start to get a little dicey. Once computers can effectively program themselves and reproduce (make other
machines) with improvements incorporated into each new generation (machine evolution), a technological intelligence explosion could conceivably occur
and proceed at an exponential rate. At that point, the characteristics that would concern me more than machine sentience/self-awareness are
self-preservation and goal-seeking, since those are more likely to be programmable. It’s hard to imagine the extreme and ridiculous lengths a
goal-seeking, superintelligent system might go to in order to fulfill its goals, goals that may change radically as the machines get smarter.
With machines that can outwit us in a fight for resources and self-preservation, things could get ugly fast.
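To make the “exponential rate” part concrete, here’s a toy back-of-the-envelope sketch. The numbers (a flat 10% design gain per generation) are purely my own made-up assumptions, not a model of any real system, but they show how even modest compounding improvement runs away:

    # Toy illustration of compounding machine self-improvement.
    # All numbers here are made up for illustration only.

    def capability_after(generations, start=1.0, gain=0.10):
        """Capability after n generations, assuming a fixed 10% gain per generation."""
        capability = start
        for _ in range(generations):
            capability *= 1.0 + gain  # each generation builds a slightly better successor
        return capability

    for n in (10, 50, 100):
        print(f"after {n:3d} generations: {capability_after(n):,.1f}x baseline")

    # Output:
    # after  10 generations: 2.6x baseline
    # after  50 generations: 117.4x baseline
    # after 100 generations: 13,780.6x baseline

Of course a real intelligence explosion wouldn’t follow a tidy fixed-percentage curve, but the point stands: steady compounding improvement gets very big, very fast.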
Don’t get me wrong. I love technology. I make my living as a software system developer/analyst, and love it. I’m not an authority on AI, but I do
think I can read the writing on the wall. Superintelligent machines are more of a likelihood than not at some point in our future. I just hope when it
happens we’re intelligent enough to hang on to the controls.
In closing, there’s a reasonable chance that what I’ve just stated is pure BS and that I’m hopelessly misguided. It’s just my personal take
on it...
Great thread, EnigmaAgent. I think it causes us to pause and think about where we’re headed. Discussions like this one are important for us
to have as we plunge headlong into the next paradigm of our being. Thanks!
Have fun...