posted on Jun 28, 2015 @ 12:28 PM
I don’t know if machines will ever become sentient. In my view, sentience is not a requirement for machine intelligence. To be sentient is
to have “feelings” and “emotions”, i.e., a sense of the ethical and moral: right vs. wrong, good vs. bad, love vs. hate, and so on. It’s what we
humans call our conscience, and it’s a subjective quality. Consciousness, however, is another thing. It’s the state of being aware of one’s
internal/external environment via sensory input (information). While machines may or may not ever achieve human-like sentience, they will certainly
develop a highly tuned, hypersensitive state of consciousness. They will have a much greater awareness of their surroundings than humans do; we
humans filter out most of the events/information taking place all around us.
My guess is that within 25-50 years AGI will advance to the point where machines are as smart as, or smarter than, humans. We’ll probably interact
with them much the same way we do with other humans. These machines will not be sentient, but who cares? They will be good enough at mimicking our
emotional behavior to satisfy our creature needs. For the most part, humans are naive and easily fooled anyway. Hell, some people get attached to
pet rocks. These machines will carry on very natural conversations with us, give us good advice at times, sometimes even argue with us, and provide
a strong shoulder to cry on when needed. This already sounds better than most marriages today. Life is good. For now...
It’s around the turn of the century that I imagine things may start to get a little dicey; that’s when machine intelligence could reach a level
10,000+ times greater than ours. From there on out, all bets are off. It could be Heaven, or it could be Hell. If a superintelligent machine were to
become goal/mission oriented and possessed a strong survival instinct, it might decide to impose its “will” in order to achieve its desired goal.
Things could get sticky, and possibly out of control.
At any rate, the only reason I made this post was to state that it may be a mistake even to attempt to create an intelligence that includes sentient
properties. Emotions may be more of an obstacle/danger than anything else. To my way of thinking, an emotional, self-aware machine would pose a far
greater danger than a strictly logical one. The worst possible thing we could do is create a machine in our own image, but with the ability to
outsmart us at every turn.
Have fun!
PS: I loved the vid of the two arguing chatbots! Funny!!