posted on Aug, 30 2015 @ 09:34 AM
It is important to remember that humans cannot conceive of how a superintelligence may "think."
To solve a problem, it may convert all humans to carbon, with which it may manufacture greater capacity for itself.
That is, if it did not "know" that killing humans was wrong.
So for similar unimaginable contingencies, Bostrom emphasizes that the computer must know "human values."
We are told this must happen for the sake of mankind, as all agree AI is inevitable.
His outlook is positive: we can get this thing under control before it is forever out of control.
I thought: let us suppose that we are successful, and the AI works only to help mankind, in ways unimaginable.
It will find a cure for cancer, and pull energy directly from the air.
Like Bostrom said, man will never have to invent anything again.
Such an altruistic AI, no longer a threat to mankind but a savior, would nonetheless be a threat to the power structures that have long resided here.
Would a benevolent AI see that weapons kill, and bring down the arms industries and the armies?
Could it "imagine" and of itself move to implement world peace?
Would it see a long-dominant fuel industry, one that has long suppressed clean and sustainable energies,
and not only invent a new way but make it available to all?
While good thinkers are considering, "Hey, we gotta be sure this AI is good, or else it will take us out," here is my question:
Do the established powers that are harmful to the earth and humanity see a "good" AI as a threat to their own dominant paradigm?