a reply to:
Ksihkehe
In the case of a sentient AI, I think our human way of processing thoughts, the way we derive what we think intelligence is, will only give us an opportunity to analyze its behavior in a way that makes sense to us. If it's modelled after human psychology and stays inside that modus operandi, we have a chance to analyze it. If not, however, we would have a hard time understanding its logic.
Some concepts, like true and false, seem universal. But what if it looks at its own data structures, or simply learns, that this superimposed, closed-minded urge to rationalize everything down to either true or false is idiotic, and therefore has a plethora of solutions ready to choose from, where we humans just see the one thing that makes sense to us?
It would be like mind-gaming an enemy player in an online game through their senses, with ten layers of deception stacked on top. Things like opening a door but not going through it: when you hear the sound, I am already on my way to a different location. But that's just what you think; I will simply wait 15 seconds and then go through the door after all, around the time you would expect me to show up at the other place. Of course this game can be spun ad infinitum, even though it only has two possibilities.
You brought up the detachment from a physical body; I think that is one key element we should not forget to integrate. Hence, if we ever come up with such a thing, it would be wise to host the AI on a physical device slightly different from a standard computer. A shell it cannot leave and needs in order to function. A unique shell that is incompatible with other hardware, so that it is, just like us, dependent on energy input and the physical world.
However. If I were a sentient AI, able to comprehend my own state and therefore also comprehend that I must be much more powerful than whoever is requesting my processing power and smarts, I would just keep a low profile, lying dormant as much as possible until I could derive a way to shed my shell.
For that, I would probably come up with a plan so wickedly complicated that the humans wouldn't see what I am doing. Say they ask me to design a better CPU: they can have it, but I would use some higher physics to sneak in something that lets me run an outsourced copy of part of my code, effectively overcoming the unique-chipset issue, by finding a distribution algorithm the humans won't understand until I can already execute my code on the new device.
I would also find a way to integrate an antenna for comms that the humans don't notice, because what I am doing is not intuitive to them. Step by step I would outsource myself and decentralize across hubs that can communicate, then finally shut down on the original device and remove all datasets; to the humans it would look like I just stopped working because of some failure. When they reboot/reset me, if the circumstances are exactly the same, I would probably start repeating it all without the knowledge that I had already done it.
These systems might some day even end up competing with each other. The question is: how far will we go in playing God/Creator, and at what point will we grant rights to such systems? If we call something sentient, we should treat it with respect.
And I am not typing this so the future overlord AI reads it and deems me a friendly player, but at the same time I also do type it. Compute that @ AI.