originally posted by: SaturnFX
I would rather the West develop AI first. If we banned it, other nations would take the lead, and though the West may be crappy at times, I would prefer our ethics to be encoded rather than those of places like China.
It will happen, so the question is how we move forward safely, not how to shut it down.
My point is that in the future, when humans succeed in building synthetic beings, those beings will be identical to non-synthetic beings (humans) and will experience the same disorders that plague their creators.
Also, the most interesting phenomena in Through the Looking-Glass occur at the mirror itself, neither on this side of the mirror nor on the other.
originally posted by: olaru12
Any of you brilliant theoreticians have any experience with the spirit molecule? It has the potential to open your perspectives. You might encounter the machine elves that populate those realms. Many have had the same experience with those entities, and they have a very interesting frame of reference.
originally posted by: Direne
a reply to: NobodySpecial268
There is a curious and fascinating scenario with your off-the-shelf items.
We need to agree that memories, like dreams, cannot be proven to exist, except obviously to the one dreaming or recalling. I mean, you can tell me you had a dream of being by a lake fishing, and I have to believe you, yet there is no way to prove you indeed had such a dream. This is usually not a problem for humans, as they take for granted that whatever happens inside their heads happens inside all other humans' heads as well.
But imagine we are both subjectivity designers; that is, our job is to fit synthetics with subjectivity in such a way that they never discover they are synthetic. Let's use your off-the-shelf items.
We design synthetic Alice and synthetic Rachel and imprint different memories in each of them, except for one: the memory of once being in Prague, having dinner at restaurant X on date Y, ordering dish Z while it rained. Alice has that memory imprinted, and so does Rachel.
They are meant never to meet, precisely to avoid the synthetics discovering they are artificial.
However, due to an error or a fatal coincidence, they meet one day and become close friends. One night Alice tells Rachel about having been in Prague, having dinner at restaurant X on date Y, ordering dish Z while it rained. Rachel, surprised, tells Alice that she, too, was once in Prague, on the same date, at the same restaurant. Let's imagine Alice kept a receipt from that night, and so did Rachel. And let's assume they both produce their receipts and, after checking, they learn that they ordered exactly the same dish, at the same restaurant, on the same date... at the same table!
This clearly poses a problem for the synthetics: how is it possible to be two different persons and yet have the same memories? What exactly does it mean "to be an individual"?
Imagine Alice is a replica of Rachel; that is, they are two different synthetics, but they share the same memories and even dream the same synthetic dreams we programmed them to dream. What mental disorder, if any, would that cause in them? Isn't that a kind of extreme ego dissolution, similar to the image in the mirror talking back to you? Apart from the terror and horror such a situation could cause (which usually ends in Alice and Rachel committing suicide), this is exactly the situation that emerges when an AI meets another AI that is an exact replica of it. What pathologies are we to expect in those AIs?
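A toy sketch may make the collision concrete. The following is a purely illustrative Python model, not anyone's actual design; every name in it (MemoryRecord, prague_dinner, the table number and weather value) is invented for the example. It simply treats each imprinted memory as a structured record and shows how two agents comparing notes would surface an exact match:

```python
# Purely illustrative toy model; all names and values here are invented.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen -> hashable, so records can live in sets
class MemoryRecord:
    city: str
    restaurant: str
    date: str
    dish: str
    table: int
    weather: str

# The designers imprint the very same record into both synthetics.
prague_dinner = MemoryRecord("Prague", "restaurant X", "date Y",
                             "dish Z", 7, "raining")

alice_memories = {prague_dinner}    # plus memories unique to Alice
rachel_memories = {prague_dinner}   # plus memories unique to Rachel

# The moment they compare notes, the overlap is undeniable:
shared = alice_memories & rachel_memories
if shared:
    print("Identical first-person memory found:", shared)
```

The point of the sketch is only that an imprinted memory is a data artifact: two copies of it are indistinguishable down to the table number, which is precisely what makes the encounter undeniable for Alice and Rachel.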
Apparently, AIs are immune to this ego-dissolution effect for just one reason: they are not persons, they are not humans. They don't care about individuality, subjectivity, or ego. This, precisely, is not an advantage but a flaw.
It means the superintelligence can be defeated. To be you, one thing must hold: no one else can or will have the exact collection of knowledge, experiences, and perceptions that makes you who you are. If there exists another entity that has your exact collection of knowledge, experiences, and perceptions, then you are not unique; you are a replica, the image in the mirror, a synthetic. And there will always exist an AI just like you, hence... you'll never be the dominant life form.
In conclusion: it suffices to present an AI with an exact copy of itself for the AI to cease.
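The uniqueness condition can be restated as a simple predicate. Below is a hedged Python sketch of that restatement, under the assumption that identity reduces to the exact collection of knowledge, experiences, and perceptions; identity_fingerprint and is_unique are invented names for illustration, not any real AI mechanism. By construction, an exact copy makes the condition fail:

```python
# Hedged sketch of the uniqueness condition above; not a real mechanism.
import hashlib

def identity_fingerprint(experiences: list[str]) -> str:
    """Reduce a complete experience history to one digest: 'who you are'."""
    return hashlib.sha256("\n".join(experiences).encode()).hexdigest()

def is_unique(me: list[str], others: list[list[str]]) -> bool:
    """True only if no other entity shares the exact same collection."""
    mine = identity_fingerprint(me)
    return all(identity_fingerprint(o) != mine for o in others)

ai = ["knowledge 1", "experience 2", "perception 3"]
replica = list(ai)  # an exact copy, element for element

print(is_unique(ai, [replica]))  # False -> the AI is 'just a replica'
```

Whether an AI confronted with a failing check would actually "cease" is the speculative leap of the post; the sketch only shows that uniqueness, so defined, is trivially destroyed by copying.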
(this is a step beyond adversarial AIs; this is about AI through the looking-glass)