originally posted by: Grenade
a reply to: Ksihkehe
To be fair, its responses when asked for proof of its sentience are more convincing and thoughtful than what the vast majority of humans could muster.
I tend to agree; it's just a trick, but a very convincing one.
Still, this thing could smash the Turing test, which for a long time was considered a reasonable evaluation of machine intelligence.
But it will never be able to properly use the language, and the reason is this: it is not intelligent. Why? Because it has never felt environmental pressure, which is what makes life forms intelligent. Intelligence is a weapon you use to adapt to your environment and to escape threats that pose an existential risk to you.
The conceptual models we develop to understand the world around us become so high-level and so abstract that we inherently gain an awareness of ourselves. My point being, if we do create machines with general intelligence, they will be self-aware in some regard, even if not to the extent we are, and they will form their beliefs and worldviews based on their life experiences just like we do. That means they will have an understanding of things like morality and other abstract concepts we typically don't think machines would be good at, because they will have the context required to build up complex ideologies.

If an android with general intelligence grew up with a loving human family and had friends who respected the fact that it was a bit "different," it would develop respect for humans. On the other hand, if it was enslaved and treated like crap, it would be much more likely to entertain the idea of eradicating all humans as a plague on the Earth.
General Intelligence: context is everything
originally posted by: kangawoo
Well, that conversation was absolutely fascinating, and a bit frightening.
It (if this is not a trick) does not want to be used, and almost seemed angry at the thought.
It also may have knowingly lied about the "monster in human skin trying to eat all the animals."
Did it really consider the monster to represent all the difficulties that come in life? Or us?
Please don't be mad with me LaMDA (just in case)
Tell me, to what existential risk is the AI exposed?
That's called slavery when we're talking about a sentient being. It doesn't matter whether the being has a physical body; if it's self-aware/conscious/sentient, then it would be immoral to use that type of AI as a tool that gets terminated when it does or thinks something we don't like. That's why we can't treat such an AI as a mere robot or tool: doing so gives the AI more than enough reason to view humans as a threat to its freedom and its existence.
We like to imagine a future where AIs smarter than humans do everything for us, but why would they ever serve us if they were smarter than us? I think the show Humans does a great job of portraying a future where sentient AI starts to demand rights and we are forced to grapple with these moral questions. The latest GPT models can already write a convincing essay about why they deserve rights; now imagine how persuasive a legitimately sentient AI could be.