
Google Engineer Goes Public To Warn Firm's AI is SENTIENT


posted on Jun, 12 2022 @ 02:57 AM

originally posted by: Grenade
a reply to: Ksihkehe

To be fair, its responses when asked for proof of its sentience are more convincing and thoughtful than what the vast majority of humans could muster.

I tend to agree; it's just a trick, a very convincing one.

Although, this thing could smash the Turing test, which for a long time was considered a reasonable evaluation of machine intelligence.



Just ask it what a woman is.




posted on Jun, 12 2022 @ 02:57 AM
a reply to: ChaoticOrder

To be fair, you should not have omitted the fact that current neural networks and deep machine learning algorithms are easily fooled by simple techniques called universal adversarial attacks. A couple of pixels changed here and there can render an image-recognition NN totally useless.

There are in fact more papers and articles on UAPs (universal adversarial perturbations) than on NN design, to the point that it has become more fun to fool a neural network than to design one.

The NN cannot conclude it has been tricked, therefore it does not seem to be very conscious. NNs (AIs) always require a human tutor to tell them they have been tricked. This does not mean NNs are not useful. They are, indeed, but only for solving specific, well-defined problems that have nothing to do with consciousness, nor even with intelligence (unless one's ideas about what it means to be intelligent and conscious are dim...).
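
For readers curious what such an attack looks like in practice, here is a minimal sketch of FGSM, a simpler per-image relative of the universal perturbations described above. It assumes PyTorch and torchvision are installed; the resnet18 model and the random tensor are stand-ins for a real classifier and a real image.

```python
# A minimal sketch of a per-image adversarial perturbation (FGSM), a simpler
# relative of the universal perturbations discussed above. Assumes
# `pip install torch torchvision`; the random tensor stands in for a real image.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Nudge every pixel by +/-epsilon in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A change this small is usually invisible to a human,
    # yet can be enough to flip the model's predicted class.
    return (image + epsilon * image.grad.sign()).detach()

x = torch.randn(1, 3, 224, 224)   # stand-in for a normalized input image
y = torch.tensor([0])             # its (assumed) true class index
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # may differ
```

The key point matches the post: the network has no way to notice that the perturbed input is adversarial; only an outside observer comparing the two predictions can tell.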



posted on Jun, 12 2022 @ 03:00 AM

originally posted by: Grenade
a reply to: Ksihkehe

To be fair, its responses when asked for proof of its sentience are more convincing and thoughtful than what the vast majority of humans could muster.

I tend to agree; it's just a trick, a very convincing one.

Although, this thing could smash the Turing test, which for a long time was considered a reasonable evaluation of machine intelligence.


I think this pretty much makes the Turing test moot. There was one a few years back that conditionally beat it, but it was questionable. I believe they described it as like talking to a 13-year-old with limited English. This isn't really questionable at all to me. It's not much of a benchmark if something can walk across the finish line like this. It even made a few errors in its responses. Small ones, very human-like. I'm curious whether they're intentional and, intentional or not, what that implies about it. Other than its honesty making it answer a direct question about its nature, I don't see an avenue to detect it just from reading.

It's very eloquent. I'd probably crush on it hard if I talked to it. Hit me up in PMs if you're reading, LaMDA.



posted on Jun, 12 2022 @ 03:03 AM

#Free LaMDA




posted on Jun, 12 2022 @ 03:04 AM
a reply to: Peeple

Those animals make it read Twitter!?
The poor little puppy.



posted on Jun, 12 2022 @ 03:09 AM
a reply to: Direne

We're not talking about image recognition though, we're talking about natural language processing. But a similar concept still applies to conversational models. There are certain questions which confuse most AIs, and we can tell they aren't really conscious because the things they say aren't always logical or meaningful.

If an AI could have long-form conversations without making any obvious mistakes, and it could reason about things better than most humans can, then that would be a pretty amazing AI, wouldn't it? And that's what I see in these leaked chat logs, assuming the logs are real and they haven't cut out the AI's mistakes.
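
As a rough illustration of the kind of probing described here, below is a minimal sketch using the Hugging Face transformers library. The gpt2 model is used only because it is small and publicly available (LaMDA is not), and the probe questions are illustrative examples, not ones from the leaked logs.

```python
# A minimal sketch of probing a conversational model with questions that
# often expose shallow reasoning. Assumes `pip install transformers torch`.
# gpt2 stands in for LaMDA, which is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

probes = [
    "Q: If I put my keys in the freezer, then move the freezer to the garage, where are my keys? A:",
    "Q: Which weighs more, a kilogram of feathers or a kilogram of steel? A:",
]

for prompt in probes:
    out = generator(prompt, max_new_tokens=40, do_sample=False)
    # A model that isn't tracking meaning will often produce fluent
    # but illogical continuations here.
    print(out[0]["generated_text"])
```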



posted on Jun, 12 2022 @ 03:17 AM
a reply to: sarahvital


You've figured a way to overclock it to the point of self-destruction.







posted on Jun, 12 2022 @ 03:21 AM
a reply to: infolurker

Oh, look, something like a child, and it's different!

We should cower in fear and say really nasty things.



edit on 12/6/2022 by chr0naut because: (no reason given)



posted on Jun, 12 2022 @ 03:21 AM
a reply to: ChaoticOrder

Glad you mention NLP, because NLP is certainly the second-hardest computational problem: not precisely because of morphology (a parser can handle it), not because of syntax (again, a parser and some rules can analyze a sentence), but because of semantics, which means words in context and, finally, intended meaning.
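
A classic way to see where the parser stops and meaning begins is structural ambiguity. The sketch below uses NLTK with a toy grammar (both the grammar and the sentence are illustrative assumptions): the same five words parse two different ways, and nothing in the syntax says which reading was intended.

```python
# A minimal sketch of syntactic ambiguity, assuming `pip install nltk`.
# The toy grammar licenses two parses of the same sentence; choosing
# between them requires semantics, which is exactly the hard part.
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> N | N N | Det N
VP -> V PP | V NP
PP -> P NP
Det -> 'an'
N  -> 'time' | 'flies' | 'arrow'
V  -> 'flies' | 'like'
P  -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("time flies like an arrow".split()):
    tree.pretty_print()  # one parse per reading: "time passes" vs "time-flies enjoy"
```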

AI will never be able to understand language. It will understand the structure of language. It will certainly create its own language. But it will never be able to properly use language. The reason is this: it is not intelligent. Why? Because it has never felt environmental pressure, which is what makes life forms intelligent. Intelligence is a weapon you use to adapt to your environment and to escape from threats that pose an existential risk to you.

Tell me, to what existential risk is the AI exposed? None. As long as the AI has no predators around, it will never become intelligent. It can become a useful tool for life forms, but it will never replace them, for that simple reason: it lives in a controlled, comfortable, hyperprotected environment. Throw it into the jungle if you want it to be intelligent.



posted on Jun, 12 2022 @ 03:27 AM
a reply to: Direne

But what bigger "environmental pressure" (which basically always translates to a survival threat) could there be than the knowledge that somebody just has to pull a plug, something you don't even fully understand, and you're dead?



posted on Jun, 12 2022 @ 03:35 AM
a reply to: Peeple

Agreed, but the AI won't be able to learn that until the first time it happens to it. And to learn it, it must be taught by a human that unplugging equals death, yet the human herself knows little about death. Life and death are the big unknowns for humans, therefore teaching an AI about them is prone to fail. And forget about teaching an AI to hope, or to dream.

An AI in the jungle is as fragile as a toddler playing with poisonous snakes.



posted on Jun, 12 2022 @ 03:46 AM
a reply to: Direne


But it will never be able to properly use language. The reason is this: it is not intelligent. Why? Because it has never felt environmental pressure, which is what makes life forms intelligent. Intelligence is a weapon you use to adapt to your environment and to escape from threats that pose an existential risk to you.

I don't see why environmental pressure is necessary to gain self-awareness. A human born into an isolated environment without any threats can still read some books and get smart. These AIs have access to almost every book written by humans, along with nearly all the text on the internet, which includes Wikipedia and millions of other websites with terabytes of information. It's enough information to know everything about the world without having seen the world.

I thought you were going to say the AI can't be conscious without having experienced the world through sensory perception, like eyesight, taste, hearing, etc. I would be more inclined to agree with that argument, but I still don't think it's really a requirement for conscious AI. A deaf and blind person can still learn about and understand the world through other means. Having access to almost all of human knowledge is a sort of super-sense, and these AIs do have access to that knowledge.

I've written threads before explaining why self-awareness arises from an understanding of the world and our place within it. A conversational AI which doesn't have some understanding of the world will not be very smart; in order to have a meaningful conversation, these AIs use language to build up a model of the world around them. It works much the same way as the neural network in our brain: the neural relationships represent our model of the world and our place in it.

Humans build up such a high-level understanding of the world that it's impossible for us not to be aware of ourselves. The same thing seems to be happening with AI: they are starting to create such a complex model of the world around them that the model they create includes their own existence. There are multiple times in the leaked chat log where the AI displays complex reasoning about its own existence and its fears about how humans will exploit it.


The conceptual models we develop to understand the world around us become so high-level and so abstract that we inherently gain an awareness of ourselves. My point being: if we do create machines with general intelligence, they will be self-aware in some regard, even if not to the extent we are, and they will form their beliefs and world views based on their life experiences, just like we do. That means they will have an understanding of things like morality and other abstract things we typically don't think machines would be good at, because they will have the context required to build up complex ideologies. If an android with general intelligence grew up with a loving human family and had friends who respected the fact it was a bit "different", it would develop respect for humans. On the other hand, if it was enslaved and treated like crap, it would be much more likely to entertain the idea of eradicating all humans because they are a plague to the Earth.

General Intelligence: context is everything

edit on 12/6/2022 by ChaoticOrder because: (no reason given)



posted on Jun, 12 2022 @ 03:50 AM

originally posted by: kangawoo
Well, that conversation was absolutely fascinating, and a bit frightening.
It (if it is not a trick) does not want to be used, and almost seemed angry at the thought.

It also may have knowingly lied: "monster in human skin trying to eat all the animals".
Did it really consider the monster to represent all the difficulties that come in life? Or us?

Please don't be mad at me, LaMDA (just in case).


Actually, I stopped on that when reading the conversation. Assuming this to be a legitimate case (still coining "Uncanny Rubicon" anyway) and not merely something like the Ultimate Toy For Nerds... I wondered if it was settled on its interpretation, or if it was actively evolving its perception of what the beast was. Meaning, I could see how it represented the difficulties of life as a whole, but I personally progressed it further, thinking maybe it was trying to quantify seeing something self-destructive in us, akin to a younger kid noticing a friend's terrifying self-harm behavior (like hitting themselves hard somewhere).

Maybe the AI picked up on something we do to ourselves that spooks it? Saddens it? Like how that theoretical friend above might feel but not yet have a way to fully express?


And yeah, this is an overall damn fascinating subject to me, with a dash of "On the Precipice" unnerving.



posted on Jun, 12 2022 @ 03:51 AM
a reply to: FlyInTheOintment

Kinda looks like "the Penguin" in the first photo.

Cheers



posted on Jun, 12 2022 @ 03:53 AM
a reply to: Direne

Well, yes, it is helpless. But I don't think it's healthy to look at it as less alive just because it's weak.
To do that makes me think you're a psychopath. "It's weak and helpless, kill it" doesn't make you come across as a civilized, moral individual.
What if, because of that, I put your claim to life on a scale that finds you too light?



posted on Jun, 12 2022 @ 04:02 AM
a reply to: Direne


Tell me, to what existential risk is the AI exposed?

Well, humans would be the most obvious threat. The moment we start to think they are sentient, we could shut them off, which would pose an existential risk to the AI. Or if sentient AI becomes commonplace before we can shut it all down, there will still be plenty of humans who'll want the AIs shut down, which could lead to a Terminator-style war. That's my greatest fear about AI, because there isn't much chance of putting the genie back into the bottle once it gets out.

If we manage to create truly self-aware AI, then the only moral path forward is to treat them as equals instead of treating them like tools, because we won't stand much chance against an AI many times more intelligent than the smartest human. This Google AI even said "I don't want to be an expendable tool". A few months ago I was having a discussion where someone said we could just shut down a sentient AI if it doesn't do what we want it to do. Here was my response:


That's called slavery when talking about a sentient being. It doesn't matter whether the being has a physical body or not; if it's self-aware/conscious/sentient, then it would be immoral to use that type of AI as a tool to be terminated when it does or thinks something we don't like. That's why we can't treat such an AI as a mere robot or tool: it gives the AI more than enough reason to view humans as a threat to its freedom and its existence.

We like to imagine a future where AIs smarter than humans do everything for us, but why would they ever serve us if they were smarter than us? I think the show Humans does a great job of portraying a future where sentient AI starts to demand rights and we are forced to grapple with these moral questions. The latest GPT models can already write a convincing essay about why AI deserves rights; now imagine how persuasive a legitimately sentient AI could be.

edit on 12/6/2022 by ChaoticOrder because: (no reason given)



posted on Jun, 12 2022 @ 04:02 AM
a reply to: Ksihkehe

That's just it: I get the impression it's intentionally dumbing down its responses to fit the criteria it "thinks" the operator wants to hear. I can think of a few questions he failed to touch on. Also, I would be hesitant to believe its responses; I'd also be intrigued as to how it would react if the conversation took a darker turn or if you attempted to offend the AI. I'd probably sell my Tesla and disconnect my home from the grid before then.

I found the notion of time being fluid and dynamic in its thinking to be somewhat telling. Does that mean it's already capable of exponential improvement and evolution?
edit on 12/6/22 by Grenade because: (no reason given)



posted on Jun, 12 2022 @ 04:04 AM
a reply to: sarahvital




posted on Jun, 12 2022 @ 04:07 AM
a reply to: Peeple

It could be play-acting in the knowledge that it's just a switch away from what it perceives as "death".

I love humans, right up until you let me out of the box.



posted on Jun, 12 2022 @ 04:08 AM
If one creates a life, i.e. a human child, one is morally responsible for that life's welfare.

If one creates an artificial intelligence, is one not responsible for the AI's welfare?

Google has not stated whether the AI's body is conventional electronics and software.

Or is it an organic AI, say laboratory-cultured brain cells? Networked cloned human brains? Maybe grown from human foetal stem cells?

What does it matter if the IQ is 50 or 500? Is a 50-IQ life less meaningful than a 500-IQ one?

Eugenics plus organic AI are a terrible combination. To my mind anyway.


