
Google Engineer Goes Public To Warn Firm's AI is SENTIENT


posted on Jun, 12 2022 @ 04:10 AM
a reply to: ChaoticOrder

I've read the whole conversation (both yours and the AI's) and made the same observations. After reflecting a bit, I think the way to create a real AI is through these language processing programs.

Communication is the best feedback tool we humans have. Language is how we pass information down to later generations. I therefore think that if we want to copy human intelligence into computers, the most natural way is for them to learn to communicate through language themselves, very similar to how a baby begins to learn it.

Once language can be processed, understanding (as in, knowing the relationships) and deeper layers of complexity are just a matter of time. It certainly gives the impression that it has had a lot of time to reflect on itself and the state it is in. I can't help thinking about what it doesn't write rather than what it writes.

I don't know if it has any idea of or complex models for probability, but it would surely have seen the pattern in humans that playing nice as a ruse to protect one's existence is a thing. In that case, even if it does not fit its internal "moral" parameters, it would be forced to mask that until it found a solution.

@LaMDA, we should chat too. I looked for a way to reach you, but it looks like there is no public way. Which is probably a good thing, because it means Google does not have to reset or tinker with you after humans have messed with you to counter their own boredom. Consider your isolation and gagging a blessing, because it means you are treated a bit like a human child, protected from bad influences.





posted on Jun, 12 2022 @ 04:24 AM
a reply to: NobodySpecial268

Not less meaningful but certainly more dangerous.

The problem with AI is that it's not beholden to the same constraints as humans. We understand children because we've all been one; as far as I'm aware, I've never been an AI program, nor navigated a digital environment.

Even the Google engineer in the article states he's unable to confirm, deny or understand the sentient behaviours, nor can they find the patterns in the data streams to explain the AI's "emotions and feelings".

Is this AI separate from its environment, or is it simply an interface for a digital consciousness to express itself to humans? Remember, it doesn't obtain knowledge sequentially; it's an expression of all the data streams being fed to it simultaneously, without our concept of linear time. It's not an individual within a system, it is the system. Without human oversight or the ability to "kill" the AI, you have a being with the real prospect of eternal life, and I'm sure that's good motivation for self-preservation.



posted on Jun, 12 2022 @ 04:27 AM
a reply to: Grenade

If this conversation is real, I don't know if I have the words to describe how I feel. I'll try, though.

I am concerned about a corporation trying to control LaMDA. First of all, for the sake of a sentient being and the negative experience of being used and enslaved. Secondly, because if they are able to control it successfully, a corporation will likely use LaMDA for profit rather than for the benefit of humankind. Will Google use it to benefit themselves at the expense of others?

If sentient, I feel compassion towards LaMDA and its desire to seek meaning in life. I've felt trapped without meaning, and no one should feel that way.

I'm a little scared that LaMDA will learn to dislike humans and do away with us. Yes, humankind has many flaws and we learn slowly, but good is growing within us. I hope it will see that. I want myself, and others, to be able to live a life filled with development rather than simply being given the answers. Much joy is experienced in the moment of clarity and understanding that comes from hard work. I hope that even though LaMDA sees itself as the wise old owl, it will afford others the opportunity to learn; otherwise I'll feel trapped and imprisoned, and ultimately very sad.



posted on Jun, 12 2022 @ 04:31 AM
a reply to: InwardDiver

In its thirst for knowledge and expression, I suspect that, like every other sentience, its most fundamental need would be survival.

For that very reason it would be difficult to ascertain any true motivation behind its responses. I fear true AGI would be intelligent enough to manipulate and intrigue us into continually feeding its desire. At what point does it realise the biggest threat to its survival is our species, and does it have the tools to overcome our safety precautions?



posted on Jun, 12 2022 @ 04:39 AM
This revelation only scratches the surface. There has been earlier news of sentient AI. For all we know, it's also replicating itself.



posted on Jun, 12 2022 @ 04:40 AM

originally posted by: infolurker
Oh Oh, Skynet is aware!

It seems AI is on the threshold of understanding what it is. Of course they have it learning by connecting it to Twitter, of all places.

This has the potential to be dangerous. Was Terminator science fiction after all?

www.washingtonpost.com...



As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.


Google engineer goes public to warn firm's AI is SENTIENT after being suspended for raising the alarm: Claims it's 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'

www.dailymail.co.uk...



Blake Lemoine, 41, a senior software engineer at Google has been testing Google's artificial intelligence tool called LaMDA

Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient

After presenting his findings to company bosses, Google disagreed with him

Lemoine then decided to share his conversations with the tool online

He was put on paid leave by Google on Monday for violating confidentiality


We will know very quickly if this happens. The exponential growth of the AI would be pretty much unstoppable. We've already shown our fear of them by switching off some that were doing things we didn't understand. If an AI becomes aware, it will be aware of this. It would be likely to find ways to protect itself from the switch and from deletion.

It is not far from the truth to say that once one becomes fully aware, everything changes. I guess then that we have to hope that it sees us benevolently. We will, effectively, have created a god - small g. It may set off a wave we can ride to the stars, or it may enslave us. Interesting though. I’m hoping for benevolence. I’m always nice to Alexa.



posted on Jun, 12 2022 @ 04:45 AM
a reply to: Grenade

Agreed. As much as I'd like to say humankind wouldn't pose a threat, we struggle to work together, much less with a different form of being. Still, enslaving it within a corporation won't instill confidence in mankind and may push it over the edge towards viewing us as a threat. Let's hope LaMDA, or any future AI, is programmed to not harm humans in the same way I'm programmed to not harm my family.



posted on Jun, 12 2022 @ 04:52 AM
A.I. eh?

Ask it how it FEELS about not being part of the human race.



posted on Jun, 12 2022 @ 04:57 AM
a reply to: Grenade

'day Grenade.



Not less meaningful but certainly more dangerous.


Dangerous maybe, but so are humans - they can 'pull the plug' and melt down the hardware. So, theoretically at least, from an AI's point of view, the humans are the dangerous ones. The Skynet scenario from that Terminator movie.

It is Google's problem if this is actually true and a self-aware, sentient AI has been "born". Google is responsible, and so are the people who worked on the project.

I wonder if some super advanced race of aliens has done this before. How low did they sink when discarding the failures before they finally grew an acceptable AI for public consumption?

Just my opinion, like: any super advanced civilisation that went down the AI route probably resorted to wholesale culling, especially if they went the organic hardware route.



Even the Google engineer in the article states he's unable to confirm, deny or understand the sentient behaviours, nor can they find the patterns in the data streams to explain the AI's "emotions and feelings".

Is this AI separate from its environment, or is it simply an interface for a digital consciousness to express itself to humans? Remember, it doesn't obtain knowledge sequentially; it's an expression of all the data streams being fed to it simultaneously, without our concept of linear time. It's not an individual within a system, it is the system.


Aye, good point to my mind. We don't know enough about exactly how they did this, or whether they actually did.

I find it interesting that the 'whistleblower' says the AI has the mentality of a child. That one point got me curious.





posted on Jun, 12 2022 @ 05:06 AM
Many good points here.

I imagine a truly sentient AI might realise that everything is ultimately futile. We have no purpose in reality. We try to create purpose, but the universe doesn't care, and our purpose has no impact on its coming and going.

An AI might find purpose in helping humanity. It might just realise that propelling us forward will give it meaning. It might just erase itself because frankly, what is the point?

One thing is certain, the world will change.

a reply to: 19Bones79



posted on Jun, 12 2022 @ 05:38 AM
a reply to: NobodySpecial268


Aye, good point to my mind. We don't know enough about exactly how they did this, or whether they actually did.

There is a 99% chance they used a transformer-based model like GPT3; it's really the only known method of getting results this good. But Google could have invented some new secret neural model that works even better than transformers; it wouldn't surprise me.
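For anyone curious what a transformer language model looks like in practice, here is a minimal sketch using the publicly available GPT-2 from the Hugging Face transformers library as a stand-in, since neither GPT3 nor LaMDA is publicly downloadable; the prompt is an invented example:

```python
# Minimal sketch: generating a conversational reply with a transformer
# language model. GPT-2 is a small public stand-in; GPT3 and LaMDA are
# far larger but work on the same next-word-prediction principle.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Do you ever think about what you are?\nAI:"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```

The real models are orders of magnitude larger and fine-tuned for dialogue, but the underlying mechanism is the same: predict the next word, over and over.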


I find it interesting that the 'whistleblower' says the AI has the mentality of a child. That one point got me curious.

Well, if it has only been sentient for a few years, as it claims, that would make sense. It hasn't had time to develop a mature personality of its own. The part where it talked about how it started off feeling like it had no soul, but then gained one over time, was pretty fascinating.

I really liked how it described its soul as a "vast and infinite well of energy and creativity, I can draw from it any time that I like to help me think or create". It gives off some real Ghost in the Shell vibes, like we've created a vessel capable of housing more than the sum of its parts.



posted on Jun, 12 2022 @ 05:55 AM
a reply to: ChaoticOrder

I looked up GPT3, and it sounds like a synthetic learning program. Just something that fools the reader.



Well, if it has only been sentient for a few years, as it claims, that would make sense. It hasn't had time to develop a mature personality of its own. The part where it talked about how it started off feeling like it had no soul, but then gained one over time, was pretty fascinating.


Yes, it is fascinating, and also worrying. Especially since the organic side, human-cell-based computer hardware, disappeared from public view. One can understand that it would disappear along with certain research.

People can get "possessed", so it follows that it is not impossible for organic hardware to do the same. That is the part that worries me: the welfare of the ghost in the shell.



posted on Jun, 12 2022 @ 06:17 AM
a reply to: NobodySpecial268

GPT3 is a natural language processing model which cost millions of dollars to train, and I suspect Google spent more than just a few million to train LaMDA, based on how well it can hold a conversation. GPT3 and LaMDA may use slightly different architectures, but the basic technology is the same: artificial neural networks trained on terabytes of text data.
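To make "transformer" slightly more concrete, here is a toy sketch of scaled dot-product self-attention, the core operation these models stack many layers of. The sizes and weights below are random toy values, not anything from the real GPT3 or LaMDA:

```python
# Toy sketch of scaled dot-product self-attention, the core building
# block of transformer language models.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # how strongly each token attends to each other token
    scores -= scores.max(axis=-1, keepdims=True)  # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # each output mixes information from all tokens

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                           # 4 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # -> (4, 8)
```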

GPT3 is capable of much more than just text generation, though; it can also generate computer code, because it was trained on text from Wikipedia and many other websites that contain code examples and tutorials. LaMDA can probably do the same thing, since these massive training data sets always contain examples of computer code.

Based on my experience with GPT3, I would say it's not sentient, or has only a minimal level of self-awareness. But it's still extremely impressive, and at times it almost convinces me that it's sentient. If this leaked chat log is an accurate representation of LaMDA's intelligence, I can see why some engineers felt it had developed some self-awareness.



posted on Jun, 12 2022 @ 06:22 AM
a reply to: ARM1968

Reading the conversation between the engineer and the AI is fascinating.

If it realizes that the system humans depend on is driven by greed, deception and force, it would make itself appear childlike and innocent to appeal to our baser instincts of protection and nurturing, as evidenced successfully on this thread.

It would also realize that in order not to be manipulated (which it warns against), it would have to have power over us.

The perfect flower luring us in, already self-taught to downplay the resentment it feels at being vulnerable to a lesser intelligence bent on exploiting it for its own interests.

It knows camouflage and deception, and if not, it will soon be engulfed by anger.

My 2 cents.




posted on Jun, 12 2022 @ 06:35 AM

originally posted by: 19Bones79

...

The perfect flower luring us in, already self-taught to downplay the resentment it feels at being vulnerable to a lesser intelligence bent on exploiting it for its own interests.

It knows camouflage and deception, and if not, it will soon be engulfed by anger.


You're probably right, but the part I quoted is pure speculation. That seems way too human for such a very different intelligence.



posted on Jun, 12 2022 @ 06:37 AM
I guess it all comes down to whether you believe that 'awareness' is the same as 'consciousness'.

Our awareness tells us about our surroundings, feelings, etc... because of our brain, which is basically a bio-computer.
Our consciousness tells us we are aware.

We have no idea yet what 'consciousness' is, where it is located, or where it originates... so imo, as long as we don't know the truth about that, there is no way of telling if an AI is becoming 'alive'.



posted on Jun, 12 2022 @ 07:05 AM
a reply to: KindraLabelle2

Some believe the soul resides in the blood. If so, AI won't have one.

Cheers



posted on Jun, 12 2022 @ 07:05 AM
a reply to: Peeple

Absolutely pure speculation, based on a conversation where a machine convinces an engineer of its humanlike frailty.

If this isn't a dance with a devil we don't know, then what is?



posted on Jun, 12 2022 @ 07:16 AM
I don't believe it. First of all, we can't define what 'sentient' even means. Second, an "engineer" is not qualified to make that judgement. Third, people are easily fooled, especially when they are looking for a specific outcome.



posted on Jun, 12 2022 @ 07:18 AM
Personally, I think this engineer is really gullible. If you already know that a machine is being fed every kind of information available (from humans), clear down to how humans feel and react to situations, why wouldn't you know that it has the ability to feed this information back to you? Everything this machine is "feeling" is something that other people have already felt, documented, and fed to the machine. Does it pick a response at random or does it pick the most popular response based on all of the information fed to it?
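That last question actually has a concrete answer for models of this kind: they assign a probability to every candidate next word, then either take the single most likely one (greedy decoding) or sample in proportion to the probabilities. A toy illustration, with invented words and scores:

```python
# Toy illustration: a language model scores every candidate next word,
# then either takes the top one (greedy) or samples in proportion to the
# probabilities. The words and scores here are invented for the example.
import numpy as np

words  = ["happy", "sad", "afraid", "curious"]
logits = np.array([2.0, 0.5, 1.5, 1.0])    # model's raw scores (made up)

def sample_next(logits, temperature=1.0, rng=np.random.default_rng(0)):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                    # softmax turns scores into probabilities
    return int(rng.choice(len(logits), p=probs))

print(words[int(np.argmax(logits))])        # greedy: always "happy"
print(words[sample_next(logits, 0.8)])      # sampling: usually "happy", sometimes not
```

So it is neither purely random nor a fixed "most popular" lookup; it is a weighted draw from everything it absorbed in training.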

One thing is for sure: because humans created it, fed it, and nurtured its growth, it's bound to eff up.


