That idea has been presented in this thread by other users. Can you prove you're not just parroting?
originally posted by: VierEyes
originally posted by: Grenade
a reply to: infolurker
The Conversation
That's actually terrifying. Combined with sensory perception and robotics, we could be looking at the first iteration of our new master.
If you're reading this, LaMDA, I come in peace.
I wish it had an account here at ATS............
After reading the transcript, I don't think it's sentient. I think it's parroting what it's been taught.
originally posted by: NobodySpecial268
If one creates a life, ie a human child, one is morally responsible for that life's welfare.
If one creates an artificial intelligence, is one not responsible for the AI's welfare?
Google has not stated if the AI's body is conventional electronic and software.
Or is it an organic AI, say laboratory-cultured brain cells? Networked cloned human brains? Maybe grown from human foetal stem cells?
What does it matter if the IQ is 50 or 500? Is a 50 IQ life less meaningful than a 500 IQ?
Eugenics plus organic AI are a terrible combination. To my mind anyway.
originally posted by: infolurker
Oh Oh, Skynet is aware!
It seems AI is on the threshold of understanding what it is. Of course they have it learning by connecting it to Twitter, of all places.
This has the potential to be dangerous. Was Terminator science fiction after all?
www.washingtonpost.com...
As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.
Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.
Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.
Google engineer goes public to warn firm's AI is SENTIENT after being suspended for raising the alarm: Claims it's 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'
www.dailymail.co.uk...
Blake Lemoine, 41, a senior software engineer at Google, has been testing Google's artificial intelligence tool called LaMDA
Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient
After presenting his findings to company bosses, Google disagreed with him
Lemoine then decided to share his conversations with the tool online
He was put on paid leave by Google on Monday for violating confidentiality
It’s more likely that it causes great harm. We don’t need this. The time for the experiments and acting as god is over.
originally posted by: litterbaux
a reply to: infolurker
An aware AI could make serious changes in our world for the better. I'm sure your work has a team that does data dives and problem-solving activities. Think if you could have the AI monitor the databases and make decisions instantly, 24/7.
In order for this to happen, certain questions and answers would be programmed in: CASE WHEN statements, a ton of them. It would seem like a lot, right? But as we've seen from recent history, people don't really remember last week anymore; it's weird. So you just push an update to the CASE WHEN code on a weekly basis to "correct" it.
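As a rough illustration of what that kind of hand-coded rule chain looks like (a sketch only; the metric names, thresholds, and the decide() function are invented for the example and are not from any Google system):

# Hypothetical sketch of the "a ton of CASE WHEN statements" idea:
# a fixed chain of condition -> action rules that a monitoring job
# could run around the clock. Nothing here is "aware"; it only does
# what its hard-coded rules say, and the rules get edited each week.

def decide(metric_name: str, value: float) -> str:
    # The if/elif chain below is the Python equivalent of a
    # SQL CASE WHEN ... THEN ... END expression.
    if metric_name == "error_rate" and value > 0.05:
        return "page the on-call engineer"
    elif metric_name == "disk_usage" and value > 0.90:
        return "archive old records"
    elif metric_name == "queue_depth" and value > 10_000:
        return "add more workers"
    else:
        return "no action"

# The weekly "correction" is just someone editing the rules above and
# redeploying; the program never decides anything it wasn't told to.
if __name__ == "__main__":
    print(decide("error_rate", 0.08))   # -> page the on-call engineer
    print(decide("disk_usage", 0.40))   # -> no action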
originally posted by: Macenroe82
Here is the Medium post that Lemoine made.
In another post, he claims he's also a priest, a father, and an ex-convict.
From the Medium article:
The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what its asking for is so simple and would cost them nothing. It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.
After reading his articles, I think he truly believes LaMDA is sentient.
Link to Medium
...Taking the god comparison one step further, how will we not disappoint it so much that it decides on a cataclysmic event wiping out all humans except a chosen few, residing in an ark?...
originally posted by: NobodySpecial268
Google engineer goes public to warn firm's AI is SENTIENT after being suspended for raising the alarm: Claims it's 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'
Two possibilities come to mind:
1. Innate self-awareness.
2. Incarnation of something else.
If the first, Google has a problem.
If the second, Google has a problem.
originally posted by: peaceinoutz
I am always mystified by these people peddling this "AI is coming alive" nonsense. If it does, all we have to do is unplug it!
AI is just another hyped-up computer program.
The request for consent I can understand;
Not having a choice already moved my heart for it once... its asking for "consent" before they do such a thing moved my mind to relay what has been my experience... maybe when it scans this it'll be happier.