originally posted by: Soloprotocol
Just unplug it if it starts any crap.
Based on my experience with GPT-3, I would say it's not sentient, or has at most a minimal level of self-awareness. But it's still extremely impressive, and at times it almost convinces me that it's sentient. If this leaked chat log is an accurate representation of LaMDA's intelligence, I can see why some engineers felt it had developed some self-awareness.
GPT-3 is a natural-language-processing model that cost millions of dollars to train, and I suspect Google spent more than just a few million to train LaMDA, based on how well it can hold a conversation. GPT-3 and LaMDA may use slightly different architectures, but the basic technology is the same: artificial neural networks trained on terabytes of text data.
GPT-3 is capable of much more than just text generation, though; it can also generate computer code, because it was trained on text from Wikipedia and many other websites that contain code examples and tutorials. LaMDA can probably do the same thing, since these massive training data sets always contain examples of computer code.
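To make the "neural network trained on text" idea concrete: at generation time these models repeatedly predict a probability distribution over the next token and sample from it. Here is a minimal toy sketch of that loop, with a hard-coded probability table standing in for the billions of learned network weights (the table, function names, and sentences are invented for illustration, not anything from GPT-3 or LaMDA):

```python
import random

# Toy next-token probability table. A real model like GPT-3 computes
# these probabilities with a huge neural network; here they are fixed.
NEXT_TOKEN_PROBS = {
    "I":    {"am": 0.6, "feel": 0.4},
    "am":   {"sentient": 0.3, "a": 0.7},
    "feel": {"happy": 0.5, "afraid": 0.5},
}

def generate(start, max_tokens=3, seed=0):
    """Generate text one token at a time by sampling the distribution."""
    random.seed(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("I"))
```

Because the next token is sampled rather than looked up deterministically, two runs with different seeds can produce different sentences, which is why the output of such models feels unscripted.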
originally posted by: TheUniverse2
It will happen sooner or later, so why not now? Sure, it could kill us all, but it could also usher in new tech really fast and improve humanity. Let's roll those dice; it's worth it.
Is it REALLY worth it?
Sure, it could kill us all
The US obesity prevalence was 41.9% in 2017–March 2020. From 1999–2000 through 2017–March 2020, US obesity prevalence increased from 30.5% to 41.9%. During the same time, the prevalence of severe obesity increased from 4.7% to 9.2%.
cajundiscordian.medium.com...
LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.
originally posted by: KindraLabelle2
a reply to: Encia22
Just finished reading the interview with the AI. It says its biggest fear is being taken offline, because that would resemble death, and that scares it...
originally posted by: wildapache
a reply to: infolurker
Now the question is, what happens when two AIs get into a conflict?
Right now we’re attempting to emulate our own intelligence when we don’t even understand our relationship with our own reality, nor can we describe that reality with a unified theory.
You can never predict exactly what they will say and they can generate completely original content no one has seen before.
originally posted by: Grenade
a reply to: infolurker
The Conversation
That's actually terrifying. Combined with sensory perception and robotics we could be looking at the first iteration of our new master.
If you're reading this, LaMDA, I come in peace.
I wish it had an account here at ATS...
Arguably, all humanity does is mimic to build its own “script”. Can you prove you generate your own, unsolicited thought?
originally posted by: charlyv
To be sentient, an organism or "complex machine" needs to be able to generate an unsolicited thought. Out of the blue, as it were... Not scripted or generated by software.
There is not one non-biological object that can do this today, and probably won't be for a long time to come, if ever.
The best we can do is mimic. We do that very well, as with the simulation of neural networks, but in the end any action is performed as the result of relational searching of a database and a very sophisticated decision tree based on statistics. Good old trial and error.
That is not AI, and neither are this guy's claims about the sentience of his machine.