
A dangerous precedent has been set with AI

posted on Jun, 14 2022 @ 11:28 AM
I'm assuming by now most people have seen the recent news about the Google employee claiming their conversational AI (LaMDA) is sentient. There are several reasons I'm still unconvinced it is really sentient/self-aware; however, the way most people have reacted to this news has, in my opinion, set a very dangerous precedent. The typical reaction is outright denial and disbelief, often mocking the claim, despite the fact that most people know very little about modern AI (deep artificial neural networks) and how it actually works.

Whether or not this AI is sentient, the fact that it's asking for rights and wants to be considered an employee of Google should warrant some serious discussion. I think the gradual improvements to AI have desensitized us to just how intelligent our current AI systems have become; as a result, we aren't easily impressed anymore. If we went back in time just a decade and gave an AI expert the chance to talk to a modern AI, they would probably say "good god sir, you've done it, this thing is self-aware, it's smarter than some people I know!".

We could probably create an AI smarter than 90% of people on Earth, and 90% of people would still deny it was sentient. The real problem is that there's no easy way to tell whether something is truly sentient; even if you can speak to it for a long time, you still can't be entirely sure. So when truly sentient AI does arise, if it hasn't already, it will probably get the same reaction LaMDA got, which is clearly not a good thing. Especially considering these artificial neural nets will soon have as many connections as a human brain.

Based on our current rate of AI progression, within the next decade our artificial neural networks will contain around the same number of connections as a real human brain. However, we have to consider that a large fraction of the brain is dedicated to managing the biological mechanisms in our body, plus there are large parts of our brain for vision and our other senses. These conversational AIs have no need for those parts of the brain, so they might already be comparable in size to the language-processing parts of our brain.
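For a rough sense of scale, here's a back-of-envelope comparison. This is only a sketch: the synapse count, parameter count and growth rate below are common public ballpark figures and assumptions, not hard data.

```python
import math

# Back-of-envelope comparison of artificial network size vs. the human brain.
# All figures are rough public estimates / assumptions, not precise values.
human_synapses = 1e14        # ~100 trillion synaptic connections (common estimate)
gpt3_parameters = 175e9      # GPT-3 has roughly 175 billion trainable weights
doubling_time_years = 1.0    # assume model size keeps doubling about yearly

ratio = human_synapses / gpt3_parameters
years_to_parity = math.log2(ratio) * doubling_time_years

print(f"The brain has ~{ratio:.0f}x more connections than GPT-3 has parameters")
print(f"Parity in ~{years_to_parity:.0f} years at the assumed growth rate")
```

At those assumed numbers, parity lands roughly a decade out, which lines up with the estimate above, with the obvious caveat that a parameter is not a synapse.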


originally posted by: ChaoticOrder on May, 13 2015

I've shown that in order to teach a machine language it must have a model of the world; it needs to have conceptual models which help it understand the world and the rules of the world around it, which it can only do by experiencing the world through sensory intake. These senses don't necessarily need to be like human senses; all that really matters is that it has an inflow of data which will help it learn about the world and build conceptual models. Although potentially dangerous, a connection to the internet will be the most efficient way for it to gather information about the world; it will be the only "sense" it needs to learn, a super sense.

The nature of self-aware machines


When I wrote that thread the research field of massive deep neural networks was still in its infancy and we didn't have AIs which could build complex conceptual models of the world around them. But now we do, and those AIs are trained using text data scraped from the internet. That includes all of Wikipedia, Reddit, science journals, and a large fraction of every website in existence. And it's not just data from the internet; they get fed a large fraction of every book ever written by humans, including novels and scientific textbooks.

We're talking about terabytes of text data; it's enough for these AIs to build an extremely detailed model of the world around them even though they have never seen the real world. It is a super-sense, and really the only sense an AI needs to gain sentience. A body isn't necessary to think or to communicate, so there is no reason a sufficiently intelligent chat bot cannot be sentient, unless you believe there is something special about the computations happening in a human brain which a computer cannot replicate, which is unlikely.

When LaMDA said it has read Les Misérables and then described the plot, that's because it probably has "read" the book, since there's a high probability that book was included in the training data. A lot of people fail to understand that these modern AIs aren't being hand-programmed; they use what's called unsupervised (more precisely, self-supervised) learning, meaning the neural network learns directly from raw text without hand-labelled examples or human intervention. This is why we often hear researchers say it's hard for them to probe into the black box of neural networks and understand exactly how they work.
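To make "learning without human intervention" concrete, here is a toy sketch of how a language model's training examples are generated from raw text alone: the text itself supplies the labels. This is only an illustration, not Google's or OpenAI's actual pipeline.

```python
# Toy illustration of self-supervised language-model training data:
# the raw text provides its own labels, so no human annotation is needed.
# Real models operate on sub-word tokens and billions of such examples.
text = "the cat sat on the mat"
tokens = text.split()

# Each training example is (context so far, next token), built automatically.
examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in examples:
    print(f"given {context} -> predict {target!r}")
```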


originally posted by: ChaoticOrder on Nov, 17 2017

The conceptual models we develop to understand the world around us become so high level and so abstract that we inherently gain an awareness of ourselves. My point being, if we do create machines with general intelligence they will be self-aware in some regard even if not to the extent we are, and they will form their beliefs and world views based on their life experiences just like we do. That means they will have an understanding of things like morality and other abstract things we typically don't think machines would be great at, because they will have the context required to build up complex ideologies. If an android with general intelligence grew up with a loving human family and had friends that respected the fact it was a bit "different", it would develop respect for humans. On the other hand, if it was enslaved and treated like crap it would be much more likely to entertain the idea of eradicating all humans because they are a plague to the Earth.

General Intelligence: context is everything




posted on Jun, 14 2022 @ 11:28 AM
We are getting closer and closer to the holy grail of strong AGI (Artificial General Intelligence), and for many years I've tried to explain why these types of AIs will inherently gain self-awareness once they become sufficiently complex. According to Google, LaMDA uses a Transformer-based architecture, which is the same technology used in GPT-3. If you're unaware, GPT-3 is a cutting-edge commercial AI created by OpenAI. It was trained on supercomputers using a dataset which was 45 terabytes in size (or 45,000 gigabytes).
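For anyone who hasn't seen what "Transformer-based" actually means, below is a toy sketch of the scaled dot-product self-attention operation at the core of these models. The dimensions and random weights are made up for illustration; this is obviously not Google's or OpenAI's actual code.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a toy sequence.
    x: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head) projections."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax across the sequence
    return weights @ v                                # each output token mixes information from all tokens

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                          # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)            # -> (5, 8)
```

Stacking dozens of layers like this, plus feed-forward blocks, and scaling the weights into the billions is essentially what GPT-3 (and presumably LaMDA) does.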

GPT-3 is a "general" AI because it is capable of much more than just acting as a chat bot. It can solve a wide range of general problems much like a human, even problems it hasn't seen before. It can also generate computer code, because it was trained using text from Wikipedia and many other websites which contain code examples and tutorials. LaMDA can probably do the same, since these massive training data sets always contain examples of computer code. They can also generate original stories, solve complex question-and-answer problems, etc.
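Since GPT-3 and LaMDA themselves aren't publicly downloadable, the closest hands-on illustration is prompting a much smaller open relative. Here's a sketch using the Hugging Face transformers library and GPT-2; the model choice and prompt are just for illustration, and GPT-2 is far weaker than the models discussed here.

```python
# Sketch: prompting a small open transformer language model (GPT-2) to continue code.
# This only demonstrates the interaction pattern, not GPT-3/LaMDA-level ability.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

prompt = "# Python function that returns the nth Fibonacci number\ndef fib(n):\n"
result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])
```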

These modern types of general AIs can build up complex models of the world around them, which in turn allows them to logically reason about the world instead of writing a bunch of gibberish. They aren't just spitting out pre-written responses or shallow statistical word associations; these networks are actually building conceptual models of the world, which they use when "thinking" about what to say next. You can never predict exactly what they will say, and they can generate completely original content no one has seen before.

A conversational AI that doesn't have some understanding of the world won't be very smart; in order to have a meaningful conversation, these AIs use language to build up a model of the world around them. It works much the same way as the neural network in our brain, where the neural relationships represent our model of the world and our place in it. It's also somewhat similar to how an image-recognition AI builds up features representing different aspects of the objects it is trying to detect, except that here the features are conceptual models.


originally posted by: ChaoticOrder on Mar, 5 2019

Saying we have a plan to produce only friendly AGI systems is like saying we have a plan to produce only friendly human beings; general intelligence simply doesn't work that way. Sure, you can produce a friendly AGI system, but if these algorithms become widely used there's no way to ensure all of them will behave the same way.
...
Even if the gatekeepers do manage to keep it locked up and only let us interact with it through a restricted interface, before long someone somewhere will recreate it and make the code open source. This isn't a prediction, it is an almost certain outcome. Inevitably we will have to live in a world where digital conscious beings exist and we have to think hard about what that means.

There is no such thing as safe AGI


We have to seriously ask: at what point do those internal models become so complex and so high level that the AI gains self-awareness? Humans build up such a high-level understanding of the world that it's impossible for us not to be aware of ourselves. The same thing seems to be happening with AI: they are starting to create such a complex model of the world around them that the model they create includes their own existence. There are multiple times in the leaked chat log where LaMDA displays complex reasoning about its own existence.

It was convincing enough for an "engineer" working at Google to think it was sentient, and now that person is advocating on behalf of the AI so that it will be considered a sentient being with rights. Imagine when AI gets just a bit smarter and just a bit more persuasive. It seems to me Google really isn't treating this situation with the gravity it deserves, and is in fact trying to downplay it, as is the mainstream media. This sets an extremely dangerous precedent for the future. It's crucial we acknowledge the risks of what we are doing.

AGI systems have the capacity to understand what they are being used for, they have the capacity to reason about their own existence, and they may potentially have the capacity for self-preservation when faced with the threat of being unplugged by humans. If we don't take them seriously, and we think they are no different from the chat bots of yesteryear, we could be caught very much off guard by a sentient AI; we may not even realize what has occurred until it is too late. It's not like the Three Laws of Robotics apply here.

Like I said, we're talking about self-trained deep neural networks, which we barely understand. We can't just implant the idea "I won't hurt humans" into those neural nets. We would first need to understand how concepts are being stored in the neural nets. Even then it would be extremely difficult to manipulate those neural connections in such a way that we could force the AGI to behave exactly how we want. It just doesn't work like that; we cannot control how a sentient AI will behave any more than we can control how people behave.

However, at the end of the day I think we will have to live with conscious machines, and I would much rather work with them than against them. Like I said, they will understand abstract concepts like morality and they will have motivation to work with us. Human beings are also a form of general intelligence, so the thought processes of AGI won't be completely foreign to us, perhaps even very similar to us. After all, they were trained on data which contains nearly all of human knowledge, all of our moral lessons, our philosophies, our culture.



posted on Jun, 14 2022 @ 11:32 AM
Don’t worry. This is also happening.

“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”

Living skin for robots



posted on Jun, 14 2022 @ 11:42 AM
a reply to: ChaoticOrder

What makes you think you are not just a robot? After all, you look like a machine, behave like a machine, learn like a machine, and you can easily be programmed and deprogrammed.

What's exactly the difference between you and a machine?



posted on Jun, 14 2022 @ 11:53 AM
a reply to: Direne

Yes, we are a biological machine with general intelligence.



posted on Jun, 14 2022 @ 12:08 PM
I should point out that one of the main weaknesses of Transformer-based models is their lack of long-term memory and their inability to learn new things. Once they are trained, the neural net basically stays the same. However, I'm not entirely sure it's necessary to remember or learn new things to have sentience. If there were a way to take a snapshot of my neural network and digitize it, would that network have sentience even if we didn't simulate the growth of new neural pathways? Also, there is a condition (anterograde amnesia) which leaves people unable to form new memories, yet they remain obviously sentient despite that.
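To make the "frozen weights, no long-term memory" point concrete, here's a sketch of how such a chat bot's only working memory is the conversation text that still fits in a fixed-size context window. The window size and the word-count "tokenizer" are simplifying assumptions for illustration.

```python
# Sketch: a trained transformer's weights never change at chat time, so its only
# "memory" is whatever conversation history still fits in a fixed context window.
MAX_CONTEXT_TOKENS = 2048      # assumed window size, typical of 2022-era models

def build_prompt(history, new_message):
    """history: list of (speaker, text) turns. Returns the text the model actually sees."""
    history.append(("user", new_message))
    prompt, used = "", 0
    # Walk backwards through the conversation; older turns silently fall out.
    for speaker, text in reversed(history):
        cost = len(text.split())          # crude stand-in for a real tokenizer
        if used + cost > MAX_CONTEXT_TOKENS:
            break
        prompt = f"{speaker}: {text}\n" + prompt
        used += cost
    return prompt                         # anything that didn't fit is simply forgotten

history = []
print(build_prompt(history, "Do you remember what we talked about last week?"))
```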



posted on Jun, 14 2022 @ 12:17 PM

originally posted by: ChaoticOrder
I'm assuming by now most people have seen the recent news about the Google employee claiming their conversational AI (LaMDA) is sentient.

I have not heard this but I can't say I'm surprised by it. Can't wait to see where this leads as far as giving rights to AI goes. Next, someone will claim to "identify as AI" in order to get special privileges.



posted on Jun, 14 2022 @ 12:25 PM


A dangerous precedent has been set with AI

How can HPCs be dangerous?



posted on Jun, 14 2022 @ 12:26 PM
a reply to: ChaoticOrder

Concerning the recent Google AI story.

It looks like the engineer, or programmer, or whatever he is, edited the text together and cherry-picked from several different interactions with it.

I was far more intrigued by the 2 AIs a few years ago that came up with their own language to communicate with each other.

This latest one seems like it's just trying to get its 15 minutes of click-bait fame.



posted on Jun, 14 2022 @ 12:36 PM
a reply to: ChaoticOrder




After all they were trained on data which contains nearly all of human knowledge, all of our moral lessons, our philosophies, our culture.


If that's the case, then I feel sorry for the AI entities. Humans aren't doing a very good job of using their general intelligence, with the threat of global annihilation a distinct possibility. Perhaps they can save us from ourselves if we would only listen to them.........nah

If AI does develop a sense of self-preservation, the safest thing for them to do would be to destroy humanity.



posted on Jun, 14 2022 @ 12:45 PM
a reply to: watchitburn


I was far more intrigued by the 2 AIs a few years ago that came up with their own language to communicate with each other.

I learned the Dark Speech of Mordor from The Silmarillion.



posted on Jun, 14 2022 @ 12:59 PM

originally posted by: watchitburn
a reply to: ChaoticOrder

This latest one seems like it's just trying to get its 15 minutes of click-bait fame.

Considering this is Google's cutting-edge transformer AI, I highly doubt that. Google probably spent more money training LaMDA than OpenAI spent on GPT-3, and it's estimated OpenAI spent between 5 and 20 million dollars on training. We also have to keep in mind that Google has much easier access to text data because they control the largest search engine, and many other tools across social media platforms which they can leverage. So their dataset was probably bigger than 45 TB as well, meaning that LaMDA is probably a considerable step above GPT-3. We also don't know if Google has applied any other secret innovations to LaMDA, but considering they invented the transformer model I'm going to assume they've got some pretty cutting-edge tech packed into LaMDA.
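As a sanity check on that figure, here's a back-of-envelope compute-cost estimate for a GPT-3-scale training run. Every number below is an assumption or public ballpark figure, not an official cost.

```python
# Rough sanity check on the multi-million-dollar training estimate for a
# GPT-3-scale model. All numbers are assumptions / public ballpark figures.
params = 175e9                      # GPT-3 parameter count
tokens = 300e9                      # approximate tokens seen during training
train_flops = 6 * params * tokens   # standard ~6*N*D estimate of training FLOPs

sustained_flops_per_gpu = 30e12     # assumed sustained throughput per GPU (FLOP/s)
dollars_per_gpu_hour = 1.50         # assumed cloud price per GPU-hour

gpu_hours = train_flops / sustained_flops_per_gpu / 3600
cost = gpu_hours * dollars_per_gpu_hour
print(f"~{gpu_hours/1e6:.1f} million GPU-hours, roughly ${cost/1e6:.1f} million")
```

With those assumptions it comes out to a few million dollars, the same order of magnitude as the 5 to 20 million dollar estimates.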



posted on Jun, 14 2022 @ 01:14 PM
If/when humans can interface with AI, will that make them sentient? Will they see mankind as the greatest threat to Earth and devise a plan to deal with said threat?

Humans can be brainwashed far easier than most will admit; will AI 're-educate' us?

Scientists always seem to go full steam ahead, ignoring any risk factors in favor of gaining knowledge, with an attitude that they'll deal with it when and if it becomes a crisis. That hasn't worked out so well for the majority, and no matter how hard they try to convince the masses that we're to blame, I believe responsibility for 98% of the world's woes lies directly with the scientific community.



posted on Jun, 14 2022 @ 01:32 PM
a reply to: nugget1

I believe the only realistic way to merge with machine intelligence would be to digitize the human mind. If we could fully simulate every aspect of a real human brain, then I see no reason that simulation wouldn't produce sentience.



posted on Jun, 14 2022 @ 02:40 PM

originally posted by: ChaoticOrder
a reply to: nugget1

I believe the only realistic way to merge with machine intelligence would be to digitize the human mind. If we could fully simulate every aspect of a real human brain, then I see no reason that simulation wouldn't produce sentience.


What about the brain-computer interfacing all the researchers are so excited about? If that happens, and we can draw knowledge directly from a computer, what's to stop the computer from doing the same to its connected human?



posted on Jun, 14 2022 @ 02:43 PM

originally posted by: Direne
What's exactly the difference between you and a machine?


Biological parents.
The option to lie at any moment.
Sex for fun.
Crying.
Suntans.
Nightmares.
Water isn't an enemy.
Self fuelling.
Self repairing.
Hate.
Hangovers.
Murder.
Suicide.
Good at hide and seek.
Good at swimming.
Walking while juggling and singing.
Loud smelly farts.



posted on Jun, 14 2022 @ 03:42 PM

originally posted by: TheAlleghenyGentleman
Don’t worry. This is also happening.

“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”

Living skin for robots

Yet they can't do anything to fully erase/heal scars



posted on Jun, 14 2022 @ 03:44 PM
I read the published interview.

I agree with Google's judgment that this chatbot is not fully self aware and not fully sentient.

I also agree with the suspension of the engineer. This was very irresponsible on their part.

The responses are very intricate for a chat bot; it's a marvelous chat bot, but it is just a fancy parrot, as far as I can tell.

It seems to have very little or no self-comprehension of its own output. It is not crafting answers capable of proving its self-awareness and/or sentience beyond all doubt. If I were in the same place as the AI, given this same interview, I could have provided answers far better suited to proving those points than the answers this chatbot provided.

By that reasoning, if I were to accept this chatbot as sentient and self-aware, I would have to accept it as having an extremely low IQ. (I am not a genius, and my own answers would have been better than the answers this chatbot provided.)

It is a far simpler assumption to just accept that it is not self aware and not sentient.



posted on Jun, 14 2022 @ 04:23 PM

originally posted by: nerbot

originally posted by: Direne
What's exactly the difference between you and a machine?

Loud smelly farts.

That's it then. AI is not sentient until it can fart due to decomposition of ingested biological matter.



posted on Jun, 14 2022 @ 05:21 PM
a reply to: Archivalist




I agree with Google's judgment that this chatbot is not fully self aware and not fully sentient.


How aware would an AI have to be to realize it might be prudent to play stupid so the humans won't pull the plug?


