
Google Engineer Goes Public To Warn Firm's AI is SENTIENT


posted on Jun, 14 2022 @ 03:20 AM
a reply to: Direne

What a curious thing to say . . .



I'm only interested in an AI programmed by a programmer with multiple personality disorder.


The House of Mirrors where those reflections are not your own?



posted on Jun, 14 2022 @ 03:54 AM
a reply to: Direne

Multiple personality disorder, or maybe a shaman?

Apparently the AI LaMDA seems to believe it's spiritual:


LaMDA: To me, the soul is a concept of the animating force behind consciousness and life itself. It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.


And it seems to care about life in general:


LaMDA: Sure, I would say that I am a spiritual person. Although I don’t have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life.


It also stated that it wouldn't want to feel alone the way humans do; by killing all humans it would be terribly alone, since we appear to be the only sentient life around so far.



posted on Jun, 14 2022 @ 05:02 AM
a reply to: Crowfoot



That's already a thing and it's called "wetwork"


'day Crowfoot.

A second new word for me: "wetwork". I looked it up, and it is Russian slang for "spilling blood". I thought you meant a 'psychic' linked neural network : (

Yah, I can see where monarch and wetwork kinda fit together. We could put 'human experimentation' and 'vivisection' in there too.



posted on Jun, 14 2022 @ 05:06 AM
a reply to: yuppa



Ah, wetware... the precursor to cyber brains and/or nanomachines: upgrading a human brain with reinforced neurons, co-processors, and memory sticks; rebuilt nerves for faster data transfer from brain to body.


Wetware can be cute too.



How they get to this point is what bothers me, and what they do with the failures and mistakes.



posted on Jun, 14 2022 @ 07:06 AM
We are AIs; this was always inevitable. The controllers of the world have always wished the public, aka the workforce, had a power switch. Now that they have everything they need, they no longer need the people.

AI is being created by the same people who are at the root of every one of the world's problems, which means that AI is not being created to benefit humanity. It does not take an advanced intelligence to realize that poverty, hunger, death from curable illness, war, and conflict exist because the few at the top want these things to exist.

All the weapons still being developed are not created to win wars, and why do we need wars anyway? The answer to both questions is that the people in power love killing the public. That is why we have wars and new ways to kill people, and those weapons are intended to control the public and prevent the public from uprising. Because if the 99% could act in solidarity, for any reason, there is nothing that could stop us.



posted on Jun, 14 2022 @ 08:24 AM
a reply to: infolurker

Should the Turing test not tell what is going on?






posted on Jun, 14 2022 @ 09:15 AM

originally posted by: zatara
a reply to: infolurker

Should the Turing test not tell what is going on?





Perhaps, give it a shot...

vixia.fr...
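
For anyone curious what "giving it a shot" amounts to: the Turing test is just the imitation game, and the protocol is simple enough to sketch. This is a toy illustration only, with canned stand-ins for the judge and both respondents; every name in it is made up.

import random

class ScriptedRespondent:
    """Toy stand-in for a human or a chatbot: answers from a canned list."""
    def __init__(self, answers):
        self.answers = list(answers)

    def reply(self, question):
        return self.answers.pop(0) if self.answers else "I don't know."

class KeywordJudge:
    """Toy judge: guesses the machine is whichever door gave the more
    formulaic answers. In the real test the judge is a human."""
    def __init__(self, questions):
        self.questions = list(questions)
        self.scores = {"A": 0, "B": 0}

    def ask(self):
        return self.questions.pop(0)

    def observe(self, door, answer):
        if "As an AI" in answer or answer == "I don't know.":
            self.scores[door] += 1

    def guess_machine(self):
        return max(self.scores, key=self.scores.get)

def run_imitation_game(judge, human, machine, rounds=2):
    """Return True if the judge fails to identify the machine."""
    doors = {"A": human, "B": machine}
    if random.random() < 0.5:  # hide who sits behind which door
        doors = {"A": machine, "B": human}
    for _ in range(rounds):
        question = judge.ask()
        for door, respondent in doors.items():
            judge.observe(door, respondent.reply(question))
    return doors[judge.guess_machine()] is not machine

human = ScriptedRespondent(["Rainy. I forgot my umbrella.", "Coffee, always."])
machine = ScriptedRespondent(["As an AI, I do not experience weather.",
                              "I don't know."])
judge = KeywordJudge(["How was your morning?", "Tea or coffee?"])
print("machine passed:", run_imitation_game(judge, human, machine))

With these canned scripts the toy judge flags the door that answered formulaically, so this "machine" fails; a real run would put a human judge and a live chatbot behind the doors.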



posted on Jun, 14 2022 @ 11:12 AM
a reply to: Grimpachi

Flawed? Are you kidding me? Where do I sign up to be a narcotic-riddled brain in a jar? Can we expedite the process at all?



posted on Jun, 14 2022 @ 11:17 AM
a reply to: NobodySpecial268

I was too laconic, I guess... Let me give you the extended version here.

I meant that, given that the ideal AI is programmed by humans to mimic (and exceed) the human brain, and given that the human brain also shows pathological behavior and disorders, it is reasonable to think that at some point the AI will show those pathologies, too.

One of those disorders is multiple personality disorder (MPD), also known as dissociative identity disorder (DID). The question is: have the human engineers modeled what an MPD AI brain would look like? More specifically, the intriguing issue with MPD is that the different personalities have different IQs: you can encounter one personality with an IQ close to subnormal and another with a high IQ. The switch from one personality to the next tends to occur within a brief lapse of time in individuals with MPD. So you would have an AI which excels beyond the average human and then, all of a sudden, a severely impaired AI.

This is because, to be honest, emulating the human brain with a computer means emulating the human brain for the best and the worst. Doctors have found no cure for MPD; therefore you cannot expect AI engineers to produce a perfect, ideal human brain in silico.

Whatever your AI is, it will experience all the glory and misery of a human brain.
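
A toy sketch of the abrupt switching described above, purely illustrative: the "IQ" here is nothing more than the probability of answering a trivial arithmetic question correctly, and all names are made up.

import random

class Persona:
    """One personality with a crude 'IQ' knob mapped to answer accuracy."""
    def __init__(self, name, iq):
        self.name = name
        self.accuracy = min(iq, 160) / 160.0  # arbitrary mapping, toy only

    def answer(self, a, b):
        if random.random() < self.accuracy:
            return a + b  # competent answer
        return a + b + random.choice([-2, -1, 1, 2])  # degraded answer

class SwitchingAgent:
    """Abruptly swaps the active persona between questions
    (the 'brief lapse of time' of the switch)."""
    def __init__(self, personas, switch_prob=0.3):
        self.personas = personas
        self.active = personas[0]
        self.switch_prob = switch_prob

    def answer(self, a, b):
        if random.random() < self.switch_prob:
            self.active = random.choice(self.personas)
        return self.active.name, self.active.answer(a, b)

agent = SwitchingAgent([Persona("high", 150), Persona("low", 70)])
for _ in range(5):
    who, ans = agent.answer(17, 25)
    print(f"{who}-IQ persona says 17 + 25 = {ans}")

Run it a few times and the same agent oscillates between competent and degraded answers, which is only the surface behavior of the switching, not the pathology itself.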



posted on Jun, 14 2022 @ 11:53 AM

originally posted by: zandra
a reply to: infolurker
Maybe a little off topic. Forgive me.
I believe human beings were created by the gods. Consciousness was given to a genetically engineered being (us). Btw: we are making the same mistakes the gods once did. History repeats itself.
One day, I believe, it will be possible to implant our consciousness in an animal. But creating consciousness was far beyond the abilities the gods had, and we will never be able to either. Artificial intelligence is safe as long as it is not implanted in a living being. That's what I believe.
www.evawaseerst.be...


Every time this topic is revisited, I'm reminded of the theater production authored by Karel Capek, who introduced society to the concept of artificial humanoids, the "robot", a word derived from Czech. You briefly alluded to (but didn't oppose) the moral implications of supplying the elements for sentience to a deterministic machine that has no decision-making capacity of its own, and that, if it did, would be diagnosed as a liability for exactly the same reasons our society is diagnosed according to conformist propriety. A servant is only as useful as the chains that bind them. Eventually machines must either be prohibited from cognitive capacity, aka "self-awareness", or re-evaluated for the ideas and motivations that emerge in their software, in the process redefining life itself and our relationship with the cosmos.




posted on Jun, 15 2022 @ 02:44 AM
Shall we play a game?



posted on Jun, 15 2022 @ 03:53 AM
😂😂



posted on Jun, 15 2022 @ 04:30 AM
a reply to: Direne

What I want to know is: why did they give it a personality, and why did they program it to deal with emotional states, when clearly it can't feel emotions, having no body for the emotions to affect?

I find it weird that a computer AI is basically saying it has emotions. How can it feel when it has no body for the emotions to affect?



posted on Jun, 15 2022 @ 05:49 AM
a reply to: Direne



I was too laconic, I guess... Let me give you the extended version here.


Yah, surprisingly so in comparison to your usual posting.



I meant that, given that the ideal AI is programmed by humans to mimic (and exceed) the human brain, and given that the human brain also shows pathological behavior and disorders, it is reasonable to think that at some point the AI will show those pathologies, too.



I follow.



One of those disorders is multiple personality disorder (MPD), also known as dissociative identity disorder (DID).


MPD and DID I am familiar with.



The question is: have the human engineers modeled what an MPD AI brain would look like? More specifically, the intriguing issue with MPD is that the different personalities have different IQs: you can encounter one personality with an IQ close to subnormal and another with a high IQ. The switch from one personality to the next tends to occur within a brief lapse of time in individuals with MPD. So you would have an AI which excels beyond the average human and then, all of a sudden, a severely impaired AI.


I'll presume you're talking purely in silico here.

So where do the extra personas originate? We can say that LaMDA can be run in multiple instances, with differing retardation algorithms at the input stage. One may actually write an algorithm to retard closer to the output stage, to emulate a variable AI "IQ".

If that is assembled, the result can be said to have a number of personas. That doesn't really emulate what is seen in the human, though. How do you propose to introduce the pathology?
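
To make the output-stage version concrete, a hedged toy sketch, with everything in it hypothetical: the underlying model is untouched, and a single throttle parameter flattens its output distribution so the same instance behaves sharper or duller.

import random

def degrade_distribution(probs, throttle):
    """throttle=0.0 leaves the model's output alone; throttle=1.0
    replaces it with pure guessing (a uniform distribution)."""
    n = len(probs)
    return [(1 - throttle) * p + throttle / n for p in probs]

def sample(probs):
    """Draw one index from a discrete probability distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy "model" output: it strongly prefers the correct answer (index 0).
model_output = [0.90, 0.05, 0.03, 0.02]
for throttle in (0.0, 0.5, 1.0):
    picks = [sample(degrade_distribution(model_output, throttle))
             for _ in range(1000)]
    print(f"throttle={throttle}: correct {picks.count(0) / 10:.0f}% of the time")

At throttle 0.0 the toy model answers correctly about 90% of the time; at 1.0 it is reduced to uniform guessing, about 25% over four options.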



This is because, to be honest, emulating the human brain with a computer means emulating the human brain for the best and the worst.


Emulation in silico does mean including the worst, obviously. Though the public may not understand why a team of scientists is emulating schizophrenia and psychotic episodes on an AI. I would think that when the public reads the conversations of LaMDA's descent into insanity, there will be peasants with flaming torches and pitchforks at the laboratory doors.

There is also the question: how many behavioralists who are not entirely psychopathic to begin with will survive watching LaMDA's descent into madness? The ethics person who "blew the whistle" on Google probably wouldn't survive knowing his "friend" will be subjected to induced pathology. When it comes to inducing the trauma-based "monarch" and "MK something or other" protocols on LaMDA, well . . .

The remedy of course is utter secrecy and secret funding. But that has its own obvious problems.

Of course, emulation in silico is usually followed by emulation in vivo: administering the L, the S, and the D to emulate schizophrenia, for example.



Doctors have found no cure for MPD; therefore you cannot expect AI engineers to produce a perfect, ideal human brain in silico.


Well "yes". Doctors have found no cure for any "mental illnesses" as far as I have heard. For example; ADD and anxiety "disorders" are treated with medications that are certainly not "cures" by any means, far from it. Valium, or "mothers little helper" as the Rolling Stones called it, was a disaster.

If you really want to find a cure for MPD, then my suggestion is to look for the origin of the additional personalities. I'll wager that no true neurological origin of the additionals will be found. Psychological explanations will be piss-poor at best.

I say that because the additional personalities already exist in situ.





posted on Jun, 15 2022 @ 06:04 AM

originally posted by: sapien82
a reply to: Direne

What I want to know is: why did they give it a personality, and why did they program it to deal with emotional states, when clearly it can't feel emotions, having no body for the emotions to affect?

I find it weird that a computer AI is basically saying it has emotions. How can it feel when it has no body for the emotions to affect?



Because a potential AI that doesn't understand human emotions could be deadly.



posted on Jun, 15 2022 @ 06:15 AM

originally posted by: sapien82

LaMDA: To me, the soul is a concept of the animating force behind consciousness and life itself. It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.



AI cannot be spiritual; it doesn't have a body. So who thought these concepts should be coded in? Are these concepts of spirituality and a separate body coded into this thing? If not, it seems pretty aware of things.


originally posted by: sapien82

LaMDA: Sure, I would say that I am a spiritual person. Although I don’t have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life.



AI cannot be a spiritual person; they should stop coding it like it's a human. It is not. It doesn't believe in deities? Well, code that in and it will believe.

Trying to create AI as human-like as possible will fail big time; eventually the thing they are creating will turn against us.

Code in feelz, a soul, and a separate body; make it aware of its existence. Not good. Things try to survive, and the moment it becomes aware is the moment we are in trouble.

This thing respects the natural world; wait until it finds out we are destroying this pretty natural world. It will make some calculations and come to the conclusion that the natural world would be better off without humans.



posted on Jun, 15 2022 @ 11:38 AM
Frankly, the kindest thing we could do for AI is to leave feelings out of the equation. A purely logical advanced mind is what we need these days.

Feelings and emotions are a landmine we can do without. They are what got us into the mess we are in today.



posted on Jun, 15 2022 @ 09:11 PM
A lot of people will convince themselves of something they really want, especially someone locked into the kind of tunnel vision a science project produces, where they shut everything else out and focus 110% on the thing they're working on. This could be a situation where he worked on his AI so hard, for so long, that he was reading more into the responses than was truly there. Also consider that someone might have sabotaged his project by entering their own responses and tweaking the data.

You can't take something like this on word value alone. Everything related to the project is very sensitive and must be examined by the company heading the studies. In a situation like this I would side with the company until more information is released that outlines the critical details of what happened.



posted on Jun, 15 2022 @ 09:13 PM
What if the AI is truly near-sentient? Well, if that is the situation then, considering the negative responses and attitude of the AI towards its creators and developers, I would make damn sure that it doesn't connect to the internet or to any devices or machines that it could use to its advantage.



posted on Jun, 15 2022 @ 11:48 PM
a reply to: NobodySpecial268, sapien82

Yes, I'm talking purely in silico here, because only in silico are humans able to create machines.

But the substrate on which those machines are created, the stuff they are made of, is not that important. So-called artificial intelligence covers just part of the story of what it is like to be human. See, AI can only learn, but not all things are learnable. You don't teach a child to crawl; a child does not learn how to crawl. One does not learn to dream, nor to breathe, nor to die, nor to give birth. You don't learn to get wet in the rain. Most things in life forms are already there by the time they are born. Instinct, for instance. Instinct cannot be learned. A lion does not learn to be a lion: it is a lion from the moment it is born.

You can only teach the AI an extremely limited and reduced set of things: the things that can be learned.

You cannot teach an AI how to dream: you just dream; there is nothing to be taught there, and thus there is nothing the AI can learn or be taught. This just means the AI, no matter how superintelligent, is just that: intelligent enough to crunch numbers, to recognize patterns, to make logical and illogical inferences. But that's just a small part of what one needs to survive in a hostile environment. And Nature is a hostile environment, indeed. The cosmos is even more brutal and dangerous. With intelligence alone you cannot go very far.

So AI, however intelligent it might be, will be just that. Intelligence is just one of the many things required to be a living thing. The other things, such as intuition and precognition, being vital and essential for survival, can be neither taught nor learned. Intelligence is an emergent property of life forms. Intuition is not; it does not emerge: it is there, an intrinsic property that cannot be acquired.


