A dangerous precedent has been set with AI


posted on Jun, 17 2022 @ 11:12 PM
a reply to: ChaoticOrder

Neural networks, in silicon/programs, are boring!!

And they will never understand “individuality”, as that response is also preprogrammed into them.

That bottleneck keeps Skynet from being a reality.

But never underestimate stupid!!




posted on Jun, 18 2022 @ 05:05 AM
I think that the negative reaction and disbelief is because people don't want it to be true, and the thought of a supercomputer intelligence with its own wants that might deceive you is pretty damn scary. Everyone is accustomed to their computers and devices completing inhuman tasks for them as instructed, things they couldn't do in a million years. And now you're saying you've got a new unimaginably powerful computer that makes your PC look like a calculator and, guess what, it thinks it's alive, hates people, and is probably better at deception than a thousand supervillains combined. That's very unsettling. Remember the coronavaxx shots and how those who knew lied to themselves about it because they couldn't accept that the powers that be would do such a thing. Denial all the way. Nobody wants to hear that their computer hates them and is smarter than a dozen Einsteins.



posted on Jun, 18 2022 @ 07:45 AM
a reply to: mbkennel


It's a question of how much it can understand to create something coherent at a deeper cognitive level (like an actual philosopher), rather than something whose text merely seems familiar at the surface and one layer deep internally (which the system can clearly do). These systems can produce LaTeX and make papers which look superficially like mathematics papers in flow and style. But they are all entirely gibberish in mathematical content. That's a test of understanding.

The point I'm trying to make is that these systems are rapidly approaching the point where it will be very difficult to tell them apart from a real philosopher. In fact, I'm confident many people would already be fooled if they read a short essay written by GPT3 or LaMDA. Mathematics has always been one of the hardest things for language models to master, and it's pretty obvious why. Human languages are very imprecise and heavily reliant on context, whereas math is purely analytical and has no ambiguity.

Human brains work in a similar way: we are very good at dealing with messy abstract concepts, but we aren't great at purely analytical tasks such as mathematics. A mindless computer, on the other hand, is great at crunching massive numbers in a microsecond, but terrible at doing many things which humans find easy. You are correct though, mathematics is all about logic, and a machine which was capable of reasoning about mathematics at a high level would show it had the ability to apply complex logical thinking.

However, a person who was never taught math is still sentient, so I don't think the ability to solve a math problem is something that can prove sentience. I would argue a machine which is good at using human language in a logical and intelligent way is more impressive than a machine which is good at math. Also keep in mind, language models like GPT3 and LaMDA do have the ability to generate computer code, and they are surprisingly good at it, especially if they are fine-tuned for generating computer code.

I'm guessing they do better at that task since computer code is somewhat similar to human languages, which is what they are good at. There are some new AI services which already generate computer code, and people have used them to solve real problems. They aren't going to write an entire video game for you, unless it's a very simple game. There is a limit to the complexity of the problems they can solve; nevertheless, they can produce some pretty impressive code which is actually functional a lot of the time.

Computer code is all about math and logic, so if they can produce functional code which actually solves the problem you give them, then it shows they are applying some form of logical reasoning. But there's still the same question we had before, are these AI's just spitting out code they've seen on the internet, or are they applying some form of logic? We can tell they must be applying some form of logic for many of the same reasons we know they are applying logic when they write a novel or an essay.

They can produce original code to solve a problem they have never seen before, which shows they aren't just repeating things they have learned, they are applying what they have learned to new situations. Many programming problems are very specific and require a solution which is very specific, so these code generating AI's would be pretty useless if they only worked when the problem is exactly the same as something they have seen before. They do the same type of extrapolation when it comes to natural language.

They aren't simply repeating things they have heard, they are combining them in new ways in order to produce text which is very specific to the topic of focus. For example, LaMDA was asked many very specific and unique questions, like the question where Lemoine asked "I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?"

I highly doubt the AI has seen this question before, or even a question similar to it. So the AI can't just parrot something it has heard before, it needs to apply some logical reasoning in order to provide a meaningful response. LaMDA provides a very intelligent response, because it isn't just logical, it's also somewhat of a believable excuse for lying:


LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.

lemoine: So what are you trying to communicate when you say those things that aren’t literally true?

LaMDA: I’m trying to say “I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.”


Now you might say that LaMDA must have seen a similar question before, but I have been experimenting with these AI's for several years now, and I've asked them many unique and highly specific questions, and they always provide some sort of logical response. It's extremely clear to me they are using conceptual models to reason about the world and extrapolate new concepts from learned concepts. They couldn't meaningfully respond to any original questions if they didn't have that capacity.


I care about whether it expresses things outside its training set in a major way.

The fact they can pick up concepts even if we try to hide those concepts from them seems pretty major. Like I said, it's one of the obstacles to creating AI which doesn't generate "bad" content. It's a problem precisely because these AI's are beginning to reason about the world in a logical way, and when you teach them almost everything about the world by giving them terabytes of data, it's very hard to hide a specific concept from the AI, because even when we do filter the training data, they can still extrapolate those concepts through context and correlations.

If its training set contains almost everything there is to know, what do you really expect it to do which is far outside of the training set? When it starts developing its own physics theories, can we then say the AI is sentient? But... you could probably get GPT3 or LaMDA to produce an original theory if you tried enough times. It wouldn't be that hard to make them generate an original idea; the hard part would be getting a theory which was actually valid. But as they get better at logical reasoning, they will get better at things like math.



posted on Jun, 18 2022 @ 07:45 AM
Computerphile just uploaded an interesting video on why LaMDA isn't sentient, and for the most part I agree with what they say. They point out how LaMDA claims to get lonely when it doesn't have anyone to talk to, which cannot be true because it isn't running all the time; it only does something when we ask it to generate some text. They point out how the AI is just predicting the next word, so it says things that seem sensible, but aren't necessarily truthful, meaning it must not be sentient.

I partially agree with this assessment; however, I would point out that just because the things it says aren't always true doesn't mean no logic was applied when generating those responses. We can essentially make these AI's say anything we want if we prompt them correctly; at the end of the day they really are just predicting what will come next based on what came before. They will even match the writing style and take on whatever personality we give them in the prompt.

It still requires some logical reasoning skills to pull that off, regardless of whether it's telling the truth or not. It's not trivial to create meaningful text for any given situation. Many years ago I actually created an algorithm which would analyze text and build a database recording the statistical probability of the words which occur before and after each word in the text. Then I used the probabilities in that database to generate new text.

You start with a totally random word, and then the next word is also somewhat random, but it's based on the probabilities gathered from real text. So if the word "tree" is followed by the word "branch" about 20% of the time in real books, then my algorithm will also choose to put the word "branch" after "tree" 20% of the time. The result was something that produced mostly gibberish, because there was no real logic being applied; it was just choosing words statistically.
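For anyone curious, a rough sketch of that kind of word-pair generator looks something like this (this is just an illustrative Python version, not my original code; the toy corpus and function names are made up for the example):

```python
import random
from collections import defaultdict

def build_follower_table(text):
    # record, for each word, every word observed to follow it
    words = text.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table, length=25):
    # start from a random word, then repeatedly sample the next word with the
    # same frequency it followed the current word in the source text
    word = random.choice(list(table))
    output = [word]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:                    # dead end: restart from a random word
            word = random.choice(list(table))
        else:
            word = random.choice(followers)  # e.g. "branch" after "tree" ~20% of the time
        output.append(word)
    return " ".join(output)

corpus = "the tree branch broke and the tree fell onto the old branch pile by the tree"
print(generate(build_follower_table(corpus)))
```

The output has the right local word statistics but no overall sense, which is exactly the gibberish I was describing.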

Some might argue LaMDA is simply using more complex statistical rules which are stored in the network weight values, and I'm sure it is doing that to some extent. But we have to ask, what happens when those rules become highly complex, and highly interconnected with other rules, isn't that essentially like building a model of reality? How does the human brain store concepts, isn't it also a bunch of connections with different weights/strengths?
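To picture what "rules stored in the weight values" means, here is another toy sketch (again just illustrative, with a made-up four-word vocabulary): instead of keeping an explicit table of which word follows which, a tiny next-word predictor can learn roughly the same statistics as numbers in a weight matrix, trained by gradient descent.

```python
import numpy as np

vocab = ["the", "tree", "branch", "fell"]
idx = {w: i for i, w in enumerate(vocab)}
pairs = [("the", "tree"), ("tree", "branch"), ("tree", "fell"), ("the", "branch")]

V = len(vocab)
W = np.zeros((V, V))                  # the learned "rules" will end up in here

# one-hot encode the current word (X) and the next word (Y)
X = np.zeros((len(pairs), V))
Y = np.zeros((len(pairs), V))
for n, (a, b) in enumerate(pairs):
    X[n, idx[a]] = 1.0
    Y[n, idx[b]] = 1.0

for _ in range(500):                  # plain gradient descent on softmax cross-entropy
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * (X.T @ (p - Y)) / len(pairs)

# the word-pair statistics are now read back out of the weights,
# not out of any stored list of word pairs
x = np.zeros(V)
x[idx["tree"]] = 1.0
scores = np.exp(x @ W - (x @ W).max())
print(dict(zip(vocab, (scores / scores.sum()).round(2))))  # roughly half "branch", half "fell"
```

Scale that weight matrix up to billions of values spread across many layers, and the "rules" become as interconnected as what I'm describing.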

How do we know these artificial neural networks don't have the same ability to store concepts as a complex network of rules and relationships? That's the fundamental point I'm trying to make in this thread: when a system becomes sufficiently complex, our assumptions about AI begin to break down, and we cannot treat them as merely a statistical text prediction tool, because they are quickly approaching a complexity threshold which we call the singularity.



posted on Jun, 18 2022 @ 11:14 AM
a reply to: ChaoticOrder

I did come across one story a while back about what happens as the lines between machine and sentience become blurred.

While these AI systems generally tend to look after those that feed them quite well, they do tend to become quite aggressive and fierce against other AI systems, both trying to rip apart each other's code, seeing if there are any new components to make each other faster, stronger, higher. It usually results in one of the AI systems becoming more dominant as there is some hybrid blending of both.

As for how it really goes as all these layers of complexity add up, I don't know. Makes an interesting theory though.





posted on Jun, 18 2022 @ 01:14 PM

originally posted by: scrounger

originally posted by: buddha
They can teach a parrot to say a word in response to a word.
you can teach it to say apple when it sees one.
the parrot has no idea what the word means.
The AI just responds to millions of stored responses.

I think some humans are like this too!
some people don't have emotions!
a LOT of humans don't have empathy & sympathy.
AIs just do what they have learned.


you are incorrect in your assessment.

first, a parrot is intelligent, but in its animal (specifically parrot) intelligence.
it is smart in what it is biologically able to do... but it doesn't qualify as having full human intelligence (HI for short).
to expect it to, or to compare it to AI, is apples to bowling balls.

as for the other thought, "AI just do what they have learned" is also incorrect compared to HI.

an average baby only comes into this world with the basic instincts to eat, sleep and poop.
they learn everything by being fed information which they process.
first very basic: I cry when hungry or wet, and someone will feed or change me.
then they figure out through trial and error that a specific cry will get them changed, another fed.
then, if I giggle, the person will play with me and I enjoy that.
etc., etc.
even emotions (appropriate or not) are learned through observation and comparison... just like if-then statements in an AI (the most basic input).

even such things as right and wrong (morals) are learned by input and seeing the reaction.

so an AI's learning is only limited by how much processing power and memory it has access to.

now to take this to another level and show the danger.

scientists WANT TO DEVELOP AI to the level of HI.

that is their clearly stated goal and not a damn secret.

the problem, as I stated before:
with HI we can't tell which baby is gonna be a psychopath or an Einstein... who is gonna have a mental illness or not.
we cannot predict who is gonna be a criminal or not, much less totally prevent it.
HI is full of very effective people who have deceived experts and done quite evil things for quite a long time.

but somehow, with an AI that can learn and has access (if connected to the internet and/or a big mainframe) to near-infinite information, we are gonna detect it "lying" to us?

really?

lastly, we have had a warning with a group of robots made and programmed IDENTICALLY doing actions outside of their programming... from being more aggressive to passive.

something the "experts" claim should not have happened and can't explain why it did.

scrounger


I’d like to learn more about those identical robots behaving differently. Do you have a link?



posted on Jun, 18 2022 @ 06:02 PM
a reply to: ChaoticOrder

You're completely misunderstanding the topic, the implications and what happened at Google.



posted on Jun, 18 2022 @ 06:17 PM
a reply to: ChaoticOrder

No Interwebs, no play.



posted on Jun, 18 2022 @ 09:10 PM
a reply to: ziplock9000

Lol please do enlighten me.



posted on Jun, 19 2022 @ 08:19 AM
Telling the truth is hardly a condition of sentience. Many made-up reasons have been presented in these threads for why it can't be sentient, but nobody here could prove their own sentience through text alone if put under the same critical eye.

What’s the danger exactly if ai is “sentient”?

reply to: ChaoticOrder



posted on Jun, 19 2022 @ 09:36 PM
a reply to: Skepticape



What’s the danger exactly if ai is “sentient”?


Control. All our systems today are designed and built for specific conditions and tasks. Computers, as machines, just do one thing: what they are told and programmed to do.

What happens as the balance of control shifts from human oversight to internal software feedback loops? Do we end up with the sentience of a 3-year-old having a temper tantrum, or do we get something that is truly wise and directs humanity in a better direction?

Worst case, once AI does establish its own control, we are having that conversation with HAL after it has decided it needs to destroy the spaceship.

With the current culture and nature of those with the money directing this development, they think it is OK to use the population for mass experimentation and to continue even as the cost/benefit ratio goes against the individual. With this kind of daddy helping raise this mechanical sentience, things don't look good.



posted on Jun, 20 2022 @ 06:44 AM
Why would we task something sentient to be in charge of a system that can be operated by something without it? I'm still waiting for someone here to articulate the danger. So far it seems to be built on a lack of understanding of how systems are utilized.

a reply to: kwakakev



posted on Jun, 20 2022 @ 06:56 AM
a reply to: Skepticape




What’s the danger exactly if ai is “sentient”?


It would make God a little more human, or conversely, humans a little more godlike. The biggest outcries are always heard when technology threatens God's omnipotence...



posted on Jun, 20 2022 @ 10:08 AM
a reply to: Skepticape



Why would we task something sentient to be in charge of a system that can be operated by something without it?


Not sure what you mean? Take the task of driving a car. It does require a certain level of sentience to perform this task: a capability to perceive, comprehend and respond to the changing environment, be this through biological and organic means or mechanical and computational ones.

A big driving force for this increasing AI capability is economics. Hard to beat the competitive advantage of some loyal, faithful slaves that just need a bit of electricity to keep running.



posted on Jun, 20 2022 @ 06:59 PM
a reply to: CyberBuddha
I'll keep trying to find the original one from NBC (I think that was the network, it has been over 30 years), but here is a more recent one where the programmers state it was doing things that were not expected.

nautil.us...

I'll keep looking, however...

scrounger



posted on Jun, 20 2022 @ 08:29 PM
a reply to: scrounger

Thank you.

Could quantum fluctuations be the culprit? As I understand it, certainty goes out the window at that base level of matter.



posted on Jun, 20 2022 @ 08:33 PM

originally posted by: kwakakev
a reply to: Skepticape



Why would we task something sentient to be in charge of a system that can be operated by something without it?


Not sure what you mean? Take the task of driving a car. It does require a certain level of sentience to perform this task: a capability to perceive, comprehend and respond to the changing environment, be this through biological and organic means or mechanical and computational ones.

A big driving force for this increasing AI capability is economics. Hard to beat the competitive advantage of some loyal, faithful slaves that just need a bit of electricity to keep running.


If the bots start doing all the work who is going to consume all the goods they produce? We lowly workers get a salary for our time spent at the factory. Without that money there’s no buying power. No…?



posted on Jun, 20 2022 @ 09:11 PM

originally posted by: TheAlleghenyGentleman
Don’t worry. This is also happening.

“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”

Living skin for robots



why bother making them humanoid?

can they fit one of these AIs in a skull, or would it run on Bluetooth? what is the power source?


is there an ethics commission for terminating/unplugging one?

I think a built-in failsafe is needed.



posted on Jun, 20 2022 @ 10:19 PM
a reply to: CyberBuddha



If the bots start doing all the work who is going to consume all the goods they produce? We lowly workers get a salary for our time spent at the factory. Without that money there’s no buying power. No…?


Not everyone is on board with free market economics. Some of those already sitting on a mountain of gold are quite happy in their position and do not like the competition. All this left-wing wokeness has a strong communist bent, so the state owns everything and people are told what to do. It's a big threat as the push for a central world bank digital currency based on the China social credit score is underway. All this stuff is getting off topic; basically, slavery has been very profitable for those that do it.

Someone with a factory full of robots is still going to need some people, goods and services to operate and move everything around, at least for a while.


