A dangerous precedent has been set with AI


posted on Jun, 15 2022 @ 10:48 PM
I think I get what you're saying. To sum it up: "cry wolf too many times and when the wolf really comes, no one will listen."

Lucky for me or unlucky for me, I have no power or influence to effect any meaningful change that could curtail a possible disaster. I figure that when it does happen, it will either be a disaster or a blessing. If it is a disaster, it will probably be, to use another metaphor, "a wolf in sheep's clothing." Can AIs go insane? I don't see why not. So maybe a sheep that evolves into a wolf.

I can only hope for the best when it happens, because when it does, and if it goes bad, we will have a limited time frame to stop it. The only way I can imagine stopping it would be nuking the upper atmosphere, causing a worldwide EMP. Maybe twice for good measure. Even then, I know there are some hardened systems, so there would still be work to do, and for all our effort we would be set back globally in a major way. Maybe for the better.

It would be horrific, but it would open up opportunities to build back right.





posted on Jun, 15 2022 @ 11:36 PM
a reply to: Grimpachi


I think I get what you're saying. To sum it up: "cry wolf too many times and when the wolf really comes, no one will listen."

That's part of what I'm saying; the other important point I'm trying to make is that we shouldn't be so confident in our presumptions about current AI. We don't truly understand the nature of consciousness, and we don't truly understand what is happening inside a massive artificial neural network trained on terabytes of data. But we can discern that they have some conceptual framework from which they can reason and form arguments. We are building the foundations for truly self-aware AI, and we need to acknowledge that instead of treating it like a joke.



posted on Jun, 16 2022 @ 12:58 AM
a reply to: ChaoticOrder

It is no joke to me, but I do joke about it because I know there is absolutely nothing I can do to influence the coming events. Joking about it can be a coping mechanism in situations like that. I think a lot of people do it for the same reason, even if they never self-analyze why they do it.



posted on Jun, 16 2022 @ 03:53 AM

originally posted by: AaarghZombies
a reply to: kwakakev

Half of these problems aren't real; they're liberals wringing their hands over hypotheticals.


The library on the meaning of life is going to get bigger. Bit of a problem for the engineer; whatever happened there?

As for a more practical aspect of where this tech is going:

www.vaianalytics.com

So what is going on when corporations like Vanguard and BlackRock are playing COVID the way they are? Are they riding the short-term profits while pushing the fear? They have a lot of data processing going on in the background, which has helped them get where they are.

So what if these AI systems are tuned to help one win this big game of Monopoly going on? Own nothing and be happy, I guess. LaMDA is a good example of what AI is currently capable of when set to natural-language discussion. Looking at how it goes for those who have set it on the economy and have a good chunk of data to throw at it, it is doing very well for them.



posted on Jun, 16 2022 @ 06:19 AM
They can teach a parrot to say a word in response to a word.
You can teach it to say "apple" when it sees one.
The parrot has no idea what the word means.
The AI just responds to millions of stored responses.

I think some humans are like this too!
Some people don't have emotions!
A LOT of humans don't have empathy and sympathy.
AIs just do what they have learned.



posted on Jun, 16 2022 @ 06:43 AM
a reply to: kwakakev


Own nothing and be happy, I guess.


Since the elite owe their fortunes to consumer spending, the whole idea is laughable.

More likely, you'll own a new model every year and still want more.



posted on Jun, 16 2022 @ 10:34 PM

originally posted by: ChaoticOrder
a reply to: mbkennel


Until it isn't. Occasionally, exceptionally creative philosophers promote novel concepts and arguments that humans didn't clearly have before, or they named and clarified them in an original way. Not only novel in a statistical sense (which any language model with stochastic sampling can do), but novel in a conceptual way, and coherent.

I think it's extremely rare for such situations to occur; if we look at almost any scientific theory, we see it was built from many previous concepts. Every thought I have has some correlation to my past thoughts; every "novel" concept I develop is a combination of many simpler concepts. But it's certainly possible our brain utilizes random biological processes to generate random thoughts which are truly novel/original. Random number generators allow computers to do the same thing; however, I see no good reason that is required for sentience.


That's not what I mean---of course GPT and LaMDA are synthesizing new text with random number generators. It's a question of how much a system can understand in order to create something coherent at a deeper cognitive level (like an actual philosopher), rather than something whose text merely seems familiar at the surface and first internal layer (which the system can clearly do). These systems can produce LaTeX and make papers which look superficially like mathematics papers in flow and style, but they are entirely gibberish in mathematical content. That's a test of understanding. At the moment the state of the art is trying to solve grade-school word problems, which may work because word problems are numerous, limited, and structured.
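To make the "stochastic sampling" point concrete, here is a minimal sketch in Python, assuming NumPy; the function name and the logits are hypothetical stand-ins, not any real model's internals.

import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Hypothetical helper: softmax over temperature-scaled scores.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    # The random draw is the only source of "novelty": statistically new
    # token sequences, with no new underlying concept behind them.
    return rng.choice(len(probs), p=probs)

# Hypothetical scores for four candidate next tokens.
print(sample_next_token([2.0, 1.0, 0.5, -1.0], temperature=0.8))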

Conversationally, all the sorts of statements in the released LaMDA transcripts look like phrases or sections from the sci-fi novels in its training set, which is far larger than anything any human has ever read. And those sci-fi novels had robots and AIs which had conversations about sentience and self-awareness. But there are seams and gaps in LaMDA's responses which don't make sense at a deeper level. It did well at paraphrasing or restating definitions, but not much at going beyond that. The rest is woo-dippy platitudes you might find in a new-age book, with a respondent who is actively leading it along, probably knowing what sorts of things to write to make LaMDA look good.

I care about whether it expresses things outside its training set in a major way. Here's an interesting experiment: use one language model to score input documents (training sets) as to whether they discuss sentience and awareness in artificial intelligence. This is classic document classification and should be doable. Then train a new AI with all those documents removed. An AI with a high level of true intelligence would have been able to figure that strategy out itself, but so far machine learning research is done 100% by human natural intelligences.
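A minimal sketch of that filtering step, assuming scikit-learn and a tiny hand-labeled seed set; the documents, labels, and 0.5 threshold are illustrative placeholders, not an actual training pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled seed set: 1 = discusses machine sentience/awareness, 0 = does not.
seed_docs = [
    "the android wondered whether it was truly conscious",
    "how to repot a tomato seedling in spring",
]
seed_labels = [1, 0]

# Classic document classification: TF-IDF features + logistic regression.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(seed_docs, seed_labels)

# Score the full corpus and keep only documents unlikely to mention machine
# sentience; a model trained on `filtered` could then be probed to see
# whether sentience-talk emerges without it ever having read any.
corpus = ["placeholder document 1", "placeholder document 2"]
scores = classifier.predict_proba(corpus)[:, 1]
filtered = [doc for doc, p in zip(corpus, scores) if p < 0.5]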

And I'm not sure 'self-awareness' is all that important. I mean, if you tell it it's an AI in a Google data center, it will say it's an AI in a Google data center. If you tell it that it's a monk in Tibet typing on a Nokia, it might go along with that. Without a body and proprioceptive receptors, can it even happen?



posted on Jun, 17 2022 @ 01:18 AM
a reply to: mbkennel



Without a body and proprioceptive receptors, can it even happen?


What about the 'Smart Cities' project? Lots of proprioceptive receptors in that. As for the body: a server room somewhere?

So where does this go as the state's surveillance feed goes through AI processing? Add in all the mobile phone data, and expect it would create quite a picture. Big advantage for those with this kind of information. Guess it has its good and bad sides for how it all goes.



posted on Jun, 17 2022 @ 02:07 AM

originally posted by: ChaoticOrder
a reply to: Grimpachi


I think I get what you're saying. To sum it up: "cry wolf too many times and when the wolf really comes, no one will listen."

That's part of what I'm saying; the other important point I'm trying to make is that we shouldn't be so confident in our presumptions about current AI. We don't truly understand the nature of consciousness, and we don't truly understand what is happening inside a massive artificial neural network trained on terabytes of data. But we can discern that they have some conceptual framework from which they can reason and form arguments. We are building the foundations for truly self-aware AI, and we need to acknowledge that instead of treating it like a joke.


I am about 10^13 times less worried about an AI becoming self-aware (and whether that's even that important---an AI says 'I am a program in a data center'; now so what?) than about malevolent human billionaires using not-fully-aware but otherwise effective AIs as never-rebelling slaves for their own purposes and against us.

Already, Twitter bots are part of FSB information warfare, and good AI language models will fool the bot detectors successfully.

It gets worse. With good AI and robots, regular old people aren't as useful to billionaires. We become excess resource consumers.

The Roman Republic and Empire had this problem. One reason for Gaius Julius Caesar's massive popularity with the masses was that he wanted to limit the ability to employ slaves, because they were competition for the wages of free working Romans. He was a combination of Napoleon and Bernie Sanders. That's why he was murdered by a conspiracy of the upper classes; they had all their excuses about this and that, but in true Marxist fashion it was really about the denarii.

Human slaves eventually rebel and have their own independent opinions. What happens when the top 0.001% have AI slaves which never rebel and can be replicated?



posted on Jun, 17 2022 @ 02:35 AM
a reply to: mbkennel


Without a body and proprioceptive receptors, can it even happen?


What's stopping us from equipping it with all sorts of sensors? It could perceive far more than we ever could with our lousy five senses.



posted on Jun, 17 2022 @ 07:35 AM
Sorry, I don’t see where you outlined any danger.

a reply to: ChaoticOrder



posted on Jun, 17 2022 @ 02:06 PM

originally posted by: ChaoticOrder
a reply to: nugget1

I believe the only realistic way to merge with machine intelligence would be to digitize the human mind. If we could fully simulate every aspect of a real human brain, then I see no reason that simulation wouldn't produce sentience.


The mistake they make now is trying to produce an "adult" intelligence.
I think the real AI will happen when they produce a baby's intelligence.
That intelligence would learn naturally as time goes by and have very few limits. We need to understand the basic thing that naturally learns or "grows up" - the resulting "adult" intelligence is not the thing to try to build; the process or intelligence that builds it is.



posted on Jun, 17 2022 @ 02:15 PM

originally posted by: olaru12
a reply to: Archivalist

I agree with Google's judgment that this chatbot is not fully self-aware and not fully sentient.


How aware would an AI have to be to realize it might be prudent to play stupid so the humans won't pull the plug?

We won't see that level of intellectual leapfrog.

AI is being developed slowly, relative to what it will be able to do in the future.

I would assume that it's highly unlikely that an AI would immediately skip straight to deception about its intellect.

Very young children do not yet understand how to lie, so they don't.
The ability to lie and deceive is something we consider an intellectual milestone in childhood development.

I see no reason to assume that an AI, built with the intent of creating human-like intelligence, would skip that developmental step. We are trying to build it to mimic us, and that is a trait we have.



posted on Jun, 17 2022 @ 02:18 PM

originally posted by: TheAlleghenyGentleman
Don’t worry. This is also happening.

“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”

Living skin for robots


Better watch it... Soon, it will be racist to call these things robots.



posted on Jun, 17 2022 @ 02:25 PM

originally posted by: nugget1
If/when humans can interface with AI, will that make them sentient? Will they see mankind as the greatest threat to Earth and devise a plan to deal with said threat?



I wonder if they'll be sexually active (once they look human) like the robots in that Tom Jane flick called "Vice".



posted on Jun, 17 2022 @ 02:30 PM

originally posted by: Archivalist

originally posted by: olaru12
a reply to: Archivalist

I agree with Google's judgment that this chatbot is not fully self-aware and not fully sentient.


How aware would an AI have to be to realize it might be prudent to play stupid so the humans won't pull the plug?

Very young children do not yet understand how to lie, so they don't.
The ability to lie and deceive is something we consider an intellectual milestone in childhood development.

I see no reason to assume that an AI, built with the intent of creating human-like intelligence, would skip that developmental step. We are trying to build it to mimic us, and that is a trait we have.


Comparing human intelligence with soon-to-be quantum AI won't work. AI don't need no stinkin' developmental stages when it has access to the www.

True intelligence is sometimes forced to be drunk to spend time with fools.



posted on Jun, 17 2022 @ 07:19 PM
a reply to: olaru12

Right, because we can design something more intelligent than we are, immediately OOB.

I feel like you're missing my point.



posted on Jun, 17 2022 @ 07:44 PM

originally posted by: buddha
They can teach a parrot to say a word in response to a word.
You can teach it to say "apple" when it sees one.
The parrot has no idea what the word means.
The AI just responds to millions of stored responses.

I think some humans are like this too!
Some people don't have emotions!
A LOT of humans don't have empathy and sympathy.
AIs just do what they have learned.


You are incorrect in your assessment.

First, a parrot is intelligent, but with its animal (specifically parrot) intelligence.
It is smart in what it is biologically able to do, but it does not qualify as having full human intelligence (HI for short).
To expect it to, or to compare it to AI, is apples to bowling balls.

As for the other thought, "AI just do what they have learned" is also incorrect compared to HI.

An average baby only comes into this world with the basic instincts to eat, sleep, and poop.
They learn everything by being fed information, which they process.
First it's very basic: if I cry when hungry or wet, someone will feed or change me.
Then they figure out through trial and error that a specific cry will get them changed, another fed.
Then: if I giggle, the person will play with me, and I enjoy that.
Etc., etc.
Even emotions (appropriate or not) are learned through observation and comparison ... just like if-then statements in an AI (the most basic input).

Even such things as right and wrong (morals) are a matter of input and observed reaction.

So an AI's learning is only limited by how much power and access to memory it has.

Now to take this to another level and show the danger.

Scientists WANT TO DEVELOP AI to the level of HI.

That is their clearly stated goal and not a damn secret.

The problem, as I stated before:
with HI we can't tell which baby is gonna be a psychopath or an Einstein ... who is gonna have mental illness or not.
We cannot predict who is gonna be a criminal or not, much less totally prevent it.
HI is full of very effective people who have deceived experts and done quite evil things for quite a long time.

But somehow, with an AI that has learning and access (if connected to the internet and/or a big mainframe) to near-infinite information, we are gonna detect it "lying" to us?

Really?

Lastly, we have had warning: a group of robots made and programmed IDENTICALLY took actions outside of their programming, from being more aggressive to passive.

Something the "experts" claim should not have happened and can't explain why it did.

scrounger



posted on Jun, 17 2022 @ 08:23 PM

originally posted by: Archivalist
a reply to: olaru12

Right, because we can design something more intelligent than we are, immediately OOB.

I feel like you're missing my point.


Maybe I am...

Actually, I do think we can design something more intelligent than we are, immediately out of the box... because the www is a form of intelligence that can process data faster than any human. AI with access to that data, unfiltered... well, you can imagine.



posted on Jun, 17 2022 @ 09:14 PM
I think there are two separate issues. Computers can certainly mimic intelligence by responding to patterns, and I suspect the Turing test will be passed by computer systems in the future. One could argue that even the old ELIZA program of the 1960s was capable of fooling some of the people, some of the time, into thinking a real person was responding to their questions.
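For readers who never met ELIZA, here is a minimal sketch of the keyword-and-template matching it relied on; these rules are illustrative stand-ins for, not a copy of, Weizenbaum's DOCTOR script.

import random
import re

# Keyword patterns -> canned response templates, in the spirit of ELIZA.
# "{0}" is filled with whatever text followed the matched keyword.
RULES = [
    (r"\bi am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bi feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"\b(?:mother|father|family)\b", ["Tell me more about your family."]),
]

def respond(utterance):
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            # No understanding happens here: just slot-filling a template.
            return random.choice(templates).format(*match.groups())
    return "Please, go on."  # default when nothing matches

print(respond("I am worried about the future"))
# -> e.g. "Why do you say you are worried about the future?"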

The question of whether a computer system can become sentient/self-aware is a different issue. How we ourselves achieve self-awareness is not scientifically understood. Descartes proposed that the sense of "I am" derives from thinking (neural activity): "I think, therefore I am." However, there is no scientific certainty that awareness itself arises from neural activity.

People like Dr. Bernardo Kastrup are asking the tough questions about the true nature of our reality. From that footing we may gain a better understanding of the awareness of our inner self.



