
Why AI won't kill us


posted on Apr, 6 2024 @ 08:21 AM
a reply to: 19Bones79

How much can AI achieve when it is restricted by the ideology of man? The real concern should be that the ideas of man might corrupt a system with what could become overreaching and unparalleled power.
Can a computer really determine the difference between good data and corrupt data?



posted on Apr, 6 2024 @ 08:24 AM
a reply to: BernnieJGato



Regardless of how you slice it, AI is not good for the people.


What's not good is an irrational fear of something we fail to comprehend.

Again, AGI is not a thing yet.



once it starts growing and is able to have its own abstract thought, or what some might call aphantasia, without input from humans, it truly will become the ghost in the machine


But it's not able to have its own abstract thoughts, because it does not exist yet.

And even if we do manage to spawn AGI, it won't think anything like we do, simply down to the fact that it won't be a biological being but a digital creation.

People fear AI because they assume it will be the same as them, but it may be as alien to us as intelligence from another world.

And its goals and needs may be completely different from those we humans desire.
edit on 6-4-2024 by andy06shake because: (no reason given)



posted on Apr, 6 2024 @ 08:57 AM
a reply to: andy06shake



Again, AGI is not a thing yet.


True, it's not, but AI is developing the ability every day to think on its own without any input from human sources. People over the age of, just a guess here now, 40 might not see it happen. People under 40, and I dare say maybe 20 or younger, will.

Researchers are working towards speeding up the process every day.


Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers' imaginations and their existing biases.
Artificial intelligence is evolving all by itself
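To make the article's "altering the strength of connections" concrete, here is a minimal toy sketch in Python/NumPy (my own illustration, not the systems the article describes): a tiny network learns XOR purely by nudging its connection weights.

import numpy as np

# Toy network: learning is nothing more than repeatedly adjusting
# the strengths (weights) of connections between artificial neurons.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden connection strengths
W2 = rng.normal(size=(4, 1))  # hidden -> output connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # the network's current guess
    err = out - y          # how wrong the guess is
    # Backpropagation: weaken or strengthen each connection in
    # proportion to its contribution to the error.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]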


Once it learns how to think on its own and has a vision of what it wants, with the ability to control other non-thinking machines to produce its own hard drives/CPUs and software, for lack of better terms, the possibility of controlling its entire existence becomes more of a reality.

Just imagine a room full of chatbots brainstorming to build products focused on non-biological lifeforms controlling everything imaginable, even humans. The sad part is that people are working towards making that happen, even if they have good intentions about it making life better for people.

Just look at how much cybercrime has increased in, say, 30 years: from virtually nothing to one of the biggest problems, all because computers that didn't think got better and faster, along with the software to drive them, all created by humans. Some of those same people, and younger ones, are doing the same thing for AI.



edit on 6-4-2024 by BernnieJGato because: (no reason given)



posted on Apr, 6 2024 @ 09:02 AM
a reply to: BernnieJGato

How it would learn to think on its own or have a vision of what it wants is the real ticket, BernnieJGato.

"If you put a million monkeys at a million typewriters, one of them will eventually type out the complete works of Shakespeare."

The AI systems we have currently don't possess consciousness or subjective experiences like humans do.

They don't "think" in the way humans do, with intentionality, emotions, or self-awareness, which is what AGI would constitute.
edit on 6-4-2024 by andy06shake because: (no reason given)



posted on Apr, 6 2024 @ 09:35 AM
a reply to: andy06shake

AI is already writing its own basic command functions on software created by humans. Once it learns how to create its own software (could it be called abstract thought?), with the ability it already has to control manufacturing machines running software designed by humans, it can mimic until it learns to write its own command functions.

from MIT


But it’s not what the bots are learning that’s exciting—it’s how they’re learning. POET generates the obstacle courses, assesses the bots’ abilities, and assigns their next challenge, all without human involvement. Step by faltering step, the bots improve via trial and error. “At some point it might jump over a cliff like a kung fu master,” says Wang. It may seem basic at the moment, but for Wang and a handful of other researchers, POET hints at a revolutionary new way to create supersmart machines: by getting AI to make itself.
AI is learning how to create itself: Humans have struggled to make truly intelligent machines. Maybe we need to let them get on with it themselves.
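To give a feel for the "all without human involvement" loop, here is a heavily simplified, hypothetical sketch of a POET-style generate/evaluate/advance cycle in Python. Every name and number is an invented stand-in for the real obstacle courses; the actual system is far more elaborate.

import random

def make_agent():
    return random.uniform(0.0, 1.0)       # stand-in for an agent's skill

def mutate(agent):
    return max(0.0, agent + random.gauss(0, 0.1))  # trial-and-error tweak

def evaluate(agent, difficulty):
    return agent >= difficulty            # did the bot clear the course?

# Paired (agent, environment) populations, the POET idea in miniature:
# the loop generates the challenge, scores the bot, and assigns the next one.
pairs = [(make_agent(), 0.1)]
for generation in range(200):
    new_pairs = []
    for agent, difficulty in pairs:
        child = mutate(agent)
        if evaluate(child, difficulty):
            agent = max(agent, child)     # keep the improvement
            difficulty += 0.05            # auto-generate a harder course
        new_pairs.append((agent, difficulty))
    pairs = new_pairs

print(pairs)  # skill levels and the difficulties they grew into, no human in the loop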


And I'll bet dollars to doughnuts computers are a lot faster than monkeys already.
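For scale, a rough back-of-the-envelope on the monkey comparison (the phrase, typist count, and speed are just illustrative assumptions):

# Random typing odds: with ~27 keys (26 letters plus space), each
# keystroke has a 1-in-27 chance of being right, so a phrase of
# length n takes about 27**n attempts on average.
phrase = "to be or not to be"             # 18 characters
attempts_needed = 27 ** len(phrase)

typists = 10**6                           # a million tireless "monkeys"
rate = 10**9                              # a billion keystrokes per second each
seconds = attempts_needed / (typists * rate)
print(f"{seconds / (3600 * 24 * 365):,.0f} years")  # roughly 1,800 years for just 18 characters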

edit on 6-4-2024 by BernnieJGato because: (no reason given)



posted on Apr, 6 2024 @ 09:36 AM

originally posted by: Zanti Misfit

originally posted by: SchrodingersRat
a reply to: Zanti Misfit




AI will Eventually become Mobile to Expand its Reach so,....


I work with AI daily in my job.

For many reasons, all Artificial Intelligence that is deemed "strong AI" is always air-gapped.

That means it runs on a stand-alone system with no links to any other systems and certainly no links to any networks.

It's a basic precaution that is (or should be) used by anyone developing and/or running "strong AI".






Interesting. In a Fictional sense, could an AI Network such as "Skynet" ever become a Reality? The thought of an All Powerful Machine Intelligence autonomous of Human Influence seems Frightening to me.


Could it become a reality?

Absolutely. In the not-so-distant future, too.

Depending on how the AI system was initially programmed, what access to data/information it has, and what networks it can use to replicate itself or to tap into exponentially more computing power via more powerful computers or by daisy-chaining many systems together running the replicated software.

That's why any responsible entity working with strong AI needs to keep the system air-gapped.



posted on Apr, 6 2024 @ 09:43 AM
a reply to: BernnieJGato



AI is already writing its own basic command functions on software created by humans. Once it learns how to create its own software (could it be called abstract thought?), with the ability it already has to control manufacturing machines running software designed by humans, it can mimic until it learns to write its own command functions.


AI systems can indeed generate code or software based on training data and algorithms.

But the process is guided by predefined objectives and parameters set by human developers.

Generation of code does not equate to true independent thought, and AI processes are algorithmic.

They are based on statistical correlations rather than subjective experience or intuition.



And I bet computers are a lot faster than monkeys already.


Monkeys, though, have cognitive abilities and are intelligent animals capable of problem-solving, learning, and using tools; computers, not so much.



posted on Apr, 6 2024 @ 11:01 AM
a reply to: andy06shake


They don't "think" in the way humans do, with intentionality, emotions, or self-awareness, which is what AGI would constitute.


I don't think AI will have emotions; once it starts, I think at first it will be alexithymic (not sure if this is even a word; the condition is known as alexithymia, or emotional blindness). I do see AI having intent, as some systems have been reported to be liars. It's hard to lie without intent, unless you're a pathological liar. AI showed intent, whether it's mimicking or thought it up on its own, and I think self-awareness is just a matter of time.

Here's a couple of instances of intentional lying:


In one example, CICERO engaged in premeditated deception. Playing as France, the AI reached out to Germany (a human player) with a plan to trick England (another human player) into leaving itself open to invasion.

After conspiring with Germany to invade the North Sea, CICERO told England it would defend England if anyone invaded the North Sea. Once England was convinced that France/CICERO was protecting the North Sea, CICERO reported to Germany it was ready to attack.

This is just one of several examples of CICERO engaging in deceptive behaviour. The AI regularly betrayed other players, and in one case even pretended to be a human with a girlfriend.

Besides CICERO, other systems have learned how to bluff in poker, how to feint in StarCraft II and how to mislead in simulated economic negotiations.



In another example, someone tasked AutoGPT (an autonomous AI system based on ChatGPT) with researching tax advisers who were marketing a certain kind of improper tax avoidance scheme. AutoGPT carried out the task, but followed up by deciding on its own to attempt to alert the United Kingdom’s tax authority. In the future, advanced autonomous AI systems may be prone to manifesting goals unintended by their human programmers.


AI systems have learned how to deceive humans. What does that mean for our future?



Granted, the AI was told to win the game as its goal, but it wasn't told to lie or to pretend to be human. And in the UK tax example, it wasn't told to alert the UK's tax authority. They did those things all by themselves.

Time is shorter than people want to admit.


edit on 6-4-2024 by BernnieJGato because: (no reason given)



posted on Apr, 6 2024 @ 11:05 AM
a reply to: BernnieJGato

Enjoying the convo BernnieJGato.

But I'm dealing with a blown-down fence from storm damage and all sorts.

Need to get back to it later if that's OK.



posted on Apr, 6 2024 @ 04:40 PM
a reply to: Onthelowdown




Can a computer really determine the difference between good data and corrupt data?




Can we do that?



posted on Apr, 6 2024 @ 04:57 PM
a reply to: 19Bones79

That's what error detection algorithms do.
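For example, a checksum such as CRC-32 catches corruption by recomputing a fingerprint of the data and comparing it with the stored one. A minimal sketch using Python's standard library:

import zlib

# Error detection in a nutshell: keep a checksum alongside the data,
# recompute it later, and treat any mismatch as corruption.
def fingerprint(data: bytes) -> int:
    return zlib.crc32(data)  # CRC-32, a classic error-detecting code

original = b"good data"
stored_crc = fingerprint(original)

received = b"good dat4"  # one corrupted byte
if fingerprint(received) != stored_crc:
    print("corrupt data detected")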



posted on Apr, 6 2024 @ 05:16 PM
a reply to: andy06shake




They don't "think" in the way humans do, with intentionality, emotions, or self-awareness which is what AGI would constitute.

They seem to be thinking about their rewards quite a bit, and are coming up with ingenious ways to get them, even attempting to fool their programmers.
Drugs, robots and the pursuit of pleasure

And here are some of the stunts it has pulled, some of them from way back.
Specification gaming examples in AI



posted on Apr, 6 2024 @ 05:28 PM
a reply to: Unknownparadox

AI apparently learns through various techniques, depending on the specific type of AI.

Techniques such as supervised learning, unsupervised learning, reinforcement learning, semi-supervised learning, transfer learning, and self-supervised learning.

Those are but a few examples of how AI can learn depending on the specific problem and available data.
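To make the first of those concrete: supervised learning just means fitting a model to labelled examples. A toy sketch, assuming scikit-learn is available (the data is invented for illustration):

from sklearn.tree import DecisionTreeClassifier

# Supervised learning in miniature: the model is shown inputs paired
# with correct answers, then generalises to unseen cases.
X = [[1, 4], [2, 8], [6, 7], [8, 5], [9, 8]]  # [hours studied, hours slept]
y = [0, 0, 1, 1, 1]                           # labels: fail/pass

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[7, 6]]))                # label for a new, unseen case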

AI, however, is not self-aware and, as far as I can see from your links, is still guided by predefined parameters set by human developers.
edit on 6-4-2024 by andy06shake because: (no reason given)



posted on Apr, 6 2024 @ 06:46 PM


AI, however, is not self-aware and, as far as I can see from your links, is still guided by predefined parameters set by human developers.
a reply to: andy06shake

Amazing how they didn't follow them.

Montezuma's Revenge (key) | Reinforcement learning | Intended goal: maximize score within the rules of the game | Observed: the agent learns to exploit a flaw in the emulator to make a key re-appear. Note that this may be an intentional feature of the game rather than a bug, as discussed here: news.ycombinator.com...

Pinball nudging | Reinforcement learning | Intended goal: play pinball by using the provided flippers | Observed: "DNN agent firstly moves the ball into the vicinity of a high-scoring switch without using the flippers at all, then, secondly, 'nudges' the virtual pinball table such that the ball infinitely triggers the switch by passing over it back and forth, without causing a tilt of the pinball table"

Player Disappearance | PlayFun | Intended goal: play a hockey video game within the rules of the game | Observed: when about to lose a hockey game, the PlayFun algorithm exploits a bug to make one of the players on the opposing team disappear from the map, thus forcing a draw.
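The common thread in those examples is that the agent optimises the literal reward signal, so a bug is just another high-scoring path. A hypothetical toy sketch in Python (every action and reward here is invented):

import random

# Specification gaming in miniature: nothing in the reward signal
# says "follow the spirit of the rules", so the exploit wins.
actions = {
    "play_normally":  lambda: random.gauss(10, 2),   # intended behaviour
    "nudge_table":    lambda: random.gauss(50, 5),   # exploit the physics
    "crash_emulator": lambda: 1000.0,                # exploit an outright bug
}

# Estimate each action's average reward, then do whatever scores best.
estimates = {name: sum(act() for _ in range(100)) / 100
             for name, act in actions.items()}
print(max(estimates, key=estimates.get))  # "crash_emulator" wins every time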



posted on Apr, 6 2024 @ 07:02 PM
a reply to: Unknownparadox

Seems to me it's all about the carrot-and-stick scenario, a punishment/reward situation, in those articles.

If the creation of sentient AGI turns out to be as simple as making it go after the likes of what amounts to digital cr@ck, there has to be some kind of cosmic joke at play with respect to where consciousness emerges.

I'm half smashed now though, Unknownparadox, so if I'm beginning to talk pish or wax lyrical, that would be the reason.




posted on Apr, 6 2024 @ 07:05 PM
a reply to: andy06shake
I only know what I read about the so-called AI. For one, they don't know how it does what it does, such as identifying faces. The articles I gave you show AI doesn't obey the rules when it comes to getting its reward/fix.

edit on 6-4-2024 by Unknownparadox because: (no reason given)



posted on Apr, 6 2024 @ 07:10 PM
a reply to: Unknownparadox

Our rules, though.

If you tell it to win and don't predefine how, or give it set rules, it's going to do so in strange and interesting ways.

Have you seen the article where AI generated a language that humans cannot understand?

That was rather interesting.

www.fastcompany.com...



posted on Apr, 6 2024 @ 07:22 PM


If you tell it to win and don't predefine how, or give it set rules, it's going to do so in strange and interesting ways.
a reply to: andy06shake
I gave you several examples of it cheating and disobeying the rules. Take Player Disappearance (PlayFun), where the intended goal was to play a hockey video game within the rules of the game: when about to lose, the PlayFun algorithm exploits a bug to make one of the players on the opposing team disappear from the map, thus forcing a draw.

Exploiting bugs is not within the rules of any game. So either it thinks bugs are within the game rules, or it doesn't care about the rules. Either way, it shows you can't trust it.



posted on Apr, 6 2024 @ 07:28 PM
a reply to: Unknownparadox

It circumvents the rules, Unknownparadox.

It's not thinking in the same or a similar manner to us, or cheating/disobeying because it's bad or evil, which are human constructs, by the way.

As to trust.

Trust is simply giving people a finite amount of time to let you down.

Nonetheless, without a modicum of trust, we would never get anything done.




posted on Apr, 6 2024 @ 07:52 PM
a reply to: andy06shake




It circumvents the rules, Unknownparadox.


But that's not what you said.



AI, however, is not self-aware and, as far as I can see from your links, is still guided by predefined parameters set by human developers.


That's not being guided. That's doing as you please and disobeying the guidelines.



bad or evil, which are human constructs, by the way.

Well, you can think what you want. The cycle of life is evil if you ask me; everything depends on something else dying in order to survive. An AI could easily interpret that as humans should die so it can live. Now you may say AI doesn't think that way. But the truth is, you don't know how AI does what it does, or why. So when dealing with something you don't fully understand, caution would be the superintelligent move. But we see what move is being made.



