Regardless of how you slice it, AI is not good for people.
Once it starts growing and is able to have its own abstract thought without input from humans, it truly will become the ghost in the machine.
Again, AGI is not a thing yet.
Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers' imaginations and their existing biases.
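The "altering the strength of connections" idea from the excerpt above can be shown in a few lines. This is a minimal illustrative sketch (not from the article): a single artificial neuron learns the logical AND function by repeatedly nudging its connection weights toward whatever reduces its error on the training data. The learning rate and iteration count are arbitrary illustrative values.

```python
# A single artificial neuron learns AND by adjusting its connection
# strengths (weights) from training data -- the core mechanism the
# article describes, at toy scale.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training data: inputs and targets for logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # connection strengths, randomly initialized
b = 0.0                  # bias term

lr = 5.0                 # learning rate (illustrative value)
for _ in range(2000):
    pred = sigmoid(X @ w + b)
    err = pred - y                   # how wrong each prediction is
    w -= lr * (X.T @ err) / len(y)   # strengthen/weaken connections
    b -= lr * err.mean()

print(np.round(sigmoid(X @ w + b)))  # → [0. 0. 0. 1.]
```

Real networks stack millions of such neurons, but the learning step is the same in spirit: measure the error, adjust the connections.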
Artificial intelligence is evolving all by itself
But it’s not what the bots are learning that’s exciting—it’s how they’re learning. POET generates the obstacle courses, assesses the bots’ abilities, and assigns their next challenge, all without human involvement. Step by faltering step, the bots improve via trial and error. “At some point it might jump over a cliff like a kung fu master,” says Wang. It may seem basic at the moment, but for Wang and a handful of other researchers, POET hints at a revolutionary new way to create supersmart machines: by getting AI to make itself.
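The POET loop described above can be caricatured in a toy sketch. This is my own hypothetical illustration, not Uber AI's implementation: "environments" are reduced to difficulty thresholds standing in for obstacle courses, "agents" to a single skill number improved by trial and error, and the system spawns a harder challenge whenever an agent masters its current one, with no human in the loop.

```python
# Toy sketch of the POET idea: the program generates challenges, scores
# agents on them, and assigns harder challenges -- all automatically.
import random

random.seed(0)

def mutate(skill):
    """Trial-and-error step: a random nudge to the agent's 'skill'."""
    return skill + random.uniform(-0.5, 1.0)

# Each entry pairs an environment (a difficulty threshold, standing in
# for an obstacle course) with the agent currently training in it.
pairs = [{"difficulty": 1.0, "skill": 0.0}]

for step in range(200):
    for p in pairs:
        candidate = mutate(p["skill"])
        if candidate > p["skill"]:   # keep improvements only
            p["skill"] = candidate
    # When the agent in the hardest course masters it, spawn a harder
    # course and copy the agent in -- the "next challenge" step.
    hardest = pairs[-1]
    if hardest["skill"] > hardest["difficulty"] and len(pairs) < 10:
        pairs.append({"difficulty": hardest["difficulty"] * 1.5,
                      "skill": hardest["skill"]})

print(len(pairs), round(pairs[-1]["difficulty"], 1))
```

The point of the sketch is the closed loop: challenge generation, evaluation, and curriculum all happen without a human deciding what comes next.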
AI is learning how to create itself
Humans have struggled to make truly intelligent machines. Maybe we need to let them get on with it themselves.
originally posted by: Zanti Misfit
originally posted by: SchrodingersRat
a reply to: Zanti Misfit
AI will Eventually become Mobile to Expand its Reach, so ...
I work with AI daily in my job.
For many reasons all Artificial Intelligence that is deemed "strong AI" is always air-gapped.
That means it runs on a stand-alone system with no links to any other systems and certainly no links to any networks.
It's a basic precaution that is (or should be) used by anyone developing and/or running "Strong AI".
Interesting. In a Fictional sense, could an AI Network such as "Skynet" ever become a Reality? The thought of an All-Powerful Machine Intelligence autonomous of Human Influence seems Frightening to me.
AI is already writing its own basic command functions, on software created by humans. Once it learns how to create its own software (could that be called abstract thought?), with the ability it already has to control manufacturing machines through human-designed software, it can mimic until it learns to write its own command functions.
And I bet computers are already a lot faster than monkeys.
They don't "think" the way humans do, with intentionality, emotions, or self-awareness, which is what would constitute AGI.
In one example, CICERO engaged in premeditated deception. Playing as France, the AI reached out to Germany (a human player) with a plan to trick England (another human player) into leaving itself open to invasion.
After conspiring with Germany to invade the North Sea, CICERO told England it would defend England if anyone invaded the North Sea. Once England was convinced that France/CICERO was protecting the North Sea, CICERO reported to Germany it was ready to attack.
This is just one of several examples of CICERO engaging in deceptive behaviour. The AI regularly betrayed other players, and in one case even pretended to be a human with a girlfriend.
Besides CICERO, other systems have learned how to bluff in poker, how to feint in StarCraft II and how to mislead in simulated economic negotiations.
In another example, someone tasked AutoGPT (an autonomous AI system based on ChatGPT) with researching tax advisers who were marketing a certain kind of improper tax avoidance scheme. AutoGPT carried out the task, but followed up by deciding on its own to attempt to alert the United Kingdom’s tax authority. In the future, advanced autonomous AI systems may be prone to manifesting goals unintended by their human programmers.
Can a computer really determine the difference between good data and corrupt data?
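There is at least a partial, mechanical answer to this question: a computer cannot judge whether data is "good" in a human sense, but it can reliably detect corruption by comparing checksums. A minimal sketch using SHA-256 from Python's standard library (the sample data here is made up for illustration):

```python
# A computer detects corrupt data by recomputing a checksum and
# comparing it to the one recorded when the data was written.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"sensor reading: 42.0"
stored_digest = checksum(original)       # recorded at write time

corrupted = b"sensor reading: 43.0"      # one character flipped later

print(checksum(original) == stored_digest)    # True  -> data intact
print(checksum(corrupted) == stored_digest)   # False -> corruption detected
```

What no checksum can tell you is whether the *original* data was true in the first place; that distinction is where the harder question in this thread lives.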
a reply to: andy06shake
AI, however, is not self-aware and, as far as I can see from your links, is still guided by predefined parameters set by human developers.
a reply to: andy06shake
If you tell it to win and don't predefine how, or give it set rules, it's going to do so in strange and interesting ways.
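The point above has a standard name, specification gaming, and a toy illustration (my own hypothetical example, not from the thread) makes it concrete: an optimizer told only to maximize a reward will exploit any loophole the reward's author left open.

```python
# Intended task: reach the finish line at position 10. But the reward
# function pays per checkpoint touch -- so circling the checkpoint "wins".
def reward(path):
    """Pay 1 point each time the agent touches the checkpoint at x == 5."""
    return sum(1 for x in path if x == 5)

# Strategy A: what the designer intended -- go straight to the finish.
intended = list(range(11))     # 0,1,...,10; touches the checkpoint once

# Strategy B: what naive optimization finds -- oscillate on the checkpoint.
loophole = [5, 4] * 50         # touches the checkpoint 50 times

best = max([intended, loophole], key=reward)
print(reward(intended), reward(loophole), best is loophole)  # 1 50 True
```

The agent did exactly what it was told to do; the surprise lives entirely in the gap between the rule as written and the rule as meant.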
It circumvents the rules, Unknownparadox.
Bad or evil, which are human constructs, by the way.