originally posted by: intrptr
a reply to: TerryMcGuire
I've read a lot of sci-fi, too. Soared with eagles in Silicon Valley's early days, too.
The thing about machines is they are just that: they just run programs. No matter how intelligent a machine may appear to us, it is only executing the instructions people placed into its code.
Machine language is essentially a mass of number crunching, ones and zeros in endless streams; there is no intelligence there. There is only a difference engine selecting among choices presented to it. A real simple analogy is a light switch turning on and off billions of times a second.
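To make that analogy concrete, here is a minimal Python sketch of pure branch selection; everything in it is invented for illustration, and the point is only that the program picks among options its author enumerated:

```python
# A minimal sketch of the "difference engine" point above: the machine
# only selects among options its programmer enumerated. All names here
# are invented for illustration.

def select(choice_bit: int, on_action, off_action):
    """Pure branch selection: the 'switch' flips; nothing understands why."""
    return on_action() if choice_bit else off_action()

# The program's whole "behavior" is the table of branches handed to it;
# there is no state in which it models itself making the choice.
result = select(1, lambda: "light on", lambda: "light off")
print(result)  # -> light on
```

However many billions of times a second that branch is taken, nothing new enters the system beyond what the programmer put there.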
It will never know that it knows.
It will never be allowed to harm its maker.
A good example of this is the military application: warheads in missiles are guided to their targets autonomously, but a friend-or-foe system is in place to prevent "friendly" casualties. The self-test routine the warhead runs before launch prevents any "mistakes".
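A purely hypothetical Python sketch of that kind of hard-coded gate follows; it reflects no real weapons or IFF system, and every code, name, and check in it is invented:

```python
# Purely hypothetical sketch of a hard-coded pre-launch gate. This is
# NOT a real IFF or weapons system; every code, name, and check below
# is invented for illustration.

FRIENDLY_CODES = {"ALPHA-7", "BRAVO-2"}  # invented transponder codes

def self_test_ok(diagnostics: dict) -> bool:
    """Pre-launch self-test: every subsystem must report nominal."""
    return all(status == "nominal" for status in diagnostics.values())

def authorize_launch(target_code: str, diagnostics: dict) -> bool:
    # The guidance software never decides these rules; they are fixed
    # by the people who wrote the code, which is the whole point.
    if target_code in FRIENDLY_CODES:
        return False  # friend-or-foe check: refuse friendly targets
    return self_test_ok(diagnostics)

print(authorize_launch("ALPHA-7", {"seeker": "nominal", "fuze": "nominal"}))  # False
print(authorize_launch("XRAY-9", {"seeker": "nominal", "fuze": "nominal"}))   # True
```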
I don't care how AI a computer seems to be, it will never be allowed off that leash.
But I can play along, too. It gets old trying to convince people that streaming ones and zeros aren't 'alive' or 'sentient'. "When are you going to let me out of this box?" -- Proteus
a reply to: dominicus
It can't "hack" what it doesn't have access to.
"Capitalist forces will drive incentive to produce ruthless maximisation processes. With this there is the temptation to develop risky things," Shanahan said, giving the example of companies or governments using AGI to subvert markets, rig elections or create new automated and potentially uncontrollable military technologies.
"Within the military sphere governments will build these things just in case the others do it, so it's a very difficult process to stop," he said.
A University of Florida scientist has grown a living "brain" that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.
"If engineers at Clemson University and the Georgia Institute of Technology have their way, the power grid of tomorrow will be governed by a network of living neurons, grown in a Petri dish, and attached to a computer. For now, the researchers have successfully used a simulation of the power grid to “teach” the living neurons, and then used their new-found mastery of power generation and transmission to control electric generators attached to a real power system.
It will never know that it knows.
I once cradled my thoughts in line with Asimov's three laws, though I no longer hold to the motivation for the general good as I once did. I find it reasonable to assume that should the study of AI move beyond the binary scope now in development (such as this memristor), trust in the developers may well be a futile endeavor.
originally posted by: intrptr
a reply to: dominicus
It will never know that it knows…
Yet most living species on the planet do not possess it. Of the hundreds of species tested so far, only ten have been proven to have any measurable degree of self-awareness. These are:
Humans, Orangutans, Chimpanzees, Gorillas, Bottlenose Dolphins, Elephants, Orcas, Bonobos, Rhesus Macaques, and European Magpies.
That's ridiculous. You are thinking in terms of some very huge limits. It's just a matter of time before A.I. becomes self-aware, and it may need neurons of some sort to do so.
originally posted by: ketsuko
We are pretty limited by our organic nature. I think any AI would understand this pretty quickly, and as soon as it had surpassed us it would take off for the stars and all those places it knew we couldn't go.
It would be a long, long time before there was any need for the two of us to come into conflict.
And by then, our AI would be so far beyond us that there would be no question of any conflict.