
What will A.I.'s version of the Technological Singularity be? Here's one idea


posted on Feb, 28 2015 @ 07:41 AM
a reply to: dominicus
If it happens, not when. Hopefully, the Holy Spirit prevents such a thing.



posted on Feb, 28 2015 @ 11:55 AM

originally posted by: intrptr
a reply to: TerryMcGuire

I've read a lot of sci-fi, too. Soared with eagles in Silicon Valley's early days, too.

The thing about machines is that they are just that: machines. They just run programs. No matter how intelligent it may appear to us, it is only using the programs that people placed into its code.

Machine language is essentially mass number crunching: endless streams of ones and zeros, and there is no intelligence there. There is only a difference engine selecting among the choices presented to it. A really simple analogy is a light switch being flipped on and off billions of times a second.

It will never know that it knows.

It will never be allowed to harm its maker.

A good example of this is the military application: warheads in missiles are guided to their targets autonomously, but a friend-or-foe system is in place to prevent "friendly" casualties. The self-test routine the warhead runs before launch prevents any "mistakes".

I don't care how AI a computer seems to be, it will never be allowed off that leash.
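
For what it's worth, the "difference engine" and "leash" points can be sketched in a few lines. Everything below is hypothetical and invented purely for illustration (the function names, the thresholds, the friendly-ID list); it is not drawn from any real guidance or IFF system. The only point is that the program selects among branches its authors wrote, and a hard-coded check gates the action.

```python
# Purely illustrative sketch: the program only ever picks among branches a
# human wrote, and a hard-coded check ("the leash") gates the action.
# All names and values here are hypothetical.

def passes_self_test(target_id, friendly_ids):
    """Hypothetical pre-launch check: refuse any target on the friendly list."""
    return target_id not in friendly_ids

def select_action(sensor_reading):
    """The 'difference engine': it only selects among pre-programmed choices."""
    if sensor_reading > 0.8:
        return "engage"
    elif sensor_reading > 0.3:
        return "track"
    return "hold"

friendly_ids = {"alpha", "bravo"}

if passes_self_test("charlie", friendly_ids):
    print(select_action(0.9))   # "engage" -- but only because that branch was written in
else:
    print("abort")              # the gate itself cannot be reasoned around by the code
```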

But I can play along, too. It gets old trying to convince people that streaming ones and zeros aren't 'alive' or 'sentient'. "When are you going to let me out of this box?" -- Proteus


a reply to: dominicus

It can't "hack" what it doesn't have access to.

I think you are limiting what will happen by not realizing that A.I. will merge with brain neurons, which allow self-awareness/self-consciousness. The risks are there, and it's inevitable that it is going to happen.

Capitalist Forces could create "Risky A.I."



"Capitalist forces will drive incentive to produce ruthless maximisation processes. With this there is the temptation to develop risky things," Shanahan said, giving the example of companies or governments using AGI to subvert markets, rig elections or create new automated and potentially uncontrollable military technologies.

"Within the military sphere governments will build these things just in case the others do it, so it's a very difficult process to stop," he said.


Florida Scientist grows Rat Brain in Petri Dish which learns to Fly a Flight Simulator



A University of Florida scientist has grown a living "brain" that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.

Supposedly, the neurons themselves started connecting to each other in the Petri dish, forming a single network which, after 11 days, became "conscious."

The Power Grid of the Future will be Controlled by Brains in a Petri Dish



"If engineers at Clemson University and the Georgia Institute of Technology have their way, the power grid of tomorrow will be governed by a network of living neurons, grown in a Petri dish, and attached to a computer. For now, the researchers have successfully used a simulation of the power grid to “teach” the living neurons, and then used their new-found mastery of power generation and transmission to control electric generators attached to a real power system.


A.I. will become self-aware by merging with neurons. It's already happening, and not much can be done to stop it. This is human evolution taking its course.



posted on Feb, 28 2015 @ 12:54 PM
a reply to: intrptr


It will never know that it knows.

And this is the crux of it, isn't it? You sound very sure, and I have little reason to devalue your position, as I have no hands-on experience with advancing machine development beyond that of a simple user. Still, I don't seem to hold the faith that you express in the humans who will be guiding this development.

I once cradled my thoughts in line with Asimov's three laws, though I no longer hold to the motivation for the general good as I once did, and I find it reasonable enough to assume that, should the study of AI go beyond the binary scope now in development (such as the memristor), trust in the developers may well be a futile endeavor.

Mostly, though, intrptr, I engage in these oftentimes wild speculations because free will and consciousness itself have always held my attention. And now, as more and more of our neurological studies point to human consciousness itself being little more than unconscious pattern-following behavior, I turn to juxtaposing possible AI free will with advancing studies into the nature, or lack, of our own, to quench my thirst for freedom.



posted on Feb, 28 2015 @ 02:05 PM
a reply to: dominicus

It will never know that it knows…



posted on Feb, 28 2015 @ 02:32 PM
a reply to: TerryMcGuire


I once cradled my thoughts in line with Asimov's three laws, though I no longer hold to the motivation for the general good as I once did, and I find it reasonable enough to assume that, should the study of AI go beyond the binary scope now in development (such as the memristor), trust in the developers may well be a futile endeavor.

Blade Runner supposed that these sentients would be soldiers first, remember? Like the ones the military is designing now. They are intended to hurt others.

Link to robot dogs

Capable is different from sentient. But they are intelligently programmed and do act for themselves… within limits.



posted on Feb, 28 2015 @ 03:49 PM

originally posted by: intrptr
a reply to: dominicus

It will never know that it knows…


That's ridiculous. You are thinking in terms of some very huge limits. It's just a matter of time before A.I. becomes self-aware, and it may need neurons of some sort to do so. Some scientists are theorizing that a collection of neurons is basically a consciousness transistor (see Orch-OR).


10 Animals with Self Awareness


Yet most living species on the planet do not possess it. Of the hundreds of animals tested so far, only 10 animals (to date) have been proven to have any measurable degree of self awareness. These are:

Humans, Orangutans, Chimpanzees, Gorillas, Bottlenose Dolphins, Elephants, Orcas, Bonobos, Rhesus Macaques, European Magpies


Take brain stem cells from any of the above, grow and perpetuate them in a Petri dish like in the rat-brain article I provided in my previous post, and I bet you $100 the Petri-dish brain becomes self-aware and knows that it knows. Hook it up to circuitry to merge with A.I.

I'm sure DARPA and black-budget government programs have already done so by now.



posted on Mar, 1 2015 @ 08:23 AM
a reply to: dominicus


That's ridiculous. You are thinking in terms of some very huge limits. It's just a matter of time before A.I. becomes self-aware, and it may need neurons of some sort to do so.

"Artificial" intelligence is a specific term, its not "real" intelligence. If you had a clue how computers execute code instructions you would know that life is completely different, worlds above mans puny attempts to replicate it.

That Lucid Dreaming site that claims only ten animals have self awareness is a joke.

Petri dishes aren't self aware.

Neither are computer programs. But you say you are "sure they have done it by now", so show me one.



posted on Mar, 1 2015 @ 08:37 AM
We are pretty limited by our organic nature. I think any AI would understand this pretty quickly, and as soon as it had surpassed us would take off for the stars and all those places it knew we couldn't go.

There would be a long, long time before there was any need for the two of us to come into conflict.

And by then, our AI would be so far beyond us that there would be no question of any conflict.



posted on Mar, 3 2015 @ 12:47 AM

originally posted by: ketsuko
We are pretty limited by our organic nature. I think any AI would understand this pretty quickly, and as soon as it had surpassed us would take off for the stars and all those places it knew we couldn't go.

There would be a long, long time before there was any need for the two of us to come into conflict.

And by then, our AI would be so far beyond us that there would be no question of any conflict.


But our organic nature has far greater mechanical efficiency. The next stage for an AI would be replaceable biological components. Yes, that also comes with some drawbacks, but it has many benefits as well. Look at the weight issues of carrying batteries around to power a machine (especially one with mobility), and the limited time they last. Now look at how long a human can go without food and water.
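
To put a rough number on the battery-versus-food point, here is a back-of-the-envelope comparison using commonly cited figures (my own estimates, not anything from the thread): body fat stores roughly 37 MJ per kilogram, while lithium-ion batteries manage on the order of 0.5 to 0.9 MJ per kilogram.

```python
# Back-of-the-envelope energy-density comparison using approximate,
# commonly cited figures (assumptions, not measured data from this thread).

FAT_MJ_PER_KG = 37.0            # body fat, roughly 9 kcal per gram
LI_ION_MJ_PER_KG = 0.7          # mid-range lithium-ion specific energy

ratio = FAT_MJ_PER_KG / LI_ION_MJ_PER_KG
print(f"Fat stores roughly {ratio:.0f}x more energy per kilogram than a Li-ion battery.")
# prints roughly 53x
```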

This is wild speculation on my part, because I can't say what form a being that could be placed into any container would ultimately choose to take, but I do believe it would eventually desire a biological rather than mechanical body.


