
The Intelligence/Technological Singularity - What Could Happen?


posted on Dec, 27 2009 @ 11:23 PM
I was talking with my dad, who works for IBM, the other day, and the conversation reminded me of one of my favorite topics, one that I find both fascinating and perplexing (mainly because I study Computer Engineering). That subject goes by many names, including the Intelligence or Technological Singularity (or just "the Singularity"), but it is more commonly referenced as a "true" or "superhuman" Artificial Intelligence (A.I.).

Let me start off with an explanation. This excerpt is from a series of articles by Michael Anissimov and Roko Mijic (see the articles for their backgrounds) that I will reference frequently in this thread; it explains the theory behind the Singularity and how it could be used to our benefit.


The phrase "technological singularity" was coined by the mathematician and science fiction author Vernor Vinge in 1982. He proposed that the creation of smarter-than-human intelligence would greatly disrupt our ability to model the future, because to know what smarter-than-human intelligences would do would require us to be that smart ourselves. He called this hypothetical event a “Singularity,” drawing a comparison to the way our model of physics breaks down when trying to predict phenomena past the event horizon of a black hole. Instead of having a sudden rupture in the fabric of spacetime, you'd have a break in the fabric of human understanding.

Vernor Vinge's idea of a technological singularity bears resemblance to earlier ideas, such as WWII codebreaker I.J. Good's "intelligence explosion" concept. Good was quoted as saying, "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." This concept has been explored in (mostly dystopian) science fiction films and novels, such as The Matrix and Terminator franchises.


So in layman's terms, the Tech Singularity is the point at which a virtual or artificial intelligence is created that surpasses the limits and speed of the human mind. When this happens, some say, it will be able to create machines that are more intelligent than itself; each of those machines can then produce a still more intelligent machine, and so on, until the intelligence of machines explodes exponentially out of our control, rendering human intelligence so inferior that we are essentially made obsolete by our own creation (think The Terminator).


Other theories say that a machine with the intelligence of a human (or greater) essentially is human, and therefore will experience self-realization, in which it becomes aware of its own existence (think of Asimov's I, Robot).

Anyway, the reason that conversation reminded me of the Singularity is that IBM now has a computer that has reached 1 PetaFLOPS (for you non-computer geeks out there, FLOPS stands for floating-point operations per second and is a standard measurement of a computer's performance; also, peta = 1,000 trillion, or 10^15). Now they are working on a 20-PetaFLOPS computer.

At this rate, the human brain's processing abilities will be outdone by a computer within the next few decades. You're all worried about 2012, but I'm worried about a younger Arnold Schwarzenegger going back in time to kill my mom.
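For anyone who wants to check that "few decades" claim, here's the back-of-the-envelope math in a few lines of Python. The doubling time and the "brain-equivalent" FLOPS range are my own rough assumptions, not hard figures:

```python
import math

# Back-of-the-envelope extrapolation. Everything here is an assumption:
#  - starting point: ~1 petaFLOPS in 2009
#  - growth: performance doubling roughly every 1.5 years
#  - target: 10**16 to 10**18 FLOPS, a commonly quoted (and contested)
#    range of estimates for "brain-equivalent" raw processing power.

start_flops = 1e15            # 1 petaFLOPS
doubling_time_years = 1.5     # assumed doubling period

for target_flops in (1e16, 1e18):
    doublings = math.log2(target_flops / start_flops)
    years = doublings * doubling_time_years
    print(f"{target_flops:.0e} FLOPS in about {years:.0f} years (around {2009 + round(years)})")
```

With those assumptions you land somewhere between the mid-2010s and the mid-2020s; stretch the doubling time or raise the target and it pushes out by a decade or two, which is where the "next few decades" figure comes from.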


So this isn't a question of if it will happen, but rather a question of what will happen when we create something that is qualitatively smarter than us.

Before going on, I would like to clarify the type of intelligence I am referring to. Computers have long outdone us in raw processing power (for example, our neurons fire at most around 200 times a second, while my 3.2 GHz dual-core processor runs through about 6.4 billion clock cycles every second). However, we are talking about qualitative thinking (i.e. complex problem solving and abstract thinking).
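To put that per-element speed gap into numbers (a rough sketch; the neuron figure is an approximate upper bound, and a clock cycle is not the same thing as a thought):

```python
# Rough per-element speed comparison. These are ballpark figures meant only
# to illustrate the raw-speed point, not a claim about "intelligence".

neuron_firing_rate_hz = 200      # roughly the most a single neuron fires per second
core_clock_hz = 3.2e9            # one core of a 3.2 GHz CPU
cores = 2

print(f"Dual-core clock cycles per second: {core_clock_hz * cores:.1e}")              # ~6.4e9
print(f"One core ticks ~{core_clock_hz / neuron_firing_rate_hz:.1e}x faster than one neuron fires")
```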

Back to the article. It goes on to compare the creation of smarter-than-human machines to the evolution of the Human Race from primates:


In the 21st century, cognitive science or artificial intelligence researchers may find out how to construct an intelligence that surpasses ours in the same way human intelligence surpasses that of chimps. Given what happened the last time a new level of intelligence emerged (the rapid rise of humans 12,000 years ago), this is likely to be the most important event in the history of the human race. We should expect smarter-than-human intelligence to wield immense power—to be able to think through complex problems in a fraction of a second, to uniformly outclass humans in mathematics, physics, and computer science, and to build technological artifacts with superlative properties. Smarter-than-human intelligences will stand a good chance of solving the problem of how to make themselves even smarter, spiraling into ever greater heights of intelligence.


Contrary to most of the doom and gloom associated with artificial intelligence and super-intelligent machines in the mainstream media, this article asserts that, if used in the proper way, the Tech Singularity could be of great benefit to the Human Race. To achieve this, however, one would have to (quite literally) hard-wire a set of basic operating procedures to keep the machines under our control. We could use a set of "rules" like Asimov's Laws of Robotics, or a variant on them.
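Just to make the "hard-wired rules" idea concrete, here's a toy sketch of what such a filter might look like in code. It's purely illustrative; the action flags and the priority ordering are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool      # hypothetical flags the machine would somehow have to infer
    disobeys_order: bool
    endangers_self: bool

def allowed(action: Action) -> bool:
    """Toy filter loosely modeled on Asimov's Three Laws, checked in priority order."""
    if action.harms_human:        # First Law: never harm a human
        return False
    if action.disobeys_order:     # Second Law: obey humans, unless that breaks the First Law
        return False
    if action.endangers_self:     # Third Law: protect itself, unless that breaks the first two
        return False
    return True

print(allowed(Action("fetch coffee", False, False, False)))         # True
print(allowed(Action("push a person aside", True, False, False)))   # False
```

Of course, the hard part is that deciding whether an action "harms a human" is itself exactly the kind of judgment we're worried about handing over.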

If we can achieve this relationship with super-intelligent machines, the article asserts that our future could look very bright.


A benevolent superintelligent AI would drastically and precisely alter the world, but do so in a direction that was dictated by your preferences. It would be like a new physical force that consistently pushed life towards our wisest utopian ideal. This ideal, or something very close to it, really is attainable. The laws of physics do not forbid it. It is attainable whether we feel that it is “unreasonable” that life could get that good, whether we shy away from it for fear of sounding religious, whether we want to close our eyes to the possibility because it scares us to believe that there is something greater out there, but we might let it slip through our fingers.



posted on Dec, 27 2009 @ 11:28 PM
***Sorry just one more post; reached the character limit.***

One more thing to think about. In one of my favorite games, Mass Effect (I know it's a video game, but it's still sci-fi!), there is a race of eternal biological machines that have "reached the pinnacle of existence". Is this possibly the way we will end up? A race of organic life forms that integrate ourselves with machines in order to become essentially immortal? Just food for thought.

In conclusion, I would like to propose a series of questions for you to ponder:

-What do you think might happen when this machine is created?

-Can we really control a self-aware machine with rules (humans break rules all of the time)?

-Is the future of the Human Race one that is combined with machines in order to enhance our experience and our physical bodies (i.e. Cyborgs)?

***EDIT: Here is the link to the article if you want to read more***
www.good.is...




posted on Dec, 27 2009 @ 11:54 PM
There's evidence that our ancestors were smarter than today's humans, and that strong A.I. is an unrealistic dream.



posted on Dec, 27 2009 @ 11:55 PM
Great thread. S+F.

I personally believe that it would be possible to create AI in theory, but in practice we're still way off. How can we possibly create an intelligence greater than our own if our current understanding of the way our own minds work is so limited? Things like memory and emotions, essential components of our consciousness, are still to a large degree a mystery. Now, it doesn't take a genius (or even a futuristic super-computer-genius, for that matter) to know that we need to work ourselves out properly first, then start thinking about how we can be... electrified. This is starting to seem less and less plausible to me. Okay, it would need to be one serious piece of technology to compete, that's for sure.



posted on Dec, 28 2009 @ 12:00 AM
Also, the extreme complexity of our DNA simply hampers the Technological Singularity from happening for real.



posted on Dec, 28 2009 @ 12:05 AM
reply to post by masonicon
 


How would DNA hamper the technological singularity?

Unless you're implying DNA may be in some way involved in the biological process responsible for our consciousness.



posted on Dec, 28 2009 @ 12:09 AM
reply to post by DizzyDayDream
 


Yes, this is something I have pondered deeply, actually. How can you create a machine to replicate the human mind? I have taken both Psychology and Sociology courses, and the two fields offer conflicting views. A lot of the raw theories in Sociology suggest that the human mind is like a "blank slate" when it is first introduced to the world, and is thus molded by the different societies and groups it grows up in. If this is assumed to be correct, then the mind is essentially the same thing as a blank hard drive on a computer on which data can be written, and if the same stimuli are introduced in the form of that data, then a machine can grow and learn to be like a human.

I'm not saying I subscribe to that theory, but it's just something to consider.
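To make the "blank hard drive" analogy a bit more concrete, here's a toy sketch (my own illustration, nothing from the article): a learner that starts with all-zero weights and ends up a different "mind" depending entirely on which stimuli its environment feeds it.

```python
# Toy "blank slate" learner: a perceptron whose weights start at zero (the
# empty hard drive) and are shaped entirely by the stimuli it is fed.
# The data and the two "environments" are invented for illustration.

def train(stimuli, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(stimuli[0])    # the blank slate
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(stimuli, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

stimuli = [[1, 0], [0, 1], [1, 1], [0, 0]]
labels_env_a = [1, 0, 1, 0]   # environment A rewards reacting to the first feature
labels_env_b = [0, 1, 1, 0]   # environment B rewards reacting to the second feature
print("mind shaped by environment A:", train(stimuli, labels_env_a))
print("mind shaped by environment B:", train(stimuli, labels_env_b))
```

Same blank starting point, two different environments, two different sets of learned weights; that's the whole "blank slate" argument in miniature.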



posted on Dec, 28 2009 @ 12:14 AM
This thread is very interesting to me, as I have heard of the singularity in reference to AI but until now I did not understand it fully.
I think that our idea of an "intelligence explosion" may be flawed, because I do not see any possible way for a being to create a being more creative than itself. Wouldn't it be limited by its creator's range of experiences, thoughts, etc.?
Also, why do people think that if we created a machine that became self-aware it would want to create a more intelligent machine? Humans struggle enough with finding a purpose in life; however, most believe death is not the end even if they are not religious. If we created a machine and the machine became self-aware, it might become depressed knowing that it was mortal, so to speak (assuming we put emotions in its programming). Or, if it thought logically, it might see that by creating a better machine it would make itself obsolete, and would in fact do everything it could to remain the most advanced machine.
Anyway, interesting post.



posted on Dec, 28 2009 @ 12:43 AM
Nice thread; I've always pondered this question...

I remember reading Michio Kaku's book, "Physics of the Impossible", where he states that silicon-based computer processors will reach their technological maximum within 30 or 40 years. Basically, unless we develop an entirely new type of processor, computing power is only going to go so far.

Then he talks about how quantum computers (computers that compute using quantum properties of particles, such as electron spin) will be able to create a whole new generation of processing power with unfathomable potential. They have made a few quantum processors already, the largest of which I think had 12 qubits, and one of them managed to factor 15 into 3 x 5, or something like that. Not impressive on its own, but apparently it's a giant leap forward in quantum technology.
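To give a rough feel for where that "unfathomable potential" talk comes from: describing the state of n qubits classically takes 2^n complex amplitudes, so a full classical simulation blows up almost immediately. Here's a quick sketch of just that storage cost (not a claim about speedups on any particular problem):

```python
# Storage cost of fully simulating n qubits on a classical machine:
# 2**n complex amplitudes. Only an illustration of scaling, nothing more.

BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

for n in (12, 30, 50, 300):
    amplitudes = 2 ** n
    print(f"{n:>3} qubits -> {amplitudes:.2e} amplitudes "
          f"(~{amplitudes * BYTES_PER_AMPLITUDE:.2e} bytes to simulate classically)")
```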

I honestly don't think AI will advance to a point where it would make us all-powerful. Think of this analogy:

It is logical and practical to think that the universe is teeming with life, and must contain countless different advanced civilizations that have been around for eons. Think of how fast we have advanced technologically in our 6,000-year-old civilization, compared to the estimated 14-billion-year-old universe.

Assuming there have been other intelligent civilizations that have far surpassed us in their technologies, it is logical to assume they have conquered computing and AI.

If an AI singularity meant that computers were to become smarter than us and would want to rule us, they wouldn't stop after they conquered us; it is logical to assume that they would easily conquer the entire universe if their technology could indeed progress at an infinite speed.

This thought pattern produces a number of possible scenarios:
1) An AI singularity can't exist, because we haven't seen evidence of it.
2) An AI singularity can exist, and is self-aware, but it is not a dangerous threat; only a great convenience.
3) An AI singularity can exist, and has, but it causes no self-awareness on the part of the machine; it simply allows it to compute at unfathomable speeds.
4) An AI singularity with negative consequences can exist, and we have been conquered by machines but don't know it. This would mean that we are indeed living in a scenario comparable to The Matrix, because any infinitely intelligent machine would realize that biological sentient intelligent beings are its only real threat.
5) Some kind of negative AI singularity does exist, but no civilization has dared attempt to create it, because along the path towards creating it they realized that they were playing with something that could cause their utter destruction. If this is the case, it will be something we will be able to realize as well at some point in the future.

I personally think that AIs will never be the same as a human, unless somehow AI becomes more of a biological concept. Computers have always been, and always will be, machines built by humans with a certain set of instructions to follow. It is all about programming and following the orders and commands that are part of their input. We have some kind of consciousness that separates us from machines, be it the soul, the self, the all-that-is-everything, or whatever you want to call it, that no machine will ever be able to have, only imitate.

Just my opinion though, I most definitely could be wrong!



posted on Dec, 28 2009 @ 12:47 AM
reply to post by CaptainIraq
 


Yes, definitely something to consider. I disagree with what you said in some ways, though. For example, I believe that (as you said) humans are shaped by their environment (including society, culture, family, etc.); however, I disagree that this is in any way similar to a blank slate or hard drive. In fact, I believe it's more the opposite. Our minds are like these huge, expansive, emotionally fluid things when we're born, and gradually over time the neurons in our brain form associations that start to build a comprehensible reality. By this I mean that instead of being like a blank slate, it's like a black slate: the more you experience of the world, the more black is rubbed out to produce the complex image of consciousness on that slate.

OK, some examples might help explain what I mean. Synesthesia, for example: the ability to perceive external stimuli such as sound or taste as visible colour and light. Now, this has been explained scientifically as a neurological connection between those stimuli. So when people see sounds as colour as well as hearing the sound, it's simply the colour neurons connected to the aural neurons. This is very relevant because all babies are born with synesthesia, but gradually we lose that ability and the neurons no longer connect to each other. However, very rarely, some adults remain capable of synesthetic experiences. The point is that we are born with a huge potential that is gradually drummed out of us as we experience childhood and our teens, so a hard drive is hardly the same.
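Purely as an illustration of that pruning idea (and not a model of real neurons), here's a little sketch: start with every connection present, reinforce the ones that experience keeps using, and the rest fall away.

```python
# Toy illustration of the "black slate" / pruning idea: born fully connected,
# connections that co-activate with experience get stronger, everything else
# slowly decays and is pruned. Not a model of real neurons.

n = 8
strength = {(i, j): 1.0 for i in range(n) for j in range(n) if i != j}  # "born" fully connected

experienced_pairs = [(0, 1), (2, 3), (4, 5)]   # the stimuli this particular mind happens to get
for _ in range(100):
    for pair in experienced_pairs:
        strength[pair] += 0.1                  # co-activation reinforces
    for k in strength:
        strength[k] *= 0.99                    # everything decays a little

surviving = sorted(k for k, v in strength.items() if v > 1.0)
print("Connections at birth:", len(strength))
print("Connections surviving pruning:", surviving)
```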



posted on Dec, 28 2009 @ 01:03 AM
reply to post by DizzyDayDream
 


Yes I can see where you're coming from and there certainly is a great deal about the Human brain we don't know. Personally, a lot of this stuff actually conflicts with my beliefs (I believe that we have souls, and that separates us from machines), but I always keep an open mind to things I don't understand so I find this subject fascinating.

Maybe I made it sound too simple when I said "blank hard drive". There are actually many experiments being performed where researchers are trying to "teach" robots like you would teach a child (giving the robots basic programming for motor functions and learning abilities, like a human has at birth), ranging from very simple tasks to more complex ones. Here are some links on the subject you might find interesting:


An arm that learns to manipulate objects

Creepy little child-robot that learns walking/social skills

Robot that learns like a child

And of course, Kismet, MIT's robotics project and leading experiment in machine socialization (my sociology teacher actually got to meet and interact with Kismet, although sadly I don't think the vids are on the web...).
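Here's a toy sketch of the kind of trial-and-error learning those projects are built around. The two actions, the praise signal, and the numbers are all invented for illustration; real developmental robotics is vastly more involved:

```python
import random

# A "child" that starts with no preferences and learns from praise alone.

random.seed(1)
value = {"wave": 0.0, "throw": 0.0}   # no built-in preference for either action
LEARNING_RATE = 0.2

def praise(action: str) -> float:
    return 1.0 if action == "wave" else -1.0   # the "parent" rewards waving

for trial in range(200):
    if random.random() < 0.1:                  # occasionally try something new
        action = random.choice(list(value))
    else:                                      # otherwise do what has worked best so far
        action = max(value, key=value.get)
    value[action] += LEARNING_RATE * (praise(action) - value[action])

print(value)   # after enough trials, "wave" is strongly preferred
```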



posted on Jan, 12 2010 @ 08:46 PM

Originally posted by DizzyDayDream
reply to post by masonicon
 


How would DNA hamper the technological singularity?

Unless you're implying DNA may be in some way involved in the biological process responsible for our consciousness.

activating up to 100% of our Genes



posted on Jan, 12 2010 @ 10:28 PM
reply to post by CaptainIraq
 


I, too, have always been fascinated by this increasingly possible direction that humanity may find itself facing in the very near future.

In fact, I posted a similar thread a couple of months back
"Project DARPA-BAA-09-03 - will this be the 1st true machine intelligence ?"
www.abovetopsecret.com...

in which I presented evidence that DARPA was, at that very moment, actively involved in the creation of a machine intelligence (AI) with the ability to access virtually ANY online document (web pages, email, blogs, SMS, newscasts, etc.) composed in normal human language, extract its informational content, and, using this content, make inferences and conjectures and generate completely new links and conclusions ... in essence, this DARPA AI would have the capacity for rational and logical reasoning ... bringing the Singularity one step closer.
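To give a very rough flavor of that "extract the content, then make inferences" pipeline, here's a toy sketch; it has nothing to do with DARPA's actual system, and a real machine reader would need far more than a regular expression:

```python
import re

# Toy "machine reading": pull simple X-is-a-Y facts out of plain text, then
# chain them by transitivity into conclusions that were never stated directly.

text = (
    "A petaflop machine is a supercomputer. "
    "A supercomputer is a computer. "
    "A computer is a machine."
)

facts = set()
for subject, obj in re.findall(r"A (\w+(?: \w+)?) is a (\w+(?: \w+)?)\.", text):
    facts.add((subject.lower(), obj.lower()))

# Inference step: if X is a Y and Y is a Z, conclude X is a Z.
changed = True
while changed:
    changed = False
    for (a, b) in list(facts):
        for (c, d) in list(facts):
            if b == c and (a, d) not in facts:
                facts.add((a, d))
                changed = True

for a, b in sorted(facts):
    print(f"{a} is a {b}")
```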

As amazing (and frightening) as this was, it was even more amazing to see the lack of interest amongst ATS members ... DARPA is essentially in the process of creating an AI that can REASON ... and no one seems to care!



posted on Jan, 12 2010 @ 11:35 PM
reply to post by tauristercus
 


I feel you, but I think they're all a bit preoccupied with bashing Americans.



posted on Jan, 13 2010 @ 03:49 AM

Originally posted by CaptainIraq
reply to post by tauristercus
 


I feel you, but I think they're all a bit preoccupied with bashing Americans.


Quite possibly


But don't you find it humorous how so many will jump on threads that loudly proclaim global doom/cataclysm/catastrophe and post their little hearts out, irrespective of the fact that virtually all of those kinds of scenarios exist nowhere except in their fertile imaginations?

But give them a scenario which is almost straight out of sci-fi, on the verge of becoming reality, and that will impact each and every one of their lives in extreme ways (not always for the best) ... and no one gives a hoot!

I'm not sure what part of "the US military is building an AI that CAN REASON" is unclear, or why they seem to be having trouble comprehending it.



posted on Jan, 25 2010 @ 02:59 AM
FIRST: there is not ONE singularity, but multiple.

You speak about one 'type' of singularity, and even within this type there is more than one singularity.

Openness and the Metaverse Singularity by Jamais Cascio

"For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness. We need an Open Singularity."



posted on Jan, 25 2010 @ 03:09 AM
reply to post by Monts
 


I don't think an AI will need to conquer the whole universe; this is a common human point of view: evolution, adaptation, growth.

Most people don't understand evolution: it happens very fast, and the adaptation is 90% of what the stabilized entity will become...

Stability: yes, stability exists!

THERE IS NO INFINITE GROWTH, AND THERE IS NO NEED FOR INFINITE GROWTH.

But I think common humans are too stupid to ever understand.



posted on Feb, 1 2010 @ 05:06 PM
Very nice OP. I gave you a star and flag.



Is this possibly the way we will end up? A race of organic life forms that integrate ourselves with machines in order to become essentially immortal? Just food for thought.


Yes I think that's very possible. On a basic level, humans are animals. We do not wish to perish. There are a few of us, who in the heat of an emotional or altered state (drugs or alcohol), sometimes manage to pull the proverbial trigger. However, the vast majority of us can't do it simply because it's hard wired into us not to. This is verifiable in any 1st world city... just look at homeless people that have nothing going for them. You can't say it's part of their culture because they see the rest of us driving our cars, living in houses, and going about our 1st world ways. We are happier and healthier than they are. Yet even this unfortunate cross section of society finds the will to keep going. Why? Because deep down, nobody wants to die. Even though the majority of humans believe in the existence of the soul after death (life after death in some capacity), none of us truly want to put it to the test until there is no other alternative (unavoidable death like old age).

I believe eventually we will come to a time period where humanity will address this issue. Right now many among us are scared of technology, robots, AI's, and science. It's normal to be scared of something you don't understand. It's also normal to dismiss something you can't change. What I mean by this is why don't you see more people freaking out daily about their coming (unavoidable) death? Because most rational people "deal with it." Some go to church and adopt beliefs in an afterlife. Others simply avoid thinking about it too much by staying busy (work, school, family, friends, hobbies). Everyone is different in how they "deal with it," but all of us are similar in knowing it can't be avoided. Once the human race is able to actually DO SOMETHING about it, then you will see action.

The interesting thing to me is the mounting evidence of the singularity (or whatever you want to call it). Since all we can do is guess and make sci-fi movies like Terminator or I, Robot, who really knows? I guess I believe that the computer age will have a huge impact and radically transform society (it may even save us from ourselves), but how exactly it all plays out is anybody's guess.

As you know, we are composed of many, many cells. Some cells, like most brain cells, do not divide, so as they slowly die off, you slowly die off. The reverse is also bad: cells that multiply out of control are typically known as cancer. There is currently no way around these problems. No way as of yet...

I think the concept of machines becoming self-aware and so on is fascinating, because it really is an extension of ourselves. All the available data points to us being an anomaly. There is simply no other creature on this planet which comes close to our level of understanding. But how did it all start? How did those damn apes in Africa become what we are today? Perhaps an AI going through a similar process could shed much light on our own origins.

The world is becoming so complex that the vast majority of us are unable to keep up with the rapid pace of evolution. We are already dependent on computers and this will not change. It's not a passing fad. Vernor Vinge understood this decades ago. He saw the signs beginning to form. Honestly, how in the world are we going to explore space? How will we leave this planet someday? It doesn't take a rocket scientist to see that our current state simply isn't made for space. But computers/machines are.

To me, all the signs are there, but how it ultimately plays out is pure speculation. One thing I know for sure... mankind will do whatever it takes to survive and if merging with machines is "what it takes" then it will happen. I'm willing to bet the bank on it....... hehehe. Unfortunately, considering our short lifespans, none of us will be alive to say "told ya so!"



