The phrase "technological singularity" was coined by the mathematician and science fiction author Vernor Vinge in 1982. He proposed that the creation of smarter-than-human intelligence would greatly disrupt our ability to model the future, because to know what smarter-than-human intelligences would do, we would have to be that smart ourselves. He called this hypothetical event a "Singularity," comparing it to the way our models of physics break down when we try to predict phenomena beyond the event horizon of a black hole. Instead of a sudden rupture in the fabric of spacetime, you'd have a break in the fabric of human understanding.
Vernor Vinge's idea of a technological singularity resembles earlier ideas, most notably the "intelligence explosion" concept of the WWII codebreaker I.J. Good. Writing in 1965, Good put it this way: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." This concept has been explored in (mostly dystopian) science fiction films and novels, such as the Matrix and Terminator franchises.
In the 21st century, researchers in cognitive science or artificial intelligence may discover how to construct an intelligence that surpasses ours in the same way human intelligence surpasses that of chimps. Given what happened the last time a new level of intelligence emerged (the rapid rise of humans), this would likely be the most important event in the history of the human race. We should expect smarter-than-human intelligence to wield immense power: to think through complex problems in a fraction of a second, to uniformly outclass humans in mathematics, physics, and computer science, and to build technological artifacts with superlative properties. Smarter-than-human intelligences would stand a good chance of solving the problem of how to make themselves even smarter, spiraling into ever greater heights of intelligence.
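Good's scenario is essentially a feedback loop: each machine designs a successor smarter than itself. Here is a minimal toy sketch of that loop, assuming intelligence can be collapsed to a single number and that each design cycle applies a fixed multiplier (both assumptions are invented purely for illustration, not a model of any real AI system):

```python
# Toy sketch of I.J. Good's "intelligence explosion" feedback loop.
# Purely illustrative: the one-number "intelligence" scale and the
# fixed design_gain multiplier are assumptions, not real AI modeling.

def intelligence_explosion(start=1.0, design_gain=1.1, generations=10):
    """Simulate successive machines, each designing a smarter successor.

    start       -- intelligence of the first machine (human level = 1.0)
    design_gain -- assumed multiplier per design cycle
    generations -- number of design cycles to simulate
    """
    level = start
    for gen in range(1, generations + 1):
        level *= design_gain  # a smarter designer builds a smarter successor
        print(f"generation {gen}: intelligence = {level:.2f}x human")

intelligence_explosion()
```

Even with a constant multiplier the growth is exponential; Good's point is stronger still, since the gain per cycle could itself grow with intelligence, compounding the runaway.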
A benevolent superintelligent AI would drastically and precisely alter the world, but in a direction dictated by our preferences. It would be like a new physical force that consistently pushed life toward our wisest utopian ideal. This ideal, or something very close to it, really is attainable; the laws of physics do not forbid it. It is attainable whether or not we feel it is "unreasonable" that life could get that good, whether or not we shy away from it for fear of sounding religious, and whether or not we close our eyes to the possibility because it scares us to believe there is something greater out there. Yet we might still let it slip through our fingers.
Originally posted by DizzyDayDream
reply to post by masonicon
How would DNA hamper the technological singularity?
Unless you're implying DNA may be in some way involved in the biological process responsible for our consciousness.
Originally posted by CaptainIraq
reply to post by tauristercus
I feel you, but I think they're all a bit preoccupied with bashing Americans.
Is this possibly the way we will end up? A race of organic life forms that integrates itself with machines in order to become essentially immortal? Just food for thought.