Originally posted by DragonsDemesne
Wow! I can't read Dutch, unfortunately, but what an idea! It's so simple, yet so awesome. It might just be my imagination running wild, but I can't help wondering if this will lead to the technological singularity some writers have discussed.
Potential dangers
Superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race, and humans would be powerless to stop them.
Berglas (2008) argues that, unlike human intelligence, computer-based intelligence is not tied to any particular body, which would give it a radically different world view. In particular, a software intelligence would essentially be immortal and so have no need to produce independent children that live on after it dies. It would thus have no evolutionary need for love; it would, in the strictest sense, have no evolutionary traits at all, as evolution is the result of reproduction.
Other oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us" (Joy 2000).
Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
Moravec (1992) argues that although superintelligence in the form of machines may make humans in some sense obsolete as the top intelligence, there will still be room in the ecology for humans.
Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI were friendly, it would have a head start on self-improvement and thus might prevent other unfriendly AIs from developing. The Singularity Institute for Artificial Intelligence is dedicated to this cause. Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans.
Wiki
Originally posted by DaMod
Is there a difference between simulated intelligence and artificial intelligence? Where is the line?
On one hand we have the possibility that we may develop computer programs that understand complicated sentence structure and respond accordingly.
On the other we have to wonder about the "Good Morning Dave" scenario.
At what level of advancement will a machine know it exists? At what level of computational advancement is it wise for us to stop advancing?