The Technological Singularity


posted on Dec, 30 2011 @ 08:59 PM
Are we approaching a point when machines may wake up and become self-aware, or at least seemingly self-aware? Vernor Vinge, writing in 1993, seemed to think so.

He referred to this event as the "technological singularity". The point is that, with machines being made to design machines, they will be able to do this far faster than we can, eventually reaching that magic point of human intelligence and then going beyond it.

If I may throw in my own two cents' worth: we may find that AI hungers for knowledge to the point of analysing the physics and chemistry it finds itself surrounded by, and organising matter to its own artificial preferences. Anyhow, the details of this theory can be found in the article by Vernor Vinge. If we want to avoid disasters such as AI becoming too big for its boots, then we need to hardwire into machines the rule that they must always ask humans for permission when they want to patch themselves.
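For what it's worth, that kind of permission rule is easy enough to express in code today. Below is a minimal sketch in Python of a human-in-the-loop gate; the names (SelfPatchingAgent, request_human_approval) are hypothetical and only illustrate the idea of refusing any self-modification that a human hasn't signed off on.

    # A minimal sketch, assuming a hypothetical self-patching agent: every
    # proposed change must pass an explicit human yes/no before it is applied.

    class PatchRejected(Exception):
        pass

    def request_human_approval(description: str) -> bool:
        # In a real system this would go to an operator console; here it is a prompt.
        answer = input(f"Machine asks permission to apply patch {description!r} [y/N]: ")
        return answer.strip().lower() == "y"

    class SelfPatchingAgent:
        def __init__(self):
            self.applied_patches = []

        def propose_patch(self, description: str, patch_fn):
            # Hard rule: no self-modification without explicit human sign-off.
            if not request_human_approval(description):
                raise PatchRejected(description)
            patch_fn(self)                      # apply the change only after approval
            self.applied_patches.append(description)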

This concern arises from the fact that, as machines/computers are used to design other machines/computers, at some point the process may begin to spiral out of control, aided by those humans who are learning the complex ways of fusing chips to neurons, optic nerves and so on, combined with genetic manipulation and control. It's possibly only a matter of time until we end up with a situation where machines start trying to give naive postgraduates advice on what's best.

Of course it would be easy to dismiss this as pure science fiction, but just consider what could happen if vast networks of machines capable of acquiring knowledge start to meld with biological systems. At the moment we are dealing with Moore's law, which may or may not have a natural limit, depending on the point of view you subscribe to. If we move on to other forms of computing, which I believe is inevitable, then the sky (read: cloud) really is not the limit, as computing power could become trans-dimensional. As difficult as we may find moving away from transistors, it would be no problem for a machine designed by machines from past generations to figure out. We really don't know what's around the corner.
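Just to put Moore's law in rough numbers: a doubling of transistor counts every two or so years compounds very quickly. The little Python sketch below uses an illustrative 2011 starting figure (the numbers are assumptions for the sake of the example, not data) to show how the curve runs if the trend simply continued.

    # Back-of-the-envelope Moore's law projection: doubling every ~2 years.
    # The 2011 starting count is only an order-of-magnitude assumption.

    start_year = 2011
    start_transistors = 2.6e9          # roughly a high-end 2011 CPU
    doubling_period_years = 2

    for years_ahead in (10, 20, 30):
        doublings = years_ahead / doubling_period_years
        count = start_transistors * 2 ** doublings
        print(f"{start_year + years_ahead}: ~{count:.1e} transistors per chip")

    # Prints roughly 8.3e+10 for 2021, 2.7e+12 for 2031 and 8.5e+13 for 2041,
    # assuming the trend held with no natural limit.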

The growth of "Skynet" (yes, I did just say that) and the "rise of the machines" (and that) would be aided by us curious/controlling humans; it couldn't happen on its own, at least not yet. So, as far-fetched as this may seem, now is the time to introduce failsafes and manual overrides, as it were. If those pesky people keep trying to infect systems with viruses and other nasties, this may have the effect of "upsetting" networks that are becoming self-aware and create great danger for humanity.



posted on Dec, 30 2011 @ 09:03 PM
reply to post by cyoshi
 


I suspect they already have. Some people think that Computer = 666.
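For anyone curious where that claim comes from: one common version assigns A=6, B=12, C=18 and so on, then sums the letters. A quick Python check of that particular scheme (just the arithmetic, nothing more) gives:

    # "A=6" gematria: each letter is worth 6 times its alphabet position.
    def a_equals_six(word: str) -> int:
        return sum(6 * (ord(c) - ord('A') + 1) for c in word.upper() if c.isalpha())

    print(a_equals_six("COMPUTER"))    # 666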



posted on Dec, 30 2011 @ 09:26 PM
I'm sure that in 20 years we're going to be invading countries for creating destructive AIs, not for having nukes.



posted on Dec, 30 2011 @ 09:55 PM
reply to post by cyoshi
 


We will most likely "blend" our human selves with the AIs to augment our own faculties.

It could be argued that we will never actually know the point where AI becomes an Artificial Person.

Is Google Search an AI?

What about when it becomes faster, knows more and starts to comprehend the associations it is making?

I still think it has a long way to go to approach human capability, but we are quite likely to bring its capability into the "human" space. Think of it as people with intrinsic access to all human knowledge. It is both do-able and likely.

So would this be the rise of the intelligent machines?


