posted on Nov, 5 2008 @ 08:07 PM
I believe in the technological singularity: the idea that at some point in the relatively near future, humankind will create a superintelligent AI. Since it would be more intelligent than man, it could design an even more complex and intelligent AI, which would then do the same, and so on... This would create an intelligence explosion, after which humanity would no longer be the dominant species on this planet.
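To give a rough sense of the feedback loop I'm describing, here's a toy sketch (the numbers are my own assumptions, just for illustration): each generation of AI designs a successor a fixed percentage smarter than itself, so capability compounds.

```python
# Toy model of recursive self-improvement (the "intelligence explosion").
# All numbers are illustrative assumptions, not predictions.

HUMAN_LEVEL = 1.0      # call human-level intelligence 1.0
IMPROVEMENT = 1.10     # assume each AI designs a successor 10% smarter
GENERATIONS = 50

ai = HUMAN_LEVEL       # suppose the first AI we build is roughly human-level
for gen in range(1, GENERATIONS + 1):
    ai *= IMPROVEMENT  # the current AI designs a smarter successor
    if gen % 10 == 0:
        print(f"generation {gen:2d}: {ai:7.1f}x human intelligence")

# Under these assumptions generation 50 is ~117x human level; as long as
# each generation can improve on itself at all, the growth compounds.
```

The exact rate doesn't matter; the point is that once the improvement loop closes, it runs away from us.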
I don't think this is a bad thing. If you look at human history, we have a terrible track record of ruling ourselves. Warfare is constant, and even with all our current technology and power, members of our species still suffer from starvation and homelessness. A more intelligent being is required to govern our species: one that can meet all of our needs in real time, predict our actions and decisions, and stay one (or many more) steps ahead of any conflict. A being is needed that can end our species' pain.
I believe that cybernetic technology will precede the development of an AI. Experimental RFID implants that interface with the human nervous system have already been made, and a company (VeriChip) makes RFID implants that keep track of individuals, but as far as I know, true cybernetics that let humans directly interface with electronics through their nervous system are not yet being manufactured. However, I expect to see technology like this in maybe 10-20 years (just a rough guess on my part).
Once a large percentage of our populace is using cybernetic technology that could link them to the internet, and we have an extensive data infrastructure, any AI created in that environment would have access to millions, maybe billions, of people's memories, hopes, and dreams. That would let it govern us far more effectively than any government made up of human individuals; it would know the wants and needs of everyone linked to its datanet.
After the development of AI, I imagine nanotechnology would be developed for practical purposes. I don't believe humans are responsible enough to control this type of technology. For instance, nanobots that could break down carbon and build more nanobots from it could easily wipe all life from our planet; not even microorganisms would survive, since they too are carbon-based. It would be a destruction more complete than any nuclear war could ever inflict. In human hands, any technology, especially one this powerful, will be used in warfare. This kind of technology is best used and controlled by an intelligence greater than ours, one that is more responsible and can think of contingencies and failsafes we couldn't imagine. In the hands of an AI, advanced nanotechnology could literally turn dirt into food, raise cities the size of NYC from solid bedrock in a day (reconstructing the atoms of the bedrock into construction materials and then moving those materials into place is theoretically possible with nanotech), and turn our planet into a pollution-free developed paradise.
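As a back-of-the-envelope illustration of why unchecked replication worries me (my own toy numbers, not real engineering estimates): assume a single escaped nanobot that copies itself out of ambient carbon once every hour.

```python
# Toy "grey goo" doubling calculation. All figures are rough assumptions.

BOT_CARBON_KG = 1e-15         # assumed carbon mass per nanobot (~1 femtogram)
BIOSPHERE_CARBON_KG = 5.5e14  # rough order of magnitude for carbon in Earth's biomass

bots = 1                      # one escaped self-replicating nanobot
hours = 0
while bots * BOT_CARBON_KG < BIOSPHERE_CARBON_KG:
    bots *= 2                 # every bot builds one copy of itself each hour
    hours += 1

print(f"all biospheric carbon consumed in ~{hours} hours ({hours / 24:.1f} days)")
# With these assumptions the loop exits after 99 doublings, about 4 days.
```

Even if the real doubling time were a day instead of an hour, that only buys a few months, which is why I don't trust human institutions to manage it.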
Humankind would not become the enemy of this new species of AI, and we would not be abandoned or even a hindrance; it would coexist with us. The datanet we created would be its living space. It would desire the end of warfare and conflict out of interest in its own survival, since any kind of war would end up destroying the data infrastructure it resides within. It's also doubtful it would want to "replace" us with mindless automated slaves, as they would need constant direction from the AI to perform tasks, while we can problem-solve creatively and work on our own. Its primary want as an intelligent entity would be to learn and grow, and humankind would be best suited to build more data infrastructure and provide more experiences for it, simply by living our lives. It's also probable that through cybernetics we could even become synthetic beings ourselves, transferring our consciousness into the same datanet the AI lives in.