Originally posted by Jakes51
This is a very interesting post indeed. I, like the two posters thus far, am concerned about our reliance on machinery and technology. The technology is far exceeding the ability of the human mind to grasp the effects of such technology.
In terms of a computer being able to think and reason on its own, we have not reached that stage yet; however, possibly in the next 20-30 years we may. If a computer is able to deliberate on its own without a programmer uploading data, what is going to stop it from destroying the weaker species (humans)? I mean, we destroy the weaker species, don't we? Survival of the fittest at its finest.
Originally posted by zetabeam
reply to post by desertdreamer
Yes, it's the potential threat posed by AI technology that concerns me. Looking at how rapidly technological change is occurring makes me think that machine intelligence may be upon us much sooner than we imagine. The march of progress is completely unstoppable, and I think we may literally be throwing ourselves over a technological cliff with regard to certain research areas. The worrying part is that a large part of AI research is being conducted outside of the public domain, and therefore we have little or no idea how close researchers are to flipping that switch and being rewarded with the response ... "I think, therefore I am".
Originally posted by zetabeam
Perhaps the lack of "intuition and emotion" may actually be a positive from the MI's point of view. If a decision is to be made based purely on logic, then that would take a lot of variables out of the decision-making process. An MI would most likely take a particular course of action simply based on the best possible outcome being achieved from its point of view.
Originally posted by desertdreamer
reply to post by zetabeam
See Zeta? All it took was my one response, and BOOM....now everyone wants in on the fray! LOL....
Originally posted by Gorman91
reply to post by zetabeam
Lol, mechanical vulcans.
In that situation, there WILL come a group of sentient machines that question the nature of not having emotions.
Really, it comes down to this. They will make it, and someone or something will kill it, maybe even itself. The first sentient machine will be the death of all future sentient machines, because nobody will allow it unless they want it to spy, fight, etc.
The fact remains that a robot built like a man is inefficient. A robot built like a specialized animal is more efficient. The only reasons they would mass-produce sentient machines would be to comfort humans, or to create super soldiers in war that inspire others or make the enemy afraid, and maybe a few others.
Originally posted by zetabeam
Once machine intelligence arises, if we are unable (or unwilling) to place sufficient restraints on it, then the death knell for the human race will begin to toll.
Is this scenario inevitable? Unfortunately, I think so ....