Originally posted by traditionaldrummer
So when a self-aware, man-made machine begins to make demands of humans, let's see how well it fares against us nuking it out of existence.
Originally posted by namine
I don't see how it's possible for humans to create a machine that suddenly becomes self-aware. Machines follow a set of rules that would've had to be programmed into them beforehand. Surely it would take nothing short of magic for them to sprout a consciousness and decide to go against the rules they're restricted to? What would trigger such a change, if it's even possible? I can't imagine it's anything within humans' control. Any 'self-awareness' a machine can experience would've had to be programmed into it. Will we ever reach a stage where we are able to program sentience? I doubt it. There's always going to be something out there that's bigger than us.
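To make concrete what "following a set of pre-programmed rules" looks like, here is a minimal Python sketch (all state and action names are invented for illustration): every behaviour the machine has must already exist in its rule table, so there is nothing for it to "go against."

# Minimal sketch of a strictly rule-bound machine (hypothetical
# example; the rule names are invented for illustration).
RULES = {
    "obstacle_ahead": "turn_left",
    "battery_low": "return_to_dock",
    "idle": "patrol",
}

def act(perceived_state: str) -> str:
    # Unknown states fall back to a safe default; the machine
    # cannot produce any action outside its programmed table.
    return RULES.get(perceived_state, "halt")

print(act("battery_low"))       # -> return_to_dock
print(act("sudden_epiphany"))   # not in the table -> halt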
Originally posted by mobiusmale
Note that all the way through this learning, the machine will be "aware" of the fact that it is real...has certain characteristics...can see how it is different from other "beings" (and in what ways it is similar)...can develop a sense of purpose...etc.
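Read charitably, that kind of "awareness" could be as plain as a self-model the program maintains: a record of its own characteristics that it can compare against records of others. A rough Python sketch, with every field name invented for illustration:

# Hypothetical self-model: "knowing" its own characteristics is
# just data the program keeps about itself.
self_model  = {"name": "unit_7", "sensors": ["camera", "lidar"], "organic": False}
other_model = {"name": "operator", "sensors": ["eyes", "ears"], "organic": True}

# "Seeing how it differs from other beings" reduces to a field-by-
# field comparison of the two records.
differences = {key: (self_model[key], other_model[key])
               for key in self_model if self_model[key] != other_model[key]}
print(differences)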
Originally posted by Reptius
And the nuke idea is just silly, since the reason a nuke is so deadly is the radiation it spreads around the area, radiation that damages cells. Last I checked, a robot made of Kevlar and titanium doesn't really have cells like humans do, so I doubt a nuke would do anything to it.
Originally posted by namine
reply to post by mobiusmale
Okay, I get what you're saying. However, a machine that can learn a lot of things is still...a machine. All its "intelligence" would be nothing more than the result of complex algorithms: following pre-programmed instructions, keeping things in memory, and making decisions based on gathered data. Not quite sentience, is it?
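The loop namine describes is easy to write down. A minimal hypothetical sketch of that observe/remember/decide cycle, to make concrete what "just algorithms" means here (the sensor reading and the decision rule are stand-ins):

import random

# Hypothetical observe -> remember -> decide loop; no step requires
# anything beyond stored data and a pre-written decision rule.
memory = []

def observe() -> float:
    return random.random()          # stand-in for a sensor reading

def decide(history: list) -> str:
    # Decision is a fixed function of remembered data.
    average = sum(history) / len(history)
    return "explore" if average < 0.5 else "exploit"

for _ in range(5):
    memory.append(observe())
    print(decide(memory))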
Originally posted by namine
reply to post by ppk55
True, I was considering current technology. Hm, so you're suggesting that through clever programming we will one day be able to give an organism/machine sentience that wouldn't have developed otherwise? I very much doubt that...but for fun's sake, if something like that were to work out, we wouldn't need to kill it unless it was dangerous, went all Terminator on the world or something. If it can communicate and reason like a human, we might as well treat it like one. If it wants to live in society, it'll have to live by society's rules like everyone else, etc.
If we don't better ourselves correspondingly, then we're effectively making ourselves obsolete.
Originally posted by peck420
Why worry?
If a machine became "aware" but was still bound by the constraints of logic, then it would logically conclude that it is just a machine and carry on doing whatever it is doing.
For a machine to become a danger to humans, it must first become 'illogical'.