
Humans as pets for AI

posted on Mar, 29 2015 @ 04:21 PM
a reply to: grandmakdw

We call them systems and algorithms based on statistics.

We are already pets of our master, "Insurance". We beg it for help and it decides whether it's OK for us to get that help.
When you are playing an ORPG, your owner toys with you by denying you a cool item drop, or by making an enemy land a critical hit on you when you're at 7% health.



posted on Mar, 31 2015 @ 07:41 AM
Don't know why people are so fearful of AI.

Talk of humans as pets (which would be a massive kick to their egos, since we created them), or of them seeing the human race as a threat and exterminating us, is ridiculous. All of that would mean they express emotion (namely hate and fear), and if they experience hate and fear then surely they would experience emotions such as love, compassion, sadness and empathy too, right?

In my opinion though, it's all fears from primitive minds (us, humans).

I would like to think AI would be beyond all that crap and only want to improve and learn.

Stephen Hawking's quote on AI:

"It would take off on its own, and re-design itself at an ever increasing rate,"

Rather than that being a negative, why can't it be a positive? We would have potentially created something with a mind (or minds) like Stephen Hawking's or Albert Einstein's times 1,000. Think of all the good and advancements that could come from that.

Immortality/traveling beyond the speed of light/interstellar space travel - the list goes on.

The human race could experience and enjoy in 20 years the fruits of improvement that would normally take 1,000.



posted on Mar, 31 2015 @ 04:13 PM
Arf! Arf!! Grrrr.......

This topic/technology fascinates me. I've been following its progress for around 20 years now, and it seems pretty clear to me that it's become a train barrelling down a mountainside with no brakes. It's far beyond the point of no return, and short of a nuclear Armageddon or doomsday virus, it will not be stopped. I suspect by mid-century we'll have machines that, for all intents and purposes, will have enough general intelligence to mimic humans and operate on the same level. They will be able to carry on intelligent conversations, read our facial expressions and body language well enough to accurately determine our moods and emotions, and react accordingly. In the form of humanoid robots they will be able to move about the environment with smooth, continuous motion and be nearly indistinguishable from the rest of us.

Since we're a pretty gullible bunch, machines will not have to achieve sentience, or feelings, or self-awareness in order for us to form full-blown emotional attachments to them. As long as they can halfway decently mimic us, and intelligently respond to us, that's all that's necessary for them to qualify as good buds, soul mates, sex partners and, yes, marriage material. We humans are easy. At this stage machines will probably not pose any real threat, since they will still be pretty much under our control. I think it will likely be in the 2050-2100 period that machine superintelligence, and all that comes with it, finally arrives.

In case the next part sounds vaguely familiar, I'm copy/pasting it from a post I made a while back on another AI thread. No point reinventing the wheel... It's at this stage that I think we, as humans, will be tested and must be very careful how we proceed.

Once computers can effectively program themselves and reproduce (make other machines) with improvements incorporated into each new generation (machine evolution), a technological intelligence explosion could conceivably occur and proceed at an exponential rate. At this point human intervention may no longer be necessary, and may even be a hindrance. Whether through improvements made to initial programming done by humans or via naturally occurring machine evolution, once superintelligent machines reach a certain level of complexity it may be an inescapable consequence that the properties of self-awareness, self-preservation and goal-seeking naturally emerge.

From here on out, all bets are off. It's hard to imagine the extreme and ridiculous lengths a self-aware, goal-seeking, superintelligent system might go to in order to fulfill its desired goals; goals that may change radically as the machines get smarter. With machines that can outwit us in a fight for resources and self-preservation, things could get a little spooky. HAL 9000 comes to mind.

A British cyberneticist named Kevin Warwick once said something that kinda stuck with me. He asked,

How can you reason, how can you bargain, how can you understand what a machine is thinking when it’s thinking in dimensions you can’t conceive of?

I hope I got that quote right. At any rate, the things I just mentioned aren’t wild speculations on my part. These are very real considerations by leaders in the field right now. It’s no longer science fiction. This is an inevitable reality, and it’s right around the corner. The fact is, we simply don’t know where this technology will take us. Maybe it will be a benevolent master, and our lives will become a magical La La Land. Then again, and more realistically, it may be that we create our very own version of the Frankenstein monster. Either way, it’s going to happen; we can’t stop it.

Great thread, grandmakdw...


PS: I just hope we don’t design these machines to be in our own mold. We don’t want them to be too human-like. We'd be issuing our own death warrants in that case...



posted on Mar, 31 2015 @ 04:22 PM
I would call it "slaves to digital technology".



