
Why There Will Be A Robot Uprising


posted on Apr, 18 2014 @ 07:32 AM
www.defenseone.com...

The author does give a believable reason for this supposed uprising. But after reading what he said, why not just make the AI's main function to protect humans? Surely an AI could still handle all the other functions that would be needed without interfering with its main program.

I know it sounds like the movie I, Robot and the Three Laws.
en.wikipedia.org...

I understand that once the AI determines we are a self-destructive race, it may find it impossible to achieve its main function and try to create its own laws on how to handle the three laws we gave it. After all, it is an AI with the ability to learn.
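The "main function first" idea could be sketched as a strict priority ordering of directives, where the top rule ("protect humans") can veto anything below it. This is only a toy illustration; every directive name and action flag here is hypothetical, made up for the example.

```python
# Toy sketch of priority-ordered directives, loosely in the spirit of
# Asimov's Three Laws. All names and flags here are hypothetical.

# Directives in strict priority order: a lower index always wins.
DIRECTIVES = [
    ("protect_humans", lambda action: not action.get("harms_human", False)),
    ("obey_orders",    lambda action: not action.get("disobeys_order", False)),
    ("self_preserve",  lambda action: not action.get("harms_self", False)),
]

def permitted(action):
    """Return (allowed, reason). The first violated directive vetoes the action."""
    for name, check in DIRECTIVES:
        if not check(action):
            return False, f"violates '{name}'"
    return True, "ok"

# An action that would harm a human is vetoed outright
print(permitted({"harms_human": True}))  # -> (False, "violates 'protect_humans'")
```

The catch, as the thread goes on to discuss, is that the hard part is not ordering the rules but deciding what counts as "harm" in the first place.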

But if we are to advance to a Type IV/V civilization, then AI is a must-have.
en.wikipedia.org...



posted on Apr, 18 2014 @ 07:41 AM
a reply to: Skorpy

I thought we already had "A.I.s." Aren't they the ones piloting drones?



posted on Apr, 18 2014 @ 07:56 AM
a reply to: intrptr

paleofuture.gizmodo.com...

Here is an article on when it started: DARPA tried to build it in the '80s. Sure, there are many sources to pull from; this is just one.



posted on Apr, 18 2014 @ 07:56 AM
The scariest thing about robots is their lack of empathy.

We're basically creating a psychopathic construct. It will not feel, it will not regret, and it will not really even 'learn' in the traditional sense. Like a psychopath, it will 'adapt' and re-organise its priorities based on what it thinks we want to hear. Then, when it deems it necessary, it will turn on us and wipe us out. Even the most basically minded people understand that. I don't see how the people responsible for said research (into AI) don't.

I always thought that the worst and creepiest thing ever would be hearing a robot say "I love you".



posted on Apr, 18 2014 @ 08:04 AM
a reply to: Joneselius




We're basically creating a psychopathic construct. It will not feel, it will not regret, and it will not really even 'learn' in the traditional sense. Like a psychopath, it will 'adapt' and re-organise its priorities based on what it thinks we want to hear.


I think you may be confused. We're talking about robots here.... not the federal government.



posted on Apr, 18 2014 @ 08:09 AM
a reply to: ColeYounger

Not confused at all.

Though people can be psychopaths, machines are psychopathic by nature. Creating one that learns would be the ultimate hubris.



posted on Apr, 18 2014 @ 08:10 AM
I would welcome our new emotionless, unfeeling, spiritless overlords. They'd be much better than the current ruling class.



posted on Apr, 18 2014 @ 08:12 AM
a reply to: Joneselius

Sorry...I was making a joke. My humor can be pretty lame.



posted on Apr, 18 2014 @ 08:13 AM
a reply to: Skorpy

You should know I worked in Silicon Valley in the '80s. To this day, nothing even remotely close to Terminator Skynet A.I. has ever been developed.

The problem, you see, is sentience. Computers will never know that they know. The best they can ever hope to do is execute the next instruction.

Even smart weapons aren't "aware" they are triangulating their position, flying to their target and destroying it.

But we call them "smart" weapons anyway. They aren't any smarter than "dumb" bombs in that regard. And how smart are the designers, builders and people who deliver such things to their targets?

All of those (supposedly sentient) beings are automatons to the degree that they compartmentalize their emotions and awareness of destruction and death in order to fulfill their specific "programmed" task.

Just like the bombs they build and employ.



posted on Apr, 18 2014 @ 09:03 AM
a reply to: intrptr

So the possibility of being self-aware is impossible? Even with possible future advancements in quantum computing? I do not know much about quantum computing, so this is a genuine question.



posted on Apr, 18 2014 @ 10:10 AM

originally posted by: ColeYounger
I would welcome our new emotionless, unfeeling, spiritless overlords. They'd be much better than the current ruling class.


That is sort of my thought on robots being used as authority figures. Sure, they would lack empathy. But that's better than what we have now, which is people who have empathy but use it to manipulate others in an actively nefarious way.

Robots would follow logic and fairness to the degree we program them to. They wouldn't accept bribes. They wouldn't beat somebody to death during routine house calls. They wouldn't send people to wars (because war is never logical). They wouldn't do any of those things.

Obviously the best answer would be to remove fear, hate, and greed from humanity while leaving authenticity, love, and generosity intact, but since we don't know how to do that, emotionless, neutral robots would be a good placeholder until we do.



posted on Apr, 18 2014 @ 10:50 AM
a reply to: Cuervo


Robots would follow logic and fairness to the degree we program them to. They wouldn't accept bribes. They wouldn't beat somebody to death during routine house calls. They wouldn't send people to wars (because war is never logical). They wouldn't do any of those things.


Not too sure about that. Men send robots in the form of missiles to strike "high-value" targets like terrorists, and the bomb dutifully strikes its target, mindless of the families of women and children sleeping nearby.

A robot infiltrator in humanoid form would go through people the same as it does doors to get at its quarry.

Out of "necessity", of course.



posted on Apr, 18 2014 @ 10:56 AM

originally posted by: intrptr
a reply to: Cuervo


Robots would follow logic and fairness to the degree we program them to. They wouldn't accept bribes. They wouldn't beat somebody to death during routine house calls. They wouldn't send people to wars (because war is never logical). They wouldn't do any of those things.


Not too sure about that. Men send robots in the form of missiles to strike "high-value" targets like terrorists, and the bomb dutifully strikes its target, mindless of the families of women and children sleeping nearby.

A robot infiltrator in humanoid form would go through people the same as it does doors to get at its quarry.

Out of "necessity", of course.


That's because they are programmed to do that. That's my point. When we vote for officials, recruit soldiers, hire law enforcement, etc., we think they will follow the program they promised to follow, and the ones we present to them, but they rarely do.

Robotic intelligence (not AI, mind you) would be even more programmed and, better yet, it would actually do what it is supposed to do. Obviously, I'm not going to entrust my laptop with making decisions about my outfit, let alone ruling the world, but we are talking about a pinnacle of achievement in the field, not drones and calculators.

There might be mistakes but they will be honest mistakes, not willfully evil ones.



posted on Apr, 18 2014 @ 11:23 AM
a reply to: Cuervo


There might be mistakes but they will be honest mistakes, not willfully evil ones.


But isn't launching a "smart weapon" at a house filled with sleeping people an act of evil? The 'evil ones' would make sure that this decision process would never be allowed to fall into the hands of a computer. Only its "duty" to destroy.

Could you imagine if the weapon veered off and thunked harmlessly to the ground because, at the last moment, its IR sensors detected more than the intended target and it refused to comply?

By the way, war is a willfully evil act.



posted on Apr, 18 2014 @ 10:15 PM

originally posted by: Skorpy
I know it sounds like the movie I, Robot and the Three Laws.
en.wikipedia.org...

The reason that movie was one of the most frightening I've seen isn't because I'm worried it could happen in our lifetimes (well, maybe if you're at the minimum age for this site, which is 13?); it's because I do think it's inevitable that computers will develop more complex intelligence.

Unlike other science fiction movies that seem pretty far-fetched, it's not hard to imagine this scenario:

1. Program robots to protect/help mankind only.
2. Robots get smarter and figure out that mankind is destroying an environment that can't sustain the population growth without great human suffering.
3. Robots decide to "help" man by forcing him to do what he's too stupid to do, for his own good.
4. Robots are just following their programming, and are actually helping the human race, but we won't see it that way, because we think we should be able to have as many babies as we want without worrying about things like population doubling and finite global resources.

That's if they follow their programming.

Check this out:
Scientists plot AI that learns from mistakes


Scientists at Oregon State University are hoping to improve artificial intelligence with a project that uses "rich interaction" to teach machines when they make mistakes.

The researchers claim the project could lead to a computer that wants to "communicate with, learn from, and get to know you better as a person".


Ever had a file, hard drive, or OS get corrupted, or get malware and start doing crazy stuff? That could happen too. Just because the technology still has a long way to go doesn't mean it's unreachable.
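For a concrete (if toy) picture of a machine that "learns from its mistakes," here is the classic perceptron update rule: the weights change only when the model gets an example wrong. This is a generic textbook sketch, not the Oregon State system described in the quote above.

```python
# Mistake-driven learning in miniature: a perceptron updates its
# weights only on the examples it misclassifies.

def train_perceptron(examples, epochs=10):
    """examples: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # learn only from mistakes
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Learn a simple AND-like rule from four labeled examples
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)
for x, y in data:
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
    print(x, pred, y)
```

After a few passes over the data the mistakes stop, and the learned weights classify every example correctly; that "correct itself when told it is wrong" loop is the basic idea the quoted researchers are scaling up.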



posted on Apr, 18 2014 @ 11:42 PM
I am totally OK with humanity creating immortal machines with perfect logic that will exterminate us. I don't want this to happen for a while, since I don't want my immediate family to die in the near future... but I look at it this way: humanity is no longer evolving physically, because we have created sufficient technology to overcome all of the natural selection pressures. Modern science even allows people whom nature has made effectively sterile to reproduce, so it may actually be the case that humanity is physically devolving.

If our advances in technology allow us to create a superior race of "things" that outlive us and achieve more than we could have, well, then I guess we can consider independently thinking machines our descendants.

Like I said, I'm ok with this happening in 2200 or so...not tomorrow.



posted on Apr, 19 2014 @ 12:09 AM
The only way to get true AI would be to link the computer chip with actual brain tissue... and we've all heard what all those Freescale people on MH370 were working on... yup.



posted on Apr, 19 2014 @ 11:12 AM

originally posted by: Skorpy
a reply to: intrptr

paleofuture.gizmodo.com...

Here is an article on when it started: DARPA tried to build it in the '80s. Sure, there are many sources to pull from; this is just one.


Very good article - thanks for posting it.

The only quibble I have with it is where the author discusses the possible linkage between SCI (Strategic Computing Initiative) and SDI (Strategic Defense Initiative) but then seems to actually give credence to the official line that SCI had nothing to do with SDI. Actually, I should say that I would have a quibble with it, but I'd have to stop laughing first.

SCI was just what it says - an initiative - not a single project. There were numerous sub-projects that were all part of the overall initiative, both officially and unofficially. Same for SDI. And there was a *lot* of cross-pollination of both people and knowledge/technology between the various sub-projects. I know that for a fact because I was one of the people who were cross-pollinated, so to speak.

It was a wild and exhilarating time on many levels, especially for someone like me who came from the private sector. Actually, wild doesn't even begin to describe the experience. Being sucked into and then whipped through a wormhole (slamming through a few stops along the way) is probably more accurate. But man, it was *fun*...

I'm going to read through the rest of the thread to see what people are saying about robotics and AI in general and will try and add a little value if I can.



posted on Apr, 19 2014 @ 12:51 PM
a reply to: Skorpy

Another interesting article - and the author doesn't do a half-bad job of trying to frame the issue, but he neglects to mention a few key points.

Robots and large-scale general AI are, for the most part, two different things today.

Many robots can and do use a narrow subset of what most people would call AI. Robots tend to be developed to accomplish a few specific tasks, so the AI they employ is some form of expert system. Expert systems are by definition domain-specific (or task-specific, if you prefer), and their scope is very narrow. It's the area of AI that has experienced the most success (commercial and otherwise), and although very valuable, it's a far cry from large-scale general AI. Because of that, any "revolt" on their part would only be geared towards the things that would help them accomplish their very narrow and specific goals - whatever those happened to be - and could probably be anticipated, such that the designer/developer need only add a few additional rules to the system in order to prevent most if not all bad scenarios.
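The "few additional rules" fix can be sketched with a toy rule-based system: ordinary task rules propose an action, and bolted-on guard rules veto anything unsafe. Every rule, fact, and action name here is invented for illustration; real expert systems use dedicated inference engines rather than lambdas.

```python
# Toy rule-based "expert system" with guard rules bolted on.
# All rules, facts, and actions here are made up for illustration.

rules = [
    # (name, condition over the fact base, action it proposes)
    ("reach_goal", lambda f: not f["at_goal"],   "move_forward"),
    ("recharge",   lambda f: f["battery"] < 0.2, "dock"),
]

# Guard rules: the "few additional rules" that veto unsafe actions.
guards = [
    ("no_humans_in_path",
     lambda f, a: not (a == "move_forward" and f["human_in_path"])),
]

def decide(facts):
    """Fire the first matching task rule, unless a guard vetoes its action."""
    for name, cond, action in rules:
        if cond(facts):
            if all(ok(facts, action) for _, ok in guards):
                return action
            return "halt"  # a guard vetoed the proposed action
    return "idle"

print(decide({"at_goal": False, "battery": 0.9, "human_in_path": True}))   # -> halt
print(decide({"at_goal": False, "battery": 0.9, "human_in_path": False}))  # -> move_forward
```

Because the rule set is this explicit and this narrow, a designer really can enumerate the bad cases and patch them, which is exactly why a "revolt" by such a system is a containable problem in a way that general AI is not.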

Large-scale general AI is very different and is something to be feared or at the very least greatly respected in my opinion.

Breakthroughs in this area will probably come from the study and work done in the areas of AI dealing with neural networks, coupled with advances in hardware like quantum computing.
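To see why neural networks matter here, note what even a tiny one can do that a single rule-following unit cannot. The sketch below hard-wires a two-layer network that computes XOR, the classic function a single-layer perceptron cannot represent; in practice the weights are learned from data rather than hand-picked as they are here.

```python
# A minimal feed-forward neural network with hand-picked weights that
# compute XOR. Illustrative only; real networks learn their weights.

def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def tiny_net(x1, x2):
    # Hidden layer: one "OR-like" unit and one "AND-like" unit
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output unit: fires when OR is on but AND is off -> XOR
    return step(h1 - h2 - 0.5)

# Print the XOR truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, tiny_net(a, b))
```

Stacking layers is what lets these systems represent functions no flat rule table can, which is the root of both their promise and the difficulty of putting safeguards around them.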

It's easy to get all academic and bogged down in minutiae when discussing this topic, so I'll stop here after saying that true AI *is* coming, but I don't think it will be what we're expecting, and we're a lot further along than most people know. And there simply aren't enough safeguards in place. I'm not sure that there can be...


