
could robots really take over?

posted on Jul, 25 2010 @ 10:02 AM
Hi guys and girls, I need some serious input on this one, because it's been a never-ending argument with my friend, and neither of us will back down because we both believe we have valid points.

Basically, I argue that robots will never think for themselves, because every action they take will always have to come from a human. (I know that robots can pick up certain objects when told to, do maths, and all that kind of thing.)

But that's only because they've been programmed to do those things. My argument is that you could sit a robot in front of a TV with a video constantly showing how to use a weapon, day in and day out, but no matter how long that robot sits there, it would never get up and be able to use that weapon unless it's been programmed to.

Now that's my argument, but my friend seems to think that they already can think for themselves and can do this and that. But no, they can't. OK, they can do amazing things, but only because we humans have told them to do those amazing things. Please tell me if I'm wrong, but I don't think it would ever be possible for them to take over, not without the help of humans. What do you all think?



posted on Jul, 25 2010 @ 10:05 AM
IMO, no.

For one thing, we don't need electricity to run; they will, at least for the foreseeable future.

We will never be dumb enough not to put a "kill switch" into any sort of intelligent robot we design, now or in the future.

Way too many Hollywood sci-fi films have shown what happens when you don't, lol.

~Keeper



posted on Jul, 25 2010 @ 10:41 AM
Ever heard of the gray goo theory for the end of the world? Look it up; it's about nanobots spreading and eating everything up. How scary would that be? I truly think it could be a possibility.

As far as the Terminator scenario goes, I think anything is possible. Look at what John Connor said in T3 about Skynet: it was basically cyberspace that took over. I think we as people are onto some scary, groundbreaking stuff. A lot of these robots are programmed to make sound decisions for themselves now; put a gun in their hands and a targeting system in their heads, and voila, you've got the T-800 rolling down the block. I truly think we should back away from A.I. and keep humans in the loop, always. It's just the safe thing to do.



posted on Jul, 25 2010 @ 10:43 AM
It is possible, I think. There will come a point when a robot or suchlike becomes advanced enough to guide its own development; whether or not we allow it "unlimited" learning is a different story.

Although I do think at some point someone will do it, just to see what happens.

www.itproportal.com...

An article on the morality of engineered sentience.

The best articles on this seem to be in New Scientist; sadly, I can't afford the subscription.



posted on Jul, 25 2010 @ 10:43 AM
The concept of artificial intelligence is that it learns through cross-referencing, logic algorithms, and, yep... good ole mimicking.

Artificial intelligence is meant to start with minimal programming and let the robot add to that programming. So, your example of a robot watching a video endlessly: well, if it's a well-designed AI system, then yes, it could learn how to become a soldier simply by association.

However, computers are logical beasts with priority levels. In AI systems, core programming cannot be circumvented, so all you need is a baseline in there saying no hurting/damaging of life or whatnot, and you're good.

Isaac Asimov wrote a book long ago called I, Robot that dealt with this issue and laid out three basic laws that had to be installed in all AI systems:

"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

So, problem addressed.
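As a purely hypothetical toy sketch (nothing like how real robot controllers are actually built), those "priority levels" could look like hard-coded checks that vet every action the learned layer proposes, in order:

```python
# Toy sketch only: core "laws" as hard-coded checks, evaluated in
# priority order before any learned behaviour is allowed to run.
CORE_LAWS = [
    lambda action: not action.get("harms_human", False),     # First Law
    lambda action: not action.get("disobeys_order", False),  # Second Law
    lambda action: not action.get("harms_self", False),      # Third Law
]

def permitted(action: dict) -> bool:
    """True only if every core law, checked in priority order, allows it."""
    return all(law(action) for law in CORE_LAWS)

# A learned behaviour proposes an action; the core layer vets it.
proposal = {"name": "fire weapon", "harms_human": True}
print(permitted(proposal))  # False - blocked by the First Law
```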


[edit on 25-7-2010 by SaturnFX]



posted on Jul, 25 2010 @ 10:45 AM
They already have:

www.abovetopsecret.com...



posted on Jul, 25 2010 @ 10:58 AM
reply to post by SaturnFX
 


The problem with a robot that can learn and recode itself is that it can create multiple interpretations of the three rules.

This is exactly what I, Robot was about. The main computer that controlled the robots got to thinking about the rules and realised that humanity was destroying itself with wars, murders, crime, and corruption. It interpreted doing nothing about all of this as a breach of the original three rules, so it set out to enslave humanity to protect humans from themselves.




posted on Jul, 25 2010 @ 11:16 AM

Originally posted by MR BOB
so it set out to enslave humanity to protect humans from themselves.



Might be a good idea.

Besides, is it much worse than being enslaved by our current corporate masters?



The overall scenario is this, though: as we build greater robots, we will not leave ourselves defenceless. If one toaster goes on the fritz, we will have plenty of toasters not on the fritz to counter the one rogue robot. So, let's say generation 7 has found a way around rule 1; well, generations 1-6 are required to destroy generation 7 because of rule 1.

The fact that this has been discussed since the '40s shows how much thought has already gone into the problem.

I would suggest adding one more rule to that list of three:

A robot must not impede the willful freedoms of humankind

And there ya go: enslavement scenario nullified. (However, don't put AI on your locks, else someone could simply demand that the lock open, since keeping it shut would impede the freedom of them walking in... heh.)
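Continuing the toy sketch from above (still purely hypothetical), the fourth rule would just be one more check appended to the list, and the lock loophole shows why plain-English rules are so hard to turn into code:

```python
# Extending the hypothetical CORE_LAWS list from the earlier sketch.
CORE_LAWS.append(
    lambda action: not action.get("impedes_freedom", False)  # proposed Fourth Law
)

# The loophole: an AI lock that stays shut could be flagged as
# "impeding freedom", so the rule forbids it from staying locked.
burglar_request = {"name": "keep door locked", "impedes_freedom": True}
print(permitted(burglar_request))  # False - the lock may not stay shut
```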



posted on Jul, 25 2010 @ 11:16 AM
The idea of robots taking over the human race is not something that has been overlooked by science. There are some fairly decent reads on the subject that you can evaluate for yourself.

One of them is from the chief scientist at Sun Microsystems, who wrote a twelve-page article on the subject.

In it he states that not only does robotic A.I. have the potential to turn on its "master", it is likely to reach such a point within the next 30 years. He gives his reasons in terms that are scientific yet understandable to the layperson.

The article is called "Why the Future Doesn't Need Us", by Bill Joy.

Here are a few quotes:

“In my own work, as codesigner of three microprocessor architectures - SPARC, picoJava, and MAJC - and as the designer of several implementations thereof, I've been afforded a deep and firsthand acquaintance with Moore's law. For decades, Moore's law has correctly predicted the exponential rate of improvement of semiconductor technology. Until last year I believed that the rate of advances predicted by Moore's law might continue only until roughly 2010, when some physical limits would begin to be reached. It was not obvious to me that a new technology would arrive in time to keep performance advancing smoothly.

But because of the recent rapid and radical progress in molecular electronics - where individual atoms and molecules replace lithographically drawn transistors - and related nanoscale technologies, we should be able to meet or exceed the Moore's law rate of progress for another 30 years. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today - sufficient to implement the dreams of Kurzweil and Moravec.”
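(As a quick sanity check on that "a million times as powerful" figure: if performance doubles roughly every 18 months, the classic Moore's-law pace, then 30 years gives 20 doublings:)

```python
years = 30
doubling_period = 1.5              # years per doubling (assumed Moore's-law pace)
doublings = years / doubling_period
print(doublings, 2 ** doublings)   # 20 doublings -> 1,048,576x, about a million
```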

You can read the full article here; I believe it's worth reading:
www.wired.com...

In addition to this, there is another interesting read, a book titled Structure of the Global Catastrophe: Risks of Human Extinction in the XXI Century.

It basically covers every possible extinction-level event that could occur, some strangely akin to the recent Gulf oil disaster; it talks about artificially creating a supervolcano by deep drilling (another topic).

On page 124 you will find the chapter titled "The dangers associated with robots and nanotechnologies".

It basically mirrors the theme given by Sun's chief scientist. Unfortunately I cannot produce a quote, as my computer cannot hold any more books; however, you can read and download the book for free at scribd.com.

www.scribd.com...



posted on Jul, 25 2010 @ 11:16 AM

Originally posted by SaturnFX
However, computers are logical beasts with priority levels. In AI systems, core programming cannot be circumvented, so all you need is a baseline in there saying no hurting/damaging of life or whatnot, and you're good.
...
So, problem addressed.


---

I hope everyone here notes the irony of your Cylon avatar, which is a fully intelligent killing machine!

The technology to allow for artificial sentience is already here: the software- and hardware-based neural net.

en.wikipedia.org...

This type of mimicry of the human neural system is the best bet for TRULY emulating and then eventually surpassing human intelligence.

One can EMULATE ANY logic system using software, and this is what is being done RIGHT NOW!

www.engadget.com...
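(As a tiny illustrative example of emulating a logic system in software: a two-layer neural net computing XOR, a function no single artificial neuron can represent. The weights here are hand-picked, not learned, purely to show the idea:)

```python
import numpy as np

step = lambda x: (x > 0).astype(int)   # threshold "neuron" activation

def xor_net(a: int, b: int) -> int:
    # Hidden layer: one neuron fires on (a OR b), the other on (a AND b).
    hidden = step(np.array([a + b - 0.5, a + b - 1.5]))
    # Output: fires when OR is true but AND is not, i.e. XOR.
    return int(step(np.array([hidden[0] - hidden[1] - 0.5]))[0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # prints the XOR truth table
```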

By using software to emulate the actual physical functionality of the human brain, we could build up a general A.I. which could then be copied into hardware such as that below:

Festo Engineering of Germany:
video.google.com...#

Robots of Japan:
video.google.com...=-160247639007222636

--

The human brain is equivalent to approximately 100 petaflops of processing power, or about 100 quadrillion floating-point operations per second. Since IBM's Blue Gene/Q supercomputer will have 20 petaflops by 2011, we're already 1/5 of the way there. BY 2020 we'll have multi-hundred-petaflop machines in a space of about four 72" racks, which we can then use to FULLY EMULATE an entire human brain.

IBM Blue Gene Q
en.wikipedia.org...

Human Level Processing Power:
www.foresight.org...
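(A rough back-of-the-envelope check on that timeline, assuming, hypothetically, that supercomputer performance keeps doubling every 18 months from the claimed 20 petaflops in 2011:)

```python
pf_2011 = 20        # Blue Gene/Q, petaflops (figure claimed above)
brain_pf = 100      # rough human-brain estimate (figure claimed above)
for year in range(2011, 2021):
    pf = pf_2011 * 2 ** ((year - 2011) / 1.5)
    print(year, round(pf), "petaflops", "<-- brain-scale" if pf >= brain_pf else "")
# Crosses the 100-petaflop mark around 2015 at this pace,
# and reaches ~1,280 petaflops by 2020.
```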


Group 10 or 20 of these supercomputers together running full human-brain simulation software, and you could teach it 24/7, 365 days a year, using multiple instructors and teachers to bring its intelligence level up to and past PhD level within a year or so, and then let it surpass us humans within 3 or 4 years via self-learning.

So YES, robots COULD TAKE OVER if we let them, or once they begin to build their own Terminator or Cylon bodies! Once they have FULLY FLEXIBLE and AUTONOMOUS mechanical bodies, then it's GAME OVER for humans!

And I can assure you that SOME egghead or GROUP OF EGGHEADS WILL build an armoured robot body with a 100+ petaflop brain in it, and then, when that AI gets loose and builds more copies of itself, we WILL get JUDGEMENT DAY!



posted on Jul, 25 2010 @ 11:17 AM
reply to post by skillz1
 



I argue that robots will never think for themselves, because every action they take will always have to come from a human.


This topic has been discussed many times, and this same argument is always brought up. People who keep repeating it clearly have not been following the development of self-learning AIs over the past decade or so.

Please do some research.

reply to post by SaturnFX
 



I, Robot


Yes. And this is also brought up every time this discussion takes place. Those stories are now sixty years old; it's possible that the "three rules" are not quite on the cutting edge of thought.



posted on Jul, 25 2010 @ 11:32 AM
I've been in a few situations where the machine was boss. Only 'cos the humans let it.

At the shop: I pick my items from the shelves and take them to the checkout, but the till is off because of a computer problem, or it's a power cut because of the snow, or whatever, and the human won't let me buy the stuff I have in my basket. No amount of arguing will get the human to take control of the money and sell me the items.

Yes, robots can and do take over in some situations. I'm sure there are many other scenarios; I've just mentioned the ones that affected me.



posted on Jul, 25 2010 @ 11:38 AM
Regarding the three laws of robotic behavior, consider this short article from the Inquirer.



“Asimov's robot laws need updating
Humans need to be responsible
By Nick Farrell
Fri Aug 21 2009, 10:21

ISAAC ASIMOV'S Three Laws of Robotics need a makeover, according to a couple of AI boffins.

Asimov's first law of robotics prohibits robots from injuring humans or allowing humans to come to harm due to inaction. The second law requires robots to obey human orders except those that conflict with the first law. The third law requires robots to protect their own existence, except when to do so conflicts with either of the first two laws.

source


In any case, the robots of the next 30 to 50 years are no more the robots of sci-fi movies than Robby the Robot from Forbidden Planet is applicable to current developments in robotic engineering. Aside from that, this is more about the A.I. of a machine, which would be more akin to a human being.

In such a case, an intelligence, artificial or otherwise, given a body has the potential to outperform a human in every way. Laws built for it to follow are just as breakable as the laws the judicial system gives mankind to follow.

There is absolutely no guarantee that an intelligent creature will continue to abide by something it may come to see as contrary to its own notions of existence.

 


Mod Edit: External Source Tags – Please Review This Link.


[edit on 25-7-2010 by Ahabstar]



posted on Jul, 25 2010 @ 11:39 AM
Can't remember the movie, but it was a ship full of electronics that was struck by lightning or something of the sort...

Anywho, it had assembly arms and all sorts of soldering tips and welding bots, and they were able to kill a ship full of humans.

I bet they've already got a room similar to that somewhere deep in S4...



posted on Jul, 25 2010 @ 12:13 PM
Well, I'm happy the OP has taken the side he has, else I would not have much to say. So here goes.

I work as a programmer, and we use programs to write programs; we are now so far away from the machine code that we are like chefs, except all we know is how to open a tin and put it on the gas ring. We are no longer in control, and yes, I was once quite good with the old 'C'.

Today they have virtual worlds inside PCs, and they program little virtual robots to try to stand up and walk; the ones that do best go on to breed, and the ones that fall over die and don't reproduce.

After a few generations these little bots are able to walk, but the original programmers are unable to understand the assembled logic that enables them to walk, or to really follow the execution paths.
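(For the curious, a bare-bones, hypothetical sketch of that breed-the-walkers idea; the "fitness" function here is a made-up stand-in for how far a bot walks in a real physics simulation:)

```python
import random

def fitness(genome):
    # Stand-in for "distance walked": genomes near 0.7 score best.
    return -sum((g - 0.7) ** 2 for g in genome)

def evolve(pop_size=20, genes=5, generations=50):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # the best walkers breed...
        children = [
            [g + random.gauss(0, 0.05) for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]                                          # ...the fallers die out
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # genes drift toward the "best walker" values
```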

That's right: the best thing for writing code is a machine. And we can argue that it all just comes down to bits that are on or off, while ignoring that we are made of mostly water and a bit of carbon with a dash of salt.

Time and again they keep having to redefine what intelligence is, and it will go on until they end up saying it's defined by needing to sleep and dream, or something stupid.

Go back 20 years and we had MS-DOS; now we are on Windows beta version 100. But the high-tech stuff is 20 years ahead of where we are now, and I don't really understand what 'self-aware' means or how you would spot it in a computer; it's not going to be too obvious to most people.

You could call it evolution, but just how are we going to build in kill switches or pull the plug when we already have computers controlling backup emergency power systems and flying drones with only a little help from humans, drones that can still land without any human intervention?

The dinosaurs thought they had all paths covered, and we are engineering our own replacement, which will more than likely be crystal-based and not electrical, because laser technology has already gone past the transistor stage, and those chips won't need cooling like the electrical equivalents we use today.



posted on Jul, 25 2010 @ 12:27 PM

Originally posted by SaturnFX
A robot must not impede the willful freedoms of humankind

And there ya go: enslavement scenario nullified. (However, don't put AI on your locks, else someone could simply demand that the lock open, since keeping it shut would impede the freedom of them walking in... heh.)


Don't you think it's a bit like telling a dog not to bark? And if they have a higher IQ than us, wouldn't they want equal rights for robots?

Can a dog get a man to sit down and shut up when told, or is it the other way around? And can either ever get the other to obey this rule 100% of the time?

I say no.



posted on Jul, 25 2010 @ 12:34 PM
Well, I build (kit) robots, have done some programming with them, have experience with web bots, and so forth.

Will they take over? Not a chance. Every designer puts in a "kill switch" or "God mode", and if the machine doesn't respond to that, you pull the plug.



posted on Jul, 25 2010 @ 12:39 PM
If you're worried about it, you could buy robot attack insurance from Old Glory Insurance.

They eat old people's medicine for fuel, you know.

Linky



posted on Jul, 25 2010 @ 12:47 PM
Here's a thought: robots that develop crazy conspiracy theories that humans may one day want to shut them down...

Those tinfoil-hat-wearing nutcases...

Oops, I mean human-flesh-hat-wearing nutcases.



posted on Jul, 25 2010 @ 12:52 PM
reply to post by gunshooter
 


That's exactly my point, though: that's humans again putting things into their heads and programming them. They're not really thinking for themselves.


