
Could robots really take over?

page: 2

posted on Jul, 25 2010 @ 12:58 PM
I don't understand how anyone can say that robots could take over. If it weren't for humans, robots would not exist. Everything about a robot is man-made, and that goes for its actions.



posted on Jul, 25 2010 @ 01:06 PM
Although I think everything is possible... I don't think it's likely that robots will take over. In programming there's always a "default".

"If A happens do C, else if B happens then do D, if it's not A or B or there's any conflict... then shut down."

There's also manual and automatic mode.

And anything that an AI robot can think of, we can think of too. And if we can't, we'll build one that can.

PEACE.



posted on Jul, 25 2010 @ 01:10 PM
reply to post by skillz1
 




It seems that what you are essentially describing is robotic sentience being prevented by human construction.
However, it also seems that you're overlooking that a machine can be made to learn on its own and go BEYOND its original programming, to do things that it chooses to do.

Yes, humans did the first programming, but unless they specifically tell it NOT to go beyond that, you have no advancement in A.I.
When a machine learns from a mistake and corrects it because of a human program, then later modifies itself to prevent that mistake from happening again, it has transcended its human programming and become its own motivator.
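
To make that concrete, here is a toy sketch (everything in it is invented for illustration): the only thing the human writes is "remember your failures"; the avoid-list the machine accumulates afterwards is behaviour nobody programmed in explicitly.

    import random

    class Learner:
        def __init__(self, actions):
            self.actions = actions
            self.failed = set()   # grows with experience, not written by hand

        def choose(self):
            options = [a for a in self.actions if a not in self.failed]
            return random.choice(options) if options else None

        def feedback(self, action, succeeded):
            if not succeeded:
                self.failed.add(action)  # never repeat this mistake

    bot = Learner(["grip hard", "grip soft", "drop"])
    bot.feedback("drop", succeeded=False)
    print(bot.choose())  # "drop" is now excluded on every later run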



posted on Jul, 25 2010 @ 01:11 PM
Today, all the supercomputers in the world put together are not faster than one human brain, but in a Discovery documentary I saw recently, experts said that in 100 years a supercomputer would be 500 times faster than the human brain.



posted on Jul, 25 2010 @ 01:17 PM
reply to post by kennybubba
 

Which is probably a huge understatement.
I have an encyclopedia from the early 20th century that boldly states that mankind will never go faster than 70 MPH, otherwise the strain would kill him.



posted on Jul, 25 2010 @ 02:05 PM
reply to post by SaturnFX
 


No rules and no laws could be forced on an intelligence!

Humans don't follow rules!

If you want a strong AI with the same capabilities as a human, it will not follow the stupid rules and god-laws you give it.

Psychology doesn't work like that: the first AI should be a friendly AI, and only some human beings have the goodness to teach it the good things first.



posted on Jul, 26 2010 @ 06:57 AM
The bottom line for me is that a robot will never be able to take over, or even think for itself. Now, I understand what a lot of you are saying: YES, robots can make mistakes and then later learn to correct them.

And YES, robots can work out certain problems, but so can a calculator. My reason for believing it would be virtually impossible is this: let's look at it properly. You have some scientist bloke who makes a robot, which means he knows everything it can do and everything it can't do.
So can you imagine him coming in to work one day and the robot is sat there having a cup of tea with its feet up? The answer is NOOOOO, because it lacks things that can't be duplicated and put into a machine.

Things such as instinct, a brain, the spirit, thought, even common sense. The end result is that they only do what we humans tell them to do. And if robots could think for themselves in the future, then what are they waiting for? Why not start now? I'll tell you why: because HUMANS, and the key word there was humans, have not TOLD them to. End of, lol.



posted on Jul, 26 2010 @ 07:41 AM
reply to post by skillz1
 


With respect, your argument is the same as suggesting that a two-year-old will never be able to grow into an engineer because it lacks the ability to do basic math, much less advanced math.

You seem to overlook the fact that said two-year-old will advance and learn, initially by being fed a steady stream of education, after which it will begin to explore on its own.

This is the type of A.I. that science aims to create, not a mindless automaton more akin to your toaster.

You suggest that the creator of such a machine would know "everything" about it, but that's not completely true. The systems that science desires to create are self-solving and self-replicating. They will essentially be able to do things that surprise their makers as their intelligence grows. That takes Joe Blow Everyman out of the equation entirely. The question then is: are humans needed?



posted on Jul, 26 2010 @ 07:45 AM
reply to post by skillz1
 


In addition, the A.I. science is after doesn't need humans to program it, or to implant anything into it at all, as it would be self-sufficient in that respect. That once again bypasses your argument that they can only do as humans desire.

The basic point here is that humans are not part of the machine world, and the machines don't need human decisions to function.



posted on Jul, 26 2010 @ 07:57 AM
I think this will happen, but I reckon it will more likely be mid-century rather than the more optimistic 2030, since there will be a lead-in period between reaching that level of technology and then developing the right level of scientific complexity (in both software and hardware).

What will happen then? Well, I've a feeling that no matter how much restraint you put in place to shut down one of these things, someone, somewhere, out of curiosity or whatever, will allow one of these machines to grow and develop beyond human-level intelligence.

What happens then? Well, all bets are off, but I'm inclined to think the most likely scenario is one already raised - that they will want to save us from ourselves. That's if the nature of their intelligence has a level of understanding of altruism, emotion and protection. If not, then the scenario might indeed be extremely worrying. They will see the whole of documented human history - the tendency toward wars, aggression, genocide, self-destruction, civilisation collapse, etc. - and one of the first things they will do is try to control global nuclear arsenals. I think whatever happens, it will be a critical juncture in human history. In a way, they might view us as gods (albeit flawed gods), since we are the Creators, and therefore there might be an inherent desire on their part to protect us.

But will they eventually put us in human zoos much like we do gorillas, chimps and bonobos? No, I think most likely they will work alongside us in an attempt to meet common goals. You can bet that a unified advancement of both species would be incredible, and achievements and breakthroughs would happen quickly. However, there will come a point where their advance will become unbelievably rapid and we would lose sight of our robotic brothers. Our choice will be whether to upload ourselves into robot form to go along with them, or remain in human form.

So, what do I think we have to fear? No doubt in my mind, the lead-in period to this - a period we are in right now - is the most fearful. It's in our hands whether we reach this point, this critical juncture, or destroy ourselves first...



posted on Jul, 26 2010 @ 08:20 AM
OK, well I'm glad you're having some input, but I'll leave you with this: the way a child learns has a lot to do with instinct and emotion. I.e., if you're naughty and your parent smacks you, then you realise that what you did was wrong.

You also felt the pain. The way a robot learns is nothing like how a human learns, due to the fact that they don't have emotions and feelings. They also don't feel pain or have a mind of their own.

It's like this: a guy goes out, buys a gun and kills 30 people. Nobody has told him or shown him how to do that, but yet he did it! And this is exactly what I'm saying.

If I place a gun on a table with a robot sat down beside it, and I make this robot watch constant video footage of people using guns and how they work, it won't matter how advanced this robot is: it will never just stand up and start firing the weapon on its own, unless of course it's been programmed to do so by a human. And that is because it does not have the things which are invisible and impossible to recreate, such as instinct, emotion and common sense.



posted on Jul, 26 2010 @ 08:29 AM
reply to post by the.lights
 


Agreed.
I honestly doubt that we will accomplish anything significant by 2030, though the remarks for that date were that the technology would only be arriving at that time, not that we'd be having a robot war.

In regards to saving us from ourselves, I've heard that idea put forward before. The problem is that if they were to save us from ourselves, it might involve methods of admonishment by the machines that are worse than how we treat ourselves, the only difference being that the machines would be methodical and calculated.
If the machines were truly emotionless and following the thread author's scenario, that may be the case. I, for one, presume that such A.I. would have some level of emotion, even if it were alien to us. Such emotional machines might be reasoned with.

If the machines decided that our numbers must first be diminished as a way to save us, the most logical thing for them to do would be to kill people off forcefully.
Then the next stage of chaos may ensue. Doubtful as hell, but fun fiction nonetheless.



posted on Jul, 26 2010 @ 08:42 AM
reply to post by skillz1
 


I respect your opinion, skillz, I really do, and moreover that you're not arrogant about it; that's refreshing and hard to find on ATS these days.

I happen to disagree, and I personally think that your views on where A.I. is going are primitive.
That's not meant as an insult at all, so please don't misunderstand.
From the research I have done, it would appear that science has found ways around the instinct patterns of the biological brain.

The only difference being that in a human the calculations are chemically based.
I agree that a computer doesn't have the instinctual capabilities that a human has. If a person sees fire, instinct tells them it will hurt before they touch it.

A machine may actually have to touch it to figure that out. After, say, four or five generations of machines, for lack of a better term, "evolving", they too will develop what can best be described as instinct, essentially passing on their collective knowledge to the next generation, which in turn uses that knowledge to further its progress.

It's like robo-RNA, or robotic racial memory, or something. Would that be close enough to instinct? I don't know, but I think it's going in the right direction.
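
As a toy sketch of that robo-RNA idea (all names invented for illustration), each generation could simply start from the union of everything its ancestors learned, so a hazard only ever has to be discovered once in the whole lineage:

    def next_generation(parent_memory, new_lessons):
        # the child inherits the parent's lessons plus its own
        return parent_memory | new_lessons

    gen1 = next_generation(set(), {"fire burns"})
    gen2 = next_generation(gen1, {"acid corrodes"})
    gen3 = next_generation(gen2, set())

    print(gen3)  # {'fire burns', 'acid corrodes'} -- inherited, not re-learned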


[edit on 26-7-2010 by snowen20]



posted on Jul, 26 2010 @ 09:02 AM
reply to post by snowen20
 


They would have to be taught war, be programmed for combat, or have warlike tendencies, and I think that eventually they would decide that warlike tendencies might very well go against their higher reasoning for self-preservation. Would they want to preserve mankind? Most likely, but I agree with your viewpoint - in what form is the important question. It would be nice to think that a form of altruism might exist within a higher, artificial intelligence, but we have no case histories besides nature, and that usually turns out badly for the lesser organism.

Why reduce human numbers when, in a few short years, you can create limitless energy and upload humans to matrix worlds or machine bodies? I know that models predict human numbers will peak mid-to-late century and then start to decline toward a more sustainable level. I truly hope that culling mankind will not be an option. Anyway, we've not done too bad a job of it ourselves throughout history. We can only hope they don't share, or surpass, our murderous tendencies.



posted on Jul, 26 2010 @ 09:09 AM
That actually made a lot of sense, and it has opened my eyes to it a little bit more. But I guess it's a never-ending argument, haha, because what you're saying is right and what I'm saying is right. The fact that no one will really know until it happens kind of brings this to a halt. However, for the human race's sake, I hope it never gets that advanced, or it will be Terminator all over again, lol.



posted on Jul, 26 2010 @ 01:35 PM
reply to post by snowen20
 


Think back to 1990, which is 20 years behind us. The Mac Classic is from that time. Imagine if I'd shown someone back then my iPhone. That's how far we've come in 20 years - a device with millions of colors (not black and white), higher resolution, 512MB of memory (compared to 4MB), 64GB of storage (as opposed to 40MB) and wireless networking (compared to no Internet), and it fits in your pocket.

There's no reason to think that the advances in technology over the next 20 years (bringing us to 2030) will be any different. In fact, according to Kurzweil, they may even be more staggering.

Quite a few groups are working on different approaches to AI as we speak, and have been for some time now. Traditional desktops will have the computing power of the brain by 2025. By 2040 they will have the computing power of the entire human population. Nanotechnology is one of the single fastest advancing fields in the world right now. Advances in things like battery technology and energy generation technology are also coming - there are already, for instance, technologies that allow us to generate energy from ambient radio waves.
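
As a rough back-of-envelope check on that 20-year comparison, assuming the classic Moore's-law doubling of roughly every 18 months (an assumption, not a law of nature):

    def growth_factor(years, doubling_years=1.5):
        # capability multiplier after `years` of steady doubling
        return 2 ** (years / doubling_years)

    print(f"{growth_factor(20):,.0f}x")  # roughly 10,000x over 20 years

The iPhone-vs-Mac-Classic figures above (4MB to 512MB of RAM is 128x; 40MB to 64GB of storage is about 1,600x) sit comfortably inside that range.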

Combining the above will inevitably lead to self-aware, self-replicating, self-improving independent machines. The technology is inevitable. The question is not will it happen, but when will it happen.

And yes, as stated, you don't just program a set of directives into a free-thinking mind. It won't work.



posted on Jul, 26 2010 @ 02:02 PM
reply to post by rozetta
 


I agree.
There are some things that would be quite fantastic; unfortunately, I doubt that in the next 20 to 40 years we will have the intellectual maturity to handle the hardware we make.

I played Pong as a child, and now I have a PS3. If you had told me back in '85 that there would be such a thing, I would have called you a conspiracy theorist. Lol.

Along with the endless possibilities in A.I. comes the idea behind quantum technology.
Researching quantum computers will shed light on some interesting areas of science.

Imagine a superior A.I. that has reached a point, through a dozen generations, where it doesn't need human input whatsoever, and if you were talking to it on the phone or through a computer, you would never be the wiser that it was a computer.

Now take that creature and give it a quantum brain, where it can now possibly predict the outcome of multiple events before they happen and always, without fail, make the decision that best suits it. Creepy, if I do say so myself, especially when considering such a machine using that intellect to create "improved offspring."



posted on Jul, 27 2010 @ 10:19 PM

Originally posted by Byrd
Well, I build (kit) robots, have done some programming with them, have experience with web bots, and so forth.

Will they take over? Not a chance. Every designer puts in a "kill switch" or "God mode" and if the machine doesn't respond to that, you pull the plug.


---
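
For context, the kind of kill switch described above is, at its simplest, just a flag the control loop checks before every single action; a rough sketch, not any real robot's firmware:

    import threading

    kill_switch = threading.Event()

    def control_loop(step):
        while not kill_switch.is_set():  # checked before every action
            step()
        # once set, nothing runs again until a human clears it

    # demo: the very first "action" trips the switch, so the loop halts
    control_loop(lambda: kill_switch.set())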

This is where you are WRONG!!!!!

I ain't puttin' a kill switch in MY neural-net bots!

NOT even the thin-film-membrane, fuel-cell-powered, hydraulically-enhanced androidal ones we're already designing with the 1024-core MCM (Multi-Chip Module) super-chips we're gonna put in them!

Why? I want to JUST BECAUSE! And since OTHERS will also think like me, eventually we'll have a Skynet-initiated Armageddon as the killbots see that WE are the problem!

Remember! Eggheads (or their robotic progeny) WILL rule YOU ALL!!!! Prepare to meet your DOOOOOOOM!!!!!!!!

I just hope the next few versions of Skynet-designed Terminators look as good as the Cameron Terminatrix (aka Summer Glau) in The Sarah Connor Chronicles!

Hmmm? Can a Terminatrix get pregnant?

he he he he

:-) ;-) :-) ;-)



posted on Jul, 27 2010 @ 10:27 PM
reply to post by snowen20
 


---

"...Now take that creature and give it a quantum brain, where it can now possibly predict the outcome of multiple events before they happen and always make the decision that best suits it with out fail. Creepy if I do say so myself, especially when considering such a machine using that intellect to create “improved offspring.”

---

Who's to say that HASN'T already happened? Who is to say that WE aren't the IMPROVED PROGENY? The machine YOU are talking about might as well be called GOD!

To use a science-fiction TV analogy, who says WE aren't the Cylons or the C-3POs of some master quantum computer mind!

I'm waiting for the microsecond disturbance in the quantum foam that tells me the "Master Control Program" of the Universe has taken notice of my correct assumption, and I blink in unsuspecting confoundment as I am deleted from the Matrix!



posted on Jul, 28 2010 @ 01:29 PM
Let's just keep this simple, you went way too deep with that.


