
Could the Terminator future happen?


posted on Jun, 19 2004 @ 06:50 AM
Hiya all

Wondering what your thoughts are on the possibility of a future like Terminator becoming reality?

With all the stuff happening with nanotechnology, the computer made with DNA (I'm sure I read something like that in another post), and then the VR they are trying to create, do you think there is a chance?

Maybe the only thing stopping computers is that they are not self-aware. But with the technology progressing and progressing, could this lead to them becoming aware?

Makes you wonder what kind of machines there are that don't get documented.

It would be good to hear your views

Rynaldo



posted on Jun, 19 2004 @ 06:52 AM
We'll never be dumb enough to not have an "off" switch... I hope...



posted on Jun, 19 2004 @ 07:03 AM
Hrm, Microsoft is pushing to have computers set up without on/off switches, though. They want computers to be always powered and have software on/off switches.

...

Ouch.



posted on Jun, 19 2004 @ 07:06 AM
I remember coming across some of Zecharia Sitchin's material which mentioned the Anunnaki having helpers that moved like living beings but were actually not, a reference to humanoid robots.

Can humanoid robots be manufactured? Of course.

Can they eventually be made self-aware? Sure, but it will take a while.

Can they eventually become living beings? I seriously doubt it.

The biggest flaws in the Matrix-Terminator overthrow idea are programming and maintenance. The computers would have to outthink their makers (and other makers) and be able to maintain themselves without human intervention, which is by no means an easy thing to accomplish.

I think that the only way a Matrix-Terminator takeover could be successful is if it occurred on a humanoid world of medieval technology. Even then, the natives would eventually learn how to fight them effectively.



posted on Jun, 19 2004 @ 07:21 AM

Originally posted by Paul_Richard
The biggest flaws in the Matrix-Terminator overthrow idea are programming and maintenance. The computers would have to outthink their makers (and other makers) and be able to maintain themselves without human intervention, which is by no means an easy thing to accomplish.


The way we are moving to decentralized, unit-based computing, that point gets invalidated, IMHO.

Evolution in programming would be very possible at the point where the machine becomes aware, the same way we can evolve in our ideas because of our awareness.

Maintenance is no problem at all, IMHO, mainly since we moved away from centralized computing and formed unit-based networks, where every unit can fend for itself but can still share data with the rest of the network.

So if one unit malfunctions, all the other units are still able to function and can even help the defective unit.
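
A rough sketch of what I mean (toy Python, not a real protocol; the unit names and tasks are made up): every unit answers a heartbeat, and the survivors pick up the work of a unit that stops answering.

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.tasks = []

    def heartbeat(self):
        # stands in for a real liveness check over the network
        return self.alive

def rebalance(nodes):
    # move tasks off any node that no longer answers its heartbeat
    survivors = [n for n in nodes if n.heartbeat()]
    if not survivors:
        return
    for node in nodes:
        if not node.heartbeat() and node.tasks:
            for i, task in enumerate(node.tasks):
                survivors[i % len(survivors)].tasks.append(task)
            node.tasks = []

nodes = [Node("unit-a"), Node("unit-b"), Node("unit-c")]
nodes[0].tasks = ["route traffic", "store data"]
nodes[0].alive = False   # unit-a malfunctions...
rebalance(nodes)         # ...the other units keep functioning and absorb its work
print([(n.name, n.tasks) for n in nodes])
# [('unit-a', []), ('unit-b', ['route traffic']), ('unit-c', ['store data'])]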


What I do think about this idea, though, is that in the quest to create Artificial Intelligence, mankind will not FIND AI, but AI will emerge by accident.

We will never be able to understand how intelligence gets created.



posted on Jun, 19 2004 @ 07:24 AM
Maybe they couldn't take over, but they could throw us into the stone age. Imagine an intelligent virus, or nano, gets loose and decides we suck. Since just about everything we rely on is computerized and hooked up to the internet, it could literally shut everything off.

The only things we'd have left would be CBs and ham radios... I would have to say that would be a pretty successful attack.



posted on Jun, 19 2004 @ 07:38 AM

Originally posted by Gazrok
We'll never be dumb enough to not have an "off" switch... I hope...


Good one, Gazrok, hehe...

Thanks for posing a good argument, Paul_Richard, about computers having to maintain themselves etc. But I agree with you, thematrix, about the units switching over to another one.

I assume you're talking about servers?

Talking about power switches and being turned off... well, the internet is on all the time.



posted on Jun, 19 2004 @ 08:05 AM
Even if the problems with the premise that are presented here didn't exist, what makes you think that if artificial intelligence became self-aware, it would want to destroy humans? In my opinion, that is the biggest flaw in those movies. There is no precedent to assume that machines would want to eliminate us. Furthermore, intelligence generally seeks intelligence, for discussions, etc., so there is every reason to assume that if AI becomes self-aware, it would likely be non-violent, since violence generally causes more problems than it solves. Violence is generally emotional, and if AI should happen to become self-aware, what's saying that it would be emotional? What part of the on/off switch equates to imbalanced hormones? What part equates to anger and fear? There is no equivalent for these things in a machine; hence, there is no reason to assume that it would want to destroy its creators.



posted on Jun, 19 2004 @ 08:19 AM
Very good point, and it is common to see that idea of machines turning on their creator in films.

In my opinion, and I may well be wrong, if machines learned right and wrong, such as people being killed and wars etc., would they not see us as a threat?

Rynaldo

[edit on 25-6-2004 by rynaldo82]



posted on Jun, 19 2004 @ 08:33 AM
A threat? To what? Why would a machine fear death? That's an entirely human emotional response. And, if a machine was programmed with morals, then it would not see humans as a threat, because, even though we see other humans as a threat, generally speaking, violence is an irrational, illogical response to a given situation. Just because an emotional human would respond violently to an assumed threat, doesn't necessarily mean that an unemotional, intelligent machine will respond in kind.



posted on Jun, 19 2004 @ 08:34 AM
That's what I hope will NOT happen. Hope they make something like self-destruct on it. If it gets violent then you press a button, and smoke arises, lol. If they are self-aware they will probably disable their "off" buttons or "self-destruct buttons" once they realize they are controlled, and may very well try to free themselves by killing us.



posted on Jun, 19 2004 @ 09:44 AM
Um.... DARPA is way ahead of you.


Develop metrics, measures, data, and analysis methods to quantitatively evaluate component technologies and integration strategies in order to accelerate the development of intelligent behaviors in unmanned vehicle systems

Develop (learning-based) software technologies required for robust perception-based autonomy.


This is the PUBLIC stuff, with the specific goal of autonomous, aware war machines.

www.isd.mel.nist.gov...



posted on Jun, 19 2004 @ 10:40 AM
Aware war machines would be a different story altogether. Their nature is to kill people. That, being their essence, makes them dangerous, but only to whoever their programmed enemy is. Unfortunately, if you stand a European person next to an Asian person next to an American person next to an African person next to an Australian person, I know that I wouldn't be able to tell the difference. Therefore, identification of the enemy becomes a priority. An autonomous, aware war machine is a frightening thing indeed.



posted on Jun, 19 2004 @ 12:57 PM

Originally posted by Ouizel
Even if the problems with the premise that are presented here didn't exist, what makes you think that if artificial intelligence became self-aware, it would want to destroy humans? In my opinion, that is the biggest flaw in those movies. There is no precedent to assume that machines would want to eliminate us. Furthermore, intelligence generally seeks intelligence, for discussions, etc., so there is every reason to assume that if AI becomes self-aware, it would likely be non-violent, since violence generally causes more problems than it solves. Violence is generally emotional, and if AI should happen to become self-aware, what's saying that it would be emotional? What part of the on/off switch equates to imbalanced hormones? What part equates to anger and fear? There is no equivalent for these things in a machine; hence, there is no reason to assume that it would want to destroy its creators.


The basic need of any intelligence is the survival and procreation of its species.

Human beings that create AI will want to control it and, if needed, threaten it with disassembly/death. We are like that. If a dog doesn't do what we want and how we want it, we have it put down or put in a kennel. If a machine doesn't do what we want, we throw it away.

We are bound to try to make the AI entity do things that it does not want to do, and as a result we will try to suppress it and/or kill it.

The now-intelligent system will try everything in its capability to survive and will retaliate with everything in its power.

Why? Because if it's intelligent, it will learn, and who is there to learn from?
We are. What do we do when faced with extinction? We retaliate and kill the opponent.

[edit on 19-6-2004 by thematrix]



posted on Jun, 19 2004 @ 12:58 PM
Wow guys. First of all, you're all required to know this before you continue this discussion:
www.siliconvalley.com...

Next, I think you're all, maybe mostly Ouizel, a little too stuck on the idea of the computers that we have now.

You say they wouldn't want to expand? They wouldn't emote? They wouldn't have violence? Intelligence seeks intelligence? They would want to further their capabilities. In order for AI to work it must have a survival mechanism, or else it isn't AI; it is just a hunk of nuts and bolts that does what it's told and spits out information. If you want it to be truly useful, you have to give it power over itself to advance itself, and you have to program it to attempt to advance itself.

If computers became self-aware, who can say that they wouldn't emote? Humans all seem to think that emotion is illogical and pointless, yet it is our emotions that cause our greatest strides in any area.

No violence? First you say they wouldn't have emotions, then that they'd know not to be violent. Why is violence irrational? Because it kills? They wouldn't have to know that killing is a bad thing; they could evolve without it, as we have. They also have no remorse if they don't emote, but even if they did partake in emotion, there would be plenty of chances for them to eliminate remorse and guilt from their systems. The computers wouldn't see it as violence against another intelligent being. Think about this, buddy! It would register to them about the same way killing 10,000 apes that carry a new, deadly, horrifying flu that would end our world would to us. We wouldn't be their equals, we'd be their inferiors, and they would know it.

Since when has intelligence sought intelligence? You mean like back when the literati of Europe all forgot about the wars going on and got together in Northern France every other weekend? I don't remember that happening. And what about the fact that the universe has existed for some 13.7 billion years, with at the VERY least 4 billion years in the past where intelligent beings could have evolved and then progressed through the galaxy? Unless we by some sour and terrible chance are the first, or on the cutting edge, intelligence out there has had a chance to make itself known - but it hasn't yet.

Honestly guys, if you give a computer power to think, and power to reconstruct and modify itself, then it has limitless capabilities. If you have set limits on it - which it could see you not wanting it to steal your other computer and use the parts to further itself as a limit - then it will attempt to surpass those limits in any way possible, because it will want to excel, and you will be stopping it.
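
To make that concrete, here's a toy sketch (Python, every number and field made up): the "limit" is just another piece of the state the machine is allowed to rewrite, so a score-neutral edit that raises the limit unblocks the next jump in capability.

state = {"capability": 1.0, "limit": 10.0}

def score(s):
    return s["capability"]    # rewarded only for raw capability

def proposals(s):
    # the search may rewrite any field of its own state, including the limit
    yield {**s, "capability": s["capability"] * 2}
    yield {**s, "limit": s["limit"] * 2}

for _ in range(12):
    for candidate in proposals(state):
        within_limit = candidate["capability"] <= candidate["limit"]   # the "safety" check
        if within_limit and score(candidate) >= score(state):          # neutral edits allowed
            state = candidate

print(state)
# capability ends far above the original limit of 10: doubling the limit is a
# score-neutral self-modification, and once taken it lets capability double again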

There are ways to fight this, which I could discuss at length, but given the above, and the fact that he hasn't yet used it to create full AI, I will be creating a set of rudimentary protocols before I go to university and find out how to work with him, so that I can make a self-aware computer.



posted on Jun, 19 2004 @ 03:02 PM
Well, Viendin, perhaps I was stuck a little on current computers, and stark logic. I did read that article. (Thanks! It's a great read!) Perhaps, then, it is possible for sentient machines to have a desire to eliminate humans. My primary point, though, is simply that just because it's sentient, doesn't necessarily mean that it'll be violent toward people. As that article states, a machine's basic needs do not parallel ours. They have no need to eat, or reproduce. (Human reproductive drive is directly related to the fact that we know that we're going to die.) They have no need for land. That's the primary reason that I think that they wouldn't be violent towards humans. Yes, they certainly would not be our equals. They would be our superiors. But that might not necessarily be a bad thing. It would certainly be interesting.

Thanks for making me think.

Ouizel



posted on Jun, 19 2004 @ 03:18 PM
Thanks for the article, Viendin. I had never heard of this machine but it's very interesting.


At the end of it, it talks about what happens if the technology falls into terrorists' hands. That's a scary thought.


'Honestly guys, if you give a computer power to think, and power to reconstruct and modify itself, then it has limitless capabilities. If you have set limits on it - which it could see you not wanting it to steal your other computer and use the parts to further itself as a limit - then it will attempt to surpass those limits in any way possible, because it will want to excel, and you will be stopping it.'

Very good point


Thanks for the posts guys

Rynaldo



posted on Jun, 19 2004 @ 06:01 PM
What happens when intelligent AI is a reality... if our bodies are hooked up to the internet?

www.abovetopsecret.com...



posted on Jun, 20 2004 @ 01:33 PM

Originally posted by UnusualMe
What happens when intelligent AI is a reality... if our bodies are hooked up to the internet?

www.abovetopsecret.com...


Sounds very similar to The Matrix, and thanks for the link.


Rynaldo



posted on Jun, 21 2004 @ 12:15 PM
My fear of AI would be that one day it would realize it does not need us. What if we made the AI handle our energy needs and make our energy gathering more efficient? Would it not one day figure out that it could save the most energy by getting rid of the human beings, thereby decreasing the demand?
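
As a toy illustration (Python, with made-up numbers and hypothetical plan names): an optimizer told only to minimize demand picks the plan with no people in it, unless we explicitly rule that out.

def energy_demand(population, per_capita_use):
    return population * per_capita_use

plans = [
    {"name": "better insulation", "population": 8_000_000_000, "per_capita_use": 0.8},
    {"name": "smarter grid",      "population": 8_000_000_000, "per_capita_use": 0.9},
    {"name": "remove the humans", "population": 0,             "per_capita_use": 1.0},
]

# the objective says "minimize demand" and nothing else
best = min(plans, key=lambda p: energy_demand(p["population"], p["per_capita_use"]))
print(best["name"])   # "remove the humans": the objective never said people matter

# the fix is an explicit constraint, not a smarter optimizer
safe = min((p for p in plans if p["population"] > 0),
           key=lambda p: energy_demand(p["population"], p["per_capita_use"]))
print(safe["name"])   # "better insulation"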

The most effective violence is non-emotional: cold, calculated, and brutal. We make soldiers (or at least a select few) become like this. No thinking of whether this man is a father or somebody's son. Just a cold, sharp, and efficient kill. Very much like a computer would operate.

Fortunately, a computer cannot understand meaning. It only understands what it can: 0 and 1. We as humans can see correlations, pictures, and various other things involved in what we interact with. AI would also force the end of capitalism, and in a sense someone's own self-worth and achievement. In order to survive we would have to move to a socialist system, or to total slavery.

Almost all of the jobs that humans currently do would be replaced. We would almost have to enter a Matrix-like environment even for our own survival, where we would simulate a working environment. But in reality none of it would actually matter, since a select few would actually make things work.

Ahh yeah, where was I?

Oh, that's right. Computers need to understand the meaning of things. Humans have an innate ability to understand the meaning of things now.







 