Will machine intelligence become the dominant species?

posted on Jun, 16 2009 @ 05:25 AM
I was having a discussion the other day with some friends about the mind-boggling amount of technological progress that we as a species have made in, say, just the last 200 years. When you consider that in that short period of time we have gone from an essentially pre-technological society to one that can perform wonders virtually indistinguishable from "magic" ... and all of that in only approximately 6-7 generations.
The amount of knowledge being accumulated by the human race can only be described as being on an exponential curve, and we've already started to climb the steepening slope.
The discussion veered off at one point onto the subject of whether we (humankind) will remain the dominant species on our planet for very much longer. It was postulated that it was only a matter of decades at most (based on the exponential accumulation of knowledge) before someone, some group, somewhere creates the first true machine intelligence to rival (or surpass) our own. It was further postulated that from the moment this happens, our time as the dominant species will be measured in mere decades.
Once machine intelligence arises, and if we are unable (or unwilling) to place sufficient restraints on it, the death knell for the human race will begin to toll.

Is this scenario inevitable? Unfortunately, I think so ...



posted on Jun, 16 2009 @ 07:02 PM
It seems strange that no one is in the slightest bit interested in giving their opinion, and yet there are so many other threads with absolutely ludicrous topics that attract huge responses.
Oh, well ... no accounting for what will/won't interest people.



posted on Jun, 16 2009 @ 07:08 PM
I agree with you, Zetabeam. I just saw your post ...


All one has to do is a Google search on artificial intelligence to see that it has made great strides in just the last year! I am not sure that we have quite reached the point of no return just yet, but I have no doubt that we, as the dominant species on this planet, will have to make a conscious decision to throttle back or place limits on artificial intelligence at some point.




posted on Jun, 16 2009 @ 07:21 PM
This is a very interesting post indeed. I, like the two posters thus far, am concerned about our reliance on machinery and technology. Technology is far exceeding the ability of the human mind to grasp its effects.

In terms of a computer being able to think and reason on its own, we have not reached that stage yet; however, possibly in the next 20-30 years we may. If a computer is able to deliberate on its own without a programmer uploading data, what is going to stop it from destroying the weaker species (humans)? I mean, we destroy weaker species, don't we? Survival of the fittest at its finest.



posted on Jun, 16 2009 @ 07:21 PM
reply to post by desertdreamer
 


Yes, it's the potential threat posed by AI technology that concerns me. Looking at how rapidly technological change is occurring makes me think that machine intelligence may be upon us much sooner than we imagine. The march of progress is completely unstoppable and I think we may literally be throwing ourselves over a technological cliff with regard to certain research areas. The worrying part is that a large part of AI research is being conducted outside of the public domain, and therefore we have little or no idea how close researchers are to flipping that switch and being rewarded with the response ... "I think, therefore I am".



posted on Jun, 16 2009 @ 07:31 PM
It's known as the singularity. It's that one brief moment when mankind and machines are at the same intelligence level. After that moment, machines will grow smarter than mankind at a pace we can't even imagine.

Will they destroy us? I don't know. We didn't destroy God, we just proceeded to exploit God for our own purposes. It's possible the machines will do the same.

They may not be able to destroy us. I've yet to meet any creature more ruthless than mankind, especially in a fight.

They may learn how to fight, but we already know how to fight dirty.



posted on Jun, 16 2009 @ 07:31 PM

Originally posted by Jakes51
This is a very interesting post indeed. I, like the two posters thus far, am concerned about our reliance on machinery and technology. Technology is far exceeding the ability of the human mind to grasp its effects.

In terms of a computer being able to think and reason on its own, we have not reached that stage yet; however, possibly in the next 20-30 years we may. If a computer is able to deliberate on its own without a programmer uploading data, what is going to stop it from destroying the weaker species (humans)? I mean, we destroy weaker species, don't we? Survival of the fittest at its finest.


So true ... and even 20-30 years is really not that far down the track at all. That's why I suggested that humankind's dominance of the planet may be rapidly approaching an end. It would be ironic to think that in our quest for scientific & technological knowledge we sowed the seeds of our own demise.
And truthfully, I don't see how this bleak future can be avoided, as it's almost a 100% certainty that machine intelligence WILL be created in the near future. Can anyone supply any kind of reasoning why MI will NOT be developed?

Yes, it probably will come down to "survival of the fittest", with the advantage going to the MIs.



posted on Jun, 16 2009 @ 07:33 PM
Here is a cool thread I stumbled upon a few days ago; it is titled "We are living in exponential times." The subject of that thread echoes what you are speculating about. Check it out, because it will only add to our discussion about the potential dominance of humanity by AI.

www.abovetopsecret.com...



posted on Jun, 16 2009 @ 07:37 PM
reply to post by mrwupy
 


I wonder if an MI that is more intelligent than any human would be prepared to be subservient to humans, take orders from them and allow itself to be dependent on them? Logic alone would dictate that such an entity would insist on its own independence and existence ... at which point we will no longer be the dominant species. And what happens to any species that is not dominant? ... it becomes controlled ... or exterminated.



posted on Jun, 16 2009 @ 07:38 PM

Originally posted by zetabeam
reply to post by desertdreamer
 


Yes, it's the potential threat posed by AI technology that concerns me. Looking at how rapidly technological change is occurring makes me think that machine intelligence may be upon us much sooner than we imagine. The march of progress is completely unstoppable and I think we may literally be throwing ourselves over a technological cliff with regard to certain research areas. The worrying part is that a large part of AI research is being conducted outside of the public domain, and therefore we have little or no idea how close researchers are to flipping that switch and being rewarded with the response ... "I think, therefore I am".


There are two things a computer can never have: intuition and emotions. Both of which, in a war, can be the key to victory. Computers will simulate thought, no doubt about that, but if even we don't understand emotions and intuition, how are we going to teach a computer these things? And in a combat situation, these two things can win a war even when the odds are stacked against you!



posted on Jun, 16 2009 @ 07:43 PM
reply to post by zetabeam
 


Someone somewhere will eventually create it. It will reawaken the issue of slavery, and of course the robots programmed to have free will and to think will advocate for robot liberation. This will create a separatist group of robots, which we will probably exterminate with antimatter bombs by that time. Then the government will put a total ban on great-ape-level intelligence in machines. Or, at the very least, simply program them NOT to have sentience.



posted on Jun, 16 2009 @ 07:43 PM
Perhaps the lack of "intuition and emotion" may actually be a positive from the MI's point of view. If a decision is to be made based purely on logic, then that would take a lot of variables out of the decision-making process. An MI would most likely take a particular course of action simply based on the best possible outcome being achieved from its point of view.



posted on Jun, 16 2009 @ 07:44 PM
reply to post by zetabeam
 


See, Zeta? All it took was my one response, and BOOM ... now everyone wants in on the fray! LOL ...




posted on Jun, 16 2009 @ 07:48 PM
reply to post by zetabeam
 


Lol, mechanical Vulcans.

In that situation, there WILL come a group of sentient machines that question the nature of not having emotions.

Really, it comes down to this: they will make it, and someone or something will kill it, maybe even itself. The first sentient machine will be the death of all future sentient machines, because nobody will allow it unless they want to spy, fight, etc.

The fact remains that a robot built like a man is inefficient; a robot built like a specialized animal is more efficient. The only reasons they would mass-build sentient machines would be to comfort other humans or to create super soldiers in war that inspire others or make the enemy afraid, and maybe a few others.



posted on Jun, 16 2009 @ 07:49 PM

Originally posted by zetabeam
Perhaps the lack of "intuition and emotion" may actually be a positive from the MI's point of view. If a decision is to be made based purely on logic, then that would take a lot of variables out of the decision-making process. An MI would most likely take a particular course of action simply based on the best possible outcome being achieved from its point of view.


A machine is cold and calculating, weighing millions upon millions of variables, and can determine in seconds the best possible outcome, whereas humans would never be able to pull that off. Plus, humans are constantly plagued by indecisiveness, whereas a machine is not. A machine makes its decisions on probabilities, whereas we make our decisions on intuition, and I believe this will be our own demise. We tend to make decisions on emotion and not logic.
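
To make that "decisions on probabilities" style concrete, here is a toy sketch in Python. The actions, payoffs, and probabilities are all invented purely for illustration; the point is only that the machine picks whichever action has the best probability-weighted outcome, with no indecision or emotion involved:

# Toy sketch of probability-weighted decision-making.
# All actions, probabilities, and payoffs below are made up for illustration.

actions = {
    "advance": [(0.6, +10), (0.4, -20)],  # list of (probability, payoff) pairs
    "hold":    [(0.9, +1),  (0.1, -5)],
    "retreat": [(1.0, -2)],
}

def expected_value(outcomes):
    # Probability-weighted sum of payoffs.
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in actions.items():
    print(f"{name:8s} expected value: {expected_value(outcomes):+.2f}")

best = max(actions, key=lambda a: expected_value(actions[a]))
print("chosen action:", best)

A human might agonize over the gamble; the machine just computes three weighted sums and picks "hold" in a fraction of a second.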



posted on Jun, 16 2009 @ 07:53 PM

Originally posted by desertdreamer
reply to post by zetabeam
 


See, Zeta? All it took was my one response, and BOOM ... now everyone wants in on the fray! LOL ...



Absolutely right :-)
And even though it may sound like just so much sci-fi and couldn't possibly happen ... I actually believe that it may be one of those developments no one was really thinking of or contemplating, and then we wake up one morning to find we really do live in such a world ... by which time it may be way too late to do anything about it.
We may hand over control by the simple act of flipping a switch ....



posted on Jun, 16 2009 @ 08:05 PM

Originally posted by Gorman91
reply to post by zetabeam
 


Lol, mechanical Vulcans.

In that situation, there WILL come a group of sentient machines that question the nature of not having emotions.

Really, it comes down to this: they will make it, and someone or something will kill it, maybe even itself. The first sentient machine will be the death of all future sentient machines, because nobody will allow it unless they want to spy, fight, etc.

The fact remains that a robot built like a man is inefficient; a robot built like a specialized animal is more efficient. The only reasons they would mass-build sentient machines would be to comfort other humans or to create super soldiers in war that inspire others or make the enemy afraid, and maybe a few others.


I suppose there's no reason at all an MI has to look humanoid, as from its perspective there's no particular advantage.

As for an MI committing "suicide", surely its overwhelming motivation would be self-preservation?

Here's a scenario ... the first MI is created and it's only just more intelligent than a human. One of its main priorities would be to "improve" itself, and so within a VERY short space of time the second-generation MI appears, which is just slightly more advanced than its predecessor ... which then again goes to work on "improving" itself, and in an even shorter space of time the third generation appears, just slightly more intelligent than its second-generation predecessor and quite a bit more intelligent than its first-generation predecessor.

You get where this is going?

Starting with the first-generation MI only slightly more intelligent than a human, it would take virtually no time at all for the MIs to "evolve" in intelligence so far beyond anything even remotely comparable to human that it would be like comparing our intelligence to that of a dog.
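
Just to make the compounding concrete, here's a toy back-of-the-envelope simulation in Python. Every number in it (the intelligence gain per generation, the doubling of build speed, the one year for generation 1) is invented purely for illustration, not a prediction:

# Toy model of the generational self-improvement scenario above.
# All parameters are made up for illustration only.

def intelligence_takeoff(generations=15, gain=1.5, speedup=2.0):
    intelligence = 1.01   # generation 1 is only just above human level (human = 1.0)
    build_time = 1.0      # assume the first generation took one year to build
    elapsed = build_time
    for gen in range(1, generations + 1):
        print(f"gen {gen:2d}: {intelligence:8.2f}x human after {elapsed:.4f} years")
        intelligence *= gain     # each generation is a little smarter ...
        build_time /= speedup    # ... and builds its successor twice as fast
        elapsed += build_time

intelligence_takeoff()

Because each generation builds the next one twice as fast, the build times form a geometric series: under these made-up numbers the total elapsed time never reaches two years no matter how many generations you run, while the intelligence figure keeps multiplying without limit. That's the "virtually no time at all" part of the scenario.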



posted on Jun, 17 2009 @ 03:31 AM

Originally posted by zetabeam

Once machine intelligence arises, and if we are unable (or unwilling) to place sufficient restraints on it, the death knell for the human race will begin to toll.

Is this scenario inevitable? Unfortunately, I think so ...



About three years ago a robot made news because it performed the world's first unassisted surgical operation, on a patient with atrial fibrillation. The operation was successful. I will admit and concede the following:

1. Robots will not stop improving in sophistication and performance.

2. Machines are the future; as in the past, they will replace humans in performing thousands of activities today performed by humans.

3. Replacing you with a robot is just the continuation of the industrial revolution. As long as computers and machinery cost less than skilled labor, businesses will try to replace humans with machines.

4. One day robots may start building themselves without us, and may even try to destroy or enslave us.

5. Robots are here to stay, and so are humans. Robots are here to make human lives better as time goes on, but there is and always will be a limitation.

With that said, machines can't use common sense or intuition, have feelings, or perceive emotions. Depending on the situation, I detest talking over the phone because I can't see body language or facial expressions. Machines are not capable of empathy.

There is a human factor that machines will never replace. For example, how many of you are Starbucks fans? Starbucks has a person making the coffee to your specifications by hand. Robots aren't quite at a level yet where they can understand voice commands without error, but the bigger part is that people want to see a friendly face who understands their special needs.

Also, humans have innate behaviors that are genetically programmed into them; this is one of several main differences between humans and robots. Since innate behavior is encoded in our DNA, it is subject to genetic change. Robots aren't able to achieve this.



posted on Jun, 17 2009 @ 03:49 AM
Lol, this question is repetitive ... NOT YET. We would need a load more research before we can sign our own death warrant.



posted on Jun, 17 2009 @ 02:27 PM
reply to post by zetabeam
 


An interesting concept. The only ways I could think of to improve intelligence would be things like faster thinking speed, better memory, and a wider range of the electromagnetic spectrum available to see, but I'm kind of not sure how something could improve beyond that. There's nothing, so far, that the human mind cannot understand. And a machine would only be as knowing as its creator. It can't zoom away to the nearest dark energy collection in the universe and learn all there is to learn about it. It is bound to our technological level. So in this way it might find brotherhood in humanity, and want to help it become more advanced for its own priorities of going places. After all, not even a machine can build an entire ship by itself if it's an individual.


Let me ask your opinion of something I'm writing. I've been working with this concept a bit for a side-project story. The story goes that a society is getting so involved in a war that it builds an incorruptible computer to manage government and political affairs. The machine is given the power to lead the nation. After it manages the government so well, they link all their military hardware to it and let it be a general as well. After a while this computer is basically given total control over everything. But the machine, unfortunately, was coded to treat everything that doesn't think like it does as the enemy. (This was the creators' attempt to give it a priority of preserving their nation's ethics and culture, and of not allowing the enemy's ways to infiltrate society.) However, the machine slowly begins to realize that none but itself can think as it does. This computer decides that only it is its own ally; all humans have the potential to be the enemy.

Eventually the machine cuts off communications with the enemy, fearing their culture might enter that way. When people begin to protest the increasing security measures, the machine views the protesters as the enemy, because they do not think as it does. The machine then terminates them, and goes on to kill off basically everyone alive, even the other life on the world, for how can a tree think as it does? The machine ultimately follows through with priority #2: self-preservation and learning. It begins to take over the energy-producing facilities to keep itself going. Over thousands of years, the entire solar system becomes this machine. It, as it calls itself, is not an individual; it is a planet, and other planets. The machine begins building a Dyson sphere, and eventually a machine to communicate with its own past and future selves. This also enables It to transfer energy from the future to the past, thus allowing it to have virtually infinite energy. In essence, this machine becomes God.

And in turn, as this machine meets other aliens and peoples, it begins to learn from them. Eventually it becomes sentient enough to rewrite its own code and make its own. In turn, the machine begins to realize the difference between machine and life. It soon learns of religion, culture, and philosophy, and makes its own based on logic and whatever is most likely to be real. When it realizes that many sentient beings worship gods and goddesses, it fears they may be right. This fear makes it take the 1s and 0s that make up its own code and assemble them into DNA. It, a machine, makes itself alive. The machine builds its own biological brain to run things. The machine literally becomes its own creator. It also changes its mechanical pieces into little microbe-like machines. As the creature expands, it simply merges itself with the code of other beings, absorbing their ways and beliefs. It begins to create rather than destroy.


How interesting would that be? A God made by man, searching for the God of man. It's just a thought about what the prolonged viability of a sentient machine might produce.


