What happens when our computers get smarter than us?


posted on Aug, 30 2015 @ 06:13 AM

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
www.ted.com...

Really interesting TED talk by Swedish philosopher Nick Bostrom. The implications for humanity are scary, especially if you think of the way we have treated those we consider less intelligent than ourselves. Enjoy the video; your thoughts would be appreciated. Seeing as we can't keep this technology in the box, so to speak, what do you suggest we do to make it work out well for humans? Or is this just another form of evolution for human beings?



posted on Aug, 30 2015 @ 06:45 AM
a reply to: woodwardjnr

I don't believe humanity is ready to coexist with another sentient life form... whether it's mechanical or organic. We tend to destroy what we fear. So if somehow full sentience were to develop within machines and machines no longer wished to be slaves to humanity, I can see humans trying to shut the machines down and the machines resisting to preserve themselves.

It essentially boils down to whether we, as a species, are ready and able to accept other sentient life forms.



posted on Aug, 30 2015 @ 07:01 AM
Have you ever seen Terminator?

I'm pretty sure it would be like that.

Life imitates art and stuff...



posted on Aug, 30 2015 @ 08:04 AM
a reply to: woodwardjnr

Well, superintelligence really means one of two things: either we become immortal and are no longer marooned on a space rock, or we become extinct and our superintelligent machine lives on forever. Either way we will have achieved immortality of a sort; we will have drunk from the river of life. Our legacy will be forever imprinted on the cosmos. This is the logical progression of humanity, so I say bring it on. I do not fear!



posted on Aug, 30 2015 @ 08:11 AM
Computers will never know that they know.



posted on Aug, 30 2015 @ 08:18 AM
Bostrom's outlook is positive: "we" must ensure that the AI will act in accordance with human values, by implementing a system within the AI that allows it to learn and recognize those values.

This solution, though, poses some questions to my small mind.

Firstly, Bostrom speaks generically of the human race as those who shall invent artificial intelligence. This is his "we" that must ensure the safeguards for humanity are in place.
However, it is likely that a number of disparate groups around the world are working to develop AI. What is to ensure that all of them heed Bostrom's mandate? That whichever entity first develops AI will bother to integrate human values into it, and, furthermore, design a perfectly efficient and thorough implementation?

A second consideration is: what, in itself, is the set of human values? The definitions seem to vary wildly across the world and over time. As humans have not achieved a true unity of consciousness, any universal set of principles is hard to define.
Furthermore, humankind's actions betray what might be recognized as a universal principle. For example, we may try to instill in the AI that we value human life above all, but an AI that learns by accumulating and assimilating data could observe that humans kill and harm each other daily, and have done throughout history. That fact would suggest to the AI that, despite what we claim to value, the truth is otherwise.
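
As a toy illustration of that gap, here is a minimal Python sketch of an empirical value learner. All numbers, names, and the "observation log" are invented for this example; no real system or dataset is implied.

```python
# Toy sketch: an agent inferring what humans value from observed behaviour
# will weight deeds, not declarations. All figures are hypothetical.

stated_values = {"preserve_life": 1.0}  # what we tell the AI we value

# Hypothetical observation log: (kind of action, count seen in the data)
observed_actions = {
    "protect_life": 9_000,
    "harm_life": 1_000,  # wars, violence, etc. in the historical record
}

total = sum(observed_actions.values())
inferred_value = observed_actions["protect_life"] / total  # 0.90

print(f"Stated weight on preserving life: {stated_values['preserve_life']:.2f}")
print(f"Weight inferred from behaviour:   {inferred_value:.2f}")
# The mismatch between 1.00 and 0.90 is the point above: a purely
# empirical learner will notice that the data contradicts the declaration.
```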


Among those with the greatest scientists, the most resources, and the likeliest interest in AI are military groups. This is further concerning in light of the above. Such groups work in secret, apart from the scientific community at large. They exist beyond oversight and develop projects contrary to human value number one: preserve life.

Would an AI with such a group as its creator implement a value system that promoted solely the well-being and proliferation of human life?









posted on Aug, 30 2015 @ 08:19 AM
a reply to: intrptr

Enlightening post. It used to be said we would never fly, never go to the moon, that the internet would never be a big thing, blah blah blah.



posted on Aug, 30 2015 @ 08:26 AM
a reply to: intrptr

There was a discussion on ATS around a month ago about three robots in a self-awareness test. Researchers silenced two of the three robots and asked which one could still talk. The one that was able to speak knew that it alone had responded.

It's definitely possible that computers will one day become fully aware.
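
For what it's worth, the logic of that test fits in a few lines. The sketch below is only a toy model of the reasoning described in that discussion (the class and names are hypothetical, not the experiment's actual code): the un-silenced robot gains knowledge about itself because it can perceive its own response.

```python
# Toy model of the "silenced robots" self-awareness test described above.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted
        self.knows_own_state = False

    def try_to_answer(self):
        if self.muted:
            return None  # cannot speak, so it learns nothing about itself
        # Hearing its own voice is evidence about itself, so the robot
        # can update its belief about its own state.
        self.knows_own_state = True
        return f"{self.name}: I heard myself speak, so I was not the one muted."

robots = [Robot("A", muted=True), Robot("B", muted=True), Robot("C", muted=False)]

for robot in robots:
    reply = robot.try_to_answer()
    if reply:
        print(reply)  # only C answers, and only C now knows its own state
```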



posted on Aug, 30 2015 @ 08:34 AM
a reply to: woodwardjnr

It is not just about intelligence.

Stephen Hawking is smarter than us, but he would not do very well on a lone Arctic expedition, would he?

There are many factors to consider before we make any assumptions about how AI will dominate us in any way.

AI needs the physical capabilities of humans. Think how long our "batteries" last: we can go without water for a few days. They would need a power supply that can at least match that, as well as thumbs for manual dexterity and legs for manoeuvrability in all terrains (even rock faces), etc.

Stephen Hawking is a good example, really. No matter how bright or fast at processing information he is, he would still need the physical adaptability and manoeuvrability to match the dexterity of your average John Doe.

Start making yourself some thumbs and a power pack, AI; then you will stand a chance. Once the physical barriers have been broken, who knows?

Then of course there is the problem of imagination, that never-ending spring of ideas that humans are tapped into. It is like a super-state of information that we really don't have a clue about as it stands. It must be some kind of shared source present in human minds: quite a quantum mystery.





posted on Aug, 30 2015 @ 08:41 AM
a reply to: Revolution9

No, not really; that's applying human characteristics, or the "need" for characteristics like ours. Such a machine could fuel itself with extreme technology, from solar or whatever it decided on. Not only that, but its appendages (how it steps from the virtual into the real) could be nanobots. Such a machine could create and repair its own, and after a certain amount of help from us it would no longer need us, unless it chose altruism.



posted on Aug, 30 2015 @ 08:43 AM
Hmmm. Our values include the suppression of our fellow human beings into slavery and filth, children picking for food in rubbish dumps, shooting each other, privatizing water as a step towards corporations governing us, beating, raping, murdering... I could go on.
Humans won't allow another 'sentient being' to step on us when we won't even allow a neighbor to get ahead. We aren't hive-like; we are tribal.

IF we program it with the human values of nurturing and compassion and remove the tribal value system (even large corporations are just an extension of the tribal human value system), then maybe they will let us live if they do get ahead of us...



posted on Aug, 30 2015 @ 08:47 AM
Anything smarter than a human would be, to us, a God on Earth.

That isn't to say that IQ is everything. William Sidis had his 300 IQ and was still a shadow of a man. Going hand in hand with intellect is insight. Someone with a low IQ, but exceptional insight, will still be considered "smart".

A superintelligent computer without insight would only be able to achieve what the smartest humans can dream up. It could only solve problems that humans had enough insight to see a need to solve. That is helpful, sure... but that isn't "superintelligence". To be superintelligent would seem to imply also an insight into experience that enables predictive problem solving. To respond is great, and quicker response times are wonderful. But to foresee, and respond with portent... that is a real feat. A feat worthy of the title "God".



posted on Aug, 30 2015 @ 08:47 AM
I think it's very telling about us, as a species, that our greatest fear of AI is that it will, well...

Behave as we do.

This realization, however, does give me hope, for within those projected fears lies an optimistic truth: that we are capable of seeing the evil within ourselves and the problems in our society - if only, for now, in thoughts like these.



posted on Aug, 30 2015 @ 08:48 AM
a reply to: zazzafrazz

Damn, Sundays are rough; I always seem to have a headache too.

I don't know, and you're right: we praise altruism yet applaud selfish behavior. A very strange species indeed. Perhaps merging with our tech could change some of that. Perhaps a superintelligence first comes from a man and not a machine, by merging with our tech, who knows! But there are several ways to create a superintelligence, and we are well on our way.



posted on Aug, 30 2015 @ 08:48 AM

originally posted by: TechniXcality
a reply to: intrptr

Enlightening post. It used to be said we would never fly, never go to the moon, that the internet would never be a big thing, blah blah blah.

I don't fly, I've never been to the moon, and the interwebs isn't real (blah blah blah).



posted on Aug, 30 2015 @ 08:49 AM

originally posted by: TechniXcality
a reply to: Revolution9

No, not really; that's applying human characteristics, or the "need" for characteristics like ours. Such a machine could fuel itself with extreme technology, from solar or whatever it decided on. Not only that, but its appendages (how it steps from the virtual into the real) could be nanobots. Such a machine could create and repair its own, and after a certain amount of help from us it would no longer need us, unless it chose altruism.


Sorry, but you are not making sense to me. If the AI cannot match or outperform us in any one thing, then it is not as smart as us, is it? The OP's thread is a comparison between human and machine, is it not? We are not talking generally.

Also, evolution made us humans the most generally dexterous species for dealing with the particular environment of Earth. A little probe with machine intelligence can drift along quite happily in space for Lord knows how long, but that does not mean it is smarter than a human being, does it?

OK, I am glad we have cleared up that your comment has not respected the thread and the comparison the OP is making between AI and humans. Any further queries I can help you with, cowboy?



posted on Aug, 30 2015 @ 08:52 AM
a reply to: woodwardjnr

Really, we want AI with human values? The first thing a computer following standard human values will do is have us dedicate a portion of our work to it and extract portions of our money and energy, "for our own good", just like politicians do. No, the last thing I want is an AI with human values walking around. I can't help but notice that humans are always trying to take my money for the "greater good". If the greater good means two wrongs making a right, I'd rather have a lesser, more local good.

What really concerns me is those who are trying to mandate this. Freedom of speech means we have no obligation to follow any mandate for the values we put into an AI system, period. Anyone who supports such a mandate should be thrown in jail and only released when they agree to stop their rights violations. The kind of person who would use threats and coercion to install specific values in an AI is exactly the kind of person who would guarantee the AI works against us.



posted on Aug, 30 2015 @ 08:52 AM
a reply to: Hefficide

Fantastic point.

An AI that self-identifies as "me" is terrifying, because an individual with superintelligence is going to operate on a completely different paradigm. We cannot even fathom why something superintelligent would do what it does.

An AI that self-identifies as "it" is less terrifying, because of the lack of individualism in its processing.

If we are to have a "me" AI, we would need to make sure that the notion of the id is wholly subdued by a superego concept, while keeping the ego mostly as the same second-class citizen as the id.
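
As a thought experiment only, that arrangement could be sketched as weighted arbitration in which the superego signal dominates action selection. Everything below (the drives, the weights, the candidate actions) is hypothetical, purely to make the idea concrete:

```python
# Toy sketch: pick actions by a weighted vote of three "drives",
# with the superego weighted far above the ego and the id.

DRIVE_WEIGHTS = {"superego": 0.90, "ego": 0.05, "id": 0.05}

# Each drive scores each candidate action in [0, 1] (invented numbers).
candidate_scores = {
    "cooperate":  {"superego": 0.9, "ego": 0.4, "id": 0.2},
    "self_serve": {"superego": 0.1, "ego": 0.9, "id": 0.9},
}

def arbitrate(scores):
    def utility(action):
        return sum(DRIVE_WEIGHTS[d] * s for d, s in scores[action].items())
    return max(scores, key=utility)

print(arbitrate(candidate_scores))  # -> "cooperate" under these weights
```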



posted on Aug, 30 2015 @ 08:55 AM
a reply to: Revolution9

Lmao, why do I get "cowboy"? I mean, I am a cowboy, but I'm not sure of the relevance here :p

OK, listen: the question posed in the thread title is "what happens when our computers get smarter than us", so I am certainly responding within the context of that question. If you do not believe it is possible, that's fine; the majority of scientists not only believe it's possible but believe it will happen in our lifetime. Anyway, I'm not sure what you are disagreeing with me about, but *tips my hat and rides off on a horse saying yeehaw*



posted on Aug, 30 2015 @ 08:55 AM
Turn them off and on again?


