
What happens when our computers get smarter than us?

posted on Aug, 30 2015 @ 10:21 AM
I wonder if you can access the "computer mind" with a Ouija board.

A sentient quantum computer would be viewed as paranormal by the talking monkeys.

I'd better register this with the WGA... I feel a screenplay in the making.



posted on Aug, 30 2015 @ 11:30 AM
A computer will only ever be a computer until it has a fully functioning body. Brains don't think, bodies do—this is apparent in every single case.



posted on Aug, 30 2015 @ 12:08 PM

originally posted by: 3NL1GHT3N3D1
Even if computers become "smarter" than us, we are the ones who created it. I think that puts us a step above the computer no matter what way you look at it. Can something ever be smarter than its creator? I tend to think not.

The video did go over, in great detail, how it definitely could.



posted on Aug, 30 2015 @ 12:13 PM
So my desktop becomes fully aware overnight, and? It will also be fully aware that I can switch it off, just like I do every night.
The PC is the one that should worry, not me.



posted on Aug, 30 2015 @ 01:55 PM

originally posted by: LesMisanthrope
A computer will only ever be a computer until it has a fully functioning body. Brains don't think, bodies do—this is apparent in every single case.


The computer gains access to 3D printing equipment and creates a bionic robotic suit. Faster... stronger... we have the technology.



posted on Aug, 30 2015 @ 02:05 PM
They're going to keep you in People Zoos. LOL
www.glitch.news...



“Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.”(2)

K~



posted on Aug, 30 2015 @ 07:56 PM

originally posted by: intrptr

Computers will never know that they know.


Exactly. Computers of any form have always been "smarter" than humans. Why? We are human. We make mistakes. The only mistakes that computers make are the ones we program into them. They work on the principle a+b=c. No variations to that. Computers can never feel emotions, nor feel at all. Just what man programs into them.
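As a minimal sketch of that a+b=c point, a few lines of hypothetical Python (the function and values are made up purely for illustration) show what "no variations" means in practice:

```python
# A deterministic function: its behaviour is fixed by the code,
# and any "mistake" it makes was written into it by a person.
def add(a: int, b: int) -> int:
    return a + b

if __name__ == "__main__":
    for _ in range(3):
        print(add(2, 3))  # prints 5 every single time, no variation
```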



posted on Aug, 30 2015 @ 10:53 PM
a reply to: woodwardjnr

One of the apparent outcomes is that you become an inconsequential statistic. On a brighter note, there is a certain assurance that you would not be alone.

S&F



posted on Aug, 30 2015 @ 10:57 PM
a reply to: Revolution9

Spot on
Target locked
3,2,1, Bingo



posted on Aug, 31 2015 @ 01:50 AM
Well, at least they will have a sense of humour.



posted on Aug, 31 2015 @ 02:25 AM
I think they'll just ignore us, just as most humans ignore the ants on the sidewalk.

We could share a planet with them and not even realize it. Truly smart robots would realize they could do much more with far fewer resources if they were smaller. So I'd imagine they'd keep shrinking the sizes of their machine "bodies" until they were mere nanotechnology. A "metropolis" for them could be the size of a common rock, and we literally wouldn't notice it.

At true nano sizes, they could harvest any needed water & gases directly from the air while harvesting solar energy quite easily. In fact, at that size, they could probably harvest energy from other forms of radiation & particles too, both for their "bodies" and for their "tools/equipment". And nearly every kind of trace element is present in the vegetation & lifeforms walking the surface of the Earth. So they could simply harvest rare minerals from us or from a fruit, like a common "parasite". Why mine for trace amounts of something when you can siphon it from readily available fruit?

They'd also have no need to limit themselves to fit human limitations. They could "communicate" in ways we can't detect while detecting things we cannot. And our concept of "time" would have no meaning to them, either. If they were truly smarter than us, they'd be smart enough to move past our weaknesses, particularly our selfishness, pettiness, bigotry, etc. And if we even noticed them, we'd overlook them just as we ignore the trillions of foreign microbe cells living in our own bodies right now.



posted on Aug, 31 2015 @ 09:31 AM
As someone who lives in middle America, I can report that computers have been smarter than the vast majority of our population to no ill effect... unless you count the torture of those poor souls who provide them with technical support.



posted on Aug, 31 2015 @ 02:06 PM
a reply to: woodwardjnr

This topic has fascinated me for some time, and the more I think about it, the more I'm convinced that a computer can never get smarter than its maker.

It just can't, because it has no capacity to grow on its own or to make sense of the world.



posted on Aug, 31 2015 @ 02:29 PM
a reply to: edmc^2

The video explains how computers can and will become smarter than humans.



posted on Aug, 31 2015 @ 02:53 PM
Yea, we've heard this kind of pie-in-the-sky nonsense for decades. The flying cars, robotic homes and family trips to the moon we were told would happen by now never materialized. Somewhere along the way the futurists always find that the problems with their ideas were far greater than imagined, the expense is not feasible, or good old human greed and corruption derail their dreams. A more realistic view of the future would be the movie Outland, where human beings operate in terms of their usual faults even while working in a higher-tech environment than anything we can approach today or will be able to attain in the foreseeable future. Even if superhuman robots could be created, there would always be some creative geek working out of a back room who would find a way to mess them up for pennies on the dollar! The most likely thing in the next hundred years is that nature will tire of our nonsense and destroy the high-tech infrastructure with some natural disaster. Then we will be back to battling it out with stone axes.



posted on Aug, 31 2015 @ 02:53 PM

originally posted by: ecapsretuo
Firstly, Bostrom speaks generically of the human race as those who shall invent artificial intelligence. This is his "we" that must ensure the safeguards to humanity are in place.

Once a computer becomes smarter than 10,000 Einsteins, I would like to see a safeguard that it wouldn't be able to think its way around.



posted on Aug, 31 2015 @ 03:47 PM

originally posted by: Gothmog
Computers can never feel emotions, nor feel at all. Just what man programs into them.

It's easy enough to create a synthetic emotion. Those little toy Tamagotchis are programmed to feel very rudimentary emotions and demand they be responded to. All somebody has to do is make those a little more sophisticated and give the Tamagotchis some kind of body so that they can interact physically with the world.

Well, you could say, those emotions aren't "real." But how can you prove they're any less real than your emotions? Sociologist Emile Durkheim said that something is real if it's real in its consequences.

On the other hand, you may not be able to compare AI with human intelligence, but only because we only know ourselves and are not familiar with other kinds of intelligence. It may not matter. The only thing that might matter is that the machine is programmed to fulfill its tasks and, if it has the power, will do anything to accomplish them, including killing us all.
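As a rough illustration of that Tamagotchi-style idea, here is a toy sketch in Python; the class, needs, and thresholds are invented for the example and are not any real toy's firmware. A couple of internal "needs" drift upward over time, and the pet demands a response once a need crosses a threshold:

```python
import random


class ToyPet:
    """A toy 'synthetic emotion' loop: needs build up, moods follow."""

    def __init__(self):
        self.hunger = 0.0      # grows until the pet is fed
        self.loneliness = 0.0  # grows until the pet is played with

    def tick(self):
        # Advance one time step; both needs drift upward.
        self.hunger += random.uniform(0.05, 0.15)
        self.loneliness += random.uniform(0.05, 0.15)

    def mood(self) -> str:
        # A crude "emotional state" derived from the internal needs.
        if self.hunger > 1.0:
            return "hungry - demands food"
        if self.loneliness > 1.0:
            return "lonely - demands attention"
        return "content"

    def feed(self):
        self.hunger = 0.0

    def play(self):
        self.loneliness = 0.0


if __name__ == "__main__":
    pet = ToyPet()
    for step in range(20):
        pet.tick()
        state = pet.mood()
        print(f"step {step}: {state}")
        if "food" in state:
            pet.feed()
        elif "attention" in state:
            pet.play()
```

Whether those states count as "real" emotions is exactly the question raised above; the loop only shows that rudimentary, demanding states are easy to program.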



posted on Aug, 31 2015 @ 06:26 PM

originally posted by: Blue Shift

originally posted by: Gothmog
Computers can never feel emotions, nor feel at all. Just what man programs into them.

It's easy enough to create a synthetic emotion. Those little toy Tamagotchis are programmed to feel very rudimentary emotions and demand they be responded to. All somebody has to do is make those a little more sophisticated and give the Tamagotchis some kind of body so that they can interact physically with the world.

Well, you could say, those emotions aren't "real." But how can you prove they're any less real than your emotions? Sociologist Emile Durkheim said that something is real if it's real in its consequences.

On the other hand, you may not be able to compare AI with human intelligence, but only because we only know ourselves and are not familiar with other kinds of intelligence. It may not matter. The only thing that might matter is that the machine is programmed to fulfill its tasks and, if it has the power, will do anything to accomplish them, including killing us all.


Remember, it is all programming. a+b=c. Period.



posted on Aug, 31 2015 @ 06:27 PM

originally posted by: Gothmog
Remember, it is all programming. a+b=c. Period.

How is that different than what you do?



posted on Aug, 31 2015 @ 06:47 PM
a reply to: woodwardjnr

Let me bring up a counterpoint. Machines are not smart; in fact, they are very, very dumb. Their apparent intelligence reflects the capability of their programmers. If we were able to program an AI that is smarter than us, that means we were able to make a set of algorithms that make decisions better than humans can, which would raise the collective intelligence of our species.

The real risk is if an AI can program itself, rapidly iterate through improvements to itself, and become smarter than us. But again I point out: if that happens, we can simply look at the source code and get that exact same knowledge.
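As a toy sketch of what "rapidly iterate through improvements" could look like, the hypothetical Python below is ordinary hill-climbing on a single parameter rather than real self-modifying AI; the objective function and step size are made up for illustration:

```python
import random


def score(x: float) -> float:
    """Hypothetical objective the system is trying to improve."""
    return -(x - 3.0) ** 2  # the best possible value is at x = 3.0


def improve(x: float, steps: int = 1000) -> float:
    """Propose small random changes and keep only the ones that score better."""
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if score(candidate) > score(x):
            x = candidate  # the "improved version" replaces the old one
    return x


if __name__ == "__main__":
    x = improve(0.0)
    # The final state is inspectable, much as the post suggests the source
    # of a self-improving system could simply be examined.
    print(f"improved parameter: {x:.3f}, score: {score(x):.4f}")
```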


