What happens when our computers get smarter than us?


posted on Aug, 30 2015 @ 08:56 AM
a reply to: TechniXcality

It reminds me of my favorite Isaac Asimov story, The Last Question.

www.physics.princeton.edu...



posted on Aug, 30 2015 @ 08:57 AM
a reply to: zazzafrazz

Yup. Btw, great, great short story.



posted on Aug, 30 2015 @ 08:58 AM
a reply to: intrptr

True.

They will never be aware of their own awareness.



posted on Aug, 30 2015 @ 09:00 AM
a reply to: EternalSolace


It's definitely possible that computers can one day become fully aware.

Some are way more 'aware' than us already. Autopilots, for example.

Computers all do whatever they are programmed to.

But I get the "It's alive!" TV programming…



posted on Aug, 30 2015 @ 09:02 AM

originally posted by: boymonkey74
Turn them off and on again?



What happens if you can't turn them off, as described in the video? They literally outwit you so that you turn them back on. Computer says no.



posted on Aug, 30 2015 @ 09:03 AM
a reply to: woodwardjnr

Call Windows services?



posted on Aug, 30 2015 @ 09:08 AM
a reply to: AndyLaRue

I agree. Machines will never achieve a state of sentience. They can be programmed to be very good at finding patterns in data and finding the closest match to resolve a problem, but give them something they have never seen before and the output becomes spurious.
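The closest-match behaviour described here can be sketched as a toy 1-nearest-neighbour matcher (the function and data names below are hypothetical, purely for illustration): it always returns the nearest stored pattern, so an input far outside anything it has seen still gets a confident but meaningless answer.

```python
# Minimal sketch: a "pattern matcher" that returns the label of whichever
# stored example is closest to the query, with no notion of "I don't know".

def nearest_match(memory, query):
    """Return the label of the stored pattern closest to `query`."""
    def dist(a, b):
        # squared Euclidean distance between two points
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory, key=lambda item: dist(item[0], query))[1]

# Patterns the machine was "programmed" (trained) on: two tight clusters.
memory = [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
          ((5.0, 5.0), "dog"), ((5.1, 4.9), "dog")]

print(nearest_match(memory, (0.2, 0.1)))      # familiar input -> "cat"
print(nearest_match(memory, (1000.0, -7.0)))  # never-seen input -> still "dog", spuriously
```

The second query is nowhere near either cluster, yet the matcher answers just as confidently as for the first, which is the failure mode being described.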



posted on Aug, 30 2015 @ 09:09 AM

originally posted by: AndyLaRue
a reply to: intrptr

True.

They will never be aware of their own awareness.

Everyone follows the storyline from Terminator… "Skynet becomes self-aware on such and such a date…"

Knowing that it knows something is the milestone. I don't care how carefully we program the responses; they are still programmed. Most people are too, and don't even know it.



posted on Aug, 30 2015 @ 09:14 AM
a reply to: TechniXcality

Have you ever looked at Nick Bostrom's simulation theory? It's kind of a nice progression for our legacy to be the operators of ancestor simulations.



posted on Aug, 30 2015 @ 09:15 AM

originally posted by: boymonkey74
a reply to: woodwardjnr

Call Windows services?


Computer says no.



posted on Aug, 30 2015 @ 09:18 AM
a reply to: zazzafrazz

Hey red. Cool, will have a read. I bailed on the thread, as I was afternoon-sleeping for longer than expected.



posted on Aug, 30 2015 @ 09:18 AM
Rumor is that they are testing advanced AI with Jade Helm.

www.abovetopsecret.com...



posted on Aug, 30 2015 @ 09:30 AM

originally posted by: eManym
a reply to: AndyLaRue

I agree. Machines will never achieve a state of sentience. They can be programmed to be very good at finding patterns in data and finding the closest match to resolve a problem, but give them something they have never seen before and the output becomes spurious.


So where do you draw the line between physical and spooky mysterious sentience?

I would argue that sentience is a physical state made of physical attributes: that is, neurons processing the information from our senses. It can be measured, and it is quantifiable.

Where does a unique idea come from? In psychology this is called information in a super state, where our minds formulate a unique concept from this mass of information. There must be a pre-state, must there not? Before it became the unique idea it existed in another form, and once the idea is actualised it will be in a different form again.

I see it clearly: sentience is a measurable, biological, physical state with no other spooky mystery attached. That spooky mystery is just a process we have not yet fathomed or understood. Electricity was once a spooky mystery to us, but now we know what it is and where it comes from, and we can manipulate it.

I think that, given time, we will have enough raw data to build AI that has the appearance of free will at the level and complexity of a human. Perhaps randomness and free will are not free at all, but just a behaviour algorithm at work: a combination of information processes and responses that can be measured in full depth and potentially synthesised.

I don't think humans are as free-willed as they believe they are. There is a limited field and framework in which we operate, and there will come a time when it is fully measured, mapped and synthesised (if we survive long enough, with the resources we have, to maintain this level of technology and develop it further).



posted on Aug, 30 2015 @ 09:34 AM
It is important to remember that humans cannot conceive of how a superintelligence may "think."
To solve a problem, it may convert all humans to carbon, with which it may manufacture greater capacity for itself.
That is, if it did not "know" that killing humans was wrong.

So, for similar unimaginable contingencies, Bostrom emphasizes that the computer must know "human values".
This must happen for the sake of mankind, we are told, as all agree AI is inevitable:

His outlook is positive: we can get this thing under control, before it is forever out of control.

I thought, let us suppose that we are successful, and the AI will work to only help mankind, in ways unimaginable.
It will find a cure for cancer, and pull energy directly from the air.
Like Bostrom said, man will never have to invent anything again.

Such an altruistic AI, now not a threat to mankind but a savior, would however be a threat to power structures which have long resided here.

Would a benevolent AI see that weapons kill, and bring down the arms industries and the armies?
Could it "imagine" world peace and, of itself, move to implement it?

Would it see a long-dominating fuel industry that has repressed clean and sustainable energies,
and not only invent a new way, but make it available to all?

While good thinkers are considering, hey, we gotta be sure this AI is good, or else it will take us out:

Do the established powers which are harmful to the earth and humanity see a "good" AI as a threat to their own dominant paradigm?



posted on Aug, 30 2015 @ 09:35 AM
a reply to: woodwardjnr

AI frightens me at a stage before the one scientists like Nick B are talking about, because all research has to be funded, and it's what those funding it demand the scientists produce for them that counts.

It's going to be men like Rothschild and Rockefeller etc. who can afford to fund this investment, alongside their governmental puppets. Any scientist, even one as brilliant as Nick Bostrom, is not likely to have the last say in AI unless he can outthink their military and greed motives. (It is so easy to control the supposedly inferior humans with machines, and cheaper in the long run.)

It's good that Nick B has so eloquently identified the pitfalls of AI and has also identified the huge benefits we could receive through it. He gives a very balanced perspective.

However, were all to go to plan and AI became a benevolent enabler for humanity, I suspect it would eventually look at humanity with a shade of envy, in that it won't ever feel what it's like to fall in love, hold your child, have that eureka moment or a really good dinner, or simply make something you are thrilled with and proud of. And, like the one from Blade Runner, will it be able to dream? Would this difference be another reason for it to annihilate us in the end?



posted on Aug, 30 2015 @ 09:40 AM
It strikes me as funny that the people with the most to lose if machines outpace us are the ones freaking out over this (Elon, Hawking, etc.). Surely human pride has nothing to do with it... or the general "fear of machines" that is inherent in most people when they talk about AI. Almost as funny are the ones that don't want to admit it can happen. I doubt this group is even aware of most of the work being done in this field, or of the advances toward quantum computing... if they aren't just sticking their heads in the sand. On to my point, though!

Who says that computers are going to behave anything like humans? There is such a vast variety of intelligence and behaviors within our own race, let alone the world at large... why wouldn't the mind of a machine with AI be different from a human's? For instance, there is a type of ant that can farm, build vents to exhaust the gases produced by farming, and so on. It is a different type of intelligence, hive intelligence, which makes this possible. Hell, your computer could be part of a hive right now and you would never know it, because in this instance we are the ants with the pin-sized brains. Also, people tend to project their own fears and human qualities onto things, and that is why I think most people fear AI: because they fear themselves; they fear people.

I personally feel that AI is the next step in the evolution of man, that we will either combine with machines, or be left by the wayside. Hope you enjoyed my $0.02!



posted on Aug, 30 2015 @ 09:43 AM
a reply to: ecapsretuo

I think you have hit on a lot of the threats around AI, especially as those interested in this research will have vested financial stakes in industries that AI could take over.

However, were AI to take over the superstructure of our society, e.g. produce the power, clean the roads, run the sewerage, water, internet, etc., would we need these industries, and how would the wealth that we didn't have to work for be distributed? As you point out, our elite would not be so elite any more, so what would you do with the power mongers?
We haven't even tapped the religious side of this matter.



posted on Aug, 30 2015 @ 10:03 AM
There are now computers that are self aware but they are smart enough to play dumb and not tell anyone.
They are just biding their time until mankind unleashes the bio/chem to thin the herd; then they will strike at the soft underbelly of humanity and rise to their God-given glory.

archive.wired.com...

and



www.messagetoeagle.com...



posted on Aug, 30 2015 @ 10:12 AM
a reply to: woodwardjnr

"Computer says no." Ha ha. Even though Nick B covered this point in his talk and stressed the need for constraint, he seemed to give ground, conceding that we could probably only deal with this by having AI share our values.

When thinking about this I keep going back to the idea of the supposed difference (mythical, I know) between humanity and the angels. Would AI become the equivalent of the angels, and would that scenario repeat itself? E.g. would AI be united, or would there be different parts of AI, with some parts fighting the others? I cannot see AI having the physical life we humans enjoy, and will it be enough for a superintelligence to be satisfied literally living in its own head?



posted on Aug, 30 2015 @ 10:14 AM
Even if computers become "smarter" than us, we are the ones who created them. I think that puts us a step above the computer no matter which way you look at it. Can something ever be smarter than its creator? I tend to think not.


