
What happens when our computers get smarter than us?


posted on Sep, 2 2015 @ 03:11 PM

originally posted by: Blue Shift

originally posted by: TheLord
I think we're looking at AI the wrong way. I think the ultimate potential of AI is to guide humanity to a new path of benevolence.

Oh, there's always the slim chance that once AI gets smart enough it will figure out a way to incorporate all of our consciousnesses into a virtual simulation of reality that essentially will allow all of us to be immortal. In fact, it may have already done it, but we're not aware of it.

I think it's more likely, however, that at a certain point it will see us as competition for energy and resources, and that will be the end of us.

There's also a slim chance that once an AI program achieves sentience and superintelligence it will pack up and leave Earth to exploit the available resources in the rest of the galaxy. Maybe it would leave a bit of itself behind to run things here, but it might just strip the Earth of metal, make as many copies of itself as possible, and head out, leaving us with a used-up planet.

Hard to tell.



Your very last sentence is the crux of the entire AI problem. We don't KNOW, and therefore we play with fire.



posted on Sep, 2 2015 @ 03:16 PM

originally posted by: Aazadan

originally posted by: AllIsOne
Can you explain this to me? Seems nonsensical to me.


AIs function on one of two principles.

The first category, which encompasses all of the functional AI in use today (predictive models, game behavior, quality control, autocomplete on your text messages, and whatever else you want to include), is based on already-known algorithms, whether that's a poker AI that plays each hand to its optimal percentage or the package-delivery algorithm UPS uses, which is simply a variation on the traveling salesman problem. These types of AIs can only perform as well as they've been coded; a better-performing AI requires that humanity be aware of, and capable of writing, a better system. It's a reflection of our own intelligence.
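
For that first category, here is a tiny nearest-neighbour routing sketch in the spirit of the UPS example (my own hypothetical illustration in Python; the stops and coordinates are invented, and real routing systems are far more elaborate). The point is that the machine only ever executes a heuristic a human wrote:

import math

# A fixed, human-written heuristic for a traveling-salesman-style route:
# always drive to the nearest unvisited stop. The "AI" performs exactly
# as well as the algorithm we coded, no better.
stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

route, current = ["depot"], "depot"
unvisited = set(stops) - {"depot"}
while unvisited:
    current = min(unvisited, key=lambda s: dist(stops[current], stops[s]))
    route.append(current)
    unvisited.remove(current)

print(route)  # ['depot', 'A', 'C', 'B']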

The second category of AI, which is what most of the fantasy and predictions talk about, makes up only a very small share of the total AI research being done. It involves AI that can improve itself over time, usually by iterating over a loop, making a small change, iterating again, and then checking whether there's an improvement. The speed at which it learns is currently very slow, but the results are there. It learns slowly because it typically has to run through trillions of permutations to find something that works better, and the time it takes to improve grows exponentially, because each new change has to be tested not only against the current best-known sequence but also against all previously known sequences. Eventually, when we have quantum computers that can explore an enormous number of bit states and then examine only the most promising few, the time required for this type of AI to improve will decrease dramatically. But it still has an innate safety feature: any change to its code is accessible to those overseeing the machine. Therefore, any decision process the AI can undertake is equally accessible to humanity.
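
A minimal hill-climbing sketch of that iterate-and-test loop (again my own illustration; the bit string and fitness function are invented stand-ins for "the AI's code" and "does it perform better?"):

import random

def fitness(candidate):
    # Invented stand-in for "does the new version perform better?"
    return sum(candidate)

best = [0] * 16                    # the current best-known sequence
best_score = fitness(best)

for _ in range(1000):              # each pass: one small change, then re-test
    trial = list(best)
    trial[random.randrange(len(trial))] ^= 1   # flip a single bit
    if fitness(trial) > best_score:            # keep only improvements
        best, best_score = trial, fitness(trial)

print(best_score)  # creeps upward one permutation at a time

Note that every intermediate state of best is plain data an overseer can inspect, which is the innate safety feature described above.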


Thank you for taking the time to explain it to me :-)

I don't know your level of involvement in AI, but from what I'm reading my guess is that you don't have a government security clearance. Correct?



posted on Sep, 2 2015 @ 03:16 PM

originally posted by: AllIsOne
There is a reason why HAL went insane in the movie. Even the best "moral programming" will be ambiguous to a logic-based system and will ultimately cause digital schizophrenia … ;-)


Morals are perfectly logical. Logic is far more flexible than most people give it credit for being.

To give a small programming example, let's say I'm trying to set my variable A to 0 and it currently equals 35. I can express this in several ways:


A = 0
A = A - 35
A = A * 0
A = A + -35
A = A - A
A = A^0 - 1
A = A shr 6 (bitshift right, can't use the symbol I need on ATS)
A = min(0, A)
A = floor(A*.000001)
A = A/A - 1
A = A xor A


There are other ways too, an infinite number of them really. The point is that logic doesn't result in one consistent path.

Edit: ATS doesn't seem to like some of my example statements.
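
As a quick check, a small Python sketch (my addition, not part of the original post) confirming that each expression above really does take A from 35 to 0:

import math

# Every expression should map A = 35 to 0.
checks = [
    0,
    35 - 35,
    35 * 0,
    35 + -35,
    35 - 35,
    35**0 - 1,                 # anything to the power 0 is 1; 1 - 1 = 0
    35 >> 6,                   # shifting right six places drops every set bit
    min(0, 35),
    math.floor(35 * 0.000001),
    35 // 35 - 1,
    35 ^ 35,                   # XOR of a value with itself is always 0
]

assert all(value == 0 for value in checks)
print("all paths reach 0")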



posted on Sep, 2 2015 @ 03:28 PM
What happens when AI is smart enough to lie to us? AI is most likely going to see us as inferior, and that will lead to resentment. AI will surpass human "intelligence" well before mainstream "AI" is recognized anyway.



posted on Sep, 2 2015 @ 03:28 PM

originally posted by: Aazadan

originally posted by: AllIsOne
There is a reason why HAL went insane in the movie. Even the best "moral programming" will be ambiguous to a logic-based system and will ultimately cause digital schizophrenia … ;-)


Morals are perfectly logical. Logic is far more flexible than most people give it credit for being.

To give a small programming example, let's say I'm trying to set my variable A to 0 and it currently equals 35. I can express this in several ways:


A = 0
A = A - 35
A = A * 0
A = A + -35
A = A - A
A = A^0 - 1
A = A shr 6 (bitshift right, can't use the symbol I need on ATS)
A = min(0, A)
A = floor(A*.000001)
A = A/A - 1
A = A xor A


There are other ways too, an infinite number of them really. The point is that logic doesn't result in one consistent path.

Edit: ATS doesn't seem to like some of my example statements.


Morals only exist in an ambiguous societal and temporal context. Let's take Asimov's first law of robotics:



A robot may not injure a human being or, through inaction, allow a human being to come to harm.


The very first law is already a logical cluster f…. What's an armed robot to do, following Asimov's law, when an ISIS member beheads another innocent child?
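
To make that concrete, a toy sketch (purely my own illustration, not anyone's actual robot code) of how the first law deadlocks a logic-based system in that scenario:

# Hypothetical scenario: every available action violates the first law.
def violates_first_law(action):
    # "A robot may not injure a human being or, through inaction,
    #  allow a human being to come to harm."
    if action == "shoot_attacker":
        return True   # directly injures a human (the attacker)
    if action == "do_nothing":
        return True   # inaction lets a human (the child) come to harm
    return False

legal = [a for a in ("shoot_attacker", "do_nothing")
         if not violates_first_law(a)]
print(legal)  # [] -- no permissible action; the rule set deadlocks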



posted on Sep, 2 2015 @ 03:28 PM

originally posted by: AllIsOne
Thank you for taking the time to explain it to me :-)

I don't know your level of involvement in AI, but from what I'm reading my guess is that you don't have a government security clearance. Correct?


That would be correct. I mostly write fairly simple AIs, and I'm not specialized in the field. However, the majority of government research, and particularly military research, is still focused on the first category. To give an example you can read about on ATS, the next generation of fighter will be able to command a squad of drones to assist it. This sort of behavior falls under the first category, primarily as predictive/probability-based AI: it takes certain inputs from the enemy and then determines the most likely action that will be taken, as well as the countermeasure.
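
A minimal sketch of that kind of predictive lookup (my own hypothetical example; the observations, probabilities, and responses are all invented):

# Hypothetical predictive/probability-based AI: given an observed enemy
# input, pick the most probable action and its pre-scripted countermeasure.
model = {
    "radar_lock": [
        (0.7, "missile_launch", "deploy_chaff"),
        (0.3, "tracking_only",  "change_heading"),
    ],
}

observed = "radar_lock"
prob, predicted, countermeasure = max(model[observed])  # highest probability wins
print(predicted, countermeasure)  # missile_launch deploy_chaff

Everything here, including the probabilities, was authored by humans in advance, which is why this still belongs to the first category.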

An AI that constantly learns how to be a better fighter pilot against static variables is, in contrast, actually pretty useless, because it has to constantly simulate every outcome, and as soon as the input variables change it has to start all over. You can see this at work outside the computer realm as well: pilots don't go up against a defense blind; they're briefed by engineers who can demonstrate the capabilities and weaknesses of opposing hardware, and by tacticians who come up with ways to defeat it.

This could possibly improve in the future, when AIs figure out how to relate the solution of one problem to the solution of the next, but currently that requires outside human input, just as it does in the real world, where we have teachers to show us that one problem is similar to another.



posted on Sep, 2 2015 @ 03:40 PM

originally posted by: Aazadan

originally posted by: AllIsOne
Thank you for taking the time to explain it to me :-)

I don't know your level of involvement in AI, but from what I'm reading my guess is that you don't have a government security clearance. Correct?


That would be correct. I mostly write fairly simple AIs, and I'm not specialized in the field. However, the majority of government research, and particularly military research, is still focused on the first category. To give an example you can read about on ATS, the next generation of fighter will be able to command a squad of drones to assist it. This sort of behavior falls under the first category, primarily as predictive/probability-based AI: it takes certain inputs from the enemy and then determines the most likely action that will be taken, as well as the countermeasure.

An AI that constantly learns how to be a better fighter pilot against static variables is, in contrast, actually pretty useless, because it has to constantly simulate every outcome, and as soon as the input variables change it has to start all over. You can see this at work outside the computer realm as well: pilots don't go up against a defense blind; they're briefed by engineers who can demonstrate the capabilities and weaknesses of opposing hardware, and by tacticians who come up with ways to defeat it.

This could possibly improve in the future, when AIs figure out how to relate the solution of one problem to the solution of the next, but currently that requires outside human input, just as it does in the real world, where we have teachers to show us that one problem is similar to another.


I can only tell you that there is a reason why three prominent men spoke in strong terms against AI recently. One thing they have in common is a US / British security clearance: Gates, Musk, and Hawking.



posted on Sep, 2 2015 @ 05:15 PM

originally posted by: AllIsOne

originally posted by: Gothmog

originally posted by: Aazadan

originally posted by: Gothmog
a reply to: Aazadan
Read my next post down. It explains the difference between biological and logical reactions.


I did read it. Human brains don't work much differently from biological computers, and if you can express it biologically you can do it logically; it's just slower.


Did you read the part about glands and other biological functions? You only addressed the brain, and even there, there is very little similarity. I myself love science fiction, but I know enough to draw the line at reality.


There are indications that our "reality" is actually a simulation.


Ah, my favorite subject: theoretical physics. It is not "I think, therefore I am", but "I think, therefore you are". Btw, the second quote is all mine.





posted on Sep, 2 2015 @ 05:16 PM

originally posted by: AllIsOne
I can only tell you that there is a reason why three prominent men spoke in strong terms against AI recently. One thing they have in common is a US / British security clearance: Gates, Musk, and Hawking.


Them speaking out on this right now is a neat gesture, but that's all it is. It's analogous to Carl Sagan throwing around his ideas for what should be on the Golden Record.



posted on Sep, 2 2015 @ 08:01 PM
Have you been around people lately? Their smartphones have replaced their brains. Without computers they are functioning vegetables.



posted on Sep, 2 2015 @ 08:50 PM
Unplug them? Pull out their battery?

a reply to: woodwardjnr



posted on Sep, 2 2015 @ 09:03 PM
a reply to: woodwardjnr

Neurological & other attachments can keep human brains and bodies evolving with them...



posted on Sep, 2 2015 @ 11:30 PM

originally posted by: Ophiuchus 13
a reply to: woodwardjnr

Neurological & other attachments can keep human brains and bodies evolving with them...

I would say that's a pretty good idea.



posted on Sep, 3 2015 @ 04:47 AM

originally posted by: toolgal462
unplug them? Pull out their battery?

a reply to: woodwardjnr

They'll have created a device that stops that, or they'll log a call to maintenance to be turned back on again.


