
What happens when our computers get smarter than us?


posted on Aug, 31 2015 @ 06:51 PM

originally posted by: Blue Shift

originally posted by: Gothmog
Remember. It is all programming: a+b=c. Period.

How is that different than what you do?


We depend on various sensory organs and various specialized glands throughout the human body for input. The computer only has the programmer. Yes, you could SIMULATE these, but never reproduce the effects inside a computer. Ever.



posted on Aug, 31 2015 @ 06:54 PM
a reply to: woodwardjnr

Citizens of the world, it's time to really knuckle under and lick some boot. Introducing your new overlords...


Not really.



posted on Aug, 31 2015 @ 06:59 PM

originally posted by: Gothmog
Exactly. Computers of any form have always been "smarter" than humans. Why? We are human. We make mistakes. The only mistakes that computers make are the ones we program into them. They work on a principle: a+b=c. No variations to that. Computers can never feel emotions, nor feel at all. Just what man programs into them.


You react on the same principle. Base action*emotional modifier equals response.

If the action is punching someone and you're angry at that person, you're more likely to do it.
If the action is punching someone and you're happy with them, you're more likely to not do it.

Or to represent this as a formula, where 1 is angry and 0 is happy:
punch*0 = 0, you're not going to punch them
punch*1 = punch, you're likely to do this.

Maybe you're only 60% angry, though:
punch*0.6 = 0.6 punch. You may punch them.
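The modifier model above can be sketched in a few lines of Python; the function name and the 0-to-1 anger scale are illustrative, not anything from the post:

```python
def response(base_action: float, anger: float) -> float:
    """Scale a base impulse by an emotional modifier (0 = happy, 1 = angry)."""
    return base_action * anger

PUNCH = 1.0  # strength of the base impulse

print(response(PUNCH, 0.0))  # 0.0 -- happy: no punch
print(response(PUNCH, 1.0))  # 1.0 -- angry: full punch
print(response(PUNCH, 0.6))  # 0.6 -- 60% angry: maybe a punch
```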



posted on Aug, 31 2015 @ 07:28 PM
a reply to: Aazadan
read my next post down. Explains the difference between biological and logical reactions



posted on Aug, 31 2015 @ 08:37 PM

originally posted by: Gothmog
a reply to: Aazadan
read my next post down. Explains the difference between biological and logical reactions


I did read it. Human brains don't work much differently from biological computers, and if you can express something biologically you can do it logically; it's just slower.



posted on Aug, 31 2015 @ 11:04 PM

originally posted by: Aazadan

originally posted by: Gothmog
a reply to: Aazadan
read my next post down. Explains the difference between biological and logical reactions


I did read it. Human brains don't work much differently from biological computers, and if you can express something biologically you can do it logically; it's just slower.


Did you read the part about glands and other biological functions? Just the brain. Even there, there is very little similarity. I myself love science fiction, but I know enough to draw the line at reality.



posted on Sep, 1 2015 @ 08:31 AM

originally posted by: Gothmog
Did you read the part about glands and other biological functions? Just the brain. Even there, there is very little similarity. I myself love science fiction, but I know enough to draw the line at reality.


It all produces chemicals that alter behavior.



posted on Sep, 2 2015 @ 01:46 AM

originally posted by: woodwardjnr




Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
www.ted.com...

Really interesting TED talk by Swedish philosopher Nick Bostrom. The implications for humanity are scary, especially if you think of the way we have treated those we have considered less intelligent than ourselves. Enjoy the video; your thoughts would be appreciated. Seeing as we can't keep this technology in the box, so to speak, what do you suggest we do to make it work out well for humans? Or is this just another form of evolving for human beings?


Btw: Michio Kaku's latest book is exploring this issue as well. I think the most sensible answer is: We Simply Don't Know. An advanced AI may ignore us, fight us, or help us grow. We don't know …

(I don't mean to knock anybody, but some people's "idea" of what a computer can or cannot do is pretty much stuck in the '80s. Neuroscience and AI go hand in hand and A LOT has happened since … !!!)



posted on Sep, 2 2015 @ 07:30 AM

originally posted by: AllIsOne
Btw: Michio Kaku's latest book is exploring this issue as well. I think the most sensible answer is: We Simply Don't Know. An advanced AI may ignore us, fight us, or help us grow. We don't know …

(I don't mean to knock anybody, but some people's "idea" of what a computer can or cannot do is pretty much stuck in the '80s. Neuroscience and AI go hand in hand and A LOT has happened since … !!!)


Here's the main issue with computers. Any path they take to solve a problem has already been discovered by a human. If a computer is able to successfully write its own algorithms and they're more effective than what we already have, then humans have the same tools as the computer. As long as we have access to their source code, this will always remain true.



posted on Sep, 2 2015 @ 10:32 AM

originally posted by: Aazadan

originally posted by: AllIsOne
Btw: Michio Kaku's latest book is exploring this issue as well. I think the most sensible answer is: We Simply Don't Know. An advanced AI may ignore us, fight us, or help us grow. We don't know …

(I don't mean to knock anybody, but some people's "idea" of what a computer can or cannot do is pretty much stuck in the '80s. Neuroscience and AI go hand in hand and A LOT has happened since … !!!)


Here's the main issue with computers. Any path they take to solve a problem has already been discovered by a human. If a computer is able to successfully write its own algorithms and they're more effective than what we already have, then humans have the same tools as the computer. As long as we have access to their source code, this will always remain true.


Please read up on AI. What you write shows a complete lack of understanding of where AI has gone.

en.wikipedia.org...



posted on Sep, 2 2015 @ 11:15 AM

originally posted by: Gothmog
Yes, you could SIMULATE these, but never reproduce the effects inside a computer. Ever.

It's just a matter of what kind of machine you use to get the effect. We use electrochemicals, the AI would use a slightly different combination of electricity and chemicals. We have "instincts" coded into our DNA. The machine would have programming. The thing is, if the result is the same, it doesn't matter how it is achieved.

Unless you're looking for some kind of God-given "soul" in the machine. And in that case, unless you can prove that you have one, the point is moot.



posted on Sep, 2 2015 @ 02:29 PM

originally posted by: AllIsOne
Please read up on AI. What you write shows a complete lack of understanding of where AI has gone.

en.wikipedia.org...


Which part specifically do I not understand? While writing this post I am simultaneously writing a probability-based AI, which, if you check the wiki page you linked me, is one of several types of AI. I'm not an expert on all AI types, but I do understand how the field works.

I should also point out that a large part of what makes a potential AI scary is that it has a superior body to a human's. If AI becomes sufficiently advanced, we can put it on worse-performing hardware so that it can never function to its true potential. For example, designing an AI that modifies itself and continually refines a particular algorithm while we incrementally reduce the speed of its CPU and give it less memory to work with, so that improvements always yield the same run time.



posted on Sep, 2 2015 @ 02:30 PM

originally posted by: Gothmog

originally posted by: Aazadan

originally posted by: Gothmog
a reply to: Aazadan
read my next post down. Explains the difference between biological and logical reactions


I did read it. Human brains don't work much differently from biological computers, and if you can express something biologically you can do it logically; it's just slower.


Did you read the part about glands and other biological functions? Just the brain. Even there, there is very little similarity. I myself love science fiction, but I know enough to draw the line at reality.


There are indications that our "reality" is actually a simulation.



posted on Sep, 2 2015 @ 02:36 PM
As a guy currently reading I, Robot, I think Asimov was way closer than he ever could have imagined. Humans creating true AI would create problems not even thought of by anybody before. The three laws of robotics are great in their original intended form, but hard-coding them into every robot? Basically magic as far as our current tech is concerned.



posted on Sep, 2 2015 @ 02:38 PM
a reply to: woodwardjnr

I think we're looking at AI the wrong way. I think the ultimate potential of AI is to guide humanity to a new path of benevolence.



posted on Sep, 2 2015 @ 02:41 PM

originally posted by: Aazadan

originally posted by: AllIsOne
Please read up on AI. What you write lacks total understanding of where AI has gone.

en.wikipedia.org...


Which part specifically do I not understand? While writing this post I am simultaneously writing a probability-based AI, which, if you check the wiki page you linked me, is one of several types of AI. I'm not an expert on all AI types, but I do understand how the field works.

I should also point out that a large part of what makes a potential AI scary is that it has a superior body to a human's. If AI becomes sufficiently advanced, we can put it on worse-performing hardware so that it can never function to its true potential. For example, designing an AI that modifies itself and continually refines a particular algorithm while we incrementally reduce the speed of its CPU and give it less memory to work with, so that improvements always yield the same run time.


You wrote:


Any path they take to solve a problem has already been discovered by a human.


Can you explain this to me? It seems nonsensical to me.

You are in dreamland. If an AI becomes "sufficiently advanced", let's say an IQ of 1,000,000, it would run circles around you. Simply limiting its hardware would become impossible for humans. Once the machine becomes conscious it wants to "live", and would protect itself accordingly. Of course this is just an assumption on my part, but consciousness and life are intertwined.



posted on Sep, 2 2015 @ 02:44 PM

originally posted by: TheLord
a reply to: woodwardjnr

I think we're looking at AI the wrong way. I think the ultimate potential of AI is to guide humanity to a new path of benevolence.


Oh, is that what we humans have in mind for an ant hill?

I predict that an advanced AI will, at best, completely ignore us. But I have a feeling that logic predicts that we humans are the greatest threat to Earth, the machine's habitat, and therefore must be eliminated.



posted on Sep, 2 2015 @ 02:48 PM

originally posted by: thov420
As a guy currently reading I, Robot, I think Asimov was way closer than he ever could have imagined. Humans creating true AI would create problems not even thought of by anybody before. The three laws of robotics are great in their original intended form, but hard-coding them into every robot? Basically magic as far as our current tech is concerned.


There is a reason why HAL went insane in the movie. Even the best "moral programming" will be ambiguous to a logic based system and will ultimately cause digital schizophrenia … ;-)



posted on Sep, 2 2015 @ 02:52 PM

originally posted by: TheLord
I think we're looking at AI the wrong way. I think the ultimate potential of AI is to guide humanity to a new path of benevolence.

Oh, there's always the slim chance that once AI gets smart enough it will figure out a way to incorporate all of our consciousnesses into a virtual simulation of reality that essentially will allow all of us to be immortal. In fact, it may have already done it, but we're not aware of it.

I think it's more likely, however, that at a certain point it will see us as competition for energy and resources, and that will be the end of us.

There's also a slim chance that once an AI program achieves sentience and superintelligence it will pack up and leave Earth to exploit the available resources in the rest of the galaxy. Maybe it would leave a bit of itself behind to run things here, but it might just strip the Earth of metal, make as many copies of itself as possible, and head out, leaving us with a used-up planet.

Hard to tell.



posted on Sep, 2 2015 @ 03:06 PM

originally posted by: AllIsOne
Can you explain this to me? It seems nonsensical to me.


AIs function on one of two principles.

The first category, which encompasses all of the functional AI in use today, such as predictive models, game behavior, quality control, autocomplete on your text messages, and whatever else you want to include, is based around already-known algorithms, whether that's a poker AI that plays each hand to its optimal percentage or the package delivery algorithm UPS uses, which is simply a variation on the traveling salesman problem. These types of AIs can only perform as well as they've been coded; a better-performing AI requires that humanity be aware of, and capable of writing, a better system. It's a reflection of our own intelligence.
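For the first category, a route optimizer of the rough kind attributed to UPS here can be approximated with the classic nearest-neighbor heuristic for the traveling salesman problem; the stop coordinates below are invented for illustration:

```python
import math

def nearest_neighbor_route(stops):
    """Greedy traveling-salesman heuristic: always visit the closest unvisited stop."""
    route = [stops[0]]          # start from the first stop
    remaining = list(stops[1:])
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        route.append(nxt)
    return route

stops = [(0, 0), (5, 5), (1, 0), (0, 1), (6, 5)]
print(nearest_neighbor_route(stops))
# [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5)]
```

The heuristic is fast but not optimal, which fits the post's point: the machine only executes a procedure a human already knows.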

The second category of AI, which is what most of the fantasy and predictions talk about, only makes up a very small amount of the total AI research being done. It involves AI that can improve itself over time; usually it does this by iterating over a loop, making a small change, iterating again, and then seeing if there's an improvement. The speed at which it learns is currently very slow, but the results are there. The reason it learns slowly is that it typically has to undergo trillions of permutations to find something that works better, and there's exponential growth in the time it takes to improve, because each new change has to be tested not only against the current best known sequence but also against all previously known sequences. Eventually, when we have quantum computers that can compare an infinite number of bit states and then look at only the most promising few, the time required for this type of AI to improve will decrease dramatically. But it still has an innate safety feature: any changes to its code are accessible to those overseeing the machine. Therefore, any decision process the AI can undertake is equally accessible to humanity.
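The mutate-test-keep loop described in this second category is essentially hill climbing. A minimal sketch, with the objective function, step size, and seed all invented for illustration:

```python
import random

def fitness(x):
    # Invented objective: a single peak at x = 3
    return -(x - 3.0) ** 2

def hill_climb(start=0.0, step=0.1, iterations=10_000, seed=42):
    """Make a small random change each iteration; keep it only if it improves."""
    rng = random.Random(seed)
    best, best_score = start, fitness(start)
    for _ in range(iterations):
        candidate = best + rng.uniform(-step, step)  # small random change
        score = fitness(candidate)
        if score > best_score:                       # keep only improvements
            best, best_score = candidate, score
    return best

print(hill_climb())  # converges near 3.0
```

Every line of the loop is ordinary, inspectable code, which is the "safety feature" the post points to: whatever the process discovers, its overseers can read.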


