originally posted by: Ophiuchus 13
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...
originally posted by: TarzanBeta
originally posted by: Ophiuchus 13
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...
If artificial intelligence is good enough, it will do the opposite of its makers.
originally posted by: Soylent Green Is People
originally posted by: TarzanBeta
originally posted by: Ophiuchus 13
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...
If artificial intelligence is good enough, it will do the opposite of its makers.
I don't know... If an AI is ever made that thinks exactly like a human, then it will likely have human flaws.
originally posted by: Ophiuchus 13
a reply to: TarzanBeta
Possible
a reply to: Soylent Green Is People
It could take on the good characteristics of humanity or other beings it learned from also.
originally posted by: Blue Shift
The simplest way to get AI to that point is to give it some kind of artificial pain / pleasure construct that uses the neural net to define for itself what is "good" and what is "bad." What to go for and what to avoid -- for itself. And it's not as hard as you would think. Tamagotchis are selfish little critters with a tiny, limited range of needs and wants responses. We have a much more complex system, but it's still based on the same principles. We monitor our own needs, and we interact with our environment to try to meet those needs.
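For concreteness, here is a minimal sketch of that pain/pleasure idea in Python. Everything in it is hypothetical (the single "hunger" need, the action set, the update rule); it only illustrates an agent reinforcing whichever of its actions raises its own pleasure signal.

import random

ACTIONS = ["eat", "sleep", "play"]

class TamagotchiLike:
    def __init__(self):
        self.hunger = 0.5                       # the internal need being monitored
        self.value = {a: 0.0 for a in ACTIONS}  # learned worth of each action

    def pleasure(self):
        # Low hunger feels "good", high hunger feels "bad".
        return 1.0 - self.hunger

    def step(self):
        before = self.pleasure()
        # Explore occasionally; otherwise pick the action rated best so far.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: self.value[a])
        # Crude world: only eating lowers hunger; time always raises it.
        if action == "eat":
            self.hunger = max(0.0, self.hunger - 0.3)
        self.hunger = min(1.0, self.hunger + 0.1)
        # Reinforce the action by the pleasure change it produced.
        self.value[action] += 0.5 * (self.pleasure() - before)

agent = TamagotchiLike()
for _ in range(200):
    agent.step()
print(agent.value)  # "eat" should end up rated highest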
originally posted by: TarzanBeta
originally posted by: Aazadan
a reply to: mrperplexed
Unfortunately, there are a lot of people on ATS who have no actual experience with AI, but they read pop-sci articles and think they know it all.
I've taken a couple of AI classes and read a few papers, plus written my own. I would say the thing that strikes me most about AI is how inefficient it is to get to an answer. I'm not that good with neural nets, but I've used genetic algorithms a ton. They always strike me as being super slow to get to a meaningful result.
That's because AI isn't very intelligent. There's a difference between being a calculator and being a human being. It will always be that way.
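Aazadan's point about genetic algorithms being slow is easy to see in a toy example. The sketch below (illustrative only; the bit-string target, mutation rate, and population size are arbitrary) evolves a population toward a trivial goal and still needs many generations of mutation and selection to get there.

import random

TARGET_LEN = 40   # goal: a bit string of all ones
POP_SIZE = 30

def fitness(bits):
    return sum(bits)  # score = number of ones

def mutate(bits):
    # Flip each bit with small probability.
    return [b ^ (random.random() < 0.02) for b in bits]

def crossover(a, b):
    cut = random.randrange(TARGET_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]
generation = 0
while max(fitness(p) for p in population) < TARGET_LEN:
    generation += 1
    # Keep the fitter half, breed children back up to full size.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"solved in {generation} generations")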
originally posted by: Aazadan
originally posted by: Blue Shift
The simplest way to get AI to that point is to give it some kind of artificial pain / pleasure construct that uses the neural net to define for itself what is "good" and what is "bad." What to go for and what to avoid -- for itself. And it's not as hard as you would think. Tamagotchis are selfish little critters with a tiny, limited range of needs and wants responses. We have a much more complex system, but it's still based on the same principles. We monitor our own needs, and we interact with our environment to try to meet those needs.
That's how basically all AI functions: it's set to minimize or maximize a score, and it tests a bunch of possibilities in sequence, reporting the best-scoring one.
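Stripped to its core, that pattern is just generate-and-test. A minimal sketch, with an arbitrary objective standing in for a real scoring function:

import random

def score(x):
    # Any objective works; here, closeness to an arbitrary target value.
    return -abs(x - 42.0)

best, best_score = None, float("-inf")
for _ in range(10_000):
    candidate = random.uniform(-100.0, 100.0)  # generate a possibility
    s = score(candidate)                       # test it
    if s > best_score:                         # maximize the score
        best, best_score = candidate, s

print(best)  # close to 42.0 after enough trials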
originally posted by: chr0naut
originally posted by: TarzanBeta
originally posted by: Aazadan
a reply to: mrperplexed
Unfortunately, there are a lot of people on ATS who have no actual experience with AI, but they read pop-sci articles and think they know it all.
I've taken a couple of AI classes and read a few papers, plus written my own. I would say the thing that strikes me most about AI is how inefficient it is to get to an answer. I'm not that good with neural nets, but I've used genetic algorithms a ton. They always strike me as being super slow to get to a meaningful result.
That's because AI isn't very intelligent. There's a difference between being a calculator and being a human being. It will always be that way.
The point is that AIs will take a long time to surpass 'human intelligence' (whatever that may be).
We already have AIs, and while they may be 'expert systems', they are rather stupid in a generalist sense, which is exactly what we want them to be.
As soon as we cannot trust what an AI tells us, we will switch it off and try an alternative which gives us what we want (i.e. AltaVista will die and Google will go on).
originally posted by: wakeupstupid
I'll engage in this with a question:
define: Intelligence
in·tel·li·gence /inˈteləjəns/
noun
1. the ability to acquire and apply knowledge and skills.
"an eminent man of great intelligence"
synonyms: intellectual capacity, mental capacity, intellect, mind, brain(s), IQ, brainpower, judgment, reasoning, understanding, comprehension
2. the collection of information of military or political value.
"the chief of military intelligence"
synonyms: information gathering, surveillance, observation, reconnaissance, spying, espionage, infiltration, ELINT, HUMINT
We already have AI on both counts. Case closed.
The question isn't about AI; it's about AY. When can a program do the following (a sketch follows the list):
1. Observe
2. Question
3. Learn
4. Conclude
5. Act
6. Assess
7. Keep / Discard / Repeat
Once that is accomplished, we are closer than ever.
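As a rough illustration of that loop, here is a runnable toy in which the "environment" is a hidden number, "learning" is narrowing a belief range, and "acting" is guessing. All names are hypothetical and the mapping to the seven steps is loose.

import random

class NumberEnvironment:
    # Hidden target; observing yields a hint, acting is a guess.
    def __init__(self):
        self.target = random.randint(1, 100)
    def observe(self, guess):
        return "higher" if self.target > guess else "not higher"
    def act(self, guess):
        return guess == self.target

def agent_loop(env, max_steps=20):
    low, high = 1, 100                       # current beliefs about the target
    for step in range(max_steps):
        guess = (low + high) // 2            # 4. Conclude (form a hypothesis)
        hint = env.observe(guess)            # 1. Observe / 2. Question
        if hint == "higher":                 # 3. Learn (update beliefs)
            low = guess + 1
        else:
            high = guess
        if env.act(guess):                   # 5. Act / 6. Assess
            return step + 1                  # 7. Keep (success)
        # 7. Discard / Repeat: loop again with the narrowed beliefs
    return None

print(agent_loop(NumberEnvironment()))  # steps taken, or None on failure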
originally posted by: dfnj2015
Synthesizing data is soft AI. Hard AI is having a program that is not only self-aware but is capable of improving itself. Again, having self-aware programs is like the Halting problem which has been proven to be impossible. Then you might argue how do we do it? I would then say you are presuming we are computers. We are not. Before you discuss AI becoming real and the implications, you should really have a firm understand of what a computer is and what are it's known proven limitations.