Artificial Intelligence Experts Thread


posted on May, 10 2017 @ 01:08 AM
Hello fellow ATS'ers -

I am looking to chat with other ATS'ers who have a background in AI, deep learning, big data, etc. Personally, I am interested in hybrid deep learning and in video applications that combine static visual features, short-term patterns, and long-term temporal cues; however, this thread is open to all AI topics, from the common to the esoteric.

In addition to being a technical thread, anyone without an AI background who is interested in learning about or demystifying AI should feel free to ask questions - I suspect that since this thread is not political, it has a chance of self-policing in a way that is welcoming of "on topic" questions.


OK, I will start with an icebreaker - What is your favorite AI platform and why?



posted on May, 10 2017 @ 01:26 AM
How about expert critics of AI/AGI?






posted on May, 10 2017 @ 01:50 AM
a reply to: IgnoranceIsntBlisss


Well, I am halfway through the video so far. So much for the dream of my thread not being politicized!


Personally, I am public sector, but my first thoughts are that I love the music and the TRON-like stylization of the video.

As for the content... imagine if the same level of resources were directed at ending cancer, or at solving humanity's greater problems. I will have more to say once I finish the video. Are there any specific areas you would like me to address?



posted on May, 10 2017 @ 03:39 AM
a reply to: mrperplexed

How does the esoteric relate to AI?

Confused.



posted on May, 10 2017 @ 04:45 AM
a reply to: mrperplexed

You lost me at AI. All "AI" is replicated from humans; writing a learning script is hard, but it is based on human ingenuity, not a computer's.

In scripting, the word 'if' is very powerful, and that is what makes an 'AI' work. They do not make decisions for themselves yet... when machines start writing their own scripts, that will be true AI.
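
To make that distinction concrete, here is a minimal sketch (a made-up, hypothetical example) of the kind of hand-written 'if'-driven logic being described, where every decision path is authored by a person in advance:

def reply(message: str) -> str:
    # A tiny rule-based "AI": every branch was written by a human.
    # The program never makes a decision its author did not anticipate.
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How can I help?"
    elif "price" in text:
        return "Our plans start at 10 dollars a month."
    elif "bye" in text:
        return "Goodbye!"
    return "Sorry, I don't understand."

print(reply("Hi there"))            # Hello! How can I help?
print(reply("What is the price?"))  # Our plans start at 10 dollars a month.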



posted on May, 10 2017 @ 04:48 AM
When an artificial intelligence can take an MBTI assessment and end up with an ESFP result, then I believe it will be time to acknowledge them.



posted on May, 10 2017 @ 05:03 AM

originally posted by: mrperplexed
Hello fellow ATS'ers -

I am looking to chat with other ATS'ers who have a background in AI, deep learning, big data, etc. Personally, I am interested in hybrid deep learning and in video applications that combine static visual features, short-term patterns, and long-term temporal cues; however, this thread is open to all AI topics, from the common to the esoteric.

In addition to being a technical thread, anyone without an AI background who is interested in learning about or demystifying AI should feel free to ask questions - I suspect that since this thread is not political, it has a chance of self-policing in a way that is welcoming of "on topic" questions.

OK, I will start with an icebreaker - What is your favorite AI platform and why?



Well, to some, AI will happen some time in the future...

Yet I see the current stage of Google search as an AI.

Long ago, Google was just a way to search for web pages containing your search terms; now you can ask all sorts of questions and receive mostly relevant answers.

You type in very little information, and it draws inferences from your words and from similar searches, then comes back with answers that are usually very much what you wanted (and expected).

Consider the comparison of Bing and Google. The difference in the relevance of answers is starkly obvious.

This is what AI will be like: vast inferential links across vast datasets. The next step will be adding computational and mildly cognitive abilities, but always with the purpose of fulfilling our requests, ranked by how satisfied we are with the results.
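
As a loose, toy illustration of "inferential links across datasets" (the documents, query, and scoring below are invented for the example, and real search engines do vastly more), even ranking by shared terms already beats requiring an exact phrase match:

docs = {
    "page1": "how neural networks learn from data",
    "page2": "recipe for chocolate cake",
    "page3": "deep learning and big data applications",
}

def score(query, text):
    # Crude relevance: count how many query terms the page shares.
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "learning from big data"
for name, text in sorted(docs.items(), key=lambda kv: score(query, kv[1]), reverse=True):
    print(score(query, text), name, "-", text)
# Prints page3 first (3 shared terms), then page1 (2), then page2 (0).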

Alexa, Siri, Iris, etc. are also AI functionality, but Google is currently ahead of the game (with DeepMind close behind).

Genetic algorithms and neural nets are also showing great potential.


edit on 10/5/2017 by chr0naut because: (no reason given)



posted on May, 10 2017 @ 05:05 AM
a reply to: IgnoranceIsntBlisss

Thanks for the video. I guess its impact on the viewer is supposed to be negative, but I found it interesting and reassuring. In this world of ours, since during and after World War II, nothing has stood in the way of the dark forces except the inherent power of the US and its allies. A control system is not the same thing as dominance. Take that leadership away, and communist China will be your master overnight. It is as simple as that. How quickly people, and even history, forget the horror of, and the lessons learned from, unbridled Nazism.



posted on May, 10 2017 @ 05:11 AM

originally posted by: 0racle
a reply to: mrperplexed

How does the esoteric relate to AI?

Confused.


Esoteric: requiring or exhibiting knowledge that is restricted to a small group.

Reread my sentence with the definition above and this should remove the confusion.



posted on May, 10 2017 @ 05:32 AM

originally posted by: Thecakeisalie
a reply to: mrperplexed

You lost me at AI. All "AI" is replicated from humans; writing a learning script is hard, but it is based on human ingenuity, not a computer's.

In scripting, the word 'if' is very powerful, and that is what makes an 'AI' work. They do not make decisions for themselves yet... when machines start writing their own scripts, that will be true AI.


In the case of deep neural networks, yes, humans design the learning algorithm and give it data, but the rest the network learns on its own. This tells me that, given the right learning setup and data, machines writing their own scripts is in theory not far-fetched.
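
As a minimal sketch of "the network learns the rest on its own" (a toy, hypothetical example, not any particular framework): a single artificial neuron is given only a learning rule and examples of the OR function, and it finds the decision logic itself:

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> OR

w, b, lr = [0.0, 0.0], 0.0, 0.1   # weights, bias, learning rate
for epoch in range(20):
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Perceptron update: nudge the weights toward the correct answer.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), target in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "expected", target)

The human wrote the update rule, but nobody wrote "if x1 or x2"; the weights that encode that rule come out of the examples.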



posted on May, 10 2017 @ 05:34 AM

originally posted by: TarzanBeta
When an artificial intelligence can take an MBTI assessment and end up with an ESFP result, then I believe it will be time to acknowledge them.


Sadly I fear many may be far too introverted.



posted on May, 10 2017 @ 05:56 AM
a reply to: mrperplexed

I think you would be better off just googling the topic. Find some experts in the field and read everything they've ever published. That's what I would do.

The soft AI stuff is just programs processing data. Ho hum. The more interesting question is hard AI. Will we some day have a computer program pass the Turing test? When I was in college I studied computer science, but learning about software was just okay for me. I wanted to understand how computers worked, so I kept taking computer engineering classes. Around my junior year I found the answer. The difference between a calculator and a computer is a good question, but the thing that gave me the most insight was the fetch-decode-execute cycle and the von Neumann architecture. The way computers are designed now, they really only have one thought: fetch, decode, execute. Computers have no self-awareness, and every instruction that is executed has the same emotional attachment.
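
A toy simulation (invented instruction set, purely illustrative) shows how bare that loop really is - program and data sit in one shared memory, and the machine just repeats fetch, decode, execute:

memory = [          # the whole "mind" of the machine
    ("LOAD", 7),    # acc = 7
    ("ADD", 5),     # acc = acc + 5
    ("PRINT", None),
    ("HALT", None),
]

pc, acc, running = 0, 0, True   # program counter, accumulator
while running:
    op, arg = memory[pc]        # fetch
    pc += 1
    if op == "LOAD":            # decode + execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "PRINT":
        print(acc)              # prints 12
    elif op == "HALT":
        running = False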

So after finally understanding what a computer IS, I came to the conclusion that it's not the place where hard AI will be possible. You can't prove a negative, so my conclusion is based on intuition. Computer bits are like an arrangement of rocks in a field: no matter how you arrange the rocks, they will always be just an arrangement of rocks, and all the rocks have the same level of meaning. I think human intelligence is more than just our thoughts. I think our brains are like yogurt: we grow thoughts in the yogurt that have never been thought before, and we attach emotions to our thoughts, which helps us decide what is important.

Computers do exactly what you tell them to do. That's it. Nothing more. You might be able to write a program that generates new programs, but even that is limited to what it is told to do. I think hard AI requires the ability to be self-correcting, to extend its own program beyond what was previously intended, and to be aware of what a program is while the program is running. If you study computability theory and the limits of computation, this self-program-awareness thing is almost like the halting problem. Who knows, maybe solving the halting problem is how you get self-awareness.

It's very sobering when you study computability theory and find out about the existence of unsolvable problems. People naively think computers are gods without any limitations. Computers are much closer to a glorified calculator than they are to how the human mind works.
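
The classic unsolvable problem is the halting problem, and the standard argument fits in a few lines. This is only a sketch of the reasoning (the halts function below is assumed for the sake of contradiction, not real - the whole point is that no correct version of it can exist):

def halts(f):
    # Suppose, for contradiction, this correctly reports whether f() ever finishes.
    raise NotImplementedError("no correct implementation can exist")

def paradox():
    # If halts(paradox) says "halts", paradox loops forever;
    # if it says "loops forever", paradox halts immediately.
    # Either answer makes halts() wrong about paradox().
    if halts(paradox):
        while True:
            pass

print("No halts() can classify paradox() correctly, so the problem is unsolvable in general.")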

The human mind has really strange characteristics, like the ability to prevent mind-lock from an infinite loop. There was a time when I was trying to discover a thought that would actually cause the brain to go into a coma. This was the closest thought I was able to come up with. It takes the form of a question, and semantically it is an infinite loop. Most people who hear it will chuckle or laugh. I believe the chuckle mechanism is our brain's way of preventing mind-lock. Here is the thought, in the form of a question. Think about the semantics really hard:

Have you ever thought about what your brain is doing between thoughts?

A computer would never laugh at the following program:

10 NOP
20 GOTO 10

Computers have no sense of humor!


edit on 10-5-2017 by dfnj2015 because: (no reason given)



posted on May, 10 2017 @ 06:42 AM
a reply to: mrperplexed

Unfortunately, there are a lot of people on ATS who have no actual experience with AI, but they read pop-sci articles and think they know it all.

I've taken a couple of AI classes and read a few papers, plus written my own. I would say the thing that strikes me most about AI is how inefficient it is at getting to an answer. I'm not that good with neural nets, but I've used genetic algorithms a ton, and they always strike me as being super slow to reach a meaningful result.
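
For anyone who hasn't used one, here is a minimal genetic algorithm sketch (a toy "evolve a string of all 1s" problem with made-up parameter values). It shows where the cost goes: every generation re-evaluates a whole population, so even trivial answers take many fitness evaluations:

import random

LENGTH, POP, GENERATIONS, MUT = 20, 30, 40, 0.05

def fitness(bits):
    return sum(bits)                 # number of 1s; LENGTH is perfect

def mutate(bits):
    return [b ^ 1 if random.random() < MUT else b for b in bits]

def crossover(a, b):
    cut = random.randint(1, LENGTH - 1)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
evaluations = 0
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # evaluate and rank everyone
    evaluations += POP
    if fitness(population[0]) == LENGTH:
        break
    parents = population[:POP // 2]              # selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP)]           # next generation

print("best fitness:", fitness(max(population, key=fitness)),
      "after roughly", evaluations, "fitness evaluations")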



posted on May, 10 2017 @ 06:57 AM
a reply to: Aliensun




nothing has stood in the way of the dark forces except the inherent power of the US


LOL the US is the dark force that threatens the whole world - but I can see you are an American

en.wikipedia.org...



posted on May, 10 2017 @ 07:08 AM
a reply to: mrperplexed

Computers will never know that they know. At best, they can only emulate intelligence - as the terminology itself suggests, "artificially".



posted on May, 10 2017 @ 07:12 AM
a reply to: TheConstruKctionofLight

This is a much more interesting list:

en.wikipedia.org...



posted on May, 10 2017 @ 08:03 AM

originally posted by: Aazadan
a reply to: mrperplexed

Unfortunately, there are a lot of people on ATS who have no actual experience with AI, but they read pop-sci articles and think they know it all.

I've taken a couple of AI classes and read a few papers, plus written my own. I would say the thing that strikes me most about AI is how inefficient it is at getting to an answer. I'm not that good with neural nets, but I've used genetic algorithms a ton, and they always strike me as being super slow to reach a meaningful result.


That's because AI isn't very intelligent. There's a difference between being a calculator and being a human being. It will always be that way.



posted on May, 10 2017 @ 08:25 AM
a reply to: dfnj2015

Thanks... I had previously seen that as well.



posted on May, 10 2017 @ 10:59 AM

originally posted by: dfnj2015
So after finally understanding what a computer IS, I came to the conclusion that it's not the place where hard AI will be possible. You can't prove a negative, so my conclusion is based on intuition. Computer bits are like an arrangement of rocks in a field: no matter how you arrange the rocks, they will always be just an arrangement of rocks, and all the rocks have the same level of meaning. I think human intelligence is more than just our thoughts. I think our brains are like yogurt: we grow thoughts in the yogurt that have never been thought before, and we attach emotions to our thoughts, which helps us decide what is important.


The people who have faith that hard AI is possible would argue that if we just create the same type of yogurt matrix, then computers will grow thoughts the same way human beings do. I think this misses the point. Ideas and the yogurt are NOT discrete or digital in nature, so the analog thing that makes human beings human may not translate into a digital pigeonhole.



posted on May, 10 2017 @ 12:38 PM
a reply to: dfnj2015

The problem is that thoughts created by a complex brain (such as a human brain) are an amalgamation of a whole bunch of different tiny pieces scattered across the brain...

A thought is a little bit of one specific memory, plus a handful of some other memory, plus a sprinkle that comes from our sensory input...all influenced by some hard-wired instinct that is a remnant of our brain's reptilian parts.

Consider how each neuron in the large group of neurons that must be accessed to create a thought is interconnected through a series of synapses, and consider that the "information" stored in any specific neuron gets filtered through an entire network of other neurons on its way to becoming a little piece of that thought.
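
Only as a very loose analogy in code (the inputs and weights below are arbitrary numbers, and real neurons are far messier): in an artificial network, too, each unit only ever sees other units' filtered outputs, never the raw pieces directly:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed to a value between 0 and 1.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

sensory = [0.9, 0.2, 0.7]                     # raw "sensory" inputs
hidden = [
    neuron(sensory, [0.5, -0.3, 0.8], 0.1),   # each hidden unit blends all inputs
    neuron(sensory, [-0.6, 0.9, 0.4], -0.2),
]
thought = neuron(hidden, [1.2, 0.7], 0.0)     # downstream unit sees only filtered versions
print(round(thought, 3))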



