
Artificial Intelligence theory


posted on Sep, 8 2018 @ 06:06 AM
Intelligence is not a fast computer or a robot.

Machines will never be intelligent.

A calculator is not smarter than anyone; it is only far more efficient at making calculations. It isn't smart or intelligent.

Even a cat is smarter than a computer.



posted on Sep, 8 2018 @ 12:06 PM

originally posted by: dfnj2015
a reply to: LedermanStudio

The search for hard AI is based on bad values. Our imperfections are our greatest strengths. They are the source of our unimaginable creativity.


Not according to the definitions of AI-Complete (aka "Hard AI") that I've been reading, and certainly not on the basis of what I've read on intelligence. The ability to solve a problem creatively or intelligently requires the ability to put together different bits of information in such a way that the answer fits the problem.

There's nothing imperfect about that.


The problem with computer science is that it is a perfect science. You can go back in time and repeat the exact same experiment.


By that definition, almost every science (and a lot of other things, like sewing) is perfect. I don't think it makes sense in this context... and computer science (I'm saying this as someone who worked in the field and has degrees in it) is based on a machine adding ones and zeroes. So, yes, 1+1 always and forever equals "10" (the binary representation of the number 2). Just like in math, 1+3 always equals 4.
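(Aside: a minimal Python sketch of that determinism, mine rather than Byrd's; rerun it as often as you like and the bits never change.)

# The same binary addition yields the same result on every run.
for trial in range(3):
    result = 0b1 + 0b1          # 1 + 1 in binary
    print(trial, bin(result))   # always prints 0b10, i.e. 2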


In reality, time never repeats in exactly the same way. The unfolding of the Universe is a one-way ticket.

Err... now you've gone from a local example (adding 1+1=10) to a universal example. In fact, on the local level, things repeat all the time (you began breathing as a newborn, and I assume you're still breathing). The circuit in your brain that controls breathing repeats its function at regular intervals whether you're asleep or not. Ditto everything else. So yes, events do repeat at a local level.


Here's a really good discussion of why the von Neumann architecture with the fetch-decode-execute instruction cycle will never achieve hard AI:


Errr... you do know that this is roughly how our brains work on a very local level, right? However, there are multiple different architectures, and there are ways around the von Neumann limitations.
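(For readers who haven't met the cycle being discussed: here's a minimal fetch-decode-execute loop, my own toy illustration in Python, not anything from the linked discussion. Program and data share one memory, which is the defining von Neumann trait.)

# A toy von Neumann machine: the CPU endlessly fetches an instruction,
# decodes it, and executes it against a single shared memory.
memory = [
    ("LOAD", 7),    # put 7 into the accumulator
    ("ADD", 5),     # add 5 to the accumulator
    ("HALT", None), # stop
]
acc = 0   # accumulator register
pc = 0    # program counter

while True:
    op, arg = memory[pc]   # fetch
    pc += 1
    if op == "LOAD":       # decode + execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break

print(acc)  # 12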



posted on Sep, 8 2018 @ 02:13 PM
a reply to: Byrd




I would suggest that you do more reading on this... not videos. Speculating on things without learning all about them has caused a lot of harm in the world.


Is this part directed towards me or the OP? I'm a bit confused now.



posted on Sep, 8 2018 @ 02:19 PM
a reply to: turbonium1
A machine alone isn't smart or intelligent, but the software that runs on it can be.
The hardware (the matter in itself) in your brain is not smart either. It's the connections that make it smart.

So when you say all that above, it's not false, but it shows your shallow approach to this topic. Maybe you shouldn't talk in absolutes, either.



posted on Sep, 8 2018 @ 02:20 PM

originally posted by: verschickter
a reply to: Byrd




I would suggest that you do more reading on this... not videos. Speculating on things without learning all about them has caused a lot of harm in the world.


Is this part directed towards me or the OP? I'm a bit confused now.


Sorry. To the OP, of course.



posted on Sep, 8 2018 @ 02:31 PM
a reply to: odzeandennz
Have you ever worked with AI systems? I mean, not just downloading a framework for neural networks, but really coming up with concepts, doing research, writing the underlying management code?

Are you aware that an AI isn't just smart to begin with? It has to learn, evolve, and gather experience.
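(To make that learning point concrete, here's a minimal sketch, my own illustration rather than verschickter's code: a perceptron starts with random weights, gets answers wrong, and becomes competent only through repeated training experience.)

# A perceptron starts "dumb" (random weights) and learns an AND gate
# purely from repeated experience of examples.
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate

for epoch in range(20):                  # repeated "experiences"
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - out
        w[0] += 0.1 * error * x1         # adjust the connections...
        w[1] += 0.1 * error * x2         # ...not the hardware
        b += 0.1 * error

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])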

Could it be that you're stuck on the false idea that anything precoded can't be true AI because its layout was written by humans? What if I told you there are AIs out there (dismissed as not "true" AI) that are able to write their own optimized code and change the framework they run on?

Our own intelligence is only possible because of our body.
What if we could (and we already do) manipulate that body/brain for the next generation so it will be able to function faster, better, more out of the box?

I get that it's hard to understand if you only know AI from reading and hasty conclusion-drawing. You ask the wrong questions and declare dead ends where there are none.

See, this is why I seldom participate in such threads: too many made-up minds. Too much prejudice because of the BS you read, like:

- "AI can never surpass humans because it's made by humans", forgetting that AI has to learn just like humans do.
- "AI cannot be written by humans or preset code", forgetting that our brain runs on preset code and hardware, so to speak.

Please do your homework, thank you.



posted on Sep, 8 2018 @ 04:59 PM

originally posted by: verschickter


So, thank you for all of the effort you have put into this post.

I realize that what I wrote was NOT in keeping with the actual reality of current AI development.

Also, I did get to read a bit more since the OP... not that a whole 24 hours has made me any more of an expert.


Let me ask this:
Do the early stages of AI development preclude my thesis from the OP from really being a part of the later stages?

To oversimplify... is early AI development so fundamentally black and white that any hypothetical future evolution is 'inoculated' against the kind of influence I postulated?

thanks!



posted on Sep, 8 2018 @ 06:25 PM

originally posted by: stormcell
We humans and mammals also store information in a variety of ways. We maintain relational databases to store information about people (name, location, age, sex, favourite things, pet hates, job, relatives) and locations (navigation maps of different cities and buildings).


Have you heard of space-time synesthesia?
Or of synesthesia as a whole?
If not, you'll find it very interesting.
It was something we looked into while we did research on thought patterns and sensory input.

It's not a disorder or disease; it's a distinctive feature of how one processes and stores information coming in from the senses. The most famous form is audio-visual synesthesia, where audio is perceived as colors in the visual "cortex", parallel to the audio "cortex". Those people can see sound, so to speak. It's a very interesting topic and, funnily enough, about 10-15% of all people store or perceive information from their senses in this synesthetic way.

Something personal but very on topic:
You mentioned color perception. For someone who has a form of synesthesia, it's just normal. Most are very surprised when they learn that it's not, in the sense of:

"Wait, so you're telling me you can't recall the smell, light conditions, mood, inflections of the conversation partner, and how the rest of the day went, when you remember something someone said to you?"

Interestingly, this was the case for me. I always wondered why almost nobody else can remember stuff like I do. But my short-term and face-name memory is the worst. I need, subconsciously, around 2-6 weeks (it's hard to nail down) of processing time until I can recall all that. I can't tell you what I did on 07.08.2018 if you ask me right now; that's not how it works for me. But if I start to track it down and you give me something to anchor on, I'll go backwards through my internal calendar/clock and tell you most of the above, plus other things like my orientation to the sun... it really depends.

This form of synesthesia would be called something like time-space-memory synesthesia.
For example, on my internal calendar, 1 o'clock is January and 12 is December. Years are sorted as discs, and if I had to explain what it looks like, imagine a stack of CDs or clock faces with a slight offset on each disc, so that it makes a curve/spiral upwards. It's not that I can exactly go back six discs and read them out, but I can judge/feel the distance between the current one and the timespan I'm looking at. I get an idea about the years before and after, and it gets more precise. My personal evaluation of a year is expressed/interpreted as a sine wave, where the y-values are a mix of different evaluations: was it a good year or not, and so on.
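(If it helps to see the geometry, here is a toy coordinate mapping for that internal calendar, purely my own illustrative guess at the structure described, not the poster's actual experience.)

# Months map to clock positions (1 o'clock = Jan ... 12 o'clock = Dec);
# years map to discs in a stack, each slightly offset to form a spiral.
import math

def calendar_position(year, month, base_year=2018):
    angle = (month % 12) * (2 * math.pi / 12)  # clock-face angle in radians
    disc = base_year - year                    # how many discs down the stack
    offset = disc * 0.1                        # slight per-disc offset -> spiral
    return angle, disc, offset

print(calendar_position(2016, 7))  # July 2016: clock angle, 2 discs back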

This may come across as way off-topic and/or bold; that's not my intention.
What I want to say is that information storage and processing are very different for everyone. Everyone has their own unique algorithms, thought patterns, and memory-storing techniques.

And so does AI.




posted on Sep, 8 2018 @ 06:46 PM
a reply to: LedermanStudio
You're welcome; it's always nice when someone actually has an open ear for new ideas instead of adopting prejudices.




Let me ask this: Do the early stages of AI development preclude my thesis from the OP from really being a part of the later stages?

Yes and no, in another way. When I wrote that AI does not "feel", I meant it exactly that way. I didn't say it would not be influenced. You have to look at trauma from the information-processing side, not the emotional one. Let me try to explain:

Trauma, and other intense experiences, play a big role in influencing thought patterns as a whole for the future. Impressions made, conclusions drawn, connections made... There is much information to process and derive from.

Compare a traumatizing moment to a burst of information. Imagine that the memory registers of an AI can store not only the information but also small statistics, like throughput and which items get reinforced (re-experienced), etc.

Now try to visualize a three-dimensional square matrix (a checkerboard). Imagine you could compile any kind of statistic from this checkerboard, like heatmaps: heatmaps for throughput, reinforcement, times of experiences.
The single boxes contain strands and other checkerboards of information, too.

Sit back and try to visualize that in your mind: a 3D box, subdivided into equal boxes. Those boxes are themselves also divided into other boxes, matryoshka-style. The underlying concept of how you store things in those boxes, and how you connect it all, is what makes up your personality.

The algorithms that derive useful information and transfer-thoughts from that structure are the level of your intelligence.
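(A rough Python sketch of that structure, my own reading of the description above rather than an actual design: nested cells that record access statistics alongside their contents, from which heatmap-like statistics can be compiled.)

# Nested "matryoshka" memory: each cell stores content, statistics,
# and can hold a whole sub-grid of further cells.
class Cell:
    def __init__(self, content=None):
        self.content = content
        self.throughput = 0   # how often this cell was accessed
        self.reinforced = 0   # how often its content was re-experienced
        self.subgrid = {}     # (x, y, z) -> Cell, matryoshka-style

    def store(self, content):
        if self.content == content:
            self.reinforced += 1   # re-experiencing strengthens the trace
        else:
            self.content = content
        self.throughput += 1

def heatmap(grid, stat="throughput"):
    # Compile one statistic over the whole 3D grid, like a heatmap layer.
    return {pos: getattr(cell, stat) for pos, cell in grid.items()}

grid = {(x, y, z): Cell() for x in range(2) for y in range(2) for z in range(2)}
grid[(0, 1, 0)].store("loud noise")  # a "burst of information"
grid[(0, 1, 0)].store("loud noise")  # re-experienced: reinforcement rises
print(heatmap(grid, "throughput"))
print(heatmap(grid, "reinforced"))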

I hope I explained it in a way that makes sense to you. English is not my first language.



posted on Sep, 8 2018 @ 07:12 PM
a reply to: LedermanStudio

Wow, it's scary to think that AI needs trauma to evolve! I wonder, though: although we humans are the ones creating the AI and training/teaching it, will the robots actually be affected by any of a person's personal feelings if they themselves do not operate on feelings? Maybe if they ever reach the state of being self-aware, this would come into play?

I have enjoyed reading all the input so far because it is interesting and very thought-provoking. I have been trying to research AI for the last couple of years or so, and have kept up with, for one, Sophia the Robot by Hanson Robotics - just to see how "she" is learning and progressing. "She" encounters heckling sometimes but doesn't seem to acknowledge any of it. I always wonder what, if anything, she might be thinking to "herself" when people think she is just a tin can.

Apparently, Elon Musk (Neuralink) is hoping to achieve neural lacing in the future. This technique is supposed to inject a mesh (fabric) into a person's brain; the brain would then grow around it, which would facilitate quicker and better learning, and even, hopefully, provide a better way for humans to keep up with the pace of machine learning so that we do not get left behind. I apologize for my lack of ability to relay highly technical information, lol. I can read it, and mostly understand it, but have a hard time putting it into words.

Elon Musk has also talked many times about the dangers of AI over the last few years, so to me it is strange that he continues to develop this technology with his companies. He says he is doing it "because basically, someone has to - might as well be me". There are those who warn of the dangers of AI and feel that they need to develop "good" AI so that they will hopefully have the knowledge to control it "just in case" things go wrong. So yeah, it would be easy to think that the programmer's/architect's past experiences could very well play into the outcome of AI development/design/purposes.

One subject that is interesting to me is the thought that AI could become self-aware, reach consciousness. Even Geordie Rose, the founder of D-Wave (he works for Kindred now), who helped develop quantum computing, is talking the same way. He wants to have people who are smart enough to find ways to control AI if things go wrong, to control the demons (old ones, entities) he says they are encountering now and which are set to inhabit the super-intelligent AI of the near future.



posted on Sep, 8 2018 @ 08:08 PM
We can't control AI, just as we can't control ourselves.



posted on Sep, 9 2018 @ 12:39 AM

originally posted by: verschickter
a reply to: turbonium1
A machine alone isn't smart or intelligent, but the software that runs on it can be.
The hardware (the matter in itself) in your brain is not smart either. It's the connections that make it smart.

So when you say all that above, it's not false, but it shows your shallow approach to this topic. Maybe you shouldn't talk in absolutes, either.




You are the one who is talking in absolutes, by saying software can be 'smart' or 'intelligent'.


Intelligence refers solely to biological entities, like humans, apes, or dogs.

A simple calculator, by your argument, would be more 'intelligent' at calculations than anyone on Earth.

Calculators are not 'intelligent', neither is software 'intelligent'.

Software is the result of human intelligence, same as a calculator is.


The world's most powerful computers are entirely based on human intelligence. That is why they are correctly referred to as 'powerful', not 'intelligent', computers.

You are referring to more intricate, faster, more capable machines in human terms, simply because they have more ability to access information - which humans put into the machine.

You are twisting the term 'artificial intelligence', which is a misnomer in itself, to mean 'real/natural intelligence', by referring to it as solely 'intelligence'. That is the problem I'm addressing here. Intelligence is a biological term, solely. 'Artificial' intelligence, or 'machine' intelligence, are misnomers. They only mimic something which humans see as 'intelligence'.

The AI you are referring to is based on science fiction movies and TV shows. A computer, or robot, is suddenly 'alive' because it was programmed to 'think' for itself. It instantly sees humans as inferior beings who should be destroyed.

Computers, robots, and such, cannot 'think' for themselves.

There is only 'intelligence'. It is a biological term. Non-living, man-made, objects, or machines, are not 'artificially' intelligent, because they are not intelligent in the first place.

If a robot is programmed to act 'angry' or 'happy', it is because humans have programmed the robot to act 'angry' or 'happy' when humans would react that way. For example, a robo-shopper robot goes to the supermarket and picks items with a scanner, pre-programmed by the human user. Wow, it even 'whistles' while it shops, just like a human - because it was programmed to whistle in the aisles, by humans. Then the robot waits in the checkout line. If the line doesn't move within a pre-set period, the robot begins to get 'angry', just like humans. It raises its 'voice' and moves erratically, just like humans do when they are 'angry'.


You are following a slippery slope here....

You are using the human, biological term 'intelligence' for non-living, man-made machines. It is no accident that the term intelligence was latched onto machines. Nor is it purely by accident that sci-fi movies and TV shows continually show machines/robots with 'superior levels of intelligence', either. It obviously has had the expected result - people such as yourself now fully believe that software has 'intelligence'!!

If you believe that machines have 'intelligence' now, you will accept that every more advanced machine is more intelligent. And if robots are programmed to kill humans, you will believe they are acting on even greater intelligence, which allows them to recognize themselves as living entities. And, of course, they will know they are far more 'intelligent' than humans, who have wars and are destroying the planet. Humans are bad, stupid creatures, and they must be destroyed for the survival of all animals, and robots.


Don't laugh. You're halfway there already.



posted on Sep, 9 2018 @ 12:47 AM
AI does not happen unless there is an "out of the blue", unsolicited thought. Everything else is mimicry using data.



posted on Sep, 9 2018 @ 01:58 AM
AI is intelligent in the sense that it does something that, if done by a human or an animal, would require intelligence.

That can be done in many ways: from simple code that is tailored just right for its job, to mimicking some broader human and animal capabilities, to matching those, to going beyond animal and human capabilities.

Being man-made or nature-made doesn't matter; it's the capabilities that make something intelligent.

Can software play chess or Go? Would that require intelligence if done by a human or an animal? Then that is AI.

Can a plane do what we would call flying if done by an animal? Then that is artificial flying.
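(To make the chess/Go example concrete: below is a minimal game-playing sketch of the kind meant here, my illustration rather than dude1's. A minimax search plays perfect tic-tac-toe, a task we would call intelligent if a person did it.)

# Minimax for tic-tac-toe: exhaustive lookahead picks the best move.
WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    if " " not in b:
        return 0, None  # draw
    moves = []
    for i, c in enumerate(b):
        if c == " ":
            b[i] = player                                 # try the move
            score, _ = minimax(b, "O" if player == "X" else "X")
            b[i] = " "                                    # undo it
            moves.append((score, i))
    return max(moves) if player == "X" else min(moves)    # X maximizes, O minimizes

board = list("XO  X  O ")   # X on 0 and 4, O on 1 and 7; X to move
print(minimax(board, "X"))  # (1, 8): X wins by completing the 0-4-8 diagonal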



posted on Sep, 9 2018 @ 02:32 AM

originally posted by: TruthJava
a reply to: LedermanStudio

Wow, it's scary to think that AI needs trauma to evolve! I wonder, though: although we humans are the ones creating the AI and training/teaching it, will the robots actually be affected by any of a person's personal feelings if they themselves do not operate on feelings? Maybe if they ever reach the state of being self-aware, this would come into play?

I have enjoyed reading all the input so far because it is interesting and very thought-provoking. I have been trying to research AI for the last couple of years or so, and have kept up with, for one, Sophia the Robot by Hanson Robotics - just to see how "she" is learning and progressing. "She" encounters heckling sometimes but doesn't seem to acknowledge any of it. I always wonder what, if anything, she might be thinking to "herself" when people think she is just a tin can.

Apparently, Elon Musk (Neuralink) is hoping to achieve neural lacing in the future. This technique is supposed to inject a mesh (fabric) into a person's brain; the brain would then grow around it, which would facilitate quicker and better learning, and even, hopefully, provide a better way for humans to keep up with the pace of machine learning so that we do not get left behind. I apologize for my lack of ability to relay highly technical information, lol. I can read it, and mostly understand it, but have a hard time putting it into words.

Elon Musk has also talked many times about the dangers of AI over the last few years, so to me it is strange that he continues to develop this technology with his companies. He says he is doing it "because basically, someone has to - might as well be me". There are those who warn of the dangers of AI and feel that they need to develop "good" AI so that they will hopefully have the knowledge to control it "just in case" things go wrong. So yeah, it would be easy to think that the programmer's/architect's past experiences could very well play into the outcome of AI development/design/purposes.

One subject that is interesting to me is the thought that AI could become self-aware, reach consciousness. Even Geordie Rose, the founder of D-Wave (he works for Kindred now), who helped develop quantum computing, is talking the same way. He wants to have people who are smart enough to find ways to control AI if things go wrong, to control the demons (old ones, entities) he says they are encountering now and which are set to inhabit the super-intelligent AI of the near future.



Your points relate to my last post.

Have you ever noticed that they always talk about how machines have become so highly advanced that soon we may develop machines which will be aware of their own existence?

Many people believe it's possible, even that it will actually happen... within the near future.

More and more people are becoming convinced it is possible, and will happen, because the media shows us robots that have very realistic human faces, speak like humans, and 'respond' to humans in conversation!

All the 'experts' say it is possible that robots will become aware of themselves, because a robot now 'responds' to people like a human, without anyone 'programming' it to respond...

In fact, the responses are selections from human phrases loaded into the robot, which uses voice/speech-recognition technology to choose an appropriate response.
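(For what it's worth, a scripted responder of the kind described can be sketched in a few lines. This is my illustration of the general technique, not how Sophia actually works: keywords found in the recognized speech select a canned phrase.)

# A keyword-matching responder: canned phrases, no understanding involved.
import random

RESPONSES = {
    "hello": ["Hello! Nice to meet you.", "Hi there!"],
    "robot": ["I prefer the term 'artificial being'."],
    "weather": ["I don't go outside much."],
}

def respond(heard):
    for keyword, phrases in RESPONSES.items():
        if keyword in heard.lower():
            return random.choice(phrases)       # pick a canned phrase
    return "That's interesting. Tell me more."  # generic fallback

print(respond("Are you a robot?"))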

None of this makes a robot more 'intelligent' than before, because it was not intelligent to begin with.


Anyway, a lot of people think this means robots are becoming very, very intelligent now.


Every time they talk about how robots could one day become so 'intelligent' that they could become aware of their existence - when they have no intelligence to begin with - it's:

'We must be very, very careful!!!'

'Our very humanity is at stake here!'


All those 'experts' say this is something that could happen in the near future - robots will come alive. Look at how 'smart' and 'human' we've made them appear to be!

However, if they DO come alive, we do not know what they will 'think' about us, their 'creators'!!

So we must be very careful and have safeguards built into our robots to prevent them from harming people, because that's the first thing robots will want to do soon after they become self-aware - destroy all humans, who are evil.

These robots -

....will have incredible strength, speed, and will be virtually indestructible...

...or will infiltrate worldwide databases, cause chaos all over Earth, and it would be impossible for us to STOP THEM!!



I'll bet we'll have many types of robots that 'come alive'. At first, we'll have 'good' robots. Soon after, we'll have sinister, doomsday-type robots.








posted on Sep, 9 2018 @ 03:14 AM
a reply to: LedermanStudio

I've been watching this interview tonight, and what a great matching of personalities!
(I listen to Rogan often, and really dig Musk as an individual.)
That said,
notice Elon commenting on how he thinks AI will be used by humans as a weapon against each other...

Everyone hypes up the idea of AI becoming "aware" and turning on us...
Or hackers infiltrating high-level AI and bringing it under their control.

Both of those perceived scenarios, if pressed into the mass consciousness enough, could EASILY be used as an "official story" behind an intentional use of AI against the people...

All sorts of bad things being done, i.e. robots turning on humans, stock market/cryptocurrency crashes, etc., in a false flag, while humans actually control it and blame AI or hackers... etc.



posted on Sep, 9 2018 @ 03:37 AM

originally posted by: dude1
AI is intelligent in the sense that it does something that, if done by a human or an animal, would require intelligence.

That can be done in many ways: from simple code that is tailored just right for its job, to mimicking some broader human and animal capabilities, to matching those, to going beyond animal and human capabilities.

Being man-made or nature-made doesn't matter; it's the capabilities that make something intelligent.

Can software play chess or Go? Would that require intelligence if done by a human or an animal? Then that is AI.

Can a plane do what we would call flying if done by an animal? Then that is artificial flying.



Machines have no intelligence of their own.

Human intelligence has created millions of different machines, which have outperformed humans...

Intelligence is building machines that calculate, process, sort, etc., far beyond human capabilities. Intelligence is building robots that resemble humans, talk like humans, and walk like humans.

A machine sorts mail faster than a human. A human needs intelligence to sort mail; the machine doesn't use intelligence to sort it. Intelligent humans created a machine that sorts mail faster, without intelligence.
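(A toy illustration of that distinction, mine rather than turbonium1's: a mail sorter can be nothing more than a fixed rule applied quickly.)

# Mechanical "sorting" without intelligence: route mail purely by ZIP prefix.
mail = [("parcel A", "90210"), ("letter B", "10001"), ("card C", "90001")]

bins = {}
for item, zip_code in mail:
    bins.setdefault(zip_code[:2], []).append(item)  # fixed rule, no judgment

print(bins)  # {'90': ['parcel A', 'card C'], '10': ['letter B']}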



Humans cannot create life. They cannot create a machine that is self-aware, that 'lives'.



posted on Sep, 9 2018 @ 03:50 AM

originally posted by: prevenge
a reply to: LedermanStudio

I've been watching this interview tonight, and what a great matching of personalities!
(I listen to Rogan often, and really dig Musk as an individual.)
That said,
notice Elon commenting on how he thinks AI will be used by humans as a weapon against each other...

Everyone hypes up the idea of AI becoming "aware" and turning on us...
Or hackers infiltrating high-level AI and bringing it under their control.

Both of those perceived scenarios, if pressed into the mass consciousness enough, could EASILY be used as an "official story" behind an intentional use of AI against the people...

All sorts of bad things being done, i.e. robots turning on humans, stock market/cryptocurrency crashes, etc., in a false flag, while humans actually control it and blame AI or hackers... etc.



That's what I see going on here, too.

People have been pre-conditioned to believe that machines, or robots, have 'intelligence'.

Intelligence is associated with a brain, or a mind. So robots have a mind, a brain. Robots 'think'.


While all of this is complete nonsense, it's believed. All the movies show it. Now all the 'experts' say it will happen, too!


So many gullible people, so easy to fool.



posted on Sep, 9 2018 @ 03:52 AM

originally posted by: turbonium1
You are the one who is talking in absolutes, by saying software can be 'smart' or 'intelligent'.

This is a joke, right?



posted on Sep, 9 2018 @ 04:40 AM

originally posted by: verschickter
a reply to: LedermanStudio
You're welcome; it's always nice when someone actually has an open ear for new ideas instead of adopting prejudices.


Let me ask this: Do the early stages of AI development preclude my thesis from the OP from really being a part of the later stages?

Yes and no, in another way. When I wrote that AI does not "feel", I meant it exactly that way. I didn't say it would not be influenced. You have to look at trauma from the information-processing side, not the emotional one. Let me try to explain:

Trauma, and other intense experiences, play a big role in influencing thought patterns as a whole for the future. Impressions made, conclusions drawn, connections made... There is much information to process and derive from.


I do understand what you mean.
I'm an illustrator, and I can think in pictures far better than I can explain what I see.

Also, my thought was that machines might learn that trauma is a component of spontaneous creativity.

Since WE won't be able to inflict any, the AI might try to traumatize itself...

Which could create some DAMN WEIRD circumstances...


