originally posted by: dfnj2015
a reply to: LedermanStudio
The search for hard AI is based on bad values. Our imperfections are our greatest strengths. They are the source of our creativity and of unimaginable ingenuity.
The problem with computer science is that it is a perfect science. You can go back in time and repeat the exact same experiment.
In reality, time never repeats in exactly the same way. The unfolding of the Universe is a one way ticket.
Here's a really good discussion of why the Von Neumann architecture, with its fetch-decode-execute instruction cycle, will never achieve hard AI:
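For readers who want to see what that instruction cycle actually looks like, here is a minimal sketch of a toy Von Neumann machine in Python. The instruction set and names are invented for illustration, but it shows the determinism the poster is pointing at: program and data share one memory, and replaying the same program always yields the exact same result.

```python
# A toy Von Neumann machine: instructions and data live in one flat
# memory, and a fetch-decode-execute loop steps through it. Entirely
# hypothetical instruction set, for illustration only.

def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        instr, operand = memory[pc]   # fetch
        pc += 1
        if instr == "LOAD":           # decode + execute
            acc = memory[operand][1]
        elif instr == "ADD":
            acc += memory[operand][1]
        elif instr == "HALT":
            return acc

program = [
    ("LOAD", 3),     # acc = value in cell 3
    ("ADD", 4),      # acc += value in cell 4
    ("HALT", None),
    ("DATA", 2),     # cell 3
    ("DATA", 40),    # cell 4
]

# The cycle is fully deterministic: repeating the "experiment"
# reproduces the exact same result every time.
assert run(program) == run(program) == 42
```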
originally posted by: verschickter
a reply to: Byrd
I would suggest that you do more reading on this... not videos. Speculating on things without learning all about them has caused a lot of harm in the world.
Is this part directed at me or the OP? I'm a bit confused now.
originally posted by: verschickter
originally posted by: stormcell
We humans and other mammals also store information in a variety of ways. We maintain relational databases to store information about people (name, location, age, sex, favourite things, pet hates, job, relatives) and locations (navigation maps of different cities and buildings).
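That relational-database analogy maps directly onto code. Here is a minimal sketch using Python's built-in sqlite3; the schema and sample rows are invented purely for illustration:

```python
import sqlite3

# A minimal sketch of the "mental relational database" analogy above.
# Schema and sample rows are invented for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (name TEXT, location TEXT, age INTEGER, job TEXT)")
db.execute("CREATE TABLE locations (name TEXT, city TEXT, notes TEXT)")
db.execute("INSERT INTO people VALUES ('Alice', 'London', 34, 'engineer')")
db.execute("INSERT INTO locations VALUES ('office', 'London', 'third floor')")

# Recall is a relation: link a person to the places associated
# with them, much as the post describes.
row = db.execute("""
    SELECT p.name, l.name FROM people p
    JOIN locations l ON p.location = l.city
""").fetchone()
print(row)  # ('Alice', 'office')
```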
Let me ask this: do the early stages of AI development seem to preclude my thesis from the OP from really being part of the later stages?
originally posted by: verschickter
a reply to: turbonium1
A machine alone isn't smart or intelligent, but the software that runs on it can be.
The hardware (the matter in itself) in your brain is not smart either. It's the connections that make it smart.
So when you say all that above, it's not false, but it shows a shallow approach to this topic. Maybe you shouldn't talk in absolutes, either.
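That "connections, not matter" point is literally how artificial neural networks behave: the same code on the same hardware does nothing interesting until the connection weights are set. A minimal sketch, with hand-picked weights chosen for illustration, where all of the "smartness" lives in the weights:

```python
# The "hardware" (this code) is identical for every network; only the
# connection weights differ. The weights below are hand-picked so the
# network computes XOR; with different weights the identical code
# computes something else entirely.

def step(x):
    return 1 if x > 0 else 0

def forward(x1, x2, w):
    h1 = step(w["h1"][0] * x1 + w["h1"][1] * x2 + w["h1"][2])
    h2 = step(w["h2"][0] * x1 + w["h2"][1] * x2 + w["h2"][2])
    return step(w["out"][0] * h1 + w["out"][1] * h2 + w["out"][2])

xor_weights = {
    "h1":  (1, 1, -0.5),   # fires if either input is on (OR)
    "h2":  (1, 1, -1.5),   # fires only if both inputs are on (AND)
    "out": (1, -1, -0.5),  # OR and not AND = XOR
}

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", forward(a, b, xor_weights))
```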
originally posted by: TruthJava
a reply to: LedermanStudio
Wow, it's scary to think that AI needs trauma to evolve! I wonder, though: although we humans are the ones creating the AI and training/teaching it, will the robots actually be affected by any of a person's personal feelings if they themselves do not operate on feelings? Maybe if they ever reach the state of being self-aware this would come into play?
I have enjoyed reading all the input so far because it is interesting and very thought-provoking. I have been trying to research AI for the last couple of years, and have kept up with, for one, Sophia the Robot by Hanson Robotics - just to see how "she" is learning and progressing. "She" encounters heckling sometimes but doesn't seem to acknowledge any of it. I always wonder what, if anything, she might be thinking to "herself" when people think she is just a tin can.
Apparently, Elon Musk (Neuralink) is hoping to achieve neural lacing in the future. This technique is supposed to inject a mesh (fabric) into a person's brain; the brain would then grow around it, facilitating quicker and better learning, and hopefully providing a better way for humans to keep up with the pace of machine learning so that we do not get left behind. I apologize for my lack of ability to relay highly technical information lol. I can read it, and mostly understand it, but have a hard time putting it into words.
Elon Musk has also talked many times over the last few years about the dangers of AI, so to me it is strange that he continues to develop this technology with his companies. He says he is doing it "because basically, someone has to - might as well be me". There are those who warn of the dangers of AI and feel that they need to develop "good" AI so that they will have the knowledge to control it if things go wrong. So yeah, it would be easy to think that the programmer's/architect's past experiences could very well play into the outcome of AI development/design/purposes.
One subject that is interesting to me is the thought that AI could become self-aware, reach consciousness. Even Geordie Rose, the founder of D-Wave (he works for Kindred now), who helped develop quantum computing, is talking the same way. He wants to have people who are smart enough to find ways to control AI if things go wrong - to control the demons (old ones, entities) he says they are encountering now and who are set to inhabit the super-intelligent AI of the near future.
originally posted by: dude1
AI is intelligent in the sense that it does something that, if done by a human or animal, would require intelligence.
That can be done in many ways, from simple code written just right for its job, to mimicking some broader human and animal capabilities, to matching those, to going beyond animal and human capabilities.
Being man-made or nature-made doesn't matter; it's the capabilities that make it intelligent.
Can software play chess or Go? Would that require intelligence if done by a human or animal? Then that is AI.
Can a plane do what we would call flying if done by an animal? Then that is artificial flying.
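For anyone curious what "software playing a game" looks like underneath, here is a minimal sketch of minimax search, the classic technique behind chess and Go engines, applied to a deliberately trivial game (take 1-3 stones from a pile; whoever takes the last stone wins) so the whole engine fits in a few lines. The game choice is just for illustration:

```python
# Minimax on a trivial subtraction game. The point echoes the post
# above: the program exhibits "intelligent" play through exhaustive
# search, with no understanding involved.

def best_move(stones, maximizing=True):
    """Return (score, move) from the first player's perspective:
    +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None
    moves = [m for m in (1, 2, 3) if m <= stones]
    results = [(best_move(stones - m, not maximizing)[0], m) for m in moves]
    return max(results) if maximizing else min(results)

score, move = best_move(10)
print(f"From 10 stones, take {move} (forced win: {score == 1})")
```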
originally posted by: prevenge
a reply to: LedermanStudio
I've been watching this interview tonight, and what a great matching of personalities!
(I listen to Rogan often, and really dig Musk as an individual)
That said,
Notice Elon commenting on how he thinks AI will be used by humans as a weapon against each other...
Everyone hypes up the idea of AI becoming "aware" and turning on us...
Or hackers infiltrating high-level AI and bringing it under their control.
Both of those perceived scenarios, if pressed into the mass consciousness enough, could EASILY be used as an "official story" behind an intentional use of AI against the people...
All sorts of bad things being done - robots turning on humans, stock market/cryptocurrency crashes, etc. - in a false flag, while humans actually control it and blame AI or hackers... etc...
originally posted by: turbonium1
You are the one who is talking in absolutes, by saying software can be 'smart' or 'intelligent'.
originally posted by: verschickter
a reply to: LedermanStudio
You're welcome, it's always nice if someone actually has an open ear to new ideas instead of adopting prejudices.
Let me ask this: do the early stages of AI development seem to preclude my thesis from the OP from really being part of the later stages?
Yes and no. In another way. When I wrote that AI does not "feel", I meant it exactly that way. I didn't say it would not be influenced. You have to look at trauma from the information-processing side, not the emotional one. Let me try to explain:
Trauma, and other intense experiences, play a big role in shaping thought patterns as a whole for the future. Impressions made, conclusions drawn, connections formed... much information to process and derive from.
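Read purely from the information-processing side, that maps onto something like a value update in reinforcement-style learning: one high-magnitude experience shifts a learned value far more than many routine ones, biasing every future decision. A minimal sketch; the update rule and numbers are invented for illustration, not anything proposed in the thread:

```python
# Trauma viewed purely as information processing, as described above:
# one high-magnitude outcome dominates a learned value that a hundred
# routine experiences had settled, biasing all future choices.

def update(value, outcome, learning_rate=0.1):
    """Simple exponential-moving-average value update."""
    return value + learning_rate * (outcome - value)

value_of_dark_alleys = 0.0

# A hundred routine, mildly positive walks...
for _ in range(100):
    value_of_dark_alleys = update(value_of_dark_alleys, 0.1)

print(round(value_of_dark_alleys, 3))   # ~0.1: neutral-to-positive

# ...then a single traumatic outcome.
value_of_dark_alleys = update(value_of_dark_alleys, -50.0)

print(round(value_of_dark_alleys, 3))   # ~-4.91: one event dominates
```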