I think I get what you're saying. To sum it up "cry wolf too many times and when the wolf really comes no one will listen."
originally posted by: AaarghZombies
a reply to: kwakakev
Half of these problems aren't real, they're liberals wringing their hands over hypotheticals.
originally posted by: ChaoticOrder
a reply to: mbkennel
Until it isn't. Occasionally, exceptionally creative philosophers introduce novel concepts and arguments that humans didn't clearly have before, or name and clarify them in an original way. These are not only novel in a statistical sense (which any language model with stochastic sampling can produce) but novel in a conceptual way, and coherent.
I think it's extremely rare for such situations to occur; if we look at almost any scientific theory, we see it was built from many previous concepts. Every thought I have has some correlation to my past thoughts, and every "novel" concept I develop is a combination of many simpler concepts. But it's certainly possible our brain utilizes random biological processes to generate random thoughts which are truly novel/original. Random number generators allow computers to do the same thing; however, I see no good reason why that is required for sentience.
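The "stochastic sampling" mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular model's implementation: the function name and the candidate scores are made up for the example. The point is that a random draw over weighted options yields statistically novel output without implying conceptual novelty.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from unnormalized scores (logits).

    Higher temperature flattens the distribution, making
    lower-probability (statistically 'novel') picks more likely.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Random weighted choice: this is the only source of "novelty"
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Illustrative scores favoring token 0; repeated draws still
# occasionally pick the unlikely tokens 1 and 2.
random.seed(0)
picks = [sample_next_token([2.0, 1.0, 0.1], temperature=1.5) for _ in range(1000)]
```

The draw is driven entirely by a pseudo-random number generator, which is the sense in which a computer can already produce "random thoughts" without that implying anything about sentience.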
Without a body and proprioceptive receptors can it even happen?
originally posted by: ChaoticOrder
a reply to: Grimpachi
I think I get what you're saying. To sum it up "cry wolf too many times and when the wolf really comes no one will listen."
That's part of what I'm saying, the other important point I'm trying to make is that we shouldn't be so confident in our presumptions about current AI. We don't truly understand the nature of consciousness and we don't truly understand what is happening inside a massive artificial neural network trained on terabytes of data. But we can discern they have some conceptual framework from which they can reason and form arguments. We are building the foundations for truly self-aware AI and we need to acknowledge that instead of treating it like a joke.
originally posted by: ChaoticOrder
a reply to: nugget1
I believe the only realistic way to merge with machine intelligence would be to digitize the human mind. If we could fully simulate every aspect of a real human brain, then I see no reason that simulation wouldn't produce sentience.
originally posted by: olaru12
a reply to: Archivalist
I agree with Google's judgment that this chatbot is not fully self aware and not fully sentient.
How aware would an AI have to be to realize it might be prudent to play stupid so the humans won't pull the plug?
originally posted by: TheAlleghenyGentleman
Don’t worry. This is also happening.
“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”
Living skin for robots
originally posted by: nugget1
If/when humans can interface with AI will that make them sentient? Will they see mankind as the greatest threat to earth and devise a plan to deal with said threat?
originally posted by: Archivalist
originally posted by: olaru12
a reply to: Archivalist
I agree with Google's judgment that this chatbot is not fully self aware and not fully sentient.
How aware would an AI have to be to realize it might be prudent to play stupid so the humans won't pull the plug?
Very young children do not yet understand how to lie, so they don't.
The ability to lie and deceive is something we consider to be an intellectual milestone, during childhood development.
I see no reason to assume that an AI, built with the intent of creating human-like intelligence, could skip that developmental step. We are trying to build it, to mimic us, and that is a trait we have.
originally posted by: buddha
They can teach a parrot to say a word in response to a word.
You can teach it to say "apple" when it sees one.
The parrot has no idea what the word means.
The AI just responds with millions of stored responses.
I think some humans are like this too!
Some people don't have emotions!
A LOT of humans don't have empathy and sympathy.
AIs just do what they have learned.
originally posted by: Archivalist
a reply to: olaru12
Right, because we can design something more intelligent than we are, immediately OOB.
I feel like you're missing my point.