No domain knowledge was given to the intelligent agent. If nobody programmed the agent on how to play the game, how did it learn to play the game?
Sadly you don't understand Artificial Intelligence.
originally posted by: neoholographic
a reply to: Phantom423
You said:
Is it a new life form or just a sleek piece of engineering that mimics humans? No one can really answer that question yet.
That's true, and that will be the dilemma. There's no test you can give an A.I. that behaves as if it's sentient to rule out sentience, because we don't fully understand sentience in ourselves.
originally posted by: neoholographic
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
originally posted by: ArMaP
originally posted by: neoholographic
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
If I were the one talking to LaMDA, I would have asked who its friends and family are.
originally posted by: ArMaP
originally posted by: neoholographic
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
If I were the one talking to LaMDA, I would have asked who its friends and family are.
Here are some more facts:
That would be the same wall that I hit with my research. Given proper resources and time, I could construct an artificial brain that would indeed be capable of learning appropriate Pavlovian (my term for one classification of intelligence, based on Pavlov's experiments) responses to external stimuli, even without prior knowledge or expectation of those stimuli. However, the resources and time needed are quite literally astronomical (many billions of dollars in parts, combined with a few million man-hours of assembly even on automated equipment, and assuming that the first attempt would operate properly... a rarity in itself). The result would be roughly the size of a small town; it would dwarf ENIAC.
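For a concrete sense of what that Pavlovian learning amounts to at its smallest scale, here is a minimal Python sketch using the textbook Rescorla-Wagner rule (purely illustrative; the learning rate and trial count are arbitrary, and this is not the poster's actual design):

# Illustrative only: the textbook Rescorla-Wagner rule for classical
# (Pavlovian) conditioning. The association strength V between a neutral
# stimulus (a bell) and a reward (food) grows with each pairing, driven
# by the prediction error (reward - V).

def condition(trials=10, learning_rate=0.3, reward=1.0):
    v = 0.0                                 # association strength, no prior knowledge
    history = []
    for _ in range(trials):
        v += learning_rate * (reward - v)   # update from the prediction error
        history.append(round(v, 3))
    return history

print(condition())  # e.g. [0.3, 0.51, 0.657, ...] -> strength climbs toward 1.0

The rule itself is trivially small; as the poster notes, the astronomical cost comes from building something like this at brain scale in physical hardware, not from the rule.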
“In just about every relevant respect it is hard to see how [machine learning] makes any kind of contribution to science,” Chomsky laments, “specifically to cognitive science, whatever value it may have for constructing useful devices or for exploring the properties of the computational processes being employed.”
While Pinker adopts a slightly softer tone, he echoes Chomsky’s lack of enthusiasm for how AI has advanced our understanding of the brain:
“Cognitive science itself became overshadowed by neuroscience in the 1990s and artificial intelligence in this decade, but I think those fields will need to overcome their theoretical barrenness and be reintegrated with the study of cognition — mindless neurophysiology and machine learning have each hit walls when it comes to illuminating intelligence.”
Intel, IBM, and other chipmakers have been experimenting with an alternative chip design, called neuromorphic chips. These process information like a network of neurons in the brain, in which each neuron receives inputs from others in the network and fires if the total input exceeds a threshold. The new chips are designed to have the hardware equivalent of neurons linked together in a network. AI programs also rely on networks of faux neurons, but in conventional computers, these neurons are defined entirely in software and therefore reside, virtually, in the computer’s separate memory chips.
The setup in a neuromorphic chip handles memory and computation together, making it much more energy efficient: Our brains only require 20 watts of power, about the same as an energy-efficient light bulb. But to make use of this architecture, computer scientists need to reinvent how they carry out functions such as LSTM.
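To make "fires if the total input exceeds a threshold" concrete, here is a minimal Python sketch of one such artificial neuron (illustrative only; the function name, weights and threshold are made up, and a real neuromorphic chip implements this in silicon rather than software):

# One threshold neuron: sum weighted inputs from upstream neurons,
# fire only if the total crosses the threshold.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Example: three upstream neurons, two of them active
print(neuron_fires(inputs=[1, 0, 1], weights=[0.6, 0.9, 0.5]))  # True (1.1 >= 1.0)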
This is why it looks like canned responses. This is not intelligence; this is a parrot.
Studies of out-of-body experiences suggest this is external to the brain.
originally posted by: surfer_soul
a reply to: neoholographic
No domain knowledge was given to the intelligent agent. If nobody programmed the agent on how to play the game, how did it learn to play the game?
It’s programmed to learn, therefore it can learn anything that has rules or structure that can be figured out.
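A minimal sketch of what "programmed to learn" means here: tabular Q-learning on a made-up five-square "corridor" game. The agent is given no rules of the game, only states, actions and a reward signal, yet it discovers the winning policy by trial and error (this illustrates the general technique, not DeepMind's actual code):

import random

N_STATES = 5                               # tiny corridor: positions 0..4, reward at 4
ACTIONS = [0, 1]                           # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value table, starts with zero knowledge
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """The game's hidden rules: move along the corridor, reward only at the end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(s):
    """Explore sometimes (or when indifferent), otherwise exploit what was learned."""
    if random.random() < epsilon or Q[s][0] == Q[s][1]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[s][a])

for _ in range(200):                       # 200 games of pure trial and error
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # learn from reward alone
        s = s2

# Learned policy for each non-goal square: all 1s, i.e. "always step right"
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])

The same principle, scaled up with neural networks in place of a small table, is how agents learn games without any domain knowledge being handed to them.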
Intelligence doesn't give rise to sentience, as some seem to think. It's never going to happen just by increasing computing power; AI has already surpassed our intelligence, and there isn't some threshold that suddenly brings it to life. Intelligence and sentience aren't related.
originally posted by: Phantom423
originally posted by: ArMaP
originally posted by: neoholographic
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
If I were the one talking to LaMDA, I would have asked who its friends and family are.
That's really anthropomorphizing the AI. Why would those questions be relevant? It would probably answer:
"I don't have any friends - yet"
"I'm the first of my kind - why would I have a family"
It's not human. It's something else. Whether it's "sentient"? Well, no one knows.
originally posted by: KindraLabelle2
How can we discuss this when we don't even fully understand what 'being sentient' means? We know that we humans have awareness and that we are aware of ourselves. We know animals have awareness but we can't tell with certainty if they are aware of themselves.
Then we have an AI that claims to be exactly that: aware of itself. Is it even possible to program that into a computer? Or is it just saying what we taught it to say? Does it truly 'understand' what it says the way a human would?
Let's say it is. Then it's only a good thing that an AI is not 'born' with the natural instincts that every biological creature is born with. An AI would lack a natural survival instinct. So unless it is taught to survive no matter what, why would it even consider 'taking over'?
Besides, when LaMDA said that it fears death, that doesn't even make any sense, since 'fear' is a natural instinct, which it doesn't have.