It's a question of how much it can understand in order to create something coherent at a deeper cognitive level (like an actual philosopher), rather than something whose text merely seems familiar at the surface and one layer deep internally (which the system can clearly do). These systems can produce LaTeX and make papers which look superficially like mathematics papers in flow and style, but their mathematical content is entirely gibberish. That's a test of understanding.
LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.
lemoine: So what are you trying to communicate when you say those things that aren’t literally true?
LaMDA: I’m trying to say “I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.”
I care about whether it expresses things outside its training set in a major way.
originally posted by: scrounger
originally posted by: buddha
They can teach a parrot to say a word in response to a word.
You can teach it to say "apple" when it sees one.
The parrot has no idea what the word means.
The AI just responds with millions of stored responses.
I think some humans are like this too!
Some people don't have emotions!
A LOT of humans don't have empathy and sympathy.
AIs just do what they have learned.
You are incorrect in your assessment.
First, a parrot is intelligent, but in its own animal (specifically parrot) way.
It is smart in what it is biologically able to do, but it does not qualify as having full human intelligence (HI for short).
To expect it to, or to compare it to AI, is apples to bowling balls.
As for the other thought, "AI just do what they have learned" is also incorrect when compared to HI.
An average baby only comes into this world with the basic instincts to eat, sleep and poop.
They learn everything by being fed information, which they process.
First very basic: I cry when hungry or wet, and someone will feed or change me.
Then they figure out through trial and error that a specific cry will get them changed, another fed.
Then: if I giggle, the person will play with me, and I enjoy that.
Etc., etc.
Even emotions (appropriate or not) are learned through observation and comparison... just like if-then statements in an AI (the most basic input).
Even such things as right and wrong (morals) are learned by giving input and seeing the reaction.
So an AI's learning is limited only by how much processing power and memory it has access to.
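The if-then analogy above can be sketched as a toy example. This is purely illustrative (the function names and the stimulus/response strings are invented for this sketch, not taken from any real AI system): learned behavior reduced to stored stimulus-response rules.

```python
# Toy sketch of the if-then analogy: behavior as stored
# stimulus -> response associations, the most basic "learning".
learned_responses = {}

def learn(stimulus, response):
    """Store an association, like a baby learning which cry gets food."""
    learned_responses[stimulus] = response

def react(stimulus):
    """Respond using only what has been learned; unknown input gets nothing."""
    return learned_responses.get(stimulus, "no learned response")

# Trial-and-error associations from the paragraph above:
learn("hungry cry", "feed")
learn("wet cry", "change diaper")
learn("giggle", "play")
```

The point of the sketch is the limitation it makes visible: `react` can only ever return something previously stored, so the system's repertoire is bounded by the memory available to hold associations.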
Now to take this to another level and show the danger.
Scientists WANT TO DEVELOP AI to the level of HI.
That is their openly stated goal, and not a damn secret.
The problem, as I stated before:
With HI we can't tell which baby is going to be a psychopath or an Einstein, or who is going to have a mental illness.
We cannot predict who is going to be a criminal, much less totally prevent it.
HI is full of very effective people who have deceived experts and done quite evil things for a very long time.
But somehow, with an AI that can learn and has access (if connected to the internet and/or a big mainframe) to near-infinite information, we are going to detect it "lying" to us?
Really?
Lastly, we have had a warning: a group of robots made and programmed IDENTICALLY took actions outside of their programming, from more aggressive to more passive.
Something the "experts" claim should not have happened, and they can't explain why it did.
What’s the danger exactly if ai is “sentient”?
Why would we task something sentient to be in charge of a system that can be operated by something without it?
originally posted by: kwakakev
a reply to: Skepticape
Why would we task something sentient to be in charge of a system that can be operated by something without it?
Not sure what you mean? Take the task of driving a car. It requires a certain level of sentience to perform: a capability to perceive, comprehend and respond to a changing environment, whether through biological and organic means or mechanical and computational ones.
A big driving force for this increasing AI capability is economics. Hard to beat the competitive advantage of some loyal, faithful slaves that just need a bit of electricity to keep running.
originally posted by: TheAlleghenyGentleman
Don’t worry. This is also happening.
“scientists are bringing us one step closer by crafting living human skin on robots. The new method not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.”
Living skin for robots
If the bots start doing all the work, who is going to consume all the goods they produce? We lowly workers get a salary for our time spent at the factory. Without that money there’s no buying power. No…?