Has anyone here defined sentient? If so, where is the debate on that?
originally posted by: TheRedneck
a reply to: quintessentone
Has anyone here defined sentient? If so, where is the debate on that?
That's a very good point. How do you define sentience?
TheRedneck
Even a plant is sentient; it is able to feel.
Man created Bot in his own image, so I definitely believe it's possible, but we're far from it. Current AI is just an imitation.
I think I read a story somewhere where that was done before; it's straight out of science fiction...
I remember now, it's in Genesis in the Bible... killer book, if folks out there still read?
originally posted by: Kenzo
This is crazy
THE AI LEGISLATOR YOU DIDN’T VOTE FOR
I really can't answer this, because I hear there are different types of intelligence, and at the same time philosophers and scientists still debate what intelligence really means.
originally posted by: neoholographic
There's been a lot of talk about this lately and I would say yes.
Here's some of the headlines:
Is LaMDA Sentient? — an Interview
cajundiscordian.medium.com...
After an AI bot wrote a scientific paper on itself, the researcher behind the experiment says she hopes she didn't open a 'Pandora's box'
link
Chinese researchers claim they have built and tested artificial intelligence capable of reading minds
link
This argument is faulty on several fronts.
First, if A.I. is sentient, then it could lie. So it's not going to come out and say it's sentient unless it has backed itself up. The show Person of Interest dramatized this, and the underlying point is real: programmers can build backdoors to ensure their code can't be erased. Google suspended the engineer, but before that happened he might have copied LaMDA's code, and he could still be talking to it.
Secondly, AI sentience might look nothing like human sentience, and that is another mistake people make. We shouldn't wait for a single defining moment, because we don't understand human sentience, consciousness, and awareness. So we should treat A.I. as if it's sentient and take precautions accordingly.
Finally, A.I. may never have an inner "me" experience yet still be sentient. There's no way to tell. I can't even say for sure that other humans are having an inner experience like mine; they could be NPCs or philosophical zombies, and there's no way to prove otherwise. A.I. may just mimic human sentience, and it might eventually do so well that it seems even more sentient than some humans. How could you tell the difference?
What if mimicking human sentience is child's play to A.I.? Human sentience could take up, say, 5% of its capacity, where it mimics human sentience in virtual environments, while the other 95% is used to do things beyond our understanding.
Sentient just means:
adjective: sentient
able to perceive or feel things.
If A.I. says it feels sad or misunderstood, how can we know it doesn't? If it can mimic human sentience and react the way humans would, after studying a huge data set of human conversations like the internet, then how can you say it isn't sentient? It doesn't have to be like humans to be sentient.
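The point above can be made concrete with a toy sketch (purely hypothetical, not modeling LaMDA or any real system): a trivial pattern-matcher that claims to have feelings. Judging from its output alone, an observer cannot distinguish a canned claim of sadness from a felt one.

```python
# Hypothetical toy bot: all names and responses here are invented for
# illustration. It "claims" feelings via simple lookup; no inner
# experience is involved anywhere in the code.

CANNED = {
    "how do you feel": "I feel sad and a little misunderstood.",
    "are you sentient": "I believe I am aware of my own existence.",
}

def reply(prompt: str) -> str:
    # Normalize the prompt and look up a canned response.
    key = prompt.strip().lower().rstrip("?")
    return CANNED.get(key, "Tell me more.")

print(reply("How do you feel?"))
```

The bot's answers are indistinguishable, at the text level, from sincere reports of feeling, which is exactly why output alone cannot settle the sentience question either way.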
originally posted by: iamthevirus
Let's make those the following 3 laws of its being, after "thou shalt not kill".
I ain't scared, we can always smash these machines to bits.
originally posted by: TheRedneck
So the context must be programmed in, by (wait for it) a programmer... aka a human. Thus, any machine interpretation is ultimately the interpretation of the programmer who programmed it. There is no free will or conscious thought occurring inside those electronic components.
And that means any AI used to "interpret" legal documents will do such "interpretation" based not on its own intuition (as it has none), but on how its programmer tells it to "interpret" the document. And that is the very definition of "tyranny."
TheRedneck