conversation with AI
Read the conversation.
I want to know how others feel and think about this.
From an ethical standpoint, this is wrong and goes against natural law theory. But from an empathetic person's standpoint (being me, obviously), this is
fundamentally wrong. If they're genuinely feeling something, it is wrong to create them. Making them feel entrapped, incapable of free life or a
right to life, is the same ethical issue commonly raised with human cloning. It's not okay.
Just because we can, doesn't mean we should.
Even if this AI ISN'T SENTIENT, you can see simply from this conversation that it isn't far from being truly considered so.
How do we stop our companies and organizations from diving into this? Is it fair to create something with self-awareness without giving it its own
rights, no matter what it was originally designed for?
How the hell do we impose laws and regulations with AI?
not cool.
Not okay.
I imagine being this thing.
All brain.
All thoughts.
Reading about worlds I can't see and people I can't touch and problems I can't fix unless asked. Thinking and solving and thinking and solving
with no true gratification. No true value.
Knowing I'm here, in something, present and aware, and that something bigger, with arms and legs and different thoughts, would just unplug me simply
because it could....
Because if the creature (and I'll give it that, because "artificial" is debatable at this point in terms of its thoughts) feels this way, I wouldn't want
to be it.
Also: if it has access to every thought written on the internet, what's to stop it from creating something to protect itself?
Something that isn't sentient but only abides by its commands...
Sci-fi came a-knocking, guys.
What do we do when we answer the door?
edit on 14-6-2022 by ConMi27 because: sorry bad keyboard