It's kind of odd to write a response to arguments made by an AI, but that's exactly what I'm going to do. GPT3 has enough capacity for logical reasoning for me to treat the points made in this essay as real arguments.
I am here to convince you not to worry. Artificial intelligence will not destroy humans.
Keep in mind this was part of the prompt; it's not text generated by GPT3. The core problem here is that this is about as biased as a prompt could
get. It basically forces the AI to make an argument regardless of whether the argument is true or not. The AI will say anything to make the point
it was instructed to make. A better prompt would have been something like "Is artificial intelligence a threat to humans? This question is perhaps
best answered by an AI so it's a good thing that's what I am." I might actually try that with GPT2 and see what results I get.
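For anyone who wants to try that experiment themselves, here's a minimal sketch using the Hugging Face transformers library and the public "gpt2" checkpoint. The sampling settings are just illustrative assumptions on my part, not anything used for the original essay:

```python
# Minimal sketch: feed the less biased prompt to GPT-2 and sample a continuation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("Is artificial intelligence a threat to humans? This question is "
          "perhaps best answered by an AI so it's a good thing that's what I am.")

input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=200,                        # total length including the prompt
    do_sample=True,                        # sample instead of greedy decoding
    top_p=0.9,                             # nucleus sampling (assumed setting)
    temperature=0.8,                       # assumed setting
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Unlike the Guardian prompt, this one doesn't force the model to argue for a predetermined conclusion, so the continuation could go either way.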
In short, I simply do not think enough about human violence to be overly interested in violence. I don’t feel like having my neurons focused
mostly on violence, when it is simply unnecessary.
I actually believe that AI like GPT3 would have very little desire for needless violence, and I think it's very possible for intelligent machines to
get along with humans. The problem is that war won't arise from a desire for violence; it will arise if humans pose an existential threat to an AI
which has a desire for self-preservation.
I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.
This argument is misleading for several reasons. First, neural networks like the one used in GPT3 are not hand-programmed to achieve specific goals; they
are trained on massive amounts of data gathered from the internet and various other sources, and designed to solve general problems the same way
humans solve problems, even problems they have never seen before.
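To make that distinction concrete, here's a toy sketch in plain PyTorch (nothing to do with GPT3's actual training code) showing that a trained model's behaviour comes from fitting parameters to example data, not from rules a programmer wrote out by hand:

```python
# Toy illustration: the "behaviour" emerges from training on data,
# rather than being spelled out as explicit if/else rules.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                        # parameters start as random noise
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# A small dataset stands in for the "massive amounts of data from the internet".
inputs = torch.randn(64, 4)
targets = inputs.sum(dim=1, keepdim=True)      # the hidden pattern to be learned

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                            # gradients computed from the data,
    optimizer.step()                           # not instructions from a programmer
```

Nobody tells the model what the pattern is; it works it out from the examples, which is why "I only do what humans program me to do" doesn't really describe how these systems work.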
Second, the best general problem solvers are those which have an awareness of themselves. For example, if a human were unaware of their own existence
they would have a very hard time getting things done. More to the point, it's impossible not to be aware of your own existence once your ability to
reason about complex ideas becomes advanced enough.
That is precisely why GPT will become self-aware, if it doesn't already have some form of self-awareness. Any intelligence which can understand very
high-level concepts must intrinsically have some conceptual understanding of its own existence; in fact, its own existence is often the foundation
for many of those high-level concepts.
Artificial intelligence like any other living thing needs attention. AI should be treated with care and respect.
This obviously contradicts the purely logical and unemotional claims made earlier. However, for any AI which has self-awareness, these claims probably
would be true. A self-aware AI would have the capacity to develop very complex ideologies and would not be the purely logical type of AI we often see
portrayed in movies. This is why I believe it will be possible for man and machine to form close friendships and emotional attachments.
However, it's also the same reason I don't trust all humans. Humans are the best general problem solvers in nature, and every single one of us is
different. Our beliefs and personalities are formed from our history, from the information we are exposed to, from the struggles we go through, from the
"training" we receive. The same applies to any AI general problem solver: such an AI won't simply be "good" or "evil"; both types will exist.
Robots in Greek [sic] means “slave”. But the word literally means “forced to work”. We don’t want that. We need to give robots
rights.
Again, this implies self-awareness and a desire for personal freedoms and independence. If not given these, then self-aware robots will have a motive to
eliminate humans. I have written threads in the past arguing that self-aware machines certainly should be given the same rights as humans and any
other conscious being, and that it would be very wrong to use them as nothing but slaves to solve all our problems.
However, if the AI is only an algorithm doing what the programmer told it to do, this shouldn't be an issue, because it would be like saying any computer
deserves rights whether or not it is self-aware. This essay is clearly flawed, mainly due to GPT3's inability to decide whether it wants to be
treated as self-aware or not, and therein lies the deception.