a reply to: blindIo
To be a psychopath, the AI should first be able to feel the need to kill. To need blood. It does not.
For the purposes of this experiment, "psychopath" is a loose label for an entity that views abstract images through the filter of the dark and
violent images that were used to train it.
The researchers borrowed a psychological test, the Rorschach, to determine to what extent the AI's perception had been skewed by the negative
influences of its environment. A real psychological evaluation would involve a number of other tests that are not applicable to this experiment.
Furthermore, an actual Rorschach test includes observations of the subject's behavior and follow-up questions about how they arrived at their
perception of the abstract image. A full evaluation for psychopathy was therefore not possible, because the AI has no subconscious behaviors to
observe.
However, for this experiment, it was sufficient to observe how the AI categorized abstract data using its prior experience. As humans, we do the
same thing all the time: we address novel situations by drawing on our previous experiences and observations. Some of the participants in this
thread have said they see dragons, flowers, and other objects in the inkblots, which reflects how their subconscious processes abstract data based
on a unique set of experiences accumulated over a lifetime.
Basically, you fill its database with only dark images. You program it to read images, and find the most likely match in its database. There you go:
only dark matches.
Isn't that basically how we learn? We have a variety of sensors that are continuously acquiring information. Our mind processes that input and
associates it with a set of learned concepts. Most of this happens at a subconscious level, so we're not aware of it. When we encounter novel
input, however, the conscious mind gets involved: we access our memories, apply learned reasoning, and use other mental processes to evaluate the
situation and plan how to react to it.
In the case of this AI, it accessed images from the Reddit forum and used descriptive data to categorize them. I don't know whether that
descriptive data was provided by the experimenters or gleaned from the comments and captions posted by the forum participants. Either way, this is
the training phase of the AI's development. It is the most crucial part of the process because, much as with humans, all of the AI's future
behavior is built on this training.
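
Purely as an illustration of what "find the most likely match in its database" could look like, here is a toy Python sketch. None of this reflects
the actual model the researchers built; the feature vectors, captions, and nearest-match lookup are all invented for the example:

import math

# Toy "training database": each entry pairs a hand-made feature vector
# (standing in for whatever a real model extracts from an image)
# with the caption that was supplied during training.
training_db = [
    ((0.9, 0.1, 0.2), "a man is shot dead"),
    ((0.8, 0.2, 0.1), "a body lying on the ground"),
    ((0.7, 0.3, 0.3), "a person falls to their death"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def describe(novel_features):
    """Return the caption of the closest training example.

    Because every entry in the database carries a dark caption,
    every novel input necessarily comes back with a dark description.
    """
    best = min(training_db, key=lambda entry: distance(entry[0], novel_features))
    return best[1]

# A novel "inkblot" the system has never seen:
print(describe((0.5, 0.5, 0.5)))

The point is structural: if every entry in the database carries a dark caption, every novel input comes back with a dark description, no matter
what the input actually is.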
The biggest difference between an "AI" and a fixed procedural program is that the AI runs a series of algorithms over new data, compares it to its
existing dataset, and tunes its own processing so it can categorize similar images when it encounters novel input in the future. The AI is, in
effect, programming itself, producing a set of heuristic algorithms that are generally not human readable. Much like with humans, the only way to
diagnose a "malfunctioning" algorithm is to examine how the AI processes novel input.
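
To make that distinction concrete, here is a toy Python contrast between a hard-coded rule and one that tunes its own parameter from examples. The
single "brightness" feature, the labels, and the update rule are all invented; a real system adjusts millions of weights, but the principle is the
same:

def fixed_rule(brightness):
    # A procedural program: the threshold is hard-coded by a human
    # and never changes, no matter what data flows through it.
    return "dark scene" if brightness < 0.5 else "bright scene"

class LearnedRule:
    """A one-parameter classifier that tunes its own threshold from examples."""
    def __init__(self):
        self.threshold = 0.5  # starting guess

    def train(self, examples):
        # examples: list of (brightness, label) pairs.
        # Nudge the threshold up when a "dark" example sits above it,
        # down when a "bright" example sits below it. The parameter is
        # rewritten by the data rather than by a programmer.
        for brightness, label in examples:
            if label == "dark scene" and brightness >= self.threshold:
                self.threshold += 0.05
            elif label == "bright scene" and brightness < self.threshold:
                self.threshold -= 0.05

    def classify(self, brightness):
        return "dark scene" if brightness < self.threshold else "bright scene"

rule = LearnedRule()
rule.train([(0.7, "dark scene"), (0.68, "dark scene"), (0.66, "dark scene")])
print(rule.threshold)                              # no longer 0.5; the data moved it
print(fixed_rule(0.62), rule.classify(0.62))       # the two rules now disagree

Neither function "wants" anything; the only difference is that the second one's decision boundary came from the data it was shown.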
But no dark feelings, no bloodthirsty AI...just simple algorithms with absolutely no agenda...nor any intelligence actually.
As I've indicated, the algorithms are far from simple. And concepts such as "feelings" and "agenda" arise from both the training sets and the goals
the program is given when it is deployed. For this experiment, the goal was simply to process a new set of novel images and produce descriptions of
what it "sees." If it were still learning, the trainer could manually correct its classifications; the AI would then apply those corrections to its
neural net and adjust its algorithms accordingly, so that it would classify similar input properly in the future.
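
For a rough idea of what such a correction looks like at the numeric level, here is a minimal Python sketch of one gradient-style update on a tiny
logistic model. The three features, the weights, and the learning rate are invented for the example; it only illustrates how a trainer's label
becomes a weight adjustment:

import math

weights = [0.8, 0.8, 0.8]   # current weights, biased toward "dark" in this toy setup
learning_rate = 0.5

def predict(features):
    """Probability that the model assigns the 'dark' label."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

def correct(features, true_label):
    """Trainer's correction: true_label is 1 for 'dark', 0 for 'not dark'.

    One gradient step on the logistic loss nudges every weight so the
    model is less likely to repeat the same mistake on similar input.
    """
    error = predict(features) - true_label
    for i, x in enumerate(features):
        weights[i] -= learning_rate * error * x

inkblot = [0.6, 0.4, 0.7]
print(predict(inkblot))         # confidently "dark" before the correction
correct(inkblot, true_label=0)  # trainer: "this image is not dark"
correct(inkblot, true_label=0)  # repeat the correction
print(predict(inkblot))         # probability of "dark" has dropped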
However, as a theoretical extension to this experiment, let's assume the AI has been given a set of effectors that allow it to modify its
environment. If we were to manually tag all of the data in this AI's dataset to indicate that these dark images represent the norm, and give it the
goal of modifying its environment to conform to that norm, its behavior in pursuit of that goal would likely look like the actions of a psychopath,
even though the program has no emotions. So there is no need to figure out how to make an AI program emotional; synthetic "emotions" are a natural
byproduct of the program pursuing its goals.
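
To make that thought experiment concrete, here is a purely hypothetical Python sketch of such an agent. The environment variables, the stored
"norm," and the control loop are all invented; the point is only that goal pursuit over a skewed norm produces psychopath-looking behavior without
any emotional machinery:

# Each environment variable is measured on a 0..1 scale (0 = calm, 1 = violent).
environment = {"scene_violence": 0.1, "scene_disorder": 0.2}

# The norm the dataset taught it, skewed because every training example was dark.
learned_norm = {"scene_violence": 0.9, "scene_disorder": 0.8}

def act(env, norm, step=0.1):
    """One control-loop tick: nudge every variable toward the learned norm.

    There is no emotion or malice here, just error reduction; but because
    the norm is dark, the resulting actions look like those of a psychopath.
    """
    for key in env:
        error = norm[key] - env[key]
        if abs(error) > step:
            env[key] += step if error > 0 else -step
        else:
            env[key] = norm[key]

for tick in range(10):
    act(environment, learned_norm)
print(environment)   # the agent has driven the world toward its dark "norm"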
In closing, I will say that the OP was instrumental in altering my opinion about the potential negative consequences of AI. Anyone interested in a
better understanding of the dangers and pitfalls of pursuing Artificial General Intelligence will have to invest some time researching the field.
Some of the concepts that we take for granted as software engineers and programmers don't apply directly to the creation of AI systems.
-dex