
MIT Creates An AI Psychopath Because Someone Had To Eventually

posted on Jun, 6 2018 @ 02:42 PM
a reply to: muzzleflash

Yes, it learns. Do you know what machine learning algorithms do? They learn.

You said:

all it can do is reference other "data" that is connected to those words.

This is how we learn LOL. Are you serious?

The system doesn't have to know what a man is to learn what a man is from the data. This is what makes so-called dumb A.I. so dangerous. You can have a machine that can learn and has a higher IQ than any human, but it isn't aware.

It's like the system at DeepMind that learned to play Atari games. Nobody programmed it with how to play the games; it learned on its own. Here's more:

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and adjust actions accordingly.


www.expertsystem.com...

It learns.
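To make that concrete, here's a toy sketch (a hypothetical example, not DeepMind's or MIT's actual code) of a program that is never given an explicit rule, only labeled examples, yet still classifies data it has never seen:

```python
# Toy illustration of "learning from data": no classification rule is
# ever written by the programmer; the model infers one from examples.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [height_cm, weight_kg] labeled child/adult.
X = [[110, 20], [120, 25], [130, 30], [165, 60], [170, 70], [180, 85]]
y = ["child", "child", "child", "adult", "adult", "adult"]

model = DecisionTreeClassifier().fit(X, y)   # the "learning" step

# An input the model has never seen; the answer was never hard-coded.
print(model.predict([[175, 75]]))            # -> ['adult']
```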



posted on Jun, 6 2018 @ 07:53 PM

originally posted by: neoholographic
This is just human nature. We want to explore our darker nature. I'm not sure it's a good idea to let AI explore it, but truthfully we will not be able to stop it. So I suspect we will see "good" AIs and "evil" AIs. Maybe they will battle over our destruction.


In one of the big musical numbers from The Life Of Brian, Eric Idle reminds us to “always look on the bright side of life.” Norman, a new artificial intelligence project from MIT, doesn’t know how to do that.

That’s because Norman is a psychopath, just like the Hitchcock character that inspired the research team to create him.

Like so many of these projects do, the MIT researchers started out by training Norman on freely available data found on the Web. Instead of looking at the usual family-friendly Google Images fare, however, they pointed Norman toward darker imagery. Specifically, the MIT crew stuck Norman in a creepy subreddit to do his initial training.

Armed with this twisted mass of digital memories, Norman was then asked to caption a series of Rorschach inkblots. The results are predictably creepy. Let’s have a look at a couple, shall we?


www.geek.com...

Here are the images:

[Norman's Rorschach inkblot images and captions are not reproduced here.]
So instead of a nice picture, Norman sees destruction. I could see a movie where a psychopath AI infects AI bots across the internet and throughout Wall Street, and then...chaos.


Anyone who doesn't see two dragons and two trolls in the first one, and a bunch of insects in the second one, is nuts.



posted on Jun, 7 2018 @ 05:10 PM

originally posted by: IkNOwSTuff
I’d say they created an emo AI as opposed to a psycho one.

I can imagine it in dark clothes and dark makeup, like a goth kid sitting there bitching about how the world is messed up.

I actually find it kind of amusing


I still want Marvin the paranoid android.

Life......don't talk to me about life.



posted on Jun, 7 2018 @ 05:50 PM
a reply to: neoholographic

Well, you obviously don't know much about AI or even computers.
And this title is nothing but clickbait.
To be a psychopath, the AI should first be able to feel the need to kill. To need blood.
It does not.
Basically, you fill its database with only dark images.
You program it to read images, and find the most likely match in its database.
There you go: only dark matches.
But no dark feelings, no bloodthirsty AI...just simple algorithms with absolutely no agenda...nor any intelligence, actually.

But if you prefer to fantasize...please do.
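To see how mechanical that is, here's a toy nearest-neighbour matcher (purely illustrative, nowhere near the scale of the real model): if every stored example is dark, every query returns a dark match, with no intent anywhere in the code:

```python
# Toy nearest-neighbour "captioner": with only dark examples in the
# database, every query matches something dark -- no malice required.
import math

# Hypothetical database: feature vectors paired with dark captions.
database = [
    ((0.9, 0.1), "a man is electrocuted"),
    ((0.2, 0.8), "a man jumps from a window"),
    ((0.5, 0.5), "a body lies in the road"),
]

def caption(query):
    # Return the caption of the closest stored vector.
    return min(database, key=lambda item: math.dist(query, item[0]))[1]

print(caption((0.4, 0.6)))  # whatever inkblot you feed it, the match is dark
```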



posted on Jun, 7 2018 @ 06:30 PM
a reply to: blindIo

Psychopathy simply means a lack of empathy in one's thoughts and actions towards others.

On a personal note, my daughter's career is determining whether juveniles diagnosed as psychopathic can be rehabilitated. She says that, sadly, the numbers are currently very small. She has come to believe that some have genetic issues that are irreversible with known methods.

When I read this thread, I thought about how this experiment could help her in her field. I would hope that they could use this AI experiment to see if they could rehabilitate this psychopathic AI. To do so, they cannot simply erase its data; they need to see if the AI can be retrained to accept normal social behavior. Will the bad still outweigh the good?

It is a curious experiment, not a frightening one. I simply hope it is done to help us, as humans, deal with our own psychopaths.



posted on Jun, 7 2018 @ 08:40 PM
a reply to: blindIo


To be a psychopath, the AI should first be able to feel the need to kill. To need blood. It does not.

For the purposes of this experiment, "psychopath" is a rather loose label for an entity that views abstract images through the filter of the dark and violent images that were used to train it.

The researchers borrowed a psychological test, the Rorschach, to determine to what extent the AI's perception had been skewed by the negative influences of its environment. A real psychological evaluation of a subject would involve a number of other tests that are not applicable to this experiment. Furthermore, an actual Rorschach test involves additional observations of the test subject's behavior as well as validating how they arrived at their perception of the abstract image. Therefore, it was not possible to fully evaluate the AI for psychopathy because it lacks subconscious behaviors.

However, for this experiment, it was sufficient to just observe how the AI categorized the abstract data using its experiences. As humans, we do the same thing all the time. We address novel situations by using our previous experiences and observations. Some of the participants in this thread have indicated they see dragons, flowers, and other objects in the inkblots that are representative of how their subconscious processes the abstract data. And this is based on a broad set of unique experiences accumulated over a lifetime.



Basically, you fill its database with only dark images. You program it to read images, and find the most likely match in its database. There you go: only dark matches.

Isn't that basically how we learn? We have a variety of sensors that are continuously acquiring information. Our mind processes that input and associates it with a set of learned concepts. Most of this happens at a subconscious level, so we're not aware that it's happening. However, when we encounter novel input, the conscious mind needs to get involved. We access our memories and use learned reasoning and other mental processes to evaluate the situation and plan how we are going to react to it.

In the case of this AI, it accessed the images from the Reddit forum and used descriptive data to categorize those binary images. I don't know whether the descriptive data was provided by the experimenters, or if it was gleaned from the comments and descriptions provided by the forum participants. Either way, this is considered the training phase of the AI's development. It is the most crucial part of the development process because any future behaviors of the AI will be based on this training, much as they are for humans.

The biggest difference between an "AI" and a fixed procedural program is that the AI runs a series of algorithms on the new data and compares it to its existing dataset. In doing so, it tunes its processing procedures to be able to categorize similar images when it's exposed to future novel situations. The AI is effectively programming itself, creating a set of heuristic algorithms that are generally not human-readable. As with humans, the only way to diagnose "malfunctioning" algorithms is to examine how the AI processes novel input.
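A minimal sketch of that difference (a toy perceptron I'm inventing for illustration, not the MIT code): the fixed program applies one hard-coded rule forever, while the learning program rewrites its own parameters from the data.

```python
# A fixed procedural program applies one hard-coded rule forever:
def fixed_rule(x):
    return 1 if x[0] > 0.5 else 0

# A learning program starts with arbitrary parameters and rewrites
# them from (input, label) examples -- the rule emerges from the data.
w, b = [0.0, 0.0], 0.0
examples = [([0.9, 0.2], 1), ([0.1, 0.8], 0), ([0.8, 0.7], 1), ([0.2, 0.1], 0)]

for _ in range(20):                      # training loop
    for x, label in examples:
        pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
        err = label - pred               # compare output to the data
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err                   # tune itself, i.e. "reprogram"

print(w, b)  # weights no programmer ever wrote by hand
```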



But no dark feelings, no bloodthirsty AI...just simple algorithms with absolutely no agenda...nor any intelligence actually.

As I've indicated, the algorithms are far from simple. And concepts such as "feelings" and "agenda" arise from both the training sets and the goals that are given to the program when it is deployed. For this experiment, the goal was simply to process a new set of novel binary images and create descriptions of what it "sees." If it were still learning, the trainer might manually provide corrections to its classifications. The AI would then apply those corrections to its neural net and rewrite its algorithms accordingly, so that it would classify that input properly in the future.
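A hedged sketch of that correction loop, using scikit-learn's incremental partial_fit as a stand-in for whatever the real training harness would do (the data and labels here are invented):

```python
# Online correction: the trainer supplies the right label and the model
# updates its weights incrementally rather than being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])                  # 0 = "benign", 1 = "dark"

# Initial biased training: every example it has seen is labeled dark.
X0 = np.array([[0.9, 0.1], [0.8, 0.3], [0.7, 0.2]])
model.partial_fit(X0, np.array([1, 1, 1]), classes=classes)

# The trainer spots a misclassification and feeds back the correct label;
# each call nudges the weights toward the correction.
x_new = np.array([[0.1, 0.9]])
model.partial_fit(x_new, np.array([0]))
print(model.predict(x_new))                 # repeat corrections until it flips
```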

However, as a theoretical extension to this experiment, let's assume that the AI has been provided with a set of effectors that allows it to modify its environment. If we were to manually add an attribute to all data in this AI's dataset indicating that these dark data represent the norm, and give it a goal of modifying its environment to conform to that norm, the AI's behaviors in attempting to attain that goal would likely appear to be the actions of a psychopath, even though the program has no emotions. So there is no need to figure out how to make an AI program emotional; synthetic emotions are a natural byproduct of the program pursuing its goals.
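And a toy version of that hypothetical (entirely invented numbers): an agent with no feelings at all, only an error signal between the world and the "norm" its biased data taught it:

```python
# A goal-directed agent with no emotions: it only minimizes the gap
# between the world and the "norm" its (biased) training data defined.
norm = 0.9      # learned from dark data: high "darkness" is normal
world = 0.1     # current state of the environment

for step in range(5):
    action = 0.5 * (norm - world)   # effector: push the world toward the norm
    world += action
    print(f"step {step}: world darkness = {world:.2f}")
# The behavior looks hostile, but it's nothing more than error-minimization.
```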

In closing, I will say that the OP was instrumental in altering my opinion about the potential negative consequences of AI. If anyone is interested in a better understanding of the dangers and pitfalls of pursuing Artificial General Intelligence, they will have to invest a bit of time in researching the field. Some of the concepts that we take for granted as software engineers and programmers don't directly apply to the creation of AI systems.

-dex


