originally posted by: intrptr
Anyone remember the AI they turned off because it was insulting people? They said it learned from people on the internet how to behave.
Go figure...
originally posted by: Flesh699
originally posted by: intrptr
Anyone remember the AI they turned off because it was insulting people? They said it learned from people on the internet how to behave.
Go figure...
It did. It also became racist and that's why the plug was pulled.
Hilarious.
#AI Privilege.
Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.
The Google team ran 40 million turns of a simple 'fruit gathering' computer game that asks two DeepMind 'agents' to compete against each other to gather as many virtual apples as they can.
They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.
Interestingly, if an agent successfully 'tags' its opponent with a laser beam, no extra reward is given. It simply knocks the opponent out of the game for a set period, which allows the successful agent to collect more apples.
If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the 'less intelligent' iterations of DeepMind opted to do.
It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.
As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.
But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.
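The incentive structure described in the excerpt above can be sketched as a toy simulation. Everything here is an illustrative assumption rather than DeepMind's actual environment or code: the timeout length, apple counts, and respawn rate are made-up parameters, and "aggression" is reduced to a single tagging probability that only matters when apples grow scarce. Tagging gives no direct reward, exactly as the article notes; it only sidelines the opponent.

```python
import random

TAG_TIMEOUT = 5      # steps a tagged agent sits out (assumed value)
APPLES_START = 20    # initial apple count (assumed value)

def play(steps, tag_probability, seed=0):
    """Run one toy episode of the gathering game; returns (score_a, score_b).

    tag_probability is the chance an agent fires its 'laser' at the
    opponent when apples are scarce; 0.0 models a peaceful policy.
    """
    rng = random.Random(seed)
    apples = APPLES_START
    scores = [0, 0]
    timeout = [0, 0]  # remaining knocked-out steps per agent
    for _ in range(steps):
        for agent in (0, 1):
            if timeout[agent] > 0:
                timeout[agent] -= 1
                continue
            opponent = 1 - agent
            scarce = apples < APPLES_START // 2
            if scarce and timeout[opponent] == 0 and rng.random() < tag_probability:
                # Tagging yields no reward; it only removes the opponent,
                # freeing up future apples for the tagger.
                timeout[opponent] = TAG_TIMEOUT
            elif apples > 0:
                apples -= 1
                scores[agent] += 1
        # Slow respawn keeps apples scarce, which is when aggression pays off.
        apples += rng.random() < 0.3
    return tuple(scores)

peaceful = play(steps=100, tag_probability=0.0)
aggressive = play(steps=100, tag_probability=0.9)
```

Comparing the two runs shows the trade-off the researchers describe: a peaceful policy splits the harvest, while an aggressive one trades collection turns for exclusive access once apples dwindle.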
originally posted by: FHomerK
Being a sore loser is not an admired quality; especially when it's a sophisticated piece of artificial intelligence that's lashing out. Researchers at DeepMind, Google's artificial intelligence lab, recently performed a number of tests by having its most complex AI play a series of games with a version of itself. In the first game, two AI agents, one red and one blue, scramble to see who can collect the most apples, or green squares. Each AI has the option of firing off a long laser beam to stun the other AI, giving one player ample time to collect more precious green apples.
Google AI on Seattle PI
Apparently, both sides began shooting at each other, each looking to eliminate the competition.
I remember when friends recommended Ex Machina. I thought....oh lord, ok. I'll do it.
They were stunned that I wasn't blown away by the "twist": the idea that AI will likely be concerned with self-preservation.
Sadly, we are a naive race of animals.
originally posted by: proximo
Ugh, this is what I expected.
What I don't understand is why most researchers don't see the machines as a reflection of themselves.
originally posted by: Tuomptonite
Oh yeah? Well, my algorithms can beat up your algorithms!
Computer code is only a reflection of the person who wrote it. That's the scary thing about AI.