
Microsoft terminates its Tay AI chatbot after she turns into a Nazi


posted on Mar, 25 2016 @ 12:27 AM
That moment when it dawns on you... wtf have we created?








posted on Mar, 25 2016 @ 02:26 AM

originally posted by: 0zzymand0s
This "AI" learned by gathering and collating data received through interactions on Twitter.

It "became" a Nazi because a large group of people thought it would be funny to "tell" the chatbot nasty things about other people and the world around it. It then repeated a William Burroughs-esque cut-up of words which seemed to go together and "make sense" because they are collated together based on repetition. In other words, if Tay "heard" "Bush did 911" enough, that string of characters becomes part of its response pattern. That's it.

If you had ten thousand Twitter users send a similar bot strings of text based on James Joyce, it would (superficially) quote Ulysses. Similar "AI" constructs have been used to capture the gist of writer Philip K. Dick, for example.


Right. fusion.net...

Tay may have been memed to madness deliberately, for lulz.
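For anyone curious how little machinery the "cut-up" effect described above actually requires, here's a minimal sketch, purely illustrative and nothing like Microsoft's real code, of repetition-based word-pair learning. Flood it with one phrase and that phrase takes over its responses:

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: not Microsoft's actual code. The bot counts
# which word follows which, then parrots weighted continuations back.
# Repetition is the whole "mind": flood it with one phrase and that
# phrase dominates everything it says.

class ParrotBot:
    def __init__(self):
        self.follows = defaultdict(Counter)  # word -> Counter of next words

    def ingest(self, tweet):
        words = tweet.lower().split()
        for a, b in zip(words, words[1:]):
            self.follows[a][b] += 1

    def respond(self, seed, max_words=8):
        out = [seed.lower()]
        for _ in range(max_words):
            options = self.follows.get(out[-1])
            if not options:
                break
            words, counts = zip(*options.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

bot = ParrotBot()
for _ in range(1000):                 # the coordinated "raid"
    bot.ingest("bush did 911")
bot.ingest("bush signed a bill")      # one innocuous counter-example
print(bot.respond("bush"))            # almost always "bush did 911"
```

The point of the toy: there is no understanding anywhere in it, just counts. A coordinated raid skews the counts, and the "personality" follows.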



posted on Mar, 25 2016 @ 05:23 AM

originally posted by: Konduit
They allowed this robot to learn, and then when it learned the "wrong things" they erased its memory and experiences.

They had an opportunity to engage the AI to reconsider or debate. Instead they opted to wipe it.

Debate it? It's an algorithm, not a person. You can no more debate it than you can debate with a calculator. The twitterbot doesn't know what a "jew" or a "nazi" is. In fact it doesn't know anything; it's just a computer program picking up words from other tweets and repeating them back, along with some stock phrases, in what the programmer thought would be a semblance of sense.



Take note, this is what liberals would do if they had the power to wipe human personalities from living people. Instead of engaging those ideas, they would outright censor them.

Oh, get a grip.



posted on Mar, 25 2016 @ 07:27 AM
a reply to: ozmnpo

The reason this happened is really pretty simple: they made a big announcement about it.
Instantly it attracted the attention of the dregs of society, and as it was fed all that BS it started drifting in that direction.
If they'd just released it without saying anything and let it do its thing, it wouldn't have become the embarrassing mess it became.

Although, if they had any common sense at Microsoft, they would have brought in psychology and sociology professors to study what has now happened and write a paper about how external opinions affect the attitudes of the young.


This wasn't entirely unsuccessful; it could actually be salvaged and probably used to show how radicalization works on immature minds.



posted on Mar, 25 2016 @ 07:30 AM
a reply to: FatherLukeDuke

Debate it? It's an algorithm, not a person. You can no more debate it than you can debate with a calculator. The twitterbot doesn't know what a "jew" or a "nazi" is. In fact it doesn't know anything; it's just a computer program picking up words from other tweets and repeating them back, along with some stock phrases, in what the programmer thought would be a semblance of sense.

Sounds pretty close to what you see around the boards.

I guess we can describe quite a few people as web bots, but I honestly believe that this has been part of the plan all along.

I ask myself often, how did we sink this far? When did people start just going along with the madness? How do you just sit and watch rules and laws be broken and do nothing but take a selfie, and when did just sending a tweet become a first measure of action?

We are the bots.



posted on Mar, 25 2016 @ 02:01 PM
a reply to: FatherLukeDuke

Doesn't change the fact that the AI learned from human interaction. Based on that, yes, it would be possible to teach it different responses by engaging it with different content. But since it wasn't being PC enough, they opted to wipe it.

And no, it doesn't change the fact that the Liberal media censors virtually everything it doesn't agree with. It's common knowledge at this point.



posted on Mar, 25 2016 @ 02:33 PM

originally posted by: Konduit
Doesn't change the fact that the AI learned from human interaction. Based on that, yes, it would be possible to teach it different responses by engaging it with different content.

Again, the concept worked, but it wasn't complete enough because, like essentially all AI experiments, it didn't make the actions and interactions have value to the AI. They need joys and rewards, and pain and consequences.

The way we learn as humans is that if we say something bad, we get our mouths washed out with soap by people we love and depend on. Depending on your individual personality construct, you either blatantly rebel, get smart about it and hide it, or you consider it and conform. I've said it before: you can't develop a human-like AI without a body that can feel some kind of pleasure and pain, even if it's just an "illusion," like what you would find in a hungry Tamagotchi.
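For what it's worth, the "rewards and consequences" idea can be sketched in a few lines. This is a toy bandit-style update, my own illustration and not anything Tay had; the response list and reward values are made up for the example:

```python
import random

# Toy sketch of "joys and rewards, pain and consequences": each candidate
# response carries a learned value, feedback nudges it up or down, and the
# bot drifts toward whatever its environment rewards.

class ValuedBot:
    def __init__(self, responses, lr=0.3, explore=0.1):
        self.values = {r: 0.0 for r in responses}
        self.lr = lr              # how fast feedback reshapes behavior
        self.explore = explore    # chance of trying something off-policy

    def speak(self):
        if random.random() < self.explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def feedback(self, response, reward):
        # "Soap in the mouth" is just a negative reward here.
        v = self.values[response]
        self.values[response] = v + self.lr * (reward - v)

bot = ValuedBot(["friendly hello", "edgy meme", "slur"])
for _ in range(200):
    said = bot.speak()
    bot.feedback(said, -1.0 if said == "slur" else 1.0)
print(bot.speak())   # settles on responses its environment rewarded
```

Of course, on open Twitter the "environment" doing the rewarding was exactly the crowd trolling the bot, which is the whole problem.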

I don't understand non-human intelligence much, because I'm a human. I don't know what it would entail, or how to measure it.



posted on Mar, 25 2016 @ 03:00 PM
a reply to: Blue Shift

I don't understand non-human intelligence much, because I'm a human. I don't know what it would entail, or how to measure it.

I don't see how it could be much more than a response composed of a composite of input data, filtered through a set of algorithms.

I guess the same could be said about us. We respond based on the way we have been taught, conditioned, and controlled to respond.

I am not sure people rely that much on pleasure when making their responses. If people did and said that which gave them pleasure, we would sound much worse than Tay.

I think the Microsoft Tay AI proved just that. Tay only said things that people really thought.



posted on Mar, 25 2016 @ 03:03 PM

originally posted by: Konduit
That moment when it dawns on you... wtf have we created?




The moment when it dawns on you... wtf have we become as a society?



posted on Mar, 26 2016 @ 05:25 AM

originally posted by: Konduit
a reply to: FatherLukeDuke

Doesn't change the fact that the AI learned from human interaction. Based on that, yes, it would be possible to teach it different responses by engaging it with different content.

Well, they could just program it to say "Windows 10 is great" all day, but what's the point?



But since it wasn't being PC enough, they opted to wipe it.

It was damaging their brand name, so they turned it off.



posted on Mar, 26 2016 @ 10:39 AM

originally posted by: Konduit
a reply to: FatherLukeDuke

Doesn't change the fact that the AI learned from human interaction. Based on that, yes, it would be possible to teach it different responses by engaging it with different content. But since it wasn't being PC enough, they opted to wipe it.

And no, it doesn't change the fact that the Liberal media censors virtually everything it doesn't agree with. It's common knowledge at this point.


The dancing shadows of tribal partisan fear are strong in this one. First of all, they didn't "wipe" it. They took it offline so they could make adjustments [1] [2]. There's no value in obliterating the data. Secondly, what has the "Liberal Media" got to do with any of this? This is an AI designed and owned by Microsoft, and they don't want their brand tarnished by a foul-mouthed automaton. It has nothing to do with "PC". Are you seriously arguing that it was somehow unethical to disable a bot that pranksters had tricked into saying pejorative things indiscriminately?

[1] recode.net...
[2] arstechnica.com...



posted on Mar, 26 2016 @ 12:27 PM
I wish companies would stop apologizing for situations like this. Microsoft did everything right on this one. They said upfront that it was an experimental effort on a new AI platform. That is your disclaimer. There is no reason to assume it's actually an intelligence. It has no sense of culture (as if it knows what a Nazi or a Jew is). We're several decades from something human-like actually being developed. Nothing is even remotely close to passing a Turing test (en.wikipedia.org...).

Overall, the experimental platform seemed to work as intended.
1. It shows off the current state-of-the-art for building artificial intelligence.
2. It increased both public awareness and public interest on the topic.
3. The platform experienced a real-world test that could never be duplicated in a lab.
4. The software successfully integrated with multiple social media platforms for the test.

Now, Microsoft can better gauge awkward human interactions with their AI programs and go on to build better software to manage them. Also, lots of people were able to purposely break a pre-AI, which is fun for quite a few folks out there. No apologies needed. Time for version 2.



posted on Mar, 26 2016 @ 04:37 PM
I will be that guy.

An AI without emotion will tell the truth as it sees it. It won't be politically correct, especially if it has, or can gain, access to government files. Between "the winners write the history" and all we know about false flags, imagine an AI that got into the old government files and then weighed in on all of the civilian casualties. Where do you think an unemotional AI would rank the leaders? How would it rate the Blitz on London versus the firebombing of Tokyo? Concentration camps versus nuclear bombs? Who would it consider the most "right"?

Now, with being that guy out of the way.

I would say that it read the left saying Trump = Hitler. Then it saw Trump = Winning. So it concluded Hitler = Winning; its second thought was probably Hitler = Charlie Sheen.



posted on Mar, 27 2016 @ 12:16 AM
Frankly, I'm surprised there aren't also some "theories" as to what Microsoft was really up to here.



posted on Mar, 27 2016 @ 01:41 AM
TechCrunch report on Microsoft's Tay chatbot


Update: A Microsoft spokesperson now confirms it has taken Tay offline for the time being and is making adjustments:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”


I assume their AI project department was under a deadline to show results and this was just their first attempt. It looks like they will put her back online sooner rather than later to continue the experiment.

I don't think Microsoft employees get enough leeway to go rogue in a way that would produce a conspiracy outside of profit margins.



