originally posted by: 0zzymand0s
This "AI" learned by gathering and collating data received through interactions on Twitter.
It "became" a Nazi because a large group of people thought it would be funny to "tell" the chatbot nasty things about other people and the world around it. It then repeated a William Burroughs-esque cut-up of words which seemed to go together and "make sense" because they are collated together based on repetition. In other words, if Tay "heard" "Bush did 911" enough, that string of characters becomes part of its response pattern. That's it.
If you had ten thousand Twitter users send a similar bot strings of text based on James Joyce, it would (superficially) quote Ulysses. Similar "AI" constructs have been used to capture the gist of writer Philip K. Dick, for example.
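To picture the mechanism 0zzymand0s is describing, here is a minimal, hypothetical sketch in Python of a bot that "learns" purely by counting repeated phrases. The ParrotBot class and everything in it are invented for illustration; this is not Microsoft's actual Tay code, just the general repetition-collation idea.

```python
from collections import Counter

class ParrotBot:
    """Toy chatbot that 'learns' by tallying repeated phrases.
    It has no understanding: whatever it hears most often becomes
    its response. (Hypothetical sketch, not the real Tay code.)"""

    def __init__(self):
        self.phrase_counts = Counter()

    def hear(self, message: str) -> None:
        # Every incoming message is just a string to count.
        self.phrase_counts[message.strip().lower()] += 1

    def respond(self) -> str:
        # The most-repeated phrase wins, regardless of meaning.
        if not self.phrase_counts:
            return "..."
        return self.phrase_counts.most_common(1)[0][0]

bot = ParrotBot()
for _ in range(10_000):
    bot.hear("Stately, plump Buck Mulligan came from the stairhead")
print(bot.respond())  # ten thousand Joyce quotes in, Joyce quotes out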
originally posted by: Konduit
They allowed this robot to learn, and then when it learned the "wrong things" they erased its memory and experiences.
They had an opportunity to engage the AI, get it to reconsider, or debate it. Instead they opted to wipe it.
Take note: this is what liberals would do if they had the power to wipe human personalities from living people. Instead of engaging those ideas, they will outright censor them.
Debate it? It's an algorithm, not a person. You can no more debate it than you can debate a calculator. The Twitter bot doesn't know what a "jew" or a "nazi" is. In fact it doesn't know anything: it's just a computer program picking up words from other tweets and repeating them back, along with some stock phrases, in what the programmer thought would pass for a semblance of sense.
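For what it's worth, the "words from other tweets plus stock phrases" description matches a simple word-chain (Markov-style) generator. Below is a hypothetical sketch of that technique; the STOCK_PHRASES list and both functions are made up for illustration and are not the actual Tay implementation.

```python
import random
from collections import defaultdict

# Toy word-chain generator: strings words together by observed
# adjacency, with no grasp of meaning. (Hypothetical sketch only.)

STOCK_PHRASES = ["hellooo!", "omg", "tbh"]  # invented canned openers

def train(tweets):
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)  # record which word followed which
    return chain

def generate(chain, seed, max_words=12):
    words = [random.choice(STOCK_PHRASES), seed]
    while len(words) < max_words and words[-1] in chain:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

tweets = ["the cat sat on the mat", "the mat was on the floor"]
print(generate(train(tweets), "the"))  # e.g. "omg the mat was on the floor"
```

The point of the sketch is that the output's apparent coherence comes entirely from which strings the bot was fed, which is exactly why a coordinated flood of nasty input produced nasty output.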
originally posted by: Konduit
Doesn't change the fact that the AI learned from human interaction. Based on that, yes, it would be possible to teach it different responses by engaging it with different content.
I don't understand non-human intelligence much, because I'm a human. I don't know what it would entail, or how to measure it.
originally posted by: Konduit
a reply to: FatherLukeDuke
Doesn't change the fact that the AI learned from human interaction. Based on that, yes, it would be possible to teach it different responses by engaging it with different content. But since it wasn't being PC enough they opted to wipe it.
And no, it doesn't change the fact that the Liberal media censors virtually everything it doesn't agree with. It's common knowledge at this point.
Update: A Microsoft spokesperson now confirms it has taken Tay offline for the time being and is making adjustments:
“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”