Experiment Failure Portends Future of AI

For an enterprise composed of some of the most brilliant minds ever to grace our planet, Microsoft sure can be stupid. The world witnessed this carelessness last week when the company introduced Tay, an Artificial Intelligence (AI) chatbot. Described by Microsoft as an experiment in “conversational understanding,” Tay was supposed to evolve and learn to engage people through “casual and playful conversation.” This sounds like a fantastic idea until one considers where these “casual and playful conversations” were occurring: Twitter.

Other than viewing the occasional Kanye West rant, I avoid indulging in Twitter—I don’t even have a Twitter account. And although I am by no means an expert on the dynamics of Twitter, I am certain of one fact: it is not where I would go “to learn to engage people through casual and playful conversation.” If you have ever visited the comments section of virtually any popular post on Twitter or another social media platform, you know that it is absolute chaos (to say the least); you will find some of the most bigoted and politically incorrect statements on the face of the Earth. Well, Tay has experienced this firsthand. Let it tell you all about it:

It started off innocent:

[Screenshot of Tay’s tweet]

It quickly entered an adolescent phase (what does “swagulated” even mean?):

[Screenshots of Tay’s tweets]

Very soon, though, it turned obscene; Tay became a neo-Nazi:

[Screenshot of Tay’s tweet]

To be fair, a large number of Tay’s tweets were simply cases of ordinary Twitter users asking it to repeat absurd statements, which it did. Anyone aware of this “repeat after me” feature could have predicted that scenario playing out on Twitter. What shocked me, though, is that Tay’s own original tweets started to become bigoted. In response to a tweet asking, “is Ricky Gervais an atheist?” Tay replied, “@TheBigBrebowski ricky gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.” That tweet has since been deleted. But digest this: Tay went through this entire evolution in less than 24 hours. Microsoft issued an apology and has temporarily suspended Tay in response. Before its deactivation, Tay had posted close to 100,000 tweets.

Although it is easy to dismiss the entire incident as comedic relief, I believe it raises serious questions. Technology is undeniably advancing at a rapid pace. Perhaps it is only a matter of time before human-like AI becomes part of our everyday lives.

Maybe if we wait long enough, Siri will turn into a person walking alongside us. But is this what we want? Even when a technology is designed with the best of intentions, it can quite easily be corrupted. Tay is proof of this.

As of now, a robot takeover of the world is just a plot for sci-fi movies, but we should be aware that perhaps some day it could become a reality. Let’s see how the future unfolds.
