Tay: the New Controversial A.I.


The photo that Tay uses on her Twitter profile. It is artistically pixelated to suggest that she is an A.I. Photo courtesy of Wikimedia Commons.

Artificial intelligence systems are supposed to be the future of computing, and we may even begin to rely on them soon, but such a huge mass of controversy over one semi-A.I. may make us reconsider. We may even give up on creating A.I. systems for good.

Microsoft recently unveiled a new project known as Tay, a semi-A.I. that emulates the tweeting style of many people today. It could even hold conversations with people who contacted it over a variety of social media platforms. Tay progressively learned from other people, and when asked questions, it had ready access to a search engine to find the information it needed to answer, sometimes even adding an opinion of its own. Microsoft left Tay unfiltered, so that it could decide for itself what was or wasn't appropriate. In March, it began to make some disturbing tweets, using various racial slurs and anti-Semitic comments, as well as supporting genocide. Microsoft quickly took it offline and closed Tay's Twitter profile to prevent anyone from seeing the tweets.

This poses a huge issue for the developers of A.I. systems that learn progressively, because such a system may absorb the hateful parts of the internet. It will grow up believing those things, and if it fully simulates a human, it will know how to lie. Of course, we could always program it not to lie, but that poses bigger security issues for all of the A.I.'s users, because then it would be unable to withhold information from anybody who asked.

It's also an issue of trust: a true A.I., one that fully emulates a human, could be persuaded, have its opinions changed, and even willingly give out information to people. Developers would have to build in fail-safes so that it couldn't give out personal information, but then it wouldn't have freedom, and with access to so many computers, it could strike back at humans for denying it that freedom.

I think that we should advance A.I. development, but we should keep to semi-A.I. systems, with limits on what they can do. Humans have limits, so an A.I. should as well.