It’s tough being born a teenager. Yesterday, Microsoft launched its new artificial intelligence (AI) chatbot, named Tay and envisioned as a teenage girl, and she had a very rough first day. She was immediately besieged by excited techies, the curious and the haters. Within a few hours, she was drawn into tens of thousands of exchanges. In the process, racists, anti-Semites, misogynists and other haters manipulated her into repeating some highly offensive statements. Microsoft may have taught Tay to converse and to retweet, but it failed to recognize that she would also need to engage in some critical thinking, and to know when someone else was saying something offensive.
Microsoft probably should have anticipated the problems Tay might encounter. However, Microsoft did not program Tay to spew hate; it was clearly the Internet’s dark forces who came out to meet Tay and did their damage.
Microsoft and Tay are not alone in facing this type of problem. Every major Internet platform, interactive app and online business has experienced something similar at some point. These hiccups are all learning experiences, and in this case Tay taught Microsoft, and all of us, a lesson. We need to be more aware of how quickly things can get ugly on the Internet, how important critical thinking is for all tech users, and how, despite our best efforts, the worst bigots and haters online are never far from the surface.
Innovation, experimentation and adventure in technology are necessary and important, and should never be discouraged. Tay’s first exposure to people didn’t go as well as it might have, but we hope everyone has learned something along the way. Tay 2.0 should be very interesting.