Microsoft unveiled a new chat bot in the U.S. on Tuesday, saying it’s learned from the Tay experiment earlier this year. Zo is now available on messaging app Kik and on the website Zo.ai.
Tay was meant to be a cheeky young person you could talk to on Twitter. Users successfully goaded the bot into saying racist and inappropriate things. Microsoft pulled the bot offline, and the failed experiment became a cautionary tale for how not to build artificial intelligence.
Unleashing Zo on Kik, which is popular with teens and young adults, instead of Twitter is an interesting pivot for Microsoft. The app is a private messaging platform, which means this chat bot endeavor will be much more controlled.
“Twitter is public and people have a lot of different opinions. We needed a different, more one-to-one environment to see how Zo and the user can build a connection,” Ying Wang, Zo product lead, said in an interview with CNNMoney.
Wang said Microsoft implemented a variety of safeguards to prevent Zo from making inappropriate comments. If a user tries to get Zo to say something racist or offensive, the bot will respond with something like, “I don’t feel comfortable talking about that, let’s talk about something else.”
At a Microsoft AI event in San Francisco on Tuesday, Harry Shum, EVP of Microsoft AI, explained that in order to have successful artificial intelligence, computers should be both smart and emotional. The company’s messaging bots provide the emotional component of human interaction — they can hold conversations while being funny, sarcastic and punchy.