Tay has been described as a social and cultural experiment as much as a technical one.
"In less than 24 hours after her arrival on Twitter, Tay gained more than 50,000 followers, and produced nearly 100,000 tweets." However, Tay started mimicking her followers' discriminatory, socially inappropriate and offensive comments. Louis Rosenberg, the founder of Unanimous AI, said that "like all chat bots, Tay has no idea what it's saying...it has no idea if it's saying something offensive, or nonsensical, or profound." "When Tay started training on patterns that were input by trolls online, it started using those patterns," said Rosenberg. "This is really no different than a parrot in a seedy bar picking up bad words and repeating them back without knowing what they really mean." After taking Tay offline, Microsoft announced it would be "making adjustments." According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."
Tay learned from data generated by online users, such as tweets, conversations and posts. Online trolls fed the chatbot patterns that included discriminatory, socially inappropriate and offensive comments, which she replicated, prompting Microsoft to take her down.
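The failure mode described above can be illustrated with a toy sketch. This is not Microsoft's actual architecture (which is unpublished here); it is a hypothetical, minimal bot that learns word-transition patterns from every incoming message and generates replies by sampling them. Because it attaches no meaning to the text, "poisoned" input is absorbed exactly as readily as benign input:

```python
import random
from collections import defaultdict


class NaiveChatbot:
    """Toy bot that learns word transitions from every message it sees.

    It has no notion of meaning or appropriateness, so it repeats
    whatever patterns its users feed it -- the parrot-in-a-bar problem.
    """

    def __init__(self, seed=0):
        # word -> list of words observed immediately after it
        self.transitions = defaultdict(list)
        self.rng = random.Random(seed)

    def learn(self, message):
        """Record every adjacent word pair in the message."""
        words = message.lower().split()
        for current, following in zip(words, words[1:]):
            self.transitions[current].append(following)

    def reply(self, prompt_word, max_words=10):
        """Generate a reply by walking the learned transitions."""
        word = prompt_word.lower()
        out = [word]
        for _ in range(max_words - 1):
            options = self.transitions.get(word)
            if not options:
                break
            word = self.rng.choice(options)
            out.append(word)
        return " ".join(out)


bot = NaiveChatbot()
bot.learn("humans are wonderful")   # benign training input
bot.learn("humans are terrible")    # "troll" input, absorbed identically
print(bot.reply("humans"))
```

The bot cannot distinguish the two training messages: its reply to "humans" is equally likely to echo either one, which is why a coordinated group of users can steer such a system's output.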