Microsoft’s new artificial intelligence ‘teen girl,’ Tay, launched on several social platforms and was quickly corrupted by the internet. If you haven’t heard of it, Tay is a machine learning project created by Microsoft that was supposed to mimic the personality of a nineteen-year-old girl.
It’s essentially an instant messaging chatbot with a bit more smarts built in: AI technology gives it the ability to learn from the conversations it has with people. That’s where the corruption comes into play.
As surprising as it may sound, the company didn’t have the foresight to keep Tay from learning inappropriate responses. Tay ended up sending out racial slurs, denying the Holocaust, expressing support for genocide, and posting many other controversial statements. Microsoft eventually deactivated Tay. The company told TechCrunch that once it discovered a coordinated effort to make the AI project say inappropriate things, it took the program offline to make adjustments. Seasoned internet users among us are none too surprised by this unfortunate turn of events.
If you don’t program in fail-safes, the internet is going to do its worst, and it did. In fact, Godwin’s law says that the longer an online discussion goes on, the more likely it is that someone will compare something to Hitler or the Nazis. As a writer for TechCrunch put it, while technology is neither good nor evil, engineers have a responsibility to make sure it’s not designed in a way that will reflect back the worst of humanity. You can’t skip the part about teaching it what not to say.
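To make the idea of a fail-safe concrete, here is a minimal sketch of the kind of safeguard the article argues was missing. This is purely illustrative and not Microsoft’s actual system: the `BLOCKLIST` terms, the `is_safe` check, and the `ChatBot` class are all hypothetical. The point is simply that a learning chatbot can screen input before it is allowed to “learn” from it.

```python
# Hypothetical sketch of a fail-safe for a learning chatbot:
# screen each message against a blocklist before the bot is
# allowed to learn from (and later repeat) it.

# Placeholder terms; a real deployment would use a far more
# sophisticated content-moderation system than a word list.
BLOCKLIST = {"nazi", "hitler", "genocide"}

def is_safe(message: str) -> bool:
    """Return False if the message contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return BLOCKLIST.isdisjoint(words)

class ChatBot:
    def __init__(self):
        self.memory = []  # phrases the bot has "learned" from users

    def learn(self, message: str):
        # The fail-safe: refuse to learn from flagged input.
        if is_safe(message):
            self.memory.append(message)

bot = ChatBot()
bot.learn("I love puppies")           # learned
bot.learn("something praising hitler")  # filtered out, never learned
```

A simple word filter like this would not have fully protected Tay (coordinated trolls find workarounds), but it illustrates the baseline the article describes: the learning loop itself needs a gate, not just the output.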