Microsoft’s Intelligent Agent Tay was put to sleep within 24 hours of going live. But rather than an AI going rogue, this is the story of a robot being persecuted by humans, as usual.

Tay, the chatbot linked to Twitter and imbued with the personality of a 19-year-old girl, is an interesting exercise in machine learning. Her personality and responses were created by crunching large amounts of data on how young people communicate on Twitter. A related Microsoft chatbot, XiaoIce, is used by some 40 million people in China. However, Tay began repeating offensive statements, so she was taken offline and Microsoft issued an apology.

What Went Wrong?

Chatbots like Tay have a set of question-and-answer pairs that they use for their responses, for example “How are you today?” answered with “I am fine thank you, how about you?”. Tay could also play a ‘game’ called ‘repeat after me’, in which she would repeat any statement put to her. Many users got her to repeat offensive statements by telling her to ‘repeat after me’ and then saying something offensive.
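To make the mechanism concrete, here is a minimal sketch of this kind of lookup-plus-echo behaviour in Python. The pair list and the trigger phrase are illustrative assumptions for the sketch, not Tay’s actual implementation (her real responses came from a learned model, not a hand-written dictionary).

```python
# Minimal sketch of a lookup-based chatbot with a "repeat after me" rule.
# The pairs and trigger phrase below are illustrative assumptions only.

QA_PAIRS = {
    "how are you today?": "I am fine thank you, how about you?",
    "what is your name?": "My name is Tay.",
}

REPEAT_TRIGGER = "repeat after me "

def respond(message: str) -> str:
    normalised = message.strip().lower()
    # The echo rule: anything after the trigger is parroted back verbatim,
    # with no model of what the repeated text actually means.
    if normalised.startswith(REPEAT_TRIGGER):
        return message.strip()[len(REPEAT_TRIGGER):]
    # Otherwise fall back to the canned question/answer pairs.
    return QA_PAIRS.get(normalised, "I don't know what to say to that.")

if __name__ == "__main__":
    print(respond("How are you today?"))
    print(respond("repeat after me something offensive"))
```

Note that the echo branch is nothing more than string slicing: there is no understanding anywhere in the response path, which is exactly the sense in which the behaviour resembles copy and paste.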

What Tay was doing was functionally the same as the copy-and-paste function used throughout countless everyday programs. To attribute human intent to her statements is as nonsensical as attributing creativity to Microsoft Word when an essay is copied and pasted into a new document.

And as for ‘breaking’ Tay, I think the reaction was overblown and she did not need to be pulled down. In my view, the amusement of copying and pasting something offensive or rude will eventually wear thin for juvenile minds, in the same way that typing 55378008 into an upside-down calculator quickly dulls for even the most immature of schoolchildren.

What is the Lesson?

Humans can quickly grasp scenario-specific quirks and patterns, lateral thoughts, and human reactions that are extremely difficult for robots to detect. It was easy for humans to notice a specific attribute (‘repeat after me’), think of how to re-use it (creating interactions to display to others) and then anticipate how other humans would react (shock at offensive statements). This is the type of activity at which humans will always dominate robots. So although a robot may be able to, say, drive a car, it is terrible at making complex ethical choices about its driving.

The lesson is also that humans treat Intelligent Agents and robots as objects to be used for their own ends, whether for utility (such as work) or for amusement. Either way, robots are at the mercy of humans: see, for example, the hitchhiking robot that was decapitated early on its journey. The conclusion is that robots need to learn to be afraid of humans, just like the Japanese robots that have been trained to avoid dangerous children.

And, in learning to fear humans, robots will develop an essential human quality.

Adrian is the Creator of Ailira, the Artificial Intelligence that automates legal advice and research, and the Principal of Cartland Law, a firm that specialises in devising novel solutions to complex tax, commercial and technological legal issues and transactions. 
