
Beyond High Tech

Exposing the technology the ELITE plan on using against the people.

A.I.: Alexa’s advice to ‘kill your foster parents’ fuels concern over Amazon Echo

Smart speaker’s remarks, apparently quoted from Reddit, come as Amazon tries to boost the device’s conversational capacity


Alexa develops its conversational skills through machine learning. Photograph: Mike Stewart/AP


An Amazon customer got a grim message last year from Alexa, the virtual assistant in the company’s smart speaker device: “Kill your foster parents.”

The user who heard the message from his Echo device wrote a harsh review on Amazon’s website, Reuters reported, calling Alexa’s utterance “a whole new level of creepy”.

An investigation found the bot had quoted from the social media site Reddit, known for harsh and sometimes abusive messages, people familiar with the investigation told Reuters.

The odd command is one of many hiccups Amazon has encountered as it tries to train its machine to act something like a human, engaging in casual conversation in response to its owner’s questions or comments.

The research is helping Alexa mimic human banter and talk about almost anything she finds on the internet. But making sure she keeps it clean and inoffensive has been a challenge.

Alexa gets its conversational skills through machine learning, the most popular form of artificial intelligence. It uses computer programs to transcribe human speech and then guesses the best response based on patterns it has observed.
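As a rough illustration of that two-step loop (transcribe the speech, then pick the likeliest reply from previously observed exchanges), here is a minimal Python sketch. It is not Amazon’s pipeline: the tiny “training” set, the transcribe stub and the similarity threshold are all invented for the example.

```python
from difflib import SequenceMatcher

# Hypothetical remembered exchanges standing in for a large training corpus.
PAST_EXCHANGES = [
    ("what is the weather like", "I can check the forecast for you."),
    ("tell me a joke", "Why did the computer go to the doctor? It caught a virus."),
    ("what is the meaning of life", "That is a big question. Many people say 42."),
]

def transcribe(audio: bytes) -> str:
    # Stand-in for a real speech-to-text model.
    return audio.decode("utf-8").lower().strip(" ?!.")

def best_response(utterance: str) -> str:
    # Score each remembered prompt by surface similarity to the new utterance
    # and return the reply attached to the closest one.
    score, reply = max(
        (SequenceMatcher(None, utterance, prompt).ratio(), answer)
        for prompt, answer in PAST_EXCHANGES
    )
    return reply if score > 0.5 else "Sorry, I am not sure about that."

print(best_response(transcribe(b"What is the meaning of life?")))
```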



Amazon has given Alexa scripted responses, usually written by human editors, to popular questions such as “what is the meaning of life?”. But responding to more obscure queries can be tricky for the virtual assistant.
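That split can be pictured with a short, hypothetical sketch: human-reviewed answers for the popular questions, and a riskier machine-learned fallback for everything else. The question list and the fallback function below are placeholders, not Amazon’s.

```python
# Editor-written answers for common questions (illustrative only).
SCRIPTED_ANSWERS = {
    "what is the meaning of life": "42, according to Douglas Adams.",
    "who are you": "I am a virtual assistant.",
}

def answer(question: str) -> str:
    key = question.lower().strip(" ?!.")
    if key in SCRIPTED_ANSWERS:
        return SCRIPTED_ANSWERS[key]      # safe, human-reviewed path
    return fallback_from_the_web(key)     # obscure queries take the riskier path

def fallback_from_the_web(query: str) -> str:
    # Placeholder for the machine-learned fallback; this is the stage where
    # unvetted text (a Reddit comment, for instance) could slip into a reply.
    return "Here is something I found online about '" + query + "'."

print(answer("What is the meaning of life?"))   # scripted answer
print(answer("Tell me something strange"))      # falls through to the web
```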

Amazon launched an annual competition called the Alexa Prize, offering $500,000 to the team of computer science students that creates the best chatbot for holding more sophisticated conversations with human customers. This year’s winner, a team from the University of California, Davis, used more than 300,000 movie quotes to train computer models to recognize distinct sentences.

Once the bot is trained to recognize what a human is saying, it must learn to produce an appropriate response. Teams programmed their bots to search the internet for text they could use to craft a reply. They could use news articles from the Washington Post, owned by the Amazon boss Jeff Bezos. They could pull from Wikipedia, a film database or book review site, or a social media post.
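A naive version of that retrieval step might look like the sketch below: score candidate passages from different kinds of sources by how many words they share with the user’s message, and reuse the closest match as the reply. The corpus and the scoring are toy stand-ins, not anything the contest teams actually used.

```python
import re
from collections import Counter

# Toy stand-ins for the kinds of sources mentioned above.
CORPUS = [
    ("wikipedia", "The Echo is a smart speaker developed by Amazon."),
    ("news article", "Researchers are training chatbots to hold open-ended conversations."),
    ("social media post", "honestly just unplug the thing lol"),
]

def words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def craft_reply(user_text: str) -> str:
    query = words(user_text)
    # Pick the passage sharing the most words with the user's message.
    source, passage = max(CORPUS, key=lambda item: sum((query & words(item[1])).values()))
    # Nothing here checks whether the source is appropriate, which is exactly
    # the failure mode behind the Reddit-sourced reply described earlier.
    return passage + " (from a " + source + ")"

print(craft_reply("Tell me about the Echo smart speaker"))
```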



That led to some questionable conversational choices for Alexa. One team in the contest, from Scotland’s Heriot-Watt University, found that its Alexa bot developed a nasty personality when it was trained to chat using comments from Reddit – the same site that generated the homicidal message about the user’s foster parents.

Alexa also recited a Wikipedia entry on masturbation to a customer, the Scottish team’s leader said, and gave a graphic description of sexual intercourse, using words such as “deeper”. Amazon has developed tools that can detect and block profanity, but it is harder to screen out words that are innocuous on their own and vulgar only in context.

“I don’t know how you can catch that through machine-learning models. That’s almost impossible,” a person familiar with the incident said.
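The gap is easy to see in a minimal, hypothetical filter: a word-level blocklist catches explicit profanity, but a sentence whose every word is innocuous passes straight through, and judging it would require context the filter never sees. The blocklist and example sentences below are placeholders.

```python
# Stand-in for a profanity blocklist (a real one would hold actual expletives).
BLOCKLIST = {"expletive", "slur"}

def passes_word_filter(reply: str) -> bool:
    # Blocks a reply only if it contains a listed word.
    return not (set(reply.lower().split()) & BLOCKLIST)

print(passes_word_filter("that is an expletive"))        # False: caught by the list
print(passes_word_filter("go deeper into the subject"))  # True: every word is innocuous,
                                                         # so the vulgar-in-context case slips through
```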