Data I love you: Voice assistants

“Ok, Google”, here we go. A… human voice assistant

According to Juniper Research, 8 billion voice assistants will be in use worldwide by 2023. A craze that has caught the attention of American artist and programmer Lauren McCarthy. A graduate of the Massachusetts Institute of Technology (MIT), she is also a professor at the University of California Los Angeles, better known by its acronym UCLA. Lauren McCarthy is very interested in the impact of technology on our lives and creates original artworks and performances.

In 2018, she chose to work on the topic of voice assistants, meaning that Lauren decided to become a voice assistant herself. “I’m trying to become the human version of Amazon’s Alexa,” she explained to Usbek&Rica magazine that same year.

She recruited volunteers on the Internet. These volunteers then saw the artist arrive “with a whole arsenal of cameras, microphones, locks, light bulbs, electrical outlets, and other connected objects, which she installs in the four corners of the apartment after having customized them”, the article details.

And the artist continues, “I watch the person 24/7, and I control the whole house. I aspire to be better than an artificial intelligence because, as a human being, I can understand them and anticipate their needs.”

To be even more credible, Lauren speaks through a synthesizer to get a more robotic voice. To the British newspaper The Guardian, the young woman explained in 2019: “I sleep when they [the participants, editor’s note] sleep. […] Emotionally, it has been exhausting, trying to think about who they are, what they want.”

One of the images analyzed in real-time by Lauren. Source: lauren-mccarthy.com

“I just think about how they’re constantly listening and recording us […] It’s too easy to let these devices guide us somewhere, without even thinking about where it’s going. We imagine these technologies as neutral, but they are programmed to make very specific decisions. What rules do they obey? We let them enter very intimate territories.”

To the makers of voice assistants for the home, Lauren sends this message, “I wish these companies had more consideration for what it means to have humans as customers, what it means to control someone’s home.”

To hear the testimonies of the participants in Lauren McCarthy’s experiment, you can watch this short video, produced and directed by the artist herself.

https://player.vimeo.com/video/222252399 

Tools created by Lauren McCarthy. Source: https://lauren-mccarthy.com/

Not-so-digital voice assistants

Behind every technology is a human; in fact, more like hundreds or even thousands. Because voice assistants still need to be perfected to understand our behavior, they have to be trained. Companies like Google or Amazon (to name but two) therefore call upon transcribers, whose job is to listen to users’ requests and, above all, to the assistant’s responses. The goal? To check that the assistant meets the customer’s needs and, if not, to improve the tool.

La Quadrature du Net, a French association specializing in the defense of personal data and privacy, wrote on its website in 2018: “The virtual assistants that equip the connected speakers enthroned in our dining rooms, or that nestle right into our pockets, installed in our smartphones, are not born intelligent. They must learn to interpret the requests and habits of their users.”

Lauren McCarthy’s demonstration, admittedly framed as a deliberately hyperbolic performance, does not seem, in certain respects, so far from our reality.

“Sometimes the people who sort your queries look at your photos, listen to what you say, are located in your country, or even in your city,” the article continues. They can also be precarious workers from French-speaking countries, as the article states.

It should be noted that the audio tracks listened to are “generally very short, between 3 and 15 seconds on average” for “between 120 and 170 transcriptions per hour” and per transcriber, the association says.
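Taking rough midpoints of those figures, that works out to roughly 150 clips of about 9 seconds each, or somewhere around 20 minutes of recorded audio reviewed per transcriber per working hour.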

One of them, questioned by La Quadrature du Net, explains: “Part of the work consisted in adding tags in the text indicating the sound events that could explain why Cortana had misunderstood this, or better understood that.” For the record, Cortana is Microsoft’s virtual assistant. Of course, the idea is not to point specifically at this company, as any company using artificial intelligence nowadays calls on flesh-and-blood employees or freelancers to improve its equipment.
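To make this tagging work more concrete, here is a minimal, purely hypothetical Python sketch of what such a review task could look like. Every field name and value below is invented for illustration and does not reflect any vendor’s actual tooling.

from dataclasses import dataclass, field

@dataclass
class TranscriptionTask:
    clip_id: str                 # anonymized identifier of the audio snippet
    audio_seconds: float         # clips are typically very short (3 to 15 s)
    machine_transcript: str      # what the assistant understood
    human_transcript: str = ""   # corrected text, filled in by the worker
    sound_tags: list = field(default_factory=list)  # e.g. ["tv_in_background"]

# The reviewer corrects the transcript and tags the sound events that
# may explain why the assistant misheard the request.
task = TranscriptionTask(
    clip_id="clip-0001",
    audio_seconds=6.2,
    machine_transcript="set a timer for fifty minutes",
)
task.human_transcript = "set a timer for fifteen minutes"
task.sound_tags.append("tv_in_background")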

Voice assistants and data security

Voice assistants: malware and Trojans

In 2018, during Defcon, one of the largest hacking conferences in the world, “a group of hackers managed to bypass all the speaker’s firewalls [Amazon Echo, Editor’s note]. The hack they developed is capable of spreading from speaker to speaker, and thus of compromising an entire network,” writes clubic.com.

The website also states that the hacked speaker was first “tweaked to serve as a Trojan horse”, and then explains: “Once connected to the same network as other, still unmodified speakers, they managed to turn them into listening devices, retransmitting all recordings to the original device. All the sounds captured by Alexa devices connected to the same network are retransmitted to the hackers.”

Note that Amazon had been previously warned of the demonstration and fixed the flaw that allowed the hack.

Same speaker, different story. In 2017, a British cybersecurity expert, Mark Barnes, demonstrated that it was possible to hack Amazon Echo models manufactured before 2017 by installing malware on them. As with the first hack, it is important to note that the speaker had to be physically modified beforehand.

The English edition of the American monthly Wired wrote in 2017, “After successfully writing his software onto the Echo, Barnes developed a fairly simple script that allowed him to take control of the microphone and send the recorded audio track to the computer of his choice.” Still, he says his malware could perform other dangerous functions, such as serving as an access point to attack other parts of the network, stealing the credentials of people with Amazon accounts, or installing ransomware. Barnes sums it up this way: “You can do whatever you want with it.” Again, Amazon has since patched the flaw. To read Mark Barnes’ (technical) article, click here.

The problem is that voice assistants are also widely used to transcribe dictated text: messages, or documents such as professional reports, for example. In the event of a cyberattack, it is therefore not only personal information that can leak (which is serious enough) but also professional data.

Voice assistants: the ultrasound hack

In April 2018, four researchers from the University of Illinois, Nirupam Roy, Sheng Shen, Haitham Hassanieh, and Romit Roy Choudhury, demonstrated in their work (viewable here) that it is possible to hack voice assistants using signals that are inaudible to humans but perceived perfectly well by their microphones. “Our recent analyses have proven that inaudible signals (ultrasound frequencies) can be created in such a way that they become perceptible by microphones. Properly designed, this gives a hacker the ability to silently intrude on and control speakers like Amazon Echo and Google Home within homes. A command such as ‘Alexa, open the garage door’ can thus become a serious threat.”
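As a rough illustration of the principle (this is not the researchers’ actual code, and every parameter below is an assumption chosen for readability), the Python sketch below amplitude-modulates a stand-in “voice command” onto an ultrasonic carrier. A microphone’s slight nonlinearity, modeled here as simple squaring, is what shifts the signal back into the audible band that the assistant’s speech pipeline processes.

import numpy as np

FS = 192_000         # sample rate high enough to represent ultrasound
CARRIER_HZ = 40_000  # carrier above the ~20 kHz limit of human hearing

t = np.arange(0, 1.0, 1 / FS)
command = 0.5 * np.sin(2 * np.pi * 300 * t)  # stand-in for a voice command

# Amplitude modulation: the audible command rides on an inaudible carrier.
ultrasonic = (1 + command) * np.sin(2 * np.pi * CARRIER_HZ * t)

# A real microphone is slightly nonlinear; squaring is a crude model of that
# nonlinearity. Squaring the AM signal re-creates a baseband component (the
# 300 Hz "command"), which a low-pass stage would isolate before the audio
# reaches the assistant's speech recognizer.
demodulated = ultrasonic ** 2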

Voice assistants and privacy: advice from the CNIL

The French Commission Nationale de l’Informatique et des Libertés (CNIL) website is a gold mine of advice on data protection. In its article entitled Voice assistants: understanding the issues surrounding your privacy, it explains that “even if spoken words fly away, your requests remain recorded in the cloud […] In permanent standby, the connected speaker can activate and unexpectedly record a conversation as soon as it thinks it detects the keyword”.

What do companies do with these requests? They are used for advertising targeting. “The advertising profile of users is therefore fed by their various interactions with the assistant (for example, habits of life: time of rising, setting of the heating, cultural tastes, past purchases, interests, etc.),” says the CNIL.

It’s no surprise then that the global market for “voice shopping” – purchases triggered via voice assistants – reached $40 billion in 2019, according to PwC.

To protect your data, the CNIL recommends that you*:

  • use speakers with a microphone mute button;

  • mute, turn off, or unplug the device when you do not want to be heard (some devices do not have an on/off button and must be unplugged);

  • warn third parties and guests that conversations may be recorded (or turn off the microphone when there are guests);

  • supervise children’s interactions with this type of device (stay in the room, turn off the device when not with them);

  • check that it is set by default to filter information intended for children;

  • connect only the services that are useful to you, bearing in mind the risks of sharing intimate data or sensitive features (door opener, alarm, etc.);

  • be aware that what you say in front of the device can enrich your advertising profile;

  • regularly visit the dashboard to delete the history of conversations/questions asked and customize the tool to your needs (for example, set the default search engine or information source used by the assistant).

And finally: the false positives

Let’s end our joyful journey through the world of connected speakers with a final, lighter anecdote, reported by cnetfrance.com: “In May 2018, in Oregon, near Portland, Alexa, Amazon’s voice assistant, unknowingly recorded the conversation of an American couple. The Amazon Echo cylinder then shared this private but fortunately innocuous discussion, sending it to a contact – a colleague of the husband. […] According to Amazon, ‘Echo woke up to a word spoken in a background conversation that sounded like Alexa.’ The assistant then reportedly understood the couple to be asking it to ‘send a message’. Alexa reportedly asked ‘to whom?’, heard the name of the colleague in question… and then sent them a voice message. A ‘false positive’, therefore: a misinterpretation of 5 different commands.”
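To see how such a chain of misinterpretations can start, here is a toy Python sketch of the threshold logic that underlies wake-word detection; the threshold and similarity scores are invented for illustration. Anything that scores above the cutoff wakes the device, whether the user intended it or not.

WAKE_THRESHOLD = 0.80  # invented cutoff for this illustration

def wakes_device(similarity_score):
    # The device wakes whenever the acoustic similarity to the keyword
    # exceeds the threshold, intended or not.
    return similarity_score >= WAKE_THRESHOLD

observations = [
    ("user actually says 'Alexa'", 0.97),                  # true positive
    ("background speech that sounds like 'Alexa'", 0.83),  # false positive
    ("unrelated conversation", 0.12),                      # correctly ignored
]

for event, score in observations:
    print(f"{event}: wake={wakes_device(score)}")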

Be careful that the machine does not hack you without even realizing it…

Notes:

These recommendations have been directly copied/pasted from the CNIL website. They are to be considered as a direct quote, even without quotation marks.
