The “willing suspension of disbelief” is the idea that we willingly suspend judgment about the implausibility of a narrative for the sake of our own enjoyment.
We do it all the time. Two-dimensional video on our screens is smaller than life and flat and not in real time, but we ignore those facts and immerse ourselves in the stories as if they were real. While we watch a movie or a video, we don’t yell to the characters on the screen “Duck!” or “Look out!” when something is about to happen to them. We just passively enjoy the show.
The willing suspension of our privacy
We apply similar concepts to our online lives. Most of us are willing to give up our data (location, viewing, purchasing or search history) for our online enjoyment. Industry experts call this the “willing suspension of our privacy” because if you spent a moment considering what your data was actually being used for, you might well refuse to let it happen.
The willing suspension of our agency
Which brings us to the next level: the willing suspension of our agency for our own enjoyment. This is past the point of giving up a “reasonable amount” of data or privacy to optimise the capabilities of our digital assistants. Suspension of our agency exposes our normally unmonitored physical activity, innocent mumblings and sequestered conversations.
Some people believe this is happening with Alexa, Google Home, Siri and other virtual assistants and IoT systems. It may well be…
First, let’s give it a name
Since we are discussing a combination of automatic speech recognition (ASR) and natural language understanding (NLU) engines that enable a system to instantly recognise and respond to voice requests, for this article, let’s call the interface an intelligent voice control system (IVCS).
How it works
You activate most commercial IVCSs with a “wake word.” For an Amazon Echo or Echo Dot, you can choose one of three possible wake words: “Alexa” (the default), “Amazon” or “Echo.” Unless you turn off the microphones (the Echo has seven) and activate the device with a mechanical button or remote control instead, Alexa Voice Service (the system that powers the Echo and Alexa) and other IVCSs are always listening for their wake word.
In Amazon’s case, it keeps approximately 60 seconds of audio in memory for pre-processing so the responses can be situationally aware and “instant.” Amazon says the listening is done locally, on the device, not in the cloud. So technically, the audio does not leave the premises.
Always listening does not mean always transmitting!
Yes, an IVCS is always listening AND recording. Which raises the question, “What does it do with the recordings it does not use?” In Amazon’s case, the official answer is that they are erased as they are replaced with the most current 60 seconds. So while the system locally stores approximately 60 seconds of audio preceding your wake word, it transmits only a “fraction of a second” of audio preceding your wake word, plus your actual query and the system’s response.
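The rolling-buffer behaviour described above can be sketched as a simple ring buffer: audio is continuously overwritten until a wake word is detected, at which point only a small slice preceding the wake word would leave the device. This is an illustrative Python sketch, not Amazon's actual implementation; the frame size, buffer length and `on_wake_word` method are assumptions made for the example.

```python
from collections import deque

# Illustrative parameters (assumptions, not Amazon's real values):
# one frame stands in for one second of audio; the buffer holds ~60 seconds.
BUFFER_SECONDS = 60
PRE_ROLL_FRAMES = 1  # the "fraction of a second" retained before the wake word

class RollingAudioBuffer:
    """Keeps only the most recent BUFFER_SECONDS of audio; older frames
    are silently discarded, mirroring the 'erased as they are replaced'
    behaviour described above."""

    def __init__(self):
        self.frames = deque(maxlen=BUFFER_SECONDS)

    def feed(self, frame):
        self.frames.append(frame)  # the oldest frame drops off automatically

    def on_wake_word(self):
        # Only a tiny pre-roll slice would be transmitted, not the full buffer.
        return list(self.frames)[-PRE_ROLL_FRAMES:]

buf = RollingAudioBuffer()
for t in range(200):            # 200 seconds of ambient audio
    buf.feed(f"frame-{t}")

print(len(buf.frames))          # -> 60: only the last minute is ever held
print(buf.on_wake_word())       # -> ['frame-199']: the slice that would be sent
```

The key design point the sketch illustrates is that a fixed-length buffer makes discarding automatic: nothing older than the buffer window can be transmitted, because it no longer exists.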
If you want to access this information, a feed can be found on the Home screen of your Alexa app. Alternatively, a full audio recording of each interaction is available via the Alexa app: navigate to Settings, scroll to the bottom of the page and select ‘History.’
What happens to the approximately 60 seconds of audio recording preceding a wake word? The one that has a recording of the TV soundtrack, footsteps, the loud argument in the next room, the gunshot, etc.?
Amazon says it is erased and replaced with the next 60 seconds of audio. Skeptics say that if a wake word is detected, the preceding 60-odd seconds of audio are put in a database for further IVCS training, and that in theory this data could be made available to law enforcement via the appropriate channels, just like browser history or communications data records.
But does it actually exist? Amazon says no. As for other systems? We’ll have to wait and see…
Data is more powerful in the presence of other data.
It is an immutable law of 21st-century living, which in this case means that the most serious threat to each of us is the profile that can be created with the willing suspension of our agency.
Most people have no idea how much information about them is available for sale. The willing suspension of agency has the potential to take us right up to the line that separates where we are now from a Big Brother future.
So… is Alexa dangerous?
Alexa is NOT dangerous. The data it collects is NOT dangerous. Nothing about an Amazon Echo is dangerous.
It’s awesome. A number of our team have them around our homes; they make interaction effortless and are a fascinating (if occasionally frustrating) source of entertainment for children… especially if one of them is named Alex!
It’s an amazing controller, great alarm clock, spectacular music and Amazon Prime interface, an exceptional news and weather reporter, and it does lots of other stuff you can look up online.
But… The world will be a very different place when Google, Amazon, Microsoft, Apple and other AI-empowered players have assembled 1st-party profile data that includes our agency.
It will make what they do with our current behavioral profiles look like primitive data processing.
If you are keen to understand Alexa and other smart home devices, you are in luck!
We at Blue Lights Digital offer exciting opportunities to learn from our industry experts about the subject, along with interactive and hands-on experiences with the devices. Just contact us directly to find out how you can get involved with our array of leading digital investigation support services and training courses. We look forward to working with you!
Full story courtesy of ShellyPalmer