Attackers could be listening to what you type: Smartphones can pick up and translate your keystrokes

Summary: By changing passwords and scanning for keyloggers you may think your personal information is safe. Think again. Smartphones can pick up what you type on a standard keyboard. Researchers report acoustic signals produced from typing on a keyboard can be picked up by smartphones. Those sounds can be processed to reveal exactly what you typed.

Source: SMU

You likely know to avoid suspicious emails to keep hackers from gleaning personal information from your computer. But a new study from SMU (Southern Methodist University) suggests that it’s possible to access your information in a much subtler way: by using a nearby smartphone to intercept the sound of your typing.

Researchers from SMU’s Darwin Deason Institute for Cybersecurity found that acoustic signals, or sound waves, produced when we type on a computer keyboard can successfully be picked up by a smartphone. The sounds intercepted by the phone can then be processed, allowing a skilled hacker to decipher which keys were struck and what they were typing.

The researchers were able to decode much of what was being typed using common keyboards and smartphones – even in a noisy conference room filled with the sounds of other people typing and having conversations.

“We were able to pick up what people are typing at a 41 percent word accuracy rate. And we can extend that out – above 41 percent – if we look at, say, the top 10 words of what we think it might be,” said Eric C. Larson, one of the two lead authors and an assistant professor in SMU Lyle School’s Department of Computer Science.
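The 41 percent figure, and the lift Larson describes from widening to the "top 10 words of what we think it might be," is top-k accuracy: a prediction counts as correct if the typed word appears anywhere in the model's k best guesses. A minimal sketch of that metric (the decoder output and words below are hypothetical, not data from the SMU study):

```python
def top_k_accuracy(predictions, truths, k=1):
    """Fraction of words whose true label appears in the top-k guesses.

    `predictions` is a list of ranked candidate lists (best guess first);
    `truths` is the list of words actually typed.
    """
    hits = sum(1 for ranked, word in zip(predictions, truths) if word in ranked[:k])
    return hits / len(truths)

# Toy example: three typed words, ranked guesses from a hypothetical decoder.
preds = [["password", "passport"], ["hello", "cello"], ["world", "word"]]
typed = ["password", "cello", "word"]

print(top_k_accuracy(preds, typed, k=1))  # only the first word is a top-1 hit
print(top_k_accuracy(preds, typed, k=2))  # widening to top-2 catches all three
```

Widening k trades precision for recall, which is why an attacker content with a shortlist of candidate words does much better than the strict per-word rate suggests.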

The study was published in the June edition of the journal Interactive, Mobile, Wearable and Ubiquitous Technologies. Co-authors of the study are Tyler Giallanza, Travis Siems, Elena Sharp, Erik Gabrielsen and Ian Johnson – all current or former students at the Deason Institute.

It might take only a couple of seconds to obtain information on what you’re typing, noted lead author Mitch Thornton, director of SMU’s Deason Institute and professor of electrical and computer engineering.

“Based on what we found, I think smartphone makers are going to have to go back to the drawing board and make sure they are enhancing the privacy with which people have access to these sensors in a smartphone,” Larson said.

SMU Simulated a Noisy Conference Room, But Typing Could Still Be Intercepted

The researchers wanted to create a scenario that would mimic what might happen in real life. So they arranged several people in a conference room, talking to each other and taking notes on a laptop. Placed on the same table as the laptop were as many as eight mobile phones, kept anywhere from three inches to several feet away from the computer, Thornton said.

Study participants were not given a script of what to say when they were talking, and were allowed to use shorthand or full sentences when typing. They were also allowed to either correct typewritten errors or leave them, as they saw fit.

“We were looking at security holes that might exist when you have these ‘always-on’ sensing devices – that being your smartphone,” Larson said.

“We wanted to understand if what you’re typing on your laptop, or any keyboard for that matter, could be sensed by just those mobile phones that are sitting on the same table.”

The answer was a definite, “Yes.”

But just how does it work?

“There are many kinds of sensors in smartphones that cause the phone to know its orientation and to detect when it is sitting still on a table or being carried in someone’s pocket. Some sensors require the user to give permission to turn them on, but many of them are always turned on,” Thornton explained. “We used sensors that are always turned on, so all we had to do was develop a new app that processed the sensor output to predict the key that was pressed by a typist.”
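The published system feeds audio and motion-sensor data into neural networks, but the first step of any such pipeline is deciding when a keystroke happened in the sensor stream. A toy sketch of that segmentation step, assuming a simple short-time energy threshold rather than the authors' actual detection method:

```python
def detect_keystrokes(samples, window=4, threshold=0.5):
    """Return start indices of windows whose mean absolute amplitude
    exceeds `threshold` -- a crude stand-in for keystroke onset detection.
    After a trigger, the scan skips past the window so one tap yields one event.
    """
    events, i = [], 0
    while i + window <= len(samples):
        energy = sum(abs(s) for s in samples[i:i + window]) / window
        if energy > threshold:
            events.append(i)
            i += window  # jump past this burst
        else:
            i += 1
    return events

# Quiet background with two loud bursts simulating two key taps.
signal = ([0.01] * 10 + [0.9, 0.8, 0.7, 0.9] +
          [0.02] * 10 + [0.8, 0.9, 0.85, 0.7] + [0.01] * 5)
print(detect_keystrokes(signal))  # two events detected
```

Once a keystroke is isolated, its acoustic features can be handed to a classifier that guesses which key produced it; the real system also fuses readings across several phones on the table.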

[Image: a man typing on a laptop with a smartphone next to him. The image is in the public domain.]

There are some caveats, though.

“An attacker would need to know the material type of the table,” Larson said, because different tables create different sound waves when you type. For instance, a wooden table like the one used in this study sounds different from a metal tabletop.

Larson said, “An attacker would also need a way of knowing there are multiple phones on the table and how to sample from them.”

A successful interception of this sort could potentially be very scary, Thornton noted, because “there’s no way to know if you’re being hacked this way.”

The Deason Institute is part of SMU’s Lyle School of Engineering, and its mission is to advance the science, policy, application and education of cybersecurity through basic and problem-driven, interdisciplinary research.

About this cybersecurity research article

Media Contact: Press Office – SMU

Original Research: Closed access
“Keyboard Snooping from Mobile Phone Arrays with Mixed Convolutional and Recurrent Neural Networks”. Eric C. Larson et al.
Interactive, Mobile, Wearable and Ubiquitous Technologies doi:10.1145/3328916

Abstract

Keyboard Snooping from Mobile Phone Arrays with Mixed Convolutional and Recurrent Neural Networks

The ubiquity of modern smartphones, because they are equipped with a wide range of sensors, poses a potential security risk—malicious actors could utilize these sensors to detect private information such as the keystrokes a user enters on a nearby keyboard. Existing studies have examined the ability of phones to predict typing on a nearby keyboard but are limited by the realism of collected typing data, the expressiveness of employed prediction models, and are typically conducted in a relatively noise-free environment. We investigate the capability of mobile phone sensor arrays (using audio and motion sensor data) for classifying keystrokes that occur on a keyboard in proximity to phones around a table, as would be common in a meeting. We develop a system of mixed convolutional and recurrent neural networks and deploy the system in a human subjects experiment with 20 users typing naturally while talking. Using leave-one-user-out cross validation, we find that mobile phone arrays have the ability to detect 41.8% of keystrokes and 27% of typed words correctly in such a noisy environment—even without user specific training. To investigate the potential threat of this attack, we further developed the machine learning models into a realtime system capable of discerning keystrokes from an array of mobile phones and evaluated the system’s ability with a single user typing in varying conditions. We conclude that, in order to launch a successful attack, the attacker would need advanced knowledge of the table from which a user types, and the style of keyboard on which a user types. These constraints greatly limit the feasibility of such an attack to highly capable attackers and we therefore conclude threat level of this attack to be low, but non-zero.
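The leave-one-user-out cross validation the abstract mentions means each participant's data is held out in turn while the model trains on everyone else, so the reported 41.8% keystroke accuracy reflects typists the model never saw during training. A minimal sketch of such a split (the data tuples below are hypothetical):

```python
def leave_one_user_out(samples):
    """Yield (held_out_user, train, test) splits where every sample from
    one user is withheld in turn. `samples` is a list of
    (user_id, features, label) tuples.
    """
    users = sorted({u for u, _, _ in samples})
    for held_out in users:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Toy dataset: three users, four labeled keystroke samples.
data = [("alice", [0.1], "a"), ("alice", [0.2], "b"),
        ("bob",   [0.3], "a"), ("carol", [0.4], "c")]

for user, train, test in leave_one_user_out(data):
    print(user, len(train), len(test))
```

Evaluating this way matters for the threat model: it shows the attack does not require recordings of the specific victim's typing to train on.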
