Google Offers a Pair of Apps to Help the Deaf Community

Two new assistive apps coming to Android provide audio boosting and near-real-time speech-to-text transcription for the deaf and hard of hearing.

Google’s newest apps won’t gin up snappy replies to your emails or attempt to organize your mess of photos. What they will do is provide much more critical services to a community of users who may need them the most.

Two new mobile apps being rolled out today, Live Transcribe and Sound Amplifier, are aimed at the 466 million people—more than 5 percent of the world’s population—who are deaf or hard of hearing. The Live Transcribe app uses Google’s cloud-based speech-to-text intelligence to offer text representations of spoken conversations as they’re happening, while Sound Amplifier relies on an Android-based dynamic audio processing effect to make speech and other sounds easier to hear.

During a demonstration with the press last month, a group of Google product managers showed how their presentations could be transcribed into text in near real time by Live Transcribe. In another corner of the room, Google had engineered a hearing loss simulator as part of the demo of Sound Amplifier. Testers slipped on a set of headphones while a Google employee cranked the simulator to reduce their hearing abilities. By using the new app, testers could swipe on a series of sliders to adjust volume, ambient noise, voice clarity, and the distribution of sound to left and right ears.

Google research scientist Dimitri Kanevsky, who has been deaf since age one, had a conversation with a colleague about an upcoming party while using Live Transcribe on his personal phone. The transcriptions appeared on his phone’s screen, but for the purposes of the demonstration, they were also cast onto a larger screen. While watching, we could see that not all of the wording was accurately transcribed (Kanevsky also has a thick accent). But the app was smart enough to pick up on the difference in meaning between the words “chili” and “chilly” when they were used in different contexts.

Kanevsky emphatically described how challenging and expensive it can be for deaf people to arrange for real-time transcriptions of conversations, both in personal environments and professional settings. This tool, he said, gives people much easier access to this kind of technology. Google also said it worked directly with Gallaudet University, a renowned private university in Washington, DC, for deaf and hard-of-hearing students, to get regular feedback during the development of the apps.

Google first demonstrated Sound Amplifier in May of last year at I/O, its annual developer conference. Now both Sound Amplifier and Live Transcribe are rolling out to Android users.

“We believe we’re opening up an entirely new space for deaf and hard-of-hearing users,” says Brian Kemler, one of Google’s product managers. “We’re not just building accessibility onto products. In this case we’re building for accessibility first.”

The capabilities in these new apps, however, won’t appear in Android as native functions, unlike other accessibility features built directly into settings on both Android and Apple’s iOS. One way to think about Sound Amplifier and Live Transcribe is that they’re less about helping a user navigate the phone itself and more about people using phones to navigate the world that exists around us when we’re not buried in our phones.

And while both will be available as free downloads from the Google Play app store, their initial rollout is fairly limited. Because of its technical requirements, Sound Amplifier can only run on phones loaded with Android 9 Pie. Live Transcribe comes preinstalled on Google’s Pixel 3 smartphones but is otherwise available only via a limited beta right now. (Also worth noting: Live Transcribe only works with an internet connection, which is needed to access the cloud-based transcription service; Sound Amplifier, on the other hand, runs locally on devices.)

Apple, Google’s only real competitor in mobile operating systems, has rolled out its own array of impressive features aimed at the blind and deaf communities, and most are built directly into Apple’s operating systems. Apple CEO Tim Cook has even called accessible technology a “human right.” These features include VoiceOver, the free text reader built into iOS and macOS; a variety of vibrations and LED light flashes that alert people to incoming calls; and customizable gestures through something called AssistiveTouch, which places a floating home button on your screen that can be programmed to perform different tasks. Some members of the deaf community even consider FaceTime to be one of the earliest and unintended implementations of iPhone accessibility features.

Roozbeh Jafari, an associate professor at Texas A&M University who researches embedded systems and signal processing, and who previously developed wearable tech to decode sign language, says he thinks Google’s approach with its new apps “is a pretty smart idea.” Standard hearing aids, he points out, often rely on button cell batteries and don’t have the same kind of processing power as our smartphones.

“What they’re trying to do is use edge computing and the state-of-the-art sensors that we already have,” Jafari says. “Sometimes you try to add a new system, but someone can’t afford adding even $50 to the device they already have. Meanwhile, smartphones are becoming so sophisticated, and are ubiquitous.”

And it’s easy to imagine a not-so-distant future, Jafari says, when accessibility apps like these are increasingly aware of a person’s needs and become self-adjusting. “[Our devices] already understand where we are. If you go to the Starbucks around the corner and you set noise canceling parameters, the next time you go there, the app would know. It’s not exactly rocket science,” Jafari says, “but it still needs to be executed well.”

Google’s Sound Amplifier app doesn’t do that now. It also doesn’t work in conjunction with Google Assistant, so you can’t use voice to launch the app or to control the sound levels. But Google’s Brian Kemler says these are features that the company has “absolutely considered for the future.” While it’s a challenge from a machine learning standpoint, it could potentially bring more assistive features to apps that are already, at their core, assistive.
