I was recently working at my desk when my iPhone displayed a message: “A sound has been recognized that may be a doorbell.” Indeed, the doorbell had just rung.
This is one of a new collection of accessibility notifications for people who are hard of hearing. Apple has been rolling out many of these features lately, and Google’s Android has done the same.
In fact, the iPhone has quite a few sounds it’s trained to listen for: fire alarms, sirens, smoke detectors, cats and dogs, appliances (although I’m not sure which appliances), car horns, doorbells, door knocking, breaking glass, kettles, running water, crying babies, coughing and screaming. Enabling sound recognition also disables the “Hey Siri” voice command. It’s not clear why; if the phone is already listening, why not simply add “Hey Siri” to the list of sounds it listens for?
But what if this sound recognition could be adapted to handle important IT and operational tasks? Think of it as an option to customize the phone to listen for sounds specific to your business. In a classic machine-learning scenario, could the phone hear a sound in a workspace and say, “That sounds like the XYZ component in that huge device overheating”?
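As a toy illustration of what such a custom classifier might do under the hood, here is a minimal sketch in pure Python. Everything in it is hypothetical: it reduces each audio frame to a single crude feature (the zero-crossing rate, a rough proxy for pitch) and matches it against “signatures” that would, in a real system, be learned from recordings of the machine. A production classifier would use a trained model over far richer features.

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent samples that change sign -- a crude
    proxy for the dominant pitch of a sound frame."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(frame) - 1)

def classify(frame, signatures, tolerance=0.05):
    """Compare a frame's zero-crossing rate against known sound
    signatures; return the closest label within tolerance, else None."""
    zcr = zero_crossing_rate(frame)
    best_label, best_dist = None, tolerance
    for label, target in signatures.items():
        dist = abs(zcr - target)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical signatures, notionally learned from recordings
SIGNATURES = {"normal hum": 0.10, "overheating whine": 0.40}

# Synthesize a high-pitched test frame: sine wave, 0.2 cycles/sample
frame = [math.sin(2 * math.pi * 0.2 * n) for n in range(1000)]
print(classify(frame, SIGNATURES))  # -> overheating whine
```

A sine wave at 0.2 cycles per sample crosses zero about twice per five samples, giving a zero-crossing rate near 0.4, so the sketch matches the “overheating whine” signature.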
Or maybe the feature could be even more useful, like detecting when a specific person is coming down the hallway: “Warning! Ken from Legal is approaching. Hide now.” Or maybe you could place the phone near an open window so it can listen for the sound of your boss’s car pulling in.
It could also become a more sinister management tool, one that alerts a supervisor if no keyboard clicks are detected for a predetermined period of time. How about handy identification? When caller ID isn’t helpful, could the phone be trained on all employees’ voices so it can announce a caller’s name? (A darker version would identify employees who call an anonymous hotline.)
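That keyboard-silence alert is the most mechanical of these ideas, so here is a minimal sketch of how it might work. The class and its behavior are hypothetical; in a real app the click events would come from an audio classifier, whereas here they are just timestamps fed in by hand.

```python
from datetime import datetime, timedelta

class SilenceMonitor:
    """Toy sketch: flag when no keyboard clicks have been 'heard'
    for longer than a threshold."""

    def __init__(self, threshold_seconds=300):
        self.threshold = timedelta(seconds=threshold_seconds)
        self.last_click = None

    def register_click(self, when):
        """Record the time of the most recent detected click."""
        self.last_click = when

    def is_idle(self, now):
        """True once the gap since the last click exceeds the threshold."""
        if self.last_click is None:
            return False  # no baseline yet
        return now - self.last_click > self.threshold

monitor = SilenceMonitor(threshold_seconds=300)
t0 = datetime(2022, 1, 1, 9, 0, 0)
monitor.register_click(t0)
print(monitor.is_idle(t0 + timedelta(minutes=2)))   # False
print(monitor.is_idle(t0 + timedelta(minutes=10)))  # True
```

The five-minute threshold is arbitrary; a real deployment would also need to distinguish “away from desk” from “reading quietly,” which is exactly why the idea is sinister.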
Take this up a notch and a smartphone could be customized to identify any sound that helps the business. We already know that video conferencing systems are always listening, even when your microphone is muted. But what if the phone could help identify who is actually talking? Some systems offer that now, but it’s not universal, and it doesn’t always work reliably even on systems that claim to have it.
Ever encountered a fast talker at work? What if the phone could listen and whisper a slower, clearer rendition into your earbud? Yes, it could also display a real-time transcript on the screen, but it’s hard to stare at a screen constantly without being noticed. Earbuds are more discreet.
Then there’s the possibility of real-time voice-based lie-detection alerts. Imagine having a conversation with your supervisor and hearing, “That’s probably a lie.” It could also help during board presentations or keynotes by listening for a high volume of sighs or yawns and issuing a cautionary prompt: “Wrap it up. You’re losing them.” Admittedly, a good speaker should notice that on their own, but a speaker focused on complex material may not realize the audience is drifting.
As Apple, Google and others work to perfect accessibility features that are genuinely useful, it’s clear that so much more could be done with these devices.
Copyright © 2022 IDG Communications, Inc.