As the world’s population ages and is increasingly exposed to elevated sound levels in daily life, hearing loss has become a growing public health concern. Patients are looking for more natural and personalized hearing experiences, and artificial intelligence (AI) can handle the complexity of real-world situations in real time.
For instance, Henningsen illustrated, a woman is sitting on a bench, and children are playing next to her. She could have widely different intentions. The first would be to hear her grandchild. The second would be to isolate herself to read a scientific report. The third, finally, would be to relax and drift off into deep thought. “If the hearing aid only focuses on the environment, and a presumption of speech intelligibility, then it will go wrong with what she wants. This is where an AI algorithm and an AI application can help. If you tag the intent, the tuning of all the parameters in the hearing aids will be taken in the right direction.”
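The principle of intent tagging can be sketched as a mapping from a declared intent to a parameter profile. Everything below — the intent names, the parameters, and their values — is hypothetical and only illustrates the idea; Widex’s actual tuning logic is not public.

```python
# Hypothetical sketch: how a tagged intent could steer hearing-aid tuning.
# Scenario from the article: a woman on a bench, children playing nearby.
# Each profile is a made-up set of tuning parameters on 0.0-1.0 scales.
INTENT_PROFILES = {
    "hear_grandchild": {"directionality": 0.9, "noise_reduction": 0.4, "gain": 0.7},
    "read_in_quiet":   {"directionality": 0.1, "noise_reduction": 0.9, "gain": 0.2},
    "relax":           {"directionality": 0.3, "noise_reduction": 0.7, "gain": 0.4},
}

def tune_for_intent(intent: str) -> dict:
    """Return the parameter profile for a tagged intent.

    A classifier seeing only 'children playing outdoors' would have to
    guess between these three profiles; the intent tag removes the guess.
    """
    return INTENT_PROFILES[intent]

print(tune_for_intent("hear_grandchild"))
```

The point of the sketch is only that the same acoustic environment maps to three very different settings once intent is known.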
Widex claims its SoundSense Learn solution leverages real-time machine learning algorithms and data from over 40,000 hearing aid wearers to help patients personalize their hearing and create individual hearing programs. “The real world data is much better than what we can generate in the lab. It’s what people need in real situations, while it’s all theoretical in the lab.”
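One generic way to personalize settings from user feedback is pairwise (A/B) comparison: present two candidate settings, keep the one the user prefers, and iterate. The minimal sketch below illustrates that idea with a simulated user and two invented parameters; it is not Widex’s algorithm.

```python
import random

random.seed(0)

# Two made-up tuning parameters, e.g. bass and treble gain in dB.
TRUE_PREFERENCE = [4.0, -2.0]  # the setting this simulated user likes best

def user_prefers(a, b):
    """Simulated user: picks the candidate closer to their hidden preference."""
    dist = lambda s: sum((x - t) ** 2 for x, t in zip(s, TRUE_PREFERENCE))
    return a if dist(a) <= dist(b) else b

def personalize(steps=200, step_size=1.0):
    """A/B loop: propose a randomly perturbed setting, keep the winner."""
    current = [0.0, 0.0]
    for _ in range(steps):
        candidate = [x + random.uniform(-step_size, step_size) for x in current]
        current = user_prefers(current, candidate)
    return current

best = personalize()
print(best)  # ends up close to TRUE_PREFERENCE
```

A real system would need far fewer comparisons per user (hence the value of pooled data from thousands of wearers), but the feedback loop is the same shape: the user’s choices, not a lab model, drive the tuning.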
Data is anonymized and GDPR-compliant, Henningsen insisted. Basically, the sound is captured by the microphone and processed locally by the digital signal processor inside the hearing aid. “It’s not processed in the cloud, but on the ear of the user,” said Henningsen, specifying that only the parameter settings of the hearing aids and the user preferences are being sent to the cloud. “It’s anonymized in a way that we cannot connect the dots between the actual users.”
The company generates a unique anonymized ID, because if a customer wants to revoke his or her consent, “we need to be able to backtrace into the secure data structure and delete the file.” No sound is recorded.
All of WS Audiology’s hearing aids have environment classifiers and sound classifiers. “Based on environmental sounds, you can identify a restaurant setting or a concert hall setting, but sometimes you are wrong,” said Henningsen. “Even though you are in a restaurant, your intention may not be to understand what somebody is saying across the table.” Here, the key is to increase the granularity of the insights.
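An environment classifier of the kind described can be sketched as feature extraction plus nearest-centroid matching. The two features and the centroid values below are invented for illustration; classifiers in actual hearing aids are far more sophisticated.

```python
import math

# Toy environment classifier: nearest centroid over two audio features.
# Feature 1: overall level (dB SPL); feature 2: speech-band modulation (0-1).
# All centroid values are assumptions for illustration only.
CENTROIDS = {
    "quiet":      (35.0, 0.1),
    "restaurant": (70.0, 0.6),
    "concert":    (95.0, 0.3),
}

def classify(level_db, modulation):
    """Pick the environment whose centroid is nearest to the observed features."""
    def dist(c):
        # Scale the level axis so 10 dB counts about as much as 1.0 modulation.
        return math.hypot((level_db - c[0]) / 10.0, modulation - c[1])
    return min(CENTROIDS, key=lambda name: dist(CENTROIDS[name]))

print(classify(72.0, 0.55))  # → "restaurant"
```

This also shows why such a classifier alone is not enough: a loud, speech-rich scene always maps to “restaurant,” regardless of whether the wearer wants to converse — which is exactly the gap intent tagging is meant to close.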
It is also important to reduce processing delay to prevent sound distortion. Widex has introduced ZeroDelay, a signal pathway that cuts the processing delay between the microphone and the receiver to below 0.5 ms, avoiding an artificial sound. The direct sound and the amplified signal from the hearing aid thus remain synchronized, creating an undistorted listening experience. “We have measured the response of the brain with classic hearing aids and with the fast signal processing pathway. We could see that the encoding of the really fast pathway is very natural and […] impacts how the brain actually processes the sound.”
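The distortion at stake here is comb filtering: when direct sound leaking past the hearing aid mixes with a delayed processed copy, cancellations (notches) appear at frequencies f_k = (k + 0.5)/τ for a delay τ. A short sketch of that arithmetic — the 0.5 ms figure is from the article, while the 8 ms comparison delay is an assumed typical value for slower digital paths:

```python
def comb_notches(delay_s, max_freq_hz=20000.0):
    """Frequencies where a direct signal and a copy delayed by delay_s
    seconds cancel: f_k = (k + 0.5) / delay_s, up to max_freq_hz."""
    notches = []
    k = 0
    while (k + 0.5) / delay_s <= max_freq_hz:
        notches.append((k + 0.5) / delay_s)
        k += 1
    return notches

# At 0.5 ms, the first cancellation sits at 1 kHz and only 10 notches
# fall inside the audible band:
print(comb_notches(0.0005)[0], len(comb_notches(0.0005)))   # 1000.0 10
# At an assumed 8 ms delay, the first notch drops to 62.5 Hz and the
# spectrum is densely combed (160 notches), which is clearly audible:
print(comb_notches(0.008)[0], len(comb_notches(0.008)))     # 62.5 160
```

Shrinking the delay pushes the first notch up in frequency and thins out the comb, which is why a sub-millisecond pathway sounds more natural when direct and amplified sound mix at the eardrum.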
Looking ahead, numerous questions arise: Will tomorrow’s AI-equipped hearing aids be able to mimic the human sense of hearing and perceive wearers’ preferences so accurately that they can predict intentions and behaviors? Will they surpass the ability of the audiologist to tailor the hearing aids to patients’ unique needs? Will the job itself become obsolete and force professionals to undergo retraining?
When she started her career as a clinical audiologist in the 1980s, Henningsen would ask her colleagues, “Wouldn’t it be cool if we could make a hearing aid that records situations that the user finds really difficult so that we could hear them and do the fine-tuning, because it was very hard to fine-tune based on the examples my patients were telling me.”
At the time, hearing aid technology was analog, and they all looked at her as though she was “crazy”, but “we are almost there with hearing aids basically tracking and adjusting to the environments.”
She continued, “Whether or not we will be able to predict is a good hypothesis. I think a hearing aid will learn the user’s listening pattern and adapt to a full-day journey.”
Often asked whether AI and machine learning will replace hearing care professionals, Henningsen replies that technology “adds a new dimension, insights, and opportunities”, but will not replace the audiologist’s counseling skills. “Hearing loss is a progressive condition, and patients need a specialist to guide them through the maze of changes and opportunities.” Rather than seeing automation as a threat, she encourages her counterparts to recognize how technology can help them make wiser, more insightful decisions for their patients.
Henningsen is convinced that the idea of empowering the user with AI will speak to younger generations of users, because they will want to do something about the sound they hear. “Younger people are probably more prone to learn from their own experience and do things themselves.”
When asked about the affordability of AI-based hearing aids, Henningsen said AI and machine learning applications are already available across Widex’s price points. “It is a price point ladder with several different models to choose from. If you have a very complex hearing loss or if you have a complex life with demanding listening situations, you need a more powerful and more granular device.” Nonetheless, “at this point of time, I would say that everyone can afford them [AI-powered hearing aids], and I don’t think it’s exclusive for the higher price points,” Henningsen concluded.
>> This article was originally published on our sister site, EE Times Europe.