AI Case Study
Starkey Hearing Technologies introduces a hearing aid that automatically adjusts to the acoustic environment using machine learning
Starkey Hearing Technologies has introduced machine learning capabilities into its Livio AI hearing aid. This enables audio translation from other languages in conjunction with a smartphone, as well as automatic adjustment for background and ambient noise to improve the user experience.
Healthcare Equipment And Supplies
Starkey Hearing Technologies' product, the Livio AI hearing aid, "uses a combination of directional microphones and machine learning algorithms to classify the wearer’s listening environment: chatting outside in a backyard surrounded by natural sounds, having a conversation in a loud restaurant, or listening to music at a concert. The hearing aid then adjusts to the best listening mode for the wearer’s acoustic conditions. The hearing aid can also translate between 27 languages. The language translation works in conjunction with a smartphone app. If an English speaker wearing the device says something to a Chinese speaker, the Livio AI system would translate the words and display them in Chinese characters on the English speaker’s smartphone screen. If the Chinese speaker said something in return, those words would be directly translated into spoken English in the ears of the hearing aid wearer. Starkey worked with several notable AI companies to integrate language translation. Beyond the Livio AI functions available at launch, Starkey plans to later add a medical alert system to detect falls among elderly wearers and automatically send text message alerts to emergency contacts."
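The environment classification described above can be illustrated with a minimal sketch. The feature choices (average energy and zero-crossing rate), the class labels, and the nearest-centroid classifier below are all illustrative assumptions; Starkey's actual features and algorithms are proprietary, and real systems would use richer features and trained models.

```python
import math

# Hypothetical feature vector: (average energy, zero-crossing rate).
# Real hearing aids use richer acoustic features; this is illustrative only.
def extract_features(samples):
    energy = sum(s * s for s in samples) / len(samples)
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    zcr = crossings / (len(samples) - 1)
    return (energy, zcr)

# Toy class centroids, as if learned offline from labeled recordings
# of each listening environment (values are invented for this sketch).
CENTROIDS = {
    "quiet_outdoors": (0.01, 0.05),
    "loud_restaurant": (0.30, 0.20),
    "music": (0.50, 0.40),
}

def classify_environment(samples):
    """Nearest-centroid classification: a traditional, low-compute
    machine learning approach that can run on-device."""
    f = extract_features(samples)
    return min(CENTROIDS, key=lambda c: math.dist(f, CENTROIDS[c]))
```

A nearest-centroid rule is one of the simplest "traditional" classifiers and matches the article's point that the on-board processor favors cheap algorithms over deep networks.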
Starkey Hearing Technologies is aiming to overcome the perception of hearing aids as a technology for older people and to appeal to younger tech users: "most of the 466 million people worldwide who live with disabling hearing loss do not use hearing aids because of the relatively high cost and social stigma."
"The artificial intelligence running under the hood of the Livio AI system mostly relies on traditional machine learning algorithms rather than the potentially more powerful deep learning algorithms. That’s in large part because the hearing aid’s onboard computing power remains limited, and Starkey didn’t want critical AI-boosted listening or other functions to rely upon Internet access to additional cloud computing. Still, features such as language translation do rely upon having Internet access and access to cloud-based deep learning systems that can perform the computing-intensive natural language processing."
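The split described above can be sketched as a simple routing layer: critical listening functions run entirely on the device, while compute-intensive translation is deferred to the cloud and degrades gracefully when offline. All function names, mode labels, and the cloud stub below are hypothetical, not Starkey's actual API.

```python
def adjust_listening_mode(environment):
    # Runs entirely on the device: must keep working without Internet access.
    modes = {
        "quiet_outdoors": "natural",
        "loud_restaurant": "speech_in_noise",
        "music": "music",
    }
    return modes.get(environment, "default")

def cloud_translate(text, target_lang):
    # Stand-in for a real cloud translation service (hypothetical).
    return f"[{target_lang}] {text}"

def translate_speech(text, target_lang, cloud_available):
    # Translation needs compute-intensive NLP, so it is routed to the
    # cloud; when connectivity is absent, only this feature is lost,
    # while core hearing functions remain unaffected.
    if not cloud_available:
        return None
    return cloud_translate(text, target_lang)
```

The design point this illustrates is the article's: anything safety- or hearing-critical stays local on limited hardware, and only optional features take a dependency on cloud-based deep learning.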