[Image: person with headphones]

AI For Good: Bringing music to hearing-impaired people with Hear-AI

Submitted on Thursday, 13/06/2024

Dr. Andrew Hines, Dr. Alessandro Ragano, Dr. Kata Szita, Dr. Dan Barry, Dr. Davoud Shariat Panah, Dr. Carl Timothy Tolentino, Dr. Niall Murray

University College Dublin

Because music combines pitch, rhythm and other attributes into a complex signal, users of hearing-assistive devices cannot fully enjoy it. This can lead to social isolation and missed opportunities to learn an instrument or attend concerts.

Our project’s goal is to provide an AI tool that adapts music to a listener’s personal needs in real time using wearable devices, such as smart glasses or extended reality headsets.

First, we assemble a dataset of music-listening scenarios that combines sound, video and physiological metrics. Using this dataset, we develop and train an algorithm that can change the attributes of music, for example by modifying pitch content or reducing the volume of certain instruments.
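As an illustration only, and not the project’s actual model, the kind of transformation described above can be sketched with off-the-shelf audio tools. The snippet below assumes the librosa and soundfile Python libraries; the file names, the two-semitone shift and the attenuation factor are placeholder choices rather than Hear-AI parameters.

```python
import librosa
import soundfile as sf

# Load a music clip (librosa resamples to 22.05 kHz mono by default).
y, sr = librosa.load("clip.wav")

# Shift the pitch down by two semitones, moving content towards a
# frequency range a listener may find more comfortable.
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)

# Split the signal into harmonic (melodic) and percussive components,
# then attenuate the percussive part, a rough stand-in for
# "reducing the volume of certain instruments".
harmonic, percussive = librosa.effects.hpss(y_shifted)
y_out = harmonic + 0.4 * percussive

# Save the adapted clip.
sf.write("clip_adapted.wav", y_out, sr)
```

A real-time, personalised version of this idea, driven by the listener’s own responses rather than fixed settings, is what the trained algorithm is intended to provide.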

Once these steps are complete, we will test the algorithm on devices that can record and deliver sound in real time.

With this research we aim to have the widest possible impact on hard-of-hearing people across all demographics. The devices must be easy for anyone to use and subtle enough to be worn in public spaces without drawing undue attention.