Enabling Next-Generation AI Hearing Aids

Although the loss of audibility associated with age-related hearing loss is relatively easy to address with the appropriate frequency-gain amplification used in today's hearing aids, difficulty hearing in noisy environments is not. Existing algorithms deployed in hearing aids cannot focus on the sound source of interest when competing sounds are similar in character, proximity, or intensity (e.g., focusing on your dinner companion rather than the diners at a neighboring table). Audio source separation technology is designed to isolate a specific source in a complex audio scene, like a cocktail party.

In this project, we are developing and deploying cutting-edge audio source separation on hearing-assistive devices, with the goal of amplifying only the desired sound in a complex scene. We are creating new machine learning algorithms and optimizations designed for real-time audio processing in resource-constrained computing environments like smartphones and hearing aids. We will develop new methods that let end users personalize deep source separation models to their hearing and health needs, and we will create system designs, including innovative hardware prototypes and interactive software interfaces, to support a broad set of audio-centric machine learning computing scenarios.
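To give a flavor of the underlying technique, the sketch below shows a common mask-based formulation of source separation: a small neural network estimates a time-frequency mask for the target source and applies it to the mixture's spectrogram before reconstructing the waveform. This is a minimal illustration only, not the project's actual models; the network (here a hypothetical `TinyMaskNet`), FFT size, and hop length are illustrative placeholders for a low-latency, resource-constrained configuration.

```python
# Minimal sketch of mask-based source separation (illustrative, not the
# project's actual system): estimate a soft mask for the target source
# and apply it to the mixture spectrogram.
import torch
import torch.nn as nn

class TinyMaskNet(nn.Module):
    """Small recurrent mask estimator, sized for constrained hardware."""
    def __init__(self, n_freq=257, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_freq, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_freq)

    def forward(self, mag):                  # mag: (batch, frames, n_freq)
        h, _ = self.rnn(mag)
        return torch.sigmoid(self.proj(h))   # soft mask in [0, 1]

def separate(mixture, model, n_fft=512, hop=128):
    """Estimate the target source from a mono mixture waveform."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(mixture, n_fft, hop_length=hop, window=window,
                      return_complex=True)    # (n_freq, frames)
    mag = spec.abs().T.unsqueeze(0)           # (1, frames, n_freq)
    mask = model(mag).squeeze(0).T            # (n_freq, frames)
    masked = spec * mask                      # keep target, suppress the rest
    return torch.istft(masked, n_fft, hop_length=hop, window=window,
                       length=mixture.shape[-1])

model = TinyMaskNet()
mixture = torch.randn(16000)        # 1 s of audio at 16 kHz (stand-in input)
target = separate(mixture, model)   # estimated waveform of the desired source
```

Deploying a model like this on a hearing aid adds constraints the sketch ignores, such as strict per-frame latency budgets and limited memory and power, which is where the project's systems and hardware work comes in.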

This project is in collaboration with Rujia Wang at IIT, and Pam Souza, Bryan Pardo, and Prem Seetharaman at Northwestern University.

Hussain Khajanchi
Undergraduate Student (REU)
Iris Uwizeyimana
Undergraduate Researcher

Prospective Grad Student at the University of Toronto

Kyle C. Hale
Assistant Professor of Computer Science

Hale's research lies at the intersection of operating systems, HPC, parallel computing, and computer architecture.
