Hearing Technology
The lab of Prof. Verhulst takes an interdisciplinary approach to studying how sound and speech are encoded along the auditory pathway. By combining computational modeling with multi-channel EEG, otoacoustic emissions, and psychoacoustics, we build a comprehensive picture of how hearing works and how it degrades after hearing loss. We develop EEG technology that can be integrated with wearables and that provides sensitive clinical diagnostic methods to isolate the "hidden hearing loss" (synaptopathy) component of hearing problems. We design model-based, individualized treatment strategies for the ever-growing noise-exposed and ageing population, and apply this knowledge to develop effective prevention methods and a broader understanding of how people operate in auditory contexts. Lastly, we develop technologies for augmented and machine hearing based on the remarkable properties of human hearing and the latest machine-learning techniques.
Who we are
We bring together expertise from various fields to work on hearing science: Physics, Engineering, Psychology, Audiology, and Machine Learning.
Competences
- Computational model-based protocols for precision diagnostics of sensorineural hearing loss and hearing-aid algorithm design.
- Integrating auditory EEG technologies into wearable and smartphone devices.
- Machine-learning approaches to noise reduction and acoustic scene analysis.
- New technologies for auditory applications, soundscapes and virtual acoustics.
Our Research Methods
Collaborations
- Oldenburg University (Prof. Debener)
- Aalto University (Prof. Pulkki)
Running Projects
- EIC Transition Grant EarDiTech: Precision Hearing Diagnostics and Augmented-hearing Technologies
Finalized Projects
- FWO: Machine-hearing 2.0: Biophysically-inspired auditory signal processing for machine-hearing applications
- FWO: Modeling how sensorineural hearing loss affects auditory processing and physiological markers of hearing (AudiMod)
- ERA-NET CoSySpeech: The functional role of cochlear synaptopathy for speech coding in the brain
- BOF interdisciplinary project EarDiMon: Portable Hearing Diagnostics: Monitoring of Auditory-nerve Integrity after Noise Exposure (with co-PI Prof. Dhooge and collaborator Prof. Keppler)
- DFG SPP 1608: The Impact of Hearing Impairment on the Source Generators of Auditory Evoked Potentials
- Agir Pour l'Audition: Unravelling the causes of individual speech-in-noise deficits: disentangling outer hair-cell and inner hair-cell / auditory nerve sources (Dr Ponsot)
- ERC Proof-of-Concept CochSyn: A diagnostic test for cochlear synaptopathy screening in humans
- ERC Starting Grant RobSpear: Robust Speech Encoding in Impaired Hearing
Model Code and Software
Find the most recent and supported model code on our GitHub page.
Our 2018 model is also included in the Auditory Modeling Toolbox (thanks to Alejandro Osses!).
Older repositories can be downloaded here:
- The 2012 cochlear + OAE model (Fortran/Matlab)
- The 2015-2016 cochlear + OAE model (Matlab/Python)
- The 2015-2016 cochlear + Auditory-nerve model (Matlab/Python)
- The 2018 model: human cochlea + OAE + AN + ABR + EFR. The code is also available on our HearingTechnology GitHub; email your account name to s.verhulst@ugent.be and I'll add you as a collaborator.
- The 2018 model v1.1: human cochlea + OAE + AN + ABR + EFR. This build runs on all platforms (Ubuntu, Mac, and Windows). The code is also available on our HearingTechnology GitHub, where I can add you as a collaborator.