Abstract
Cochlear implant (CI) coding strategies can achieve impressive speech intelligibility in quiet. However, speech intelligibility in noisy environments remains one of the greatest difficulties for CI users. To address this, CI processing has become increasingly complex, incorporating techniques such as noise reduction, dynamic range optimization, and beamformers. This has paid off: incremental advances in CI performance have been, and are still being, made. With this increase in complexity, however, it is becoming more difficult for audiologists and researchers to understand what signals are presented to the CI recipient listening to complex stimuli such as speech. Computational models relating sound input to array output exist; however, these models can prove insufficient in several ways: 1) they are not updated as often as clinical processors, 2) they apply simplifications to decrease computational time, 3) they lack the brand neutrality needed to compare processing across manufacturers, and 4) with the advent of multiple directional microphones, creating realistic model inputs for complex, reverberant stimuli is difficult. Researchers interested in the exact stimulation presented to the cochlea therefore require an experimental setup that can measure clinical CI output. Additionally, such a setup would allow realistic, personalized stimulation tables, retrieved from ecological sound stimuli, to be adapted precisely for use in direct stimulation research. This work describes such a setup. It aims to provide all the information and code needed to recreate a similar setup, giving researchers better insight into the actual output of the CI. The setup is a chain of devices that starts and ends at a laptop running Python. Inside a sound-treated room, a loudspeaker plays stimuli to the CI processor, which is set up in an ecological fashion by attaching it to the ear of a KEMAR dummy head.
An implant-in-a-box retrieves CI signals from the processor and, through a load board, transmits the voltage signals of each channel to two synchronized oscilloscopes.
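Turning the captured oscilloscope traces into pulse timing and amplitude information requires detecting the stimulation pulses in each channel's voltage signal. A minimal sketch of one way this might be done, assuming simple threshold-crossing detection on a single channel (the function name, parameters, and threshold approach are illustrative assumptions, not the authors' actual code):

```python
import numpy as np

def detect_pulses(trace, fs, threshold):
    """Detect pulse onsets and peak amplitudes in one channel's voltage trace.

    trace     : 1-D array of voltages sampled by the oscilloscope
    fs        : oscilloscope sampling rate in Hz
    threshold : voltage magnitude above which a sample counts as part of a pulse
    """
    above = (np.abs(trace) > threshold).astype(int)
    # Rising edges of the mask mark pulse onsets; falling edges mark offsets.
    onsets = np.flatnonzero(np.diff(above) == 1) + 1
    offsets = np.flatnonzero(np.diff(above) == -1) + 1
    # Pair onsets with offsets; an unmatched trailing onset is dropped.
    n = min(len(onsets), len(offsets))
    times = onsets[:n] / fs
    amps = np.array([np.max(np.abs(trace[s:e]))
                     for s, e in zip(onsets[:n], offsets[:n])])
    return times, amps
```

A biphasic pulse (positive phase followed by negative phase) is detected as a single event here, since the absolute-value mask stays above threshold across both phases.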
An example experiment illustrates the use of the system. Here, the F0 of Dutch spoken syllables was adapted programmatically, and the availability of F0 pitch cues in the CI output was studied using the experimental setup. The voltage output was processed to infer pulse timing and current information. From this, analyses were performed to retrieve pitch-related information such as spectral centroid, amplitude modulation, and salience. This output was compared to the outputs generated by the CI processor models BEPS+ (Advanced Bionics) and the Nucleus Matlab Toolbox (Cochlear Ltd).
| Original language | English |
|---|---|
| Publication status | Published - 11-Jan-2024 |
| Event | 15th Speech in Noise Workshop, Potsdam Museum, Potsdam, Germany, 11-Jan-2024 → 12-Jan-2024 (https://2024.speech-in-noise.eu/) |
Conference
| Conference | 15th Speech in Noise Workshop |
|---|---|
| Abbreviated title | SPIN2024 |
| Country/Territory | Germany |
| City | Potsdam |
| Period | 11/01/2024 → 12/01/2024 |
| Internet address | https://2024.speech-in-noise.eu/ |