5th International Symposium on Auditory and Audiological Research

ISAAR 2015 “Individual hearing loss – Characterization, modelling, compensation strategies”

August 26-28, 2015 Hotel Nyborg Strand, Denmark

Programme and abstracts

About ISAAR

The “International Symposium on Auditory and Audiological Research” (ISAAR) was formerly known as the “Danavox Symposium”. The 2015 edition is the 26th symposium in the series and the 5th under the ISAAR name, adopted in 2007. The Danavox Jubilee Foundation was established in 1968 on the occasion of the 25th anniversary of GN Danavox. The aim of the foundation is to support and encourage audiological research and development. Funds are donated by GN ReSound (formerly GN Danavox) and are managed by a board of hearing science specialists who are entirely independent of GN ReSound. Since its establishment in 1968, the resources of the foundation have been used to support a series of symposia, at which a large number of outstanding scientists from all over the world have given lectures, presented posters, and participated in discussions on various audiological topics. More information can be found at www.ISAAR.eu. Proceedings from past symposia can be found at www.audiological-library.gnresound.dk.

ISAAR Board Members

Torben Poulsen – Technical University of Denmark
Torsten Dau – Technical University of Denmark
Ture Andersen – Odense University Hospital
Lisbeth Tranebjærg – University of Copenhagen
Jakob Christensen-Dalsgaard – University of Southern Denmark
Caroline van Oosterhout – Technical University of Denmark

ISAAR 2015 Organizing Committee

Scientific
Torsten Dau – Technical University of Denmark
Jakob Christensen-Dalsgaard – University of Southern Denmark
Lisbeth Tranebjærg – University of Copenhagen
Ture Andersen – Odense University Hospital
Sébastien Santurette – Technical University of Denmark

Administrative
Torben Poulsen – Technical University of Denmark
Caroline van Oosterhout – Technical University of Denmark

Abstract, programme, and manuscript coordinator – Webmaster
Sébastien Santurette – Technical University of Denmark

Cover illustration by Wet DesignerDog (www.wetdesignerdog.dk) with thanks to Eva Helena Andersen

Welcome to ISAAR 2015

The general topic of the ISAAR 2015 symposium is "Individual hearing loss – Characterization, modelling, compensation strategies". The concept is to consider this topic from different perspectives, including current physiological concepts, perceptual measures and models, as well as implications for new technical applications.

The programme consists of invited talks as well as contributed talks and posters. The symposium is divided into five sessions, to which the following speakers have been invited:

1. Characterizing individual differences in hearing loss – Judy Dubno, Larry Humes, Agnès Léger, Andrew Oxenham
2. Genetics of hearing loss – Karen Steel, Hannie Kremer, Guy van Camp
3. Hidden hearing loss: Neural degeneration in "normal" hearing – Christopher Plack, Kate Fernandez, Hari Bharadwaj, Jane Bjerg Jensen
4. Modelling individual hearing impairment – Enrique Lopez-Poveda, Michael Heinz, Volker Hohmann
5. Individualized diagnostics and compensation strategies – Brent Edwards, Harvey Dillon, Deniz Başkent

In addition to these scientific presentations, one of the objectives of ISAAR is to promote networking and create contacts between researchers from different institutions in the fields of audiology and auditory research. ISAAR is a great opportunity for young scientists to approach more experienced researchers, and vice versa.

After the symposium, written versions of the presentations and posters will be published in a proceedings book. All participants will receive a copy of the ISAAR 2015 proceedings.

The organizing committee and the Danavox Jubilee Foundation wish you an interesting and fruitful symposium. Happy networking!

Wednesday 26 August

08:30-10:00

Registration and hanging of posters

10:00-10:10

Torsten Dau: Welcome and introduction to the symposium

Session 1: Characterizing individual differences in hearing loss

10:10-10:40

Judy Dubno: Characterizing individual differences: Audiometric phenotypes of age-related hearing loss

10:40-11:10

Larry Humes: Individual differences in auditory perception among older adults with impaired hearing

11:10-11:30

Coffee break

11:30-12:00

Agnès Léger: Beyond the audiogram: Influence of supra-threshold deficits associated with hearing loss and age on speech intelligibility

12:00-13:30

Lunch

13:30-14:00

Andrew Oxenham: Characterizing individual differences in frequency coding: Implications for hearing loss

14:00-14:20

Sarah Verhulst: Interrelations between ABR and EFR measures and their diagnostic power in targeting subcomponents of hearing loss

Session 1: Characterizing individual differences in hearing loss (cont.)

14:20-14:40

Kristina DeRoy Milvae: Is cochlear gain reduction related to speech-in-babble performance?

14:40-15:00

Federica Bianchi: Effects of cochlear compression and frequency selectivity on pitch discrimination of unresolved complex tones

15:00-15:30

Coffee break

Session 2: Genetics of hearing loss

15:30-16:00

Karen Steel: What mouse mutants tell us about deafness

16:00-16:30

Hannie Kremer: Genetic defects and their impact on auditory function

16:30-17:00

Guy van Camp: Genetic testing for hearing loss: Where are we today?

17:00-19:00

Poster session I

19:00-20:30

Dinner

20:30-23:00

Drinks in the poster area

Thursday 27 August

Session 3: Hidden hearing loss: Neural degeneration in "normal" hearing

08:40-09:10

Christopher Plack: Towards a diagnostic test for hidden hearing loss

09:10-09:30

Dan Goodman: Downstream changes in firing regularity following damage to the early auditory system

09:30-09:50

Coffee break

09:50-10:20

Kate Fernandez: If it's too loud, it's already too late

10:20-10:50

Hari Bharadwaj: Using individual differences to study the mechanisms of suprathreshold hearing deficits

10:50-11:10

Coffee break

11:10-11:40

Jane Bjerg Jensen: Immediate and delayed cochlear neuropathy after noise exposure in adolescent mice

11:40-12:00

Gerard Encina Llamas: Evaluation of cochlear processing and auditory nerve fiber intensity coding using auditory steady-state responses

12:00-13:30

Lunch

Session 4: Modelling individual hearing impairment

13:30-14:00

Enrique Lopez-Poveda: Predictors of individual hearing-aid treatment success

14:00-14:30

Michael Heinz: Neural modeling to relate individual differences in physiological and perceptual responses with sensorineural hearing loss

Session 4: Modelling individual hearing impairment (cont.)

14:30-14:50

Coffee break

14:50-15:20

Volker Hohmann: Modelling temporal fine structure and envelope processing in aided and unaided hearing-impaired listeners

15:20-15:40

Josef Chalupper: Modelling individual loudness perception in CI recipients with normal contralateral hearing

15:40-16:00

Robert Baumgartner: Modelling the effect of individual hearing impairment on sound localization in sagittal planes

16:00-16:20

Coffee break

Session 5: Hearing rehabilitation with hearing aids and cochlear implants

16:20-16:40

Birger Kollmeier: Individual speech recognition in noise, the audiogram, and more: Using automatic speech recognition (ASR) as a modelling tool and consistency check across audiological measures

16:40-17:00

Stefan Zirn: Coding of interaural phase differences in BiCI users

17:00-19:00

Poster Session II

19:00-20:30

Dinner

20:30-23:00

Drinks in the poster area

Friday 28 August

Session 5: Hearing rehabilitation with hearing aids and cochlear implants (cont.)

08:40-09:10

Harvey Dillon: Loss of speech perception in noise – causes and compensation

09:10-09:40

Deniz Başkent: Compensation of speech perception in hearing loss: How and to what degree can it be achieved?

09:40-10:00

Coffee break

10:00-10:30

Brent Edwards: Individualizing hearing aid fitting through novel diagnostics and self-fitting tools

10:30-10:50

Brian Moore: Preference for compression speed in hearing aids for speech and music and its relationship to sensitivity to temporal fine structure

10:50-11:10

Tobias Neher: Individual factors in speech recognition with binaural multimicrophone noise reduction: Measurement and prediction

11:10-11:30

Coffee break

11:30-11:50

Søren Laugesen: Can individualised acoustical transforms in hearing aids improve perceived sound quality?

11:50-12:10

Wouter Dreschler: A profiling system for the assessment of individual needs for rehabilitation with hearing aids based on human-related intended use (HRIU)

12:10-12:30

Torben Poulsen: Closing remarks

12:30-14:00

Lunch and departure

Venue and Travel Information

Venue

The symposium venue is Hotel Nyborg Strand, Østersøvej 2, 5800 Nyborg, Denmark. The hotel is situated in the middle of Denmark (GPS coordinates: Lat: N 55º 19' 5.74", Long: E 10º 48' 43.88"). The distance from Copenhagen Airport (CPH) is about 134 km, about 1½ hours by rail or road. For more information, visit www.nyborgstrand.dk. You may contact the hotel by phone (+45 65 31 31 31) or e-mail ([email protected]).

Travel information

Air travel: The nearest airport is Copenhagen Airport "Kastrup Lufthavn" (CPH). See www.cph.dk.

From Copenhagen Airport to Nyborg by rail: Direct trains run from the airport to Nyborg. One-way standard fare: DKK 240 (approx. EUR 32, USD 35; the fare may vary depending on ticket type). Direct InterCity trains leave from the airport once per hour. Duration: 1 h 38 min. For the return journey, direct trains run every hour from Nyborg to CPH airport; more connections are available with changes. Use www.journeyplanner.dk for timetable information and www.dsb.dk/en/ for online ticket reservations.

From Copenhagen Airport to Nyborg by road: Travel from CPH airport to Hotel Nyborg Strand by car takes about 1½ hours (134 km or 83 miles). Note the one-way toll charge of DKK 235 or EUR 33 per vehicle for crossing the Great Belt Bridge.

From Nyborg station to the hotel: Nyborg railway station is about a 5-minute drive from Hotel Nyborg Strand. Taxi: DKK 60 (approx. EUR 8, USD 9). If you like walking, there is a 15-minute "Nature Path" between the railway station and the hotel. Use www.journeyplanner.dk to plan local transportation.

Planning ahead: When planning your return, allow 2 hours for transport to Copenhagen Airport and another 2 hours for check-in and security at the airport. The scientific programme starts on August 26 at 10:00 and ends on August 28 at 12:30. Please plan your journey accordingly.

About the weather

The weather in Denmark is unpredictable. Expect day temperatures between 15 and 25 degrees centigrade, frequent showers, and often windy conditions. See www.dmi.dk for the current forecast.

Practical Information

Posters

Hanging of posters: Wed 26 Aug, 08:30-10:00.

Presenters of odd-numbered posters are encouraged to be present at their poster during the first dedicated poster session (Wed 17:00-19:00), and presenters of even-numbered posters during the second dedicated poster session (Thu 17:00-19:00). Posters will remain on display throughout the symposium to allow further interaction outside these dedicated sessions.

Talks

Dedicated time with assistance for slide upload and technical tests in the auditorium:
Wed 26 Aug: 09:00-09:30 and 17:00-17:15
Thu 27 Aug: 17:00-17:15

A PC with PowerPoint software will be available in the auditorium. Contributed oral presentations should not exceed 15 min in length (25 min for invited talks), in order to leave at least 5 min after each talk for questions and discussion.

Meals and drinks

The ISAAR registration fee includes all meals and social activities during the symposium and a copy of the symposium proceedings. Two glasses of wine will be served free of charge at dinner. Complimentary beer, wine, and soft drinks will also be available in the evenings in the poster area. Other drinks may be purchased at the hotel bar.

Contact information

For any questions concerning the programme or manuscripts, please contact: [email protected]
For registration or venue information, please contact Hotel Nyborg Strand directly at: [email protected]
For general information about ISAAR, or to contact the scientific committee, please write to: [email protected]

Manuscript Information

Manuscripts for ISAAR proceedings

Authors are encouraged to submit a manuscript for their ISAAR contribution. Manuscripts from both oral and poster presentations will be published in the proceedings book and distributed to all participants after the symposium. Proceedings will also be accessible to all participants via the GN ReSound audiological library (www.audiological-library.gnresound.dk). All manuscripts must be submitted electronically at www.isaar.eu. Authors are requested to follow the manuscript guidelines and to use the templates available at www.isaar.eu. Manuscripts are limited to a maximum length of 8 pages for contributed papers and 12 pages for invited papers. The deadline for receipt of manuscripts is 1 September 2015.

Special issue of Trends in Hearing

Authors of accepted proceedings manuscripts will be given the opportunity to submit a full journal paper based on their ISAAR contribution to a special issue of the open-access journal Trends in Hearing (see http://tia.sagepub.com/). Trends in Hearing remains the only fully open-access journal specializing in topics related to hearing and hearing loss. All manuscripts should be submitted by 15 November 2015. Please see the journal website for online submission and guidelines. When submitting the manuscript, please indicate in the cover letter that the manuscript is intended for the ISAAR special issue. Overlap with material in the ISAAR book manuscript is permitted. All manuscripts will undergo peer review, and authors should receive an initial decision on their manuscript by early January. We anticipate publication of the special issue in spring 2016. A special discount on publication fees will be applied for submissions to this special issue (invited papers: free; contributed papers: $525; normal publication fee: $699). In cases where funds are not available to the authors, a fee waiver may be granted.

List of participants

Name

Affiliation

E-mail

Ahn, Jung Ho

Asan Medical Center

[email protected]

Ahrens, Axel

Technical University of Denmark

[email protected]

Al-Ward, Sara Ater Baker

Oticon A/S

[email protected]

Andersen, Eva Helena

Technical University of Denmark

[email protected]

Andersen, Lou-Ann Christensen

University of Southern Denmark

[email protected]

Andersen, Sonja Christensen

Nordfyns Høreklinik

[email protected]

Andersen, Ture

University of Southern Denmark

[email protected]

Ausili, Sebastián

Donders Institute

[email protected]

Avila, Elena

Widex Spain

[email protected]

Bach, Rasmus

Oticon A/S

[email protected]

Baek, Seung Min

Inha University School of Medicine

[email protected]

Başkent, Deniz

University of Groningen

[email protected]

Baumgartner, Robert

Austrian Academy of Sciences

[email protected]

Bech, Birgitte

Hillerød Hospital

[email protected]

Behrens, Thomas

Oticon A/S

[email protected]

Beilin, Joel

Sivantos GmbH

[email protected]

Bendtsen, Benedikte

GN ReSound A/S

[email protected]

Berthelsen, Tina

Widex A/S

[email protected]

Bharadwaj, Hari

Massachusetts General Hospital

[email protected]

Bianchi, Federica

Technical University of Denmark

[email protected]

Bille, Michael

Gentofte Hospital

[email protected]

Bisgaard, Nikolai

GN ReSound A/S

[email protected]

Boymans, Monique

AMC Clinical & Experimental Audiology

[email protected]

Bramsløw, Lars

Eriksholm Research Centre, Oticon A/S

[email protected]

Busby, Peter

Cochlear Ltd

[email protected]

Chabot-Leclerc, Alexandre

Technical University of Denmark

[email protected]

Chalupper, Josef

Advanced Bionics GmbH

[email protected]

Cho, Yang Sun

Samsung Medical Center

[email protected]

Chordekar, Shai

Audio-Medic

[email protected]

Choung, Da Eun

Ajou University Hospital

[email protected]

Choung, Yun Hoon

Ajou University Hospital

[email protected]

Christensen, Lisbeth

GN ReSound A/S

[email protected]

Christensen-Dalsgaard, Jacob

University of Southern Denmark

[email protected]

Cohen, Leslie Fainberg

Audio-Medic

[email protected]

Dau, Torsten

Technical University of Denmark

[email protected]

Daugaard, Carsten

DELTA

[email protected]

Depuydt, Bob

Amplifon

[email protected]

Derleth, Peter

Phonak AG

[email protected]

Dijkstra, Angelique

INCAS3/Pento

[email protected]

Dillon, Harvey

National Acoustic Laboratories

[email protected]

Di Marco, Jasmin

GN Otometrics A/S

[email protected]

Dingemanse, Gertjan

Erasmus Medical Center

[email protected]

Dreschler, Wouter A.

AMC Clinical & Experimental Audiology

[email protected]

Dubno, Judy

Medical University of South Carolina

[email protected]

Edwards, Brent

Earlens Corp.

[email protected]

Epp, Bastian

Technical University of Denmark

[email protected]

Ewert, Stephan

University of Oldenburg

[email protected]

Fereczkowski, Michal

Technical University of Denmark

[email protected]

Fernandez, Kate

Massachusetts Eye and Ear Infirmary

[email protected]

Florentine, Mary

Northeastern University

[email protected]

Franck, Bas

Radboud University Medical Center

[email protected]

Fujisaka, Yoh-Ichi

Rion Co. Ltd

[email protected]

Gallardo, Andreu Paredes

Technical University of Denmark

[email protected]

Galster, Jason

Starkey Hearing Technologies

[email protected]

Garcia-Uceda, Jose

Radboud University Nijmegen

[email protected]

Gillies, Karin

Australian Hearing

[email protected]

Goodman, Dan

Imperial College

[email protected]

Gotschuli, Helga

HÖRwerkstatt Helga Gotschuli

[email protected]

Guérit, François

Technical University of Denmark

[email protected]

Gøtsche-Rasmussen, Kristian

Interacoustics Research Unit

[email protected]

Habicht, Julia

University of Oldenburg

[email protected]

Hamdan, Adel

NovaSon Acoustique Médicale

[email protected]

Hammershøi, Dorte

Aalborg University

[email protected]

Han, Hong Suong

Oticon A/S

[email protected]

Hannemann, Ronny

Sivantos GmbH

[email protected]

Hansen, Jonas

GN ReSound A/S

[email protected]

Hansen, Renata Jalles

Aarhus University Hospital

[email protected]

Harte, James

Interacoustics Research Unit

[email protected]

Hassager, Henrik Gert

Technical University of Denmark

[email protected]

Hau, Ole

Widex A/S

[email protected]

Heeren, Wiebke

Advanced Bionics GmbH

[email protected]

Heinz, Michael

Purdue University

[email protected]

Heuermann, Heike

Sivantos GmbH

[email protected]

Hockley, Neil

Bernafon AG

[email protected]

Hohmann, Volker

University of Oldenburg

[email protected]

Holtegaard, Pernille

Technical University of Denmark

[email protected]

Holube, Inga

Jade University of Applied Sciences

[email protected]

Humes, Larry

Indiana University

[email protected]

Husstedt, Hendrik

Deutsches Hörgeräte Institut GmbH

[email protected]

Haastrup, Astrid

GN ReSound A/S

[email protected]

Innes-Brown, Hamish

KU Leuven

[email protected]

Jagadeesh, Anoop

University of Oldenburg

[email protected]

Jensen, Jane Bjerg

University of Copenhagen

[email protected]

Jensen, Kenneth Kragh

Starkey Hearing Technologies

[email protected]

Jensen, Mille Marie Hess

CFD Rådgivning

[email protected]

Jensen, Ole Dyrlund

GN ReSound A/S

[email protected]

Jepsen, Morten Løve

Widex A/S

[email protected]

Jespersgaard, Claus

Oticon A/S

[email protected]

Johannesson, René Burmand



[email protected]

Jones, Gary

Oticon A/S

[email protected]

Joshi, Suyash Narendra

Technical University of Denmark

[email protected]

Jung, Joon Soo

Inha University School of Medicine

[email protected]

Jürgens, Tim

University of Oldenburg

[email protected]

Jørgensen, Søren

Oticon A/S

[email protected]

Karbasi, Mahdie

Ruhr University Bochum

[email protected]

Kempeneers, Myriam

Hoorcentrum Myriam Kempeneers/Amplifon

[email protected]

Kissner, Sven

Jade University of Applied Sciences

[email protected]

Kjærbøl, Erik

Bispebjerg Hospital

[email protected]

Kollmeier, Birger

University of Oldenburg

[email protected]

Kowalewski, Borys

Technical University of Denmark

[email protected]

Kremer, Hannie

Radboud University Medical Center

[email protected]

Kriksunov, Leonid

Audio-Medic

[email protected]

Kristensen, Bue

Interacoustics A/S

[email protected]

Kristensen, Sinnet G. B.

Interacoustics Research Unit

[email protected]

Kuhnke, Felix



[email protected]

Landsvik, Borghild

Oslo University Hospital, Rikshospitalet

[email protected]

Langner, Florian

University of Oldenburg

[email protected]

Latzel, Matthias

Phonak AG

[email protected]

Laugesen, Søren

Eriksholm Research Centre, Oticon A/S

[email protected]

Laureyns, Mark

Amplifon Centre for Research & Studies

[email protected]

Lee, Jun Ho

Seoul National University Hospital

[email protected]

Lee, Mee Hee

Ajou University Hospital

[email protected]

Lee, Minjae

Inha University School of Medicine

[email protected]

Léger, Agnès

University of Manchester

[email protected]

Le Goff, Nicolas

Oticon A/S

[email protected]

Lev Ran, Ehud

Audio-Medic

[email protected]

Lindvig, Jacob

Oticon A/S

[email protected]

Lissau, Else

GN ReSound A/S

[email protected]

Llamas, Gerard Encina

Technical University of Denmark

[email protected]

Lőcsei, Gusztáv

Technical University of Denmark

[email protected]

Lopez-Poveda, Enrique

University of Salamanca

[email protected]

Lundbeck, Micha

University of Oldenburg

[email protected]

Lunner, Thomas

Eriksholm Research Centre, Oticon A/S

[email protected]

MacDonald, Ewen

Technical University of Denmark

[email protected]

Madsen, Sara Miay Kim

Technical University of Denmark

[email protected]

Manh, Nina

Oslo University Hospital, Rikshospitalet

[email protected]

Marchl, Stefan

HÖRwerkstatt Helga Gotschuli

[email protected]

Marozeau, Jeremy

Technical University of Denmark

[email protected]

Mazevski, Annette

Oticon Inc.

[email protected]

McWalter, Richard

Technical University of Denmark

[email protected]

Mehlsen, Maria

GN ReSound A/S

[email protected]

Mehraei, Golbarg

Massachusetts Institute of Technology

[email protected]

Micula, Andreea

Oticon A/S

[email protected]

Milvae, Kristina

Purdue University

[email protected]

Moncada Torres, Arturo

KU Leuven

[email protected]

Moore, Brian

University of Cambridge

[email protected]

Morimoto, Takashi

Rion Co. Ltd

[email protected]

Moritz, Maxi Susanne

Phonak AG

[email protected]

Møller, Troels

Aarhus University Hospital

[email protected]

Møller, Vibeke

Castberggård Job- og Udviklingscenter

[email protected]

Nakagawa, Tatsuo

Yokohama National University

[email protected]

Neher, Tobias

University of Oldenburg

[email protected]

Oetting, Dirk

Fraunhofer IDMT

[email protected]

Olsen, Ole Fogh

Oticon A/S

[email protected]

Owen, Hanne

Aarhus University Hospital

[email protected]

Oxenham, Andrew

University of Minnesota

[email protected]

Paludan-Müller, Carsten

Widex A/S

[email protected]

Park, Hong Ju

Asan Medical Center

[email protected]

Park, Kihyun

Inha University School of Medicine

[email protected]

Pedersen, Ellen Raben

University of Southern Denmark

[email protected]

Philips, Birgit

Cochlear Technology Center

[email protected]

Piechowiak, Tobias

GN ReSound A/S

[email protected]

Pislak, Stefan

Phonak AG

[email protected]

Plack, Christopher

University of Manchester

[email protected]

Poulsen, Torben

Technical University of Denmark

[email protected]

Rohweder, Reimer

Deutsches Hörgeräte Institut GmbH

[email protected]

Rosen, Stuart

University College London

[email protected]

Rønne, Filip

Eriksholm Research Centre, Oticon A/S

[email protected]

Sanchez, Raul

Technical University of Denmark

[email protected]

Santurette, Sébastien

Technical University of Denmark

[email protected]

Scheidiger, Christoph

Technical University of Denmark

[email protected]

Scheller, Thomas

Starkey

[email protected]

Schmidt, Jesper

Odense University Hospital

[email protected]

Schnack-Petersen, Rikke

Odense University Hospital

[email protected]

Schoonjans, Carine

Hoorcentrum Schoonjans

[email protected]

Seiden, Lene Rønkjær

Widex A/S

[email protected]

Serman, Maja

Sivantos GmbH

[email protected]

Sjolander, Lisa

GN ReSound A/S

[email protected]

Smeds, Karolina

Widex A/S, ORCA Europe

[email protected]

Steel, Karen

King’s College London

[email protected]

Stielund, Christina Wassard

Gentofte Hospital

[email protected]

Strelcyk, Olaf

Sonova

[email protected]

Strickland, Elizabeth

Purdue University

[email protected]

Stropahl, Maren

University of Oldenburg

[email protected]

Studsgaard, Ann Momme

Vejle Hospital

[email protected]

Sun, Keeeun

Inha University School of Medicine

[email protected]

Sørensen, Helen Connor

GN ReSound A/S

[email protected]

Theill, Jesper

Widex A/S

[email protected]

Thorup, Nicoline

Slagelse Hospital

[email protected]

Thyme, Peder

GN ReSound A/S

[email protected]

Torres, Arturo Moncada

KU Leuven

[email protected]

Tranebjærg, Lisbeth

University of Copenhagen

[email protected]

Udesen, Jesper

GN ReSound A/S

[email protected]

Uhm, Jaewoung

Inha University School of Medicine

[email protected]

Van Camp, Guy

University of Antwerp

[email protected]

Van Hengel, Peter

INCAS3/Pento

[email protected]

Van Oosterhout, Caroline

Technical University of Denmark

[email protected]

Vanpoucke, Filiep

Cochlear Technology Center

[email protected]

Vencovský, Václav

Academy of Performing Arts Prague

[email protected]

Verhulst, Sarah

University of Oldenburg

[email protected]

Walaszek, Justyna

Oticon A/S

[email protected]

Wargert, Stina

Sonova

[email protected]

Wendt, Dorothea

Technical University of Denmark

[email protected]

Wigley, Emily

Knowles Electronics

[email protected]

Wiinberg, Alan

Technical University of Denmark

[email protected]

Willberg, Tytti

Kuopio University Hospital

[email protected]

Winkler, Alexandra

Jade University of Applied Sciences

[email protected]

Wolters, Florian

Widex A/S, ORCA Europe

[email protected]

Zirn, Stefan

Implant Centrum Freiburg

[email protected]

Øygarden, Jon

HiST

[email protected]

Session 1: Characterizing individual differences in hearing loss

Chairs: Brian Moore and Ewen MacDonald

Wed 26 Aug, 10:10-15:00

S1.1 – Wed 26 Aug, 10:10-10:40
Characterizing individual differences: Audiometric phenotypes of age-related hearing loss
Judy R. Dubno* - Medical University of South Carolina, Charleston, SC, USA

A significant result from animal studies of age-related hearing loss involves the degeneration of the cochlear lateral wall, which is responsible for producing and maintaining the endocochlear potential (EP). Age-related declines in the EP systematically reduce the voltage available to the cochlear amplifier, which reduces its gain more so at higher than lower frequencies. This “metabolic presbyacusis” largely accounts for age-related threshold elevations observed in laboratory animals raised in quiet and may underlie the characteristic audiograms of older humans: a mild, flat hearing loss at lower frequencies coupled with a gradually sloping hearing loss at higher frequencies. In contrast, sensory losses resulting from ototoxic drug and noise exposures typically produce normal thresholds at lower frequencies with an abrupt transition to 50-70 dB thresholds at higher frequencies. In addition to audiograms, evidence of metabolic and sensory phenotypes in older humans can be derived from demographic information (age, gender), environmental exposures (noise and ototoxic drug histories), and suprathreshold auditory function beyond the audiogram. Once confirmed with biological markers, well-defined audiometric phenotypes of human age-related hearing loss can contribute to explanations of individual differences in auditory function for older adults. [Supported by NIH]

Corresponding author: Judy R. Dubno ([email protected])
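The two audiogram shapes described above can be summarized in a small illustrative sketch. The dB cutoffs and the classify_audiogram helper below are editorial assumptions chosen to mirror the verbal descriptions, not the phenotyping procedure used in the study.

# Illustrative sketch only: rough audiogram-shape heuristics inspired by the
# verbal phenotype descriptions above. All dB cutoffs are assumptions.
def classify_audiogram(thresholds):
    """thresholds: dict mapping frequency (Hz) to hearing level (dB HL)."""
    low = [thresholds[f] for f in sorted(thresholds) if f <= 1000]
    high = [thresholds[f] for f in sorted(thresholds) if f >= 2000]
    low_mean = sum(low) / len(low)
    high_mean = sum(high) / len(high)
    low_flat = (max(low) - min(low)) <= 10          # roughly flat low-frequency region
    if low_flat and 20 <= low_mean <= 40 and high_mean > low_mean:
        return "metabolic-like: mild flat low-frequency loss, sloping high-frequency loss"
    if low_mean <= 20 and 50 <= high_mean <= 70:
        return "sensory-like: near-normal low frequencies, abrupt 50-70 dB high-frequency loss"
    return "other/mixed"

example = {250: 25, 500: 25, 1000: 30, 2000: 40, 4000: 55, 8000: 65}
print(classify_audiogram(example))                  # -> metabolic-like: ...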

S1.2 – Wed 26 Aug, 10:40-11:10
Individual differences in auditory perception among older adults with impaired hearing
Larry E. Humes* - Indiana University, Bloomington, IN, USA

Over the past several years, our laboratory has conducted studies of individual differences in the performance of older adults with varying degrees of hearing loss on a wide variety of auditory tasks. Typically, a range of psychophysical measures has been obtained for nonspeech acoustical stimuli from relatively large samples of subjects. In addition, several studies have included measures of speech perception, especially aided and unaided speech perception in backgrounds of competing noise or speech. The most recent work on individual differences in auditory perception among older adults will be reviewed with special emphasis on two datasets: (1) one with measures of threshold sensitivity and temporal processing from 245 young, middle-age, and older adults; and (2) another with a wider range of auditory-perception measures from 98 older adults. [This work was supported, in part, by a research grant, R01 AG008293, from the National Institute on Aging.]

Corresponding author: Larry E. Humes ([email protected])

S1.3 – Wed 26 Aug, 11:30-12:00
Beyond the audiogram: Influence of supra-threshold deficits associated with hearing loss and age on speech intelligibility
Agnès C. Léger* - School of Psychological Sciences, University of Manchester, Manchester, England
Christian Lorenzi - Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Institut d'Etudes de la Cognition, École Normale Supérieure, Paris, France
Brian C. J. Moore - Department of Experimental Psychology, University of Cambridge, Cambridge, England
Christine Petit - Unité de Génétique des Déficits Sensoriels, CNRS URA 1968, Institut Pasteur, Paris, France

Sensorineural hearing loss and age are associated with poor speech intelligibility, especially in the presence of background sounds. The extent to which this is due to reduced audibility or to supra-threshold deficits is still debated. The influence of supra-threshold deficits on intelligibility was investigated for normal-hearing (NH) and hearing-impaired (HI) listeners with high-frequency losses by limiting the effect of audibility. The HI listeners were generally older than the NH listeners. Speech identification was measured using nonsense speech signals filtered into low- and mid-frequency regions, where pure-tone sensitivity was near normal for both groups. The older HI listeners showed mild to severe intelligibility deficits for speech presented in quiet and in various backgrounds (noise or speech). The intelligibility of speech in quiet and in noise was also measured for a large cohort of older NH and HI listeners, using linear amplification for listeners with mild to severe hearing losses. A measure was developed that quantified the influence of noise on intelligibility while limiting the contribution of linguistic/cognitive factors. The pure-tone average hearing loss accounted for only a third of the variability in this measure. Overall, these results suggest that speech intelligibility can be strongly influenced by supra-threshold auditory deficits.

Corresponding author: Agnès C. Léger ([email protected])

S1.4 – Wed 26 Aug, 13:30-14:00
Characterizing individual differences in frequency coding: Implications for hearing loss
Andrew J. Oxenham*, Kelly Whiteford - University of Minnesota, Minneapolis, MN, USA

Our ability to perceive changes in frequency or pitch is remarkably accurate. This high sensitivity, along with its degradation at high frequencies, has led to analogies with the exquisite sensitivity to interaural time differences (ITDs) and to the proposal that phase-locking in the auditory nerve is used to code frequency. Here we use individual differences between normal-hearing listeners in an attempt to tease apart different contributions to frequency perception. We tested 100 listeners in frequency-modulation (FM) detection at low and high rates, thought to be mediated by phase-locking and place cues, respectively, along with amplitude-modulation (AM) detection, binaural (time and level) disparity detection, and frequency selectivity, all around a frequency of 500 Hz. Strong correlations were found between FM and ITD detection, in apparent support of the timing hypothesis. However, equally strong correlations were found between these measures and other measures, such as AM detection, which are not thought to rely on phase-locking. Information about frequency selectivity did not improve the predictions of either fast or slow FM. The results suggest that FM detection in normal hearing is limited neither by peripheral phase-locking nor by peripheral frequency selectivity. Alternative modeling approaches using cortical noise correlations are considered.

Corresponding author: Andrew J. Oxenham ([email protected])
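The individual-differences logic of this study, comparing the strength of correlations between measures across a large group of listeners, can be sketched as follows; the data and the variable names (fm_slow, itd, am) are synthetic placeholders, not the study's measurements.

# Minimal individual-differences sketch: correlate performance measures across
# listeners (synthetic data standing in for FM, ITD, and AM detection thresholds).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100                                        # number of listeners, as in the study
ability = rng.normal(size=n)                   # shared latent factor across tasks
fm_slow = ability + 0.5 * rng.normal(size=n)   # slow-rate FM detection (log threshold)
itd     = ability + 0.5 * rng.normal(size=n)   # ITD detection
am      = ability + 0.5 * rng.normal(size=n)   # AM detection (no phase-locking needed)

for name, (x, y) in [("FM vs ITD", (fm_slow, itd)), ("FM vs AM", (fm_slow, am))]:
    r, p = pearsonr(x, y)
    print(f"{name}: r = {r:.2f}, p = {p:.3g}")
# If FM correlates with AM about as strongly as with ITD, a shared non-temporal
# factor (as argued above) is a plausible explanation.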

S1.5 – Wed 26 Aug, 14:00-14:20
Interrelations between ABR and EFR measures and their diagnostic power in targeting subcomponents of hearing loss
Sarah Verhulst*, Anoop Jagadeesh - Department of Medical Physics, Oldenburg University, Oldenburg, Germany

Given the recent classification of sensorineural hearing loss into outer-hair-cell loss and a temporal coding deficit due to auditory-nerve fiber loss, this study evaluated how brainstem response measures can be used more effectively in the diagnostics of subcomponents of hearing loss. We studied the relationship between auditory brainstem response (ABR) and envelope-following response (EFR) measures, and how they relate to threshold and compression (DPOAE) measures in 32 listeners with normal to mild hearing losses. The relationships between the resulting click ABR wave-I and wave-V level series and EFRs to 75-dB-SPL broadband noise of different modulation depths indicate that the EFR strength-vs-modulation-depth-reduction and ABR measures are likely to inform about different aspects of hearing loss. Because ABR latency and strength correlated with each other, and the ABR latency-vs-level slope with hearing thresholds, we suggest that cochlear spread of excitation, and to a lesser extent neuropathy, is responsible for differences in ABR measures across listeners. The EFR slope measure did not correlate with any other metric tested and might reflect temporal coding aspects of hearing irrespective of the degree of cochlear excitation (or outer-hair-cell loss). We are further strengthening this hypothesis using a human ABR model in which the subcomponents of hearing loss can be controlled.

Corresponding author: Sarah Verhulst ([email protected])

S1.6 – Wed 26 Aug, 14:20-14:40
Is cochlear gain reduction related to speech-in-babble performance?
Kristina DeRoy Milvae*, Joshua M. Alexander, Elizabeth A. Strickland - Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA

Noisy settings are difficult listening environments. With some effort, individuals with normal hearing are able to overcome this difficulty when perceiving speech, but the auditory mechanisms that help accomplish this are not well understood. One proposed mechanism is the medial olivocochlear reflex (MOCR), which reduces cochlear gain in response to sound. It is theorized that the MOCR could improve intelligibility by applying more gain reduction to the noise than to the speech, thereby enhancing the internal signal-to-noise ratio. To test this hypothesized relationship, the following measures were obtained from listeners with normal hearing. Cochlear gain reduction was estimated psychoacoustically using a forward masking task. Speech-in-noise recognition was assessed using the QuickSIN test (Etymotic Research), which generates an estimate of the speech reception threshold (SRT) in background babble. Results were surprising because large reductions in cochlear gain were associated with large SRTs, which was the opposite of the hypothesized relationship. In addition, there was a large range for both cochlear gain reduction and SRT across listeners, with many individuals falling outside of the normal SRT range despite having normal-hearing thresholds. Interpretation of these results will be discussed.

Corresponding author: Kristina DeRoy Milvae ([email protected])

S1.7 – Wed 26 Aug, 14:40-15:00
Effects of cochlear compression and frequency selectivity on pitch discrimination of unresolved complex tones
Federica Bianchi*, Johannes Zaar, Michal Fereczkowski, Sébastien Santurette, Torsten Dau - Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark

Physiological studies have shown that noise-induced sensorineural hearing loss (SNHL) enhances the amplitude of envelope coding in auditory-nerve fibers. As pitch coding of unresolved complex tones is assumed to rely on temporal envelope coding mechanisms, this study investigated pitch-discrimination performance in listeners with SNHL. Pitch-discrimination thresholds were obtained in 14 normal-hearing (NH) and 10 hearing-impaired (HI) listeners for sine-phase (SP) and random-phase (RP) unresolved complex tones. Eight HI listeners performed at least as well as NH listeners in the SP condition. In the RP condition, seven HI listeners performed worse than NH listeners. Cochlear compression estimates obtained in the same HI listeners were negatively correlated with the difference in pitch-discrimination thresholds between the two phase conditions. The effects of degraded frequency selectivity and loss of compression were considered in a model as potential factors in envelope enhancement. The model revealed that a broadening of the auditory filters led to an increase of the modulation power at the output of the filters in the SP condition and to a decrease for the RP condition. Overall, these findings suggest that HI listeners benefit from enhanced temporal envelope coding regarding pitch discrimination of unresolved complex tones.

Corresponding author: Federica Bianchi ([email protected])

Session 2: Genetics of hearing loss

Chairs: Lisbeth Tranebjærg and Jakob Christensen-Dalsgaard

Wed 26 Aug, 15:30-17:00

S2.1 – Wed 26 Aug, 15:30-16:00
What mouse mutants tell us about deafness
Karen P. Steel* - King's College London, London, England

Progressive hearing loss is very common in the human population and can start at any age from the first decade of life onwards. Single gene mutations have been implicated in progressive hearing loss in a handful of extended families where linkage analysis can be used to pinpoint the causative mutations, but for most cases there are no clues to the causes. It is likely that a combination of environmental factors and genetic predisposition underlies hearing loss in many cases, making it difficult to study directly. Mouse mutants offer an alternative approach to identifying genes that are essential for maintenance of normal hearing. We have generated a large number of new mouse mutants with known genes inactivated and screened them for hearing deficits by auditory brainstem response (ABR) recording at 14 weeks old. Out of the first 900 new mutant lines screened, 25 new genes not previously suspected of involvement in deafness have shown raised thresholds. Several of these have been followed up with ABR at different ages and show progressive increases in thresholds with age. Examples of primary defects in the hair cells, in synapses below inner hair cells, and in maintenance of endocochlear potential have been discovered, emphasising the heterogeneous nature of progressive hearing loss. These genes represent good candidates for involvement in human progressive hearing loss.

Corresponding author: Karen Steel ([email protected])

S2.2 – Wed 26 Aug, 16:00-16:30
Genetic defects and their impact on auditory function
Hannie Kremer* - Hearing & Genes, Department of Otorhinolaryngology and Department of Human Genetics, Radboud University Medical Center, Nijmegen, The Netherlands

Defects in more than 100 genes can underlie hearing loss. For congenital or early childhood hearing impairment, genetic causes are estimated to account for about half of the cases. Age-related hearing loss is the result of an interplay between many different genetic and environmental factors in an individual. For hearing impairment with an onset between early childhood and ageing, the relative importance of genetic and environmental causes is not well known. Defects in a large subset of deafness genes affect hair-cell function (e.g., mechanotransduction), but other processes such as development of the endocochlear potential can also be affected in hereditary deafness. Identification of deafness genes has contributed to our understanding of cochlear function at the molecular level. Importantly, correlations have been unveiled between genetic defects and the auditory phenotype in pure-tone audiograms and, more recently, psychophysical characteristics. Therefore, etiological studies including genetic diagnostics after failure in neonatal hearing screening are not only important for genetic counseling of families but can also provide important information on prognosis and rehabilitation. Furthermore, a genetic diagnosis can uncover the hearing impairment to be part of a syndrome (e.g., Usher syndrome), and early monitoring or intervention can be initiated for associated medical problems.

Corresponding author: Hannie Kremer ([email protected])

S2.3 – Wed 26 Aug, 16:30-17:00
Genetic testing for hearing loss: Where are we today?
Guy Van Camp* - Medical Genetics, University of Antwerp, Antwerp, Belgium

Hearing loss is the most common sensory disorder in children, with an incidence of 1 in 500 newborns. Most cases are caused by mutations in a single gene. However, DNA diagnostics for hearing loss are challenging, since it is an extremely heterogeneous trait. Although more than 50 causative genes have been identified for the nonsyndromic forms of hearing loss alone, diagnostic application of the scientific progress has lagged behind. The reason for this is the cost: screening all the known causative genes for hearing loss in one patient with the current gold standard for DNA diagnostics, Sanger sequencing, would be extremely expensive. Consequently, current routine DNA diagnostic testing for hearing loss is restricted to one or two of the most common causative genes, which identifies the responsible gene in only 10-20% of cases. Recently, several reports have shown that “next generation DNA sequencing techniques” allow the simultaneous analysis of panels consisting of 50 or more deafness genes at a reasonable cost. In addition, whole exome sequencing techniques offer the possibility to analyze all human genes, and to get a genetic diagnosis even for genes not present in these gene panels. It is to be expected that these new tests will greatly improve DNA diagnostics over the coming years.

Corresponding author: Guy Van Camp ([email protected])

Session 3: Hidden hearing loss: Neural degeneration in "normal" hearing

Chairs: Deniz Başkent and Andrew Oxenham

Thu 27 Aug, 08:40-12:00

S3.1 – Thu 27 Aug, 08:40-09:10
Towards a diagnostic test for hidden hearing loss
Christopher J. Plack*, Garreth Prendergast, Karolina Kluk, Agnès Léger, Hannah Guest, Kevin J. Munro - The University of Manchester, Manchester Academic Health Science Centre, Manchester, England

Cochlear synaptopathy, due to noise exposure or ageing, has been demonstrated in animal models using histological techniques. However, diagnosis of the condition in individual humans is problematic. Wave I of the transient-evoked auditory brainstem response (ABR) is a noninvasive electrophysiological measure of auditory nerve function, and has been validated in the animal models. However, in humans wave I amplitude shows high variability both between and within individuals. The frequency-following response (FFR), a sustained evoked potential reflecting synchronous neural activity in the rostral brainstem, is potentially more robust than ABR wave I. However, the FFR is a measure of central activity, and may be dependent on individual differences in central processing. Psychophysical measures are also affected by inter-subject variability in central processing. Differential measures, in which the measure is compared, within an individual, between conditions that are affected differently by cochlear synaptopathy, may help to reduce inter-subject variability due to unrelated factors. There is also the issue of how the metric will be validated. Comparisons with animal models, computational modelling, human temporal bone histology, and auditory nerve imaging are all potential options for validation, but there are technical and practical hurdles, and difficulties in interpretation.

Corresponding author: Christopher Plack ([email protected])
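The differential-measure idea can be illustrated with a toy simulation in which an unrelated factor scales two conditions equally, so that a within-listener ratio cancels it. All numbers below are invented for illustration and do not come from the study.

# Toy illustration of a within-listener differential measure (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)
n = 200                                       # simulated listeners
nuisance = rng.lognormal(0.0, 0.4, n)         # unrelated factor scaling all responses
synaptopathy = rng.uniform(0.5, 1.0, n)       # factor of interest

resp_a = nuisance * synaptopathy              # condition strongly affected by synaptopathy
resp_b = nuisance * 1.0                       # condition largely unaffected

raw = resp_a                                  # single-condition ("absolute") measure
diff = resp_a / resp_b                        # differential measure: nuisance cancels

print("corr(raw,  synaptopathy):", round(np.corrcoef(raw, synaptopathy)[0, 1], 2))
print("corr(diff, synaptopathy):", round(np.corrcoef(diff, synaptopathy)[0, 1], 2))
# The differential measure tracks the factor of interest much more closely.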

S3.2 – Thu 27 Aug, 09:10-09:30
Downstream changes in firing regularity following damage to the early auditory system
Dan F. M. Goodman* - Imperial College, London, England
Alain de Cheveigné - Ecole Normale Supérieure, Paris, France
Ian M. Winter - University of Cambridge, Cambridge, England
Christian Lorenzi - Ecole Normale Supérieure, Paris, France

We use an abstract mathematical model that approximates a wide range of more detailed models to make predictions about hearing-loss-related changes in neural behaviour. One consequence of neurosensory hearing loss is a reduced ability to understand speech, particularly in noisy environments, which may go beyond what would be predicted from reduced audibility. Experimental results in mice showing that there can be a permanent loss of auditory nerve fibres following "temporary" noise-induced hearing loss are promising, but the downstream consequences of this loss of fibres have not yet been systematically investigated. We approximate the stationary behaviour of chopper cells in the cochlear nucleus with a stochastic process that is entirely characterised by its mean, standard deviation, and time constants. From this we predict that the classification of choppers as transient or sustained will be level-dependent, and we verify this with experimental data. We also predict that chopper regularity will decrease following deafferentation, causing sustained choppers to behave as transients. While the function of choppers is still debated, one suggestion is the coding of temporal envelope, widely agreed to be essential for understanding speech. Deafferentation could therefore lead to a disruption of the processing of temporal envelope, and consequently degrade speech intelligibility.

Corresponding author: Dan F. M. Goodman ([email protected])
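Chopper regularity is conventionally quantified by the coefficient of variation (CV) of interspike intervals, with sustained choppers showing low CV and transient choppers higher CV. The sketch below uses synthetic spike trains; the 0.35 classification boundary is a commonly used convention, assumed here purely for illustration.

# Sketch: interspike-interval CV, the regularity measure behind the
# sustained/transient chopper distinction discussed above (synthetic data).
import numpy as np

def isi_cv(spike_times_s):
    isis = np.diff(np.sort(spike_times_s))
    return isis.std() / isis.mean()

rng = np.random.default_rng(4)
# Gamma-distributed intervals at ~250 spikes/s; CV = 1/sqrt(shape).
regular   = np.cumsum(rng.gamma(64.0, 0.004 / 64.0, 500))   # CV ~ 0.12
irregular = np.cumsum(rng.gamma(4.0,  0.004 / 4.0,  500))   # CV ~ 0.50

for name, st in [("sustained-like", regular), ("transient-like", irregular)]:
    cv = isi_cv(st)
    label = "sustained" if cv < 0.35 else "transient"
    print(f"{name}: CV = {cv:.2f} -> classified {label}")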

S3.3 – Thu 27 Aug, 09:50-10:20
If it's too loud, it's already too late
Katharine Fernandez*, Sharon Kujawa - Massachusetts Eye and Ear Infirmary, Boston, MA, USA

The earliest sign of damage in hearing losses due to noise and aging is cochlear synaptic loss. We evaluated two types of noise exposure: one that produces permanent damage to the inner hair cell-afferent nerve synapse without hair cell loss and another that produces no synaptopathy or hair cell death. Adult mice were exposed for 2 hours to an 8-16 kHz octave-band noise (OBN) at either 91 or 100 dB SPL. Cochlear function was assessed via distortion product otoacoustic emissions (DPOAEs) and auditory brainstem responses (ABRs) from 1 h to 20 months post exposure. Whole-mounted tissues and plastic sections were examined to quantify hair cells and cochlear neurons. Our 100 dB SPL synaptopathic noise elicited a robust, but reversible, threshold shift; however, suprathreshold ABR amplitudes and cochlear synapses at high frequencies were permanently reduced by up to 45%. With age, synaptopathy was exacerbated compared to age-matched controls, and the area of damage spread to include previously unaffected lower frequencies. In contrast, the 91 dB exposure produced a robust temporary threshold shift but without acute synaptopathy. In animals aged to 1 year post exposure, no signs of accelerated synaptic loss or cochlear dysfunction were evident. We conclude that there is an interaction between noise and aging that is largely influenced by acute synaptopathy.

Corresponding author: Katharine Fernandez ([email protected])

S3.4 – Thu 27 Aug, 10:20-10:50
Using individual differences to study the mechanisms of suprathreshold hearing deficits
Hari M. Bharadwaj*, Golbarg Mehraei - Massachusetts General Hospital, Charlestown, MA, USA
Inyong Choi, Barbara G. Shinn-Cunningham - Boston University, Boston, MA, USA

About one in ten adults complaining of difficulty communicating in noisy settings turns out to have “normal hearing” (NH). In the laboratory, NH listeners from the general population exhibit large individual differences in suprathreshold perceptual ability. Here, we present a series of experiments using otoacoustic emissions, electrophysiology, and neuroimaging that seek to reveal the mechanisms that influence individual differences in performance in suprathreshold listening tasks. We find that both subcortical temporal coding and cortical oscillatory signatures of active listening independently correlate with performance. Interpreted in conjunction with animal models of neural degeneration in acoustic overexposure and aging, our results suggest that one factor contributing to performance differences among NH listeners arises from hidden hearing deficits likely originating at the level of the cochlear nerve. Further, our results show that cortical signatures of active listening may help explain why some listeners with good subcortical coding still perform poorly. Finally, we comment on the roles of subcortical feedback circuits (olivocochlear efferents and middle-ear muscle reflexes) and individual differences in anatomical factors in the interpretation of electrophysiological measures and the diagnosis of hidden hearing damage.

Corresponding author: Hari M. Bharadwaj ([email protected])

S3.5 – Thu 27 Aug, 11:10-11:40
Immediate and delayed cochlear neuropathy after noise exposure in adolescent mice
Jane Bjerg Jensen* - Massachusetts Eye and Ear Infirmary, Eaton Peabody Lab., Boston, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, USA; Department of Biomedical Sciences, CFIM, University of Copenhagen, Copenhagen, Denmark
Andrew C. Lysaght, M. Charles Liberman - Massachusetts Eye and Ear Infirmary, Eaton Peabody Lab., Boston, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, USA; Program in Speech and Hearing Bioscience and Technology, Division of Health Science and Technology, Harvard and Massachusetts Institute of Technology, Boston, USA
Klaus Qvortrup - Department of Biomedical Sciences, CFIM, University of Copenhagen, Copenhagen, Denmark
Konstantina Stankovic - Massachusetts Eye and Ear Infirmary, Eaton Peabody Lab., Boston, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, USA; Program in Speech and Hearing Bioscience and Technology, Division of Health Science and Technology, Harvard and Massachusetts Institute of Technology, Boston, USA

Our objective was to determine whether a cochlear synaptopathy, followed by neuropathy, occurs after noise exposure that causes temporary threshold shift (TTS) in adolescent mice, and to explore differences in molecular networks. Exposing 6-week-old CBA/CaJ mice to 8-16 kHz bandpass noise for 2 hours, we defined 97 dB sound pressure level (SPL) as the threshold for neuropathic noise associated with TTS, and 94 dB SPL as the highest non-neuropathic noise level associated with TTS. Mice exposed to neuropathic noise demonstrated immediate cochlear synaptopathy and delayed neurodegenerative neuronal loss. To gain insight into molecular mechanisms that may underlie TTS, we performed network analysis (Ingenuity® Pathway Analysis) of genes and proteins reported to be involved in noise-induced TTS. The analysis revealed 6 significant molecular networks, and one was new to the inner ear: Hepatocyte Nuclear Factor 4 alpha (HNF4α). We characterized Hnf4α expression in the murine cochlea from 6 weeks to 18 months of age, and discovered that Hnf4α expression decreased 16 months after exposure to neuropathic noise. We localized Hnf4α expression to spiral ganglion neurons and cochlear supporting cells. Our data contribute to the mounting evidence of cochlear neuropathy underlying “hidden” hearing loss and point to a novel orchestrator from the steroid receptor superfamily.

Corresponding author: Konstantina Stankovic ([email protected])

S3.6 – Thu 27 Aug, 11:40-12:00
Evaluation of cochlear processing and auditory nerve fiber intensity coding using auditory steady-state responses
Gerard Encina Llamas*, Bastian Epp - Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark
James Michael Harte - Interacoustics Research Unit, Kgs. Lyngby, Denmark
Torsten Dau - Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark

The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and particularly of outer-hair-cell function. Auditory steady-state responses (ASSRs) reflect coding of the stimulus envelope. Recent research in animals shows that noise over-exposure, producing temporary threshold shifts, can cause auditory nerve fiber (ANF) deafferentation in predominantly low-spontaneous-rate (SR) fibers. It is hypothesized here that deafferentation of low-SR fibers can lead to a reduction of ASSR amplitude at supra-threshold levels. ASSR input/output (I/O) functions were measured in two groups of normal-hearing adults at stimulus levels ranging from 20 to 90 dB SPL. First, multi-frequency ASSR I/O functions were obtained using a modulation depth of 85%. Second, ASSRs were obtained using a single sinusoidally amplitude-modulated (SAM) tone at four modulation depths (25, 50, 85, and 100%). Results showed that ASSR growth functions exhibit compression of about 0.25 dB/dB. The slope for levels above 60 dB SPL showed more variability across subjects. The slope of ASSR I/O functions could be used to estimate peripheral compression simultaneously at four frequencies below 60 dB SPL, while the slope above 60 dB SPL might be used to evaluate the integrity of intensity coding of low-SR fibers.

Corresponding author: Gerard Encina Llamas ([email protected])
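The compression figure quoted above corresponds to the slope of the ASSR growth function expressed in dB of response amplitude per dB of stimulus level. A minimal sketch of such a slope estimate is given below; the amplitude values are invented placeholders, not measured data.

# Sketch: estimate the dB/dB slope of an ASSR input/output function by linear
# regression of response level (dB re 1 uV) on stimulus level (dB SPL).
# All values below are invented placeholders, not measured data.
import numpy as np

level_db_spl = np.array([20, 30, 40, 50, 60])                  # stimulus levels
assr_amp_uv  = np.array([0.020, 0.027, 0.036, 0.048, 0.064])   # response amplitudes

resp_db = 20 * np.log10(assr_amp_uv)               # amplitude in dB re 1 uV
slope, intercept = np.polyfit(level_db_spl, resp_db, 1)
print(f"compressive growth: {slope:.2f} dB/dB")    # ~0.25 dB/dB, as reported above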

Session 4: Modelling individual hearing impairment

Chairs: Birger Kollmeier and Torsten Dau

Thu 27 Aug, 13:30-16:00

S4.1 – Thu 27 Aug, 13:30-14:00
Predictors of individual hearing-aid treatment success
Enrique A. Lopez-Poveda*, Peter T. Johannesen, Patricia Pérez-González - University of Salamanca, Salamanca, Spain
William S. Woods, Sridhar Kalluri - Starkey Hearing Research Center, Berkeley, CA, USA
José L. Blanco - University of Salamanca, Salamanca, Spain
Brent Edwards - Starkey Hearing Research Center, Berkeley, CA, USA

Hearing aid (HA) users report large differences in their level of satisfaction as well as in their level of performance with their HAs, and the reasons are still uncertain. We aimed at predicting HA treatment success from a linear combination of demographic variables, HA settings, behavioral and physiological estimates of cochlear mechanical dysfunction, behavioral estimates of auditory temporal processing abilities, and a measure of cognitive function. HA treatment success was assessed objectively using the speech reception threshold in noise, and subjectively using various standardized questionnaires. Success measures and predictors were obtained in 68 HA users with bilateral, symmetric, sensorineural hearing loss. Stepwise multiple linear regression was used to design predictive models of treatment success as well as to assess the relative importance of the predictors. The results suggest that once the HA gain is sufficient for both the speech and the noise to be above the audibility threshold, temporal processing ability is the most important predictor of speech-in-noise intelligibility; other variables (e.g., cochlear mechanical dysfunction, HA settings, or cognitive status) did not emerge as significant predictors. Subjectively assessed success was only weakly correlated with cognitive abilities and could not be predicted based on the present set of predictors.

Corresponding author: Enrique A. Lopez-Poveda ([email protected])
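A minimal sketch of forward stepwise linear regression, the kind of analysis named above, is given below on synthetic data; the predictor names and the cross-validated stopping rule are editorial assumptions, not the study's actual pipeline.

# Sketch of forward stepwise linear regression on synthetic data (placeholder
# predictor names; not the actual ISAAR study analysis).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 68                                           # number of hearing-aid users
X = rng.normal(size=(n, 4))                      # synthetic candidate predictors
names = ["temporal", "ohc", "age", "cognition"]  # placeholder predictor names
y = 1.5 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=1.0, size=n)  # e.g., SRT in noise

selected, remaining = [], list(range(4))
best_score = -np.inf
while remaining:
    # Try adding each remaining predictor and keep the one that helps most.
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:             # stop when no predictor improves the fit
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected predictors:", [names[j] for j in selected])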

S4.2 – Thu 27 Aug, 14:00-14:30 Neural modeling to relate individual differences in physiological and perceptual responses with sensorineural hearing loss Michael G. Heinz* - Purdue University, West Lafayette, IN, USA A great challenge in diagnosing and treating hearing impairment comes from the fact that people with similar degrees of hearing loss often have different speech-recognition abilities. Many studies of the perceptual consequences of peripheral damage have focused on outer-hair-cell (OHC) effects; however, anatomical and physiological studies suggest that many common forms of sensorineural hearing loss (SNHL) arise from mixed OHC and inner-hair-cell (IHC) dysfunction. Thus, individual differences in perceptual consequences of hearing impairment may be better explained by a more detailed understanding of the differential effects of OHC/IHC dysfunction on the neural coding of perceptually relevant sounds. Whereas it is difficult experimentally to estimate or control the degree of OHC/IHC dysfunction in individual subjects, computational neural models provide great potential for systematically predicting the complicated physiological effects of combined OHC/IHC dysfunction. This presentation will review important physiological effects in auditory-nerve (AN) responses following different types of SNHL and the ability of current AN models to capture these effects. In addition, the potential for quantitative spike-train metrics of temporal AN coding to provide insight into relating these differential physiological effects to differences in speech intelligibility will be discussed. Corresponding author: Michael G. Heinz ([email protected])

S4.3 – Thu 27 Aug, 14:50-15:20 Modeling temporal fine-structure and envelope processing in aided and unaided hearing-impaired listeners Stephan D. Ewert, Steffen Kortlang, Volker Hohmann* - Medizinische Physik/Cluster of Excellence Hearing4All, Universität Oldenburg, Oldenburg, Germany Sensorineural hearing loss typically manifests as elevated thresholds and loudness recruitment, mainly related to outer hair cell (OHC) damage. However, if these factors are partly compensated for by dynamic range compression in hearing aids, temporal coding deficits might persist, affecting temporal fine structure (TFS) and amplitude modulation (AM) processing. Moreover, such temporal coding deficits might already exist in elderly listeners with unremarkable audiometric thresholds as “hidden” hearing loss, likely caused by damage to inner hair cells (IHC) and/or subsequent stages. In individual hearing-impaired (HI) listeners, both OHC and IHC damage might affect perception to a different degree. To assess the consequences and relative roles of both, a simple functional model is proposed which mimics the coding of TFS and AM features based on simulated probabilistic auditory nerve responses. The model combines two possible detection mechanisms based on phase-locking and AM. OHC and IHC damage were incorporated and adapted to predict frequency modulation discrimination and discrimination of phase-jittered sweeps in elderly normal-hearing and in HI listeners. The roles of external noise present in the stimulus itself and of internal noise resulting from temporal coding deficits are assessed for the processing of speech signals using dynamic compression and noise reduction algorithms. Corresponding author: Volker Hohmann ([email protected])

S4.4 – Thu 27 Aug, 15:20-15:40 Modelling individual loudness perception in CI recipients with normal contralateral hearing Josef Chalupper*, Stefan Fredelake - Advanced Bionics, European Research Center, Hannover, Germany For users of cochlear implants (CI) with close-to-normal hearing on the contralateral side, a thorough balancing of loudness across ears potentially improves localization and spatial release from masking. Adjusting the fitting parameters of the CI, however, can be a tedious process, as individual electric loudness perception is affected by a multitude of specific parameters of electric stimulation, e.g., current amplitude, pulse rate, pulse width, number and interaction of electrodes, and inter-phase gap. Theoretically, psychoacoustic loudness models could help to reduce the effort for loudness balancing in clinical practice. In contrast to acoustic hearing, however, loudness models for electric hearing are rarely used, either in research or in clinical practice. In this study, the “practical” loudness model by McKay and McDermott was used to simulate behavioral data for electric hearing and the “Dynamic Loudness Model” [Chalupper and Fastl, 2002] for acoustic hearing. Analogous to modeling the acoustic loudness of individual hearing-impaired listeners, the transformation from excitation (here: current) to specific loudness needs to be adjusted individually. For some patients, model calculations show deviations from behavioral data. In addition to loudness growth, information on the electric field overlap between electrodes is required to predict individual electric loudness. Corresponding author: Josef Chalupper ([email protected])

S4.5 – Thu 27 Aug, 15:40-16:00 Modeling the effect of individual hearing impairment on sound localization in sagittal planes Robert Baumgartner*,S, Piotr Majdak, Bernhard Laback - Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria Normal-hearing (NH) listeners use monaural spectral cues to localize sound sources in sagittal planes, including up-down and front-back directions. The salience of monaural spectral cues is determined by the spectral resolution and the dynamic range of the auditory system. Both factors are commonly degraded in impaired auditory systems. In order to simulate the effects of outer hair cell (OHC) dysfunction and loss of auditory nerve (AN) fibers on localization performance, we incorporated a well-established model of the auditory periphery [Zilany et al., 2014, JASA 135] into a recent model of sound localization in sagittal planes [Baumgartner et al., 2014, JASA 136]. The model was evaluated for NH listeners and then applied to conditions simulating various degrees of OHC dysfunction. The predicted localization performance degraded significantly with increasing OHC dysfunction and approached chance performance in the condition of complete OHC loss. When further applied to conditions simulating losses of AN fibers with specific spontaneous rates (SRs), predicted localization performance for moderately loud sounds depended much more on the survival of low- or medium-SR fibers than on that of the more frequent high-SR fibers. This result is particularly important given the recent finding that noise-induced cochlear neuropathy seems to be selective for fibers with low and medium SRs. Corresponding author: Robert Baumgartner ([email protected])

Session 5: Individualized diagnostics and compensation strategies

Chairs: Karolina Smeds and Jeremy Marozeau

Thu 27 Aug, 16:20-17:00 Fri 28 Aug, 08:40-12:10

S5.1 – Thu 27 Aug, 16:20-16:40 Individual speech recognition in noise, the audiogram, and more: Using automatic speech recognition (ASR) as a modelling tool and consistency check across audiological measures Birger Kollmeier*, Marc René Schädler, Anna Warzybok, Bernd T. Meyer, Thomas Brand - Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany How well do the various audiological findings fit together, and how can this information be used to characterize the individual hearing problem of each patient – preferably in a way that is independent of his or her native language? A procedure to find solutions for this fundamental diagnostic problem in rehabilitative audiology is proposed and discussed: It builds on the closed-set Matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages [review by Kollmeier et al., 2015, Int. J. Audiol. online first]. The results can be predicted by an individually adapted, reference-free ASR system which utilizes the limited vocabulary of the Matrix test and its fixed syntactic structure for training and yields a high prediction accuracy for normal listeners across certain noise conditions [Schädler et al., submitted]. The same setup can be used to predict a range of psychoacoustical experiments and to evaluate the required individual settings of the physiologically and psychoacoustically motivated front end of the recognizer to account for the individual hearing impairment. Hence, a minimum set of assumptions and individual audiological parameters may be used to characterize the individual patient and to check the consistency across his or her available audiological data in a way comparable across languages. Corresponding author: Birger Kollmeier ([email protected])

S5.2 – Thu 27 Aug, 16:40-17:00 Coding of interaural phase differences in BiCI users Stefan Zirn*, Susan Arndt, Thomas Wesarg - Department of Oto-Rhino-Laryngology of the Medical Center, University of Freiburg, Freiburg, Germany The ability to detect a signal masked by noise is improved in normal-hearing (NH) listeners when interaural phase differences (IPD) between the ear signals exist either in the masker or the signal. We determined the impact of different coding strategies in bilaterally implanted cochlear implant (BiCI) users with and without fine-structure coding (FSC) on masking level differences. First, binaural intelligibility level differences (BILD) were determined in NH listeners and BiCI users using their clinical speech processors. NH subjects (n=8) showed a significant BILD of 7.5 ± 1.3 dB. In contrast, BiCI users (n=7) without FSC (HDCIS) revealed no significant BILD (0.4 ± 0.6 dB) and with FSC (FS4) a barely significant BILD (0.6 ± 0.9 dB). Second, IPD thresholds were measured in BiCI users using either their speech processors with FS4 or direct stimulation with FSC. With the latter approach, synchronized stimulation providing an interaural accuracy of stimulation timing of 1.67 μs was realized on pitch-matched electrode pairs. The resulting individual IPD thresholds were lower in most of the subjects with direct stimulation than with their speech processors. These outcomes indicate that some BiCI users can benefit from increased temporal precision of interaural FSC and adjusted interaural frequency-place mapping, presumably resulting in improved BILD. Corresponding author: Stefan Zirn ([email protected])

S5.3 – Fri 28 Aug, 08:40-09:10 Loss of speech perception in noise – Causes and compensation Harvey Dillon*, Elizabeth Beach, Ingrid Yeend, Helen Glyde - NAL, The HEARing CRC, Sydney, Australia Jörg Buchholz - NAL, Macquarie University, The HEARing CRC, Sydney, Australia Jorge Mejia, Tim Beechey, Joaquin Valderrama - NAL, The HEARing CRC, Sydney, Australia Mridula Sharma - Macquarie University, The HEARing CRC, Sydney, Australia This paper reports on two of what are probably many reasons why hearing-impaired people need better signal-to-noise ratios (SNRs) than others to communicate in background noise, and shows the effectiveness of beamforming in addressing this deficit. The first reason is the inaudibility of high-frequency sounds, even when aided; these sounds show the largest head-diffraction effects, which are key to better-ear glimpsing, the mechanism that most facilitates speech understanding in spatialized noise. The second (probable) reason is reduced resolution arising from noise damaging high-level nerve fibres. Early data from a comprehensive experiment examining this behaviourally and electrophysiologically will be presented. Wireless remote microphones improve SNR the most, but cannot always be used. Next best are superdirectional binaural beamformers. These improve the speech reception threshold in noise (SRTn) by 1 to 5 dB relative to conventional directional microphones. The presentation will show how the degree of improvement depends on the manner of evaluation. Benefits measured at SNRs typical of realistic listening conditions, whether based on perceived quality, change in acceptable background noise level, or change in SRTn, are greater than those measured when SRTn is evaluated at very negative SNRs. Corresponding author: Harvey Dillon ([email protected])

S5.4 – Fri 28 Aug, 09:10-09:40 Compensation of speech perception in hearing loss: How and to what degree can it be achieved? Deniz Başkent*, Pranesh Bhargava, Jefta Saija, Carina Pals - University of Groningen, University Medical Center Groningen, Groningen, The Netherlands Anastasios Sarampalis - University of Groningen, Department of Psychology, Groningen, The Netherlands Anita Wagner, Etienne Gaudrain - University of Groningen, University Medical Center Groningen, Groningen, The Netherlands Perception of speech that is degraded by environmental factors, such as background noise or poor room acoustics, can be enhanced using cognitive mechanisms. Two such mechanisms are the top-down perceptual restoration of degraded speech using cognitive and linguistic resources, namely phonemic restoration, and the allocation of additional cognitive resources to speech comprehension, namely listening effort. Reduced audibility and sound quality caused by hearing loss, similar to external factors, negatively affect speech intelligibility. However, it is not clear whether hearing-impaired individuals and hearing-device users can use these cognitive compensation mechanisms as successfully, due to the interactive effects of internal and external speech-degrading factors, aging, and hearing device front-end processing. Our recent research has shown that degradations due to hearing loss or due to external factors can be compensated. However, when the two are combined, the benefits of top-down compensation can be limited. Front-end processing and aging can also influence the compensation, but not always in a predictable manner. These findings indicate that new methods need to be incorporated into audiological practices and device development procedures to capture such complex and interactive effects of cognitive factors in speech perception with hearing loss. Corresponding author: Deniz Başkent ([email protected])

S5.5 – Fri 28 Aug, 10:00-10:30 Individualizing hearing aid fitting through novel diagnostics and self-fitting tools Brent Edwards* - EarLens Corp., Menlo Park, CA, USA The audiogram has long been considered a poor representation of a person’s hearing impairment, as evidenced by its poor ability to predict hearing aid benefit. Preference for the setting of sophisticated hearing aid features such as frequency lowering and noise reduction can depend on many factors, making the audiologist’s job of fitting these features to individual patients difficult. Novel diagnostics and outcome measures have been developed to aid with compensation strategies, including approaches that are regulated by the patient. Wireless integration of hearing aids with smartphones allows these approaches to take place outside of the clinic and in the patient’s real-world experience, helping to individualize the treatment of their hearing loss. This talk will review these developments and their potential effect on the future role of hearing healthcare professionals in the provision of hearing aid technology. Corresponding author: Brent Edwards ([email protected])

S5.6 – Fri 28 Aug, 10:30-10:50 Preference for compression speed in hearing aids for speech and music and its relationship to sensitivity to temporal fine structure Brian C. J. Moore* - Department of Experimental Psychology, University of Cambridge, Cambridge, England Aleksander Sęk - Institute of Acoustics, Adam Mickiewicz University, Poznań, Poland Multi-channel amplitude compression is widely used in hearing aids. The preferred compression speed varies across individuals. Moore [2008, Trends Amplif. 12, 300-315] suggested that reduced sensitivity to temporal fine structure (TFS) may be associated with a preference for slow compression. This idea was tested using a simulated hearing aid. It was also assessed whether preferences for compression speed depend on the type of stimulus: speech or music. Eighteen hearing-impaired subjects were tested, and the simulated hearing aid was fitted individually using the CAM2 method. On each trial a given segment of speech or music was presented twice. One segment was processed with fast compression and the other with slow compression, and the order was balanced across trials. The subject indicated which segment was preferred and by how much. On average, slow compression was preferred over fast compression, more so for music, but there were distinct individual differences, which were highly correlated for speech and music. Sensitivity to TFS was assessed using the difference limen for frequency (DLF) at 2 kHz and by two measures of sensitivity to interaural phase at low frequencies. The results for the DLFs, but not for the measures of sensitivity to interaural phase, provided some support for the suggestion that preference for compression speed is affected by sensitivity to TFS. Corresponding author: Brian C. J. Moore ([email protected])

S5.7 – Fri 28 Aug, 10:50-11:10 Individual factors in speech recognition with binaural multimicrophone noise reduction: Measurement and prediction Tobias Neher*, Jacob Aderhold - Medizinische Physik and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany Daniel Marquardt - Signal Processing Group and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany Thomas Brand - Medizinische Physik and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany Multi-microphone noise reduction algorithms typically produce large signal-to-noise ratio (SNR) improvements, but they can also severely distort binaural information and thus compromise spatial hearing abilities. To address this problem, Klasen et al. [2007, IEEE Trans. Signal Process.] proposed an extension of the binaural multi-channel Wiener filter (MWF) that suppresses only part of the noise and in this way preserves some binaural information (MWF-N). The current study had three aims: (1) to assess aided speech recognition with MWF and MWF-N for a group of hearing-impaired listeners, (2) to explore the impact of individual factors on their performance, and (3) to test if a binaural speech intelligibility model [Beutelmann and Brand, 2010, JASA] can predict outcome. Sixteen elderly hearing aid users took part. Speech recognition was assessed using headphone simulations of a spatially complex speech-in-noise scenario. Individual factors were assessed using audiometric, psychoacoustic (binaural), and cognitive measures. Analyses showed clear benefits from MWF and MWF-N, and also suggested sensory and binaural influences on speech recognition. Model predictions were reasonably accurate for MWF but not MWF-N, suggesting a need for some model refinement concerning binaural processing abilities. Corresponding author: Tobias Neher ([email protected])

S5.8 – Fri 28 Aug, 11:30-11:50 Can individualised acoustical transforms in hearing aids improve perceived sound quality? Søren Laugesen*, Niels Søgaard Jensen, Filip Marchman Rønne, Julie Hefting Pedersen - Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark All ears are acoustically different, but nevertheless most hearing aids are fitted using average acoustical transforms (open ear gain, real ear to coupler difference, and microphone location effect). This paper presents an experiment which aimed to clarify whether benefits in terms of perceived sound quality can be obtained from fitting hearing aids according to individualised acoustical transforms instead of average transforms. Eighteen normal-hearing test subjects participated, and hearing-aid sound processing with various degrees of individualisation was simulated and applied to five different sound samples, which were presented over insert phones in an A/B test paradigm. Data were analysed with the Bradley-Terry-Luce model. The key result was that individualised acoustical transforms measured in the “best-possible” way in a laboratory setting were preferred over average transforms. This result confirms the hypothesized sound-quality benefit of individualised over average transforms, although there was some variation across test subjects and sound samples. In addition, it was found that representing the individualised transforms at a lower frequency resolution was preferred over representing them in fine spectral detail. The analysis suggests that this may be due to an artefact of the low-resolution representation, which added a slight boost in the 6-8 kHz frequency range. Corresponding author: Søren Laugesen ([email protected])
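The Bradley-Terry-Luce analysis mentioned in this abstract can be summarized with a small sketch. The preference counts and condition labels below are hypothetical, and the minorization-maximization (Zermelo) updates shown are a standard way of fitting BTL worth parameters; this is not the authors' analysis code.

```python
# Minimal sketch of a Bradley-Terry-Luce (BTL) fit to paired-comparison counts using
# the standard minorization-maximization (Zermelo) updates. The win counts and the
# condition labels are hypothetical and do not reproduce the experiment's data.
import numpy as np

# wins[i, j] = number of trials on which condition i was preferred over condition j
wins = np.array([[0, 12, 15],   # e.g., individualised transform, fine resolution
                 [6,  0, 14],   # e.g., individualised transform, low resolution
                 [3,  4,  0]])  # e.g., average transform
n_compare = wins + wins.T       # total comparisons per pair

def fit_btl(wins, n_compare, iters=200):
    k = wins.shape[0]
    worth = np.ones(k)
    total_wins = wins.sum(axis=1)
    for _ in range(iters):
        denom = np.zeros(k)
        for i in range(k):
            for j in range(k):
                if i != j and n_compare[i, j] > 0:
                    denom[i] += n_compare[i, j] / (worth[i] + worth[j])
        worth = total_wins / denom
        worth /= worth.sum()    # normalize so the worth parameters sum to 1
    return worth

print(fit_btl(wins, n_compare))  # larger worth = more often preferred
```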

S5.9 – Fri 28 Aug, 11:50-12:10 A profiling system for the assessment of individual needs for rehabilitation with hearing aids, based on human-related intended use (HRIU) Wouter A. Dreschler*, Inge Brons - Department of Clinical & Experimental Audiology, AMC, Amsterdam, The Netherlands A new profiling system has been developed for the reimbursement of hearing aids, based on individual profiles of compensation needs. The objective is to provide an adequate solution: a simple hearing aid when possible and a more complex aid when necessary. For this purpose we designed a model to estimate user profiles for human-related intended use (HRIU). HRIU is based on self-report data: a modified version of the AIADH, combined with a COSI approach. AVAB results determine the profile of disability, and COSI results determine the profile of targets. The difference between these profiles can be interpreted as the profile of compensation needs: the HRIU profile. This approach yields an individual HRIU profile with scores on six dimensions: detection, speech in quiet, speech in noise, localization, focus, and noise tolerance. The HRIU profile is a potential means to determine the degree of complexity and/or sophistication of the hearing aid needed, which can be characterized by a product-related intended use profile (PRIU). Post-fitting results show improvements in the six dimensions and determine whether the hearing aid is adequate. The approach also provides well-standardized data to evaluate the basic assumptions and to improve the system based on practice-based evidence. This new approach will be highlighted and some first results will be presented. Corresponding author: Wouter A. Dreschler ([email protected])
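To make the profile arithmetic described in this abstract concrete, the sketch below computes a compensation-needs (HRIU) profile as the difference between a targets profile and a disability profile on the six dimensions listed above. The scores, scales, and sign convention are hypothetical assumptions; the actual AVAB/AIADH and COSI scoring rules are not reproduced here.

```python
# Illustrative sketch of the profile arithmetic only: the compensation-needs (HRIU)
# profile is taken as the difference between the targets profile and the disability
# profile on the six dimensions. Scores and sign convention are hypothetical.
DIMENSIONS = ["detection", "speech in quiet", "speech in noise",
              "localization", "focus", "noise tolerance"]

disability = {"detection": 3, "speech in quiet": 4, "speech in noise": 7,
              "localization": 5, "focus": 6, "noise tolerance": 4}   # self-report (AVAB/AIADH)
targets = {"detection": 5, "speech in quiet": 8, "speech in noise": 9,
           "localization": 6, "focus": 8, "noise tolerance": 6}      # COSI-style goals

hriu_profile = {dim: targets[dim] - disability[dim] for dim in DIMENSIONS}
print(hriu_profile)  # larger values suggest larger compensation needs on that dimension
```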

Poster Sessions I and II

Posters will remain on display throughout the symposium. Presenters will be at their posters: Wed 26 Aug, 17:00-19:00 (odd-numbered posters) Thu 27 Aug, 17:00-19:00 (even-numbered posters)

P.1 – Wed 26 Aug, 17:00-19:00 Influences of chronic alcohol intake on hearing recovery of CBA mice from temporary noise-induced threshold shift Joong Ho Ahn*, Myung Hoon Yoo - Department of Otolaryngology, Asan Medical Center, University of Ulsan College of Medicine, Ulsan, South Korea Objective: To investigate the effects of chronic alcohol intake on hearing recovery of CBA mice from noise-induced temporary threshold shift (TTS). Methods and Materials: We divided CBA mice with normal hearing into two groups: a control group (n=6) and a 1 g/kg alcohol group (n=13). In the alcohol group, ethanol was administered intragastrically via a feeding tube daily for 3 months. In the control group, normal saline was administered for 3 months. TTS was induced by a 1-hour exposure to 110 dB broad-band noise. Hearing thresholds were checked with click ABR before noise exposure, just after exposure, and 1, 3, 5, 7, and 14 days after exposure. Anatomical findings, based on immunohistochemistry and western blots for HIF1-α, were also evaluated. Results: After 3 months, before noise exposure, average hearing thresholds at 4, 8, and 16 kHz were significantly higher in the alcohol group than in the control group. After noise exposure, however, the alcohol group showed no significant difference in hearing recovery from the control group at any tested time. HIF1-α expression was decreased in the alcohol group compared with the control group before and after noise exposure. Conclusion: Low-dose chronic alcohol provoked elevated thresholds before noise exposure in CBA mice. However, chronic alcohol intake did not influence the recovery from TTS. Corresponding author: Joong Ho Ahn ([email protected])

P.2 – Thu 27 Aug, 17:00-19:00 Are temporary threshold shifts reflected in the auditory brainstem response? Lou-Ann Christensen Andersen*,S, Ture Andersen - Institute of Clinical Research, University of Southern Denmark, Odense, Denmark Ellen Raben Pedersen - The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark Jesper Hvass Schmidt - Institute of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Audiology, Odense University Hospital, Odense, Denmark Background: Temporary hearing loss in connection with excessive exposure to sound is described as a temporary threshold shift (TTS). Via the corticofugal descending auditory system, the auditory cortex is in a position to directly affect the medial olivocochlear system (MOCS) and the excitation level of the cochlear nucleus. One of the functions of the MOCS may be to protect the inner ear from noise exposure. Objective: The primary purpose was to investigate the influence of auditory attention on TTSs measured with distortion product otoacoustic emissions (DPOAEs) and auditory brainstem responses (ABRs), using noise, familiar music, and unfamiliar music as exposure stimuli. The secondary purpose was to investigate a possible difference in the magnitude of TTS after exposure to the three different sound stimuli. Method: Normal-hearing subjects were exposed to the three different sound stimuli in randomized order on separate days. Each stimulus was 10 minutes long and the average sound pressure level was 100 dB linear (96-97 dBA). DPOAEs at 2, 3, and 4 kHz and ABRs at 4 kHz were recorded before and immediately after the sound exposure. Results: Preliminary results show a tendency towards an increase in the amplitude of ABR wave I (Jewett I), representing action potentials of the spiral ganglion neurons, in the left ear immediately after sound exposure. Corresponding author: Lou-Ann Christensen Andersen ([email protected])

P.3 – Wed 26 Aug, 17:00-19:00 Best application of head-related transfer functions for competing-voices speech recognition in hearing-impaired listeners Lars Bramsløw*, Marianna Vatti, Renskje K. Hietkamp, Niels Henrik Pontoppidan - Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark Speech separation algorithms, such as sum/delay beamformers, disturb spatial cues. When presenting separated speech sources over hearing aids, should the spatial cues be restored? The answer was sought by presenting speech sources to a listener via headphones, either directly or after application of head-related transfer functions (HRTFs) to simulate free-field listening. The HRTF application provides both a monaural effect, in the form of a substantial high-frequency gain due to the outer ear (pinna and ear canal), and a binaural effect composed of interaural level and time differences. The monaural effect adds audibility, which is crucial for hearing-impaired listeners, and the binaural effect adds spatial unmasking cues, which may also be beneficial. For the presentation of two competing voices, we have measured the relative monaural and binaural contributions to speech intelligibility using a previously developed competing-voices test. Two consecutive tests, using 13 and 10 hearing-impaired listeners with moderate, sloping hearing losses, were conducted, combining different HRTF conditions and horizontal-plane angles. Preliminary analysis indicates that hearing-impaired listeners do benefit from HRTF application and that the monaural gain component of the HRTF is the main contributor to improved speech recognition. Corresponding author: Lars Bramsløw ([email protected])

P.4 – Thu 27 Aug, 17:00-19:00 Predicting masking release of lateralized speech Alexandre Chabot-Leclerc*, Ewen N. MacDonald, Torsten Dau - Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark Locsei et al. [2015, Speech in Noise Workshop, Copenhagen, p. 46] measured speech reception thresholds (SRTs) in anechoic conditions where the target speech and the maskers were lateralized using interaural time delays. The maskers were speech-shaped noise (SSN) and reversed babble (RB) with two, four, or eight talkers. For a given interferer type, the number of maskers presented on the target’s side was varied, such that none, some, or all maskers were presented on the same side as the target. In general, SRTs did not vary significantly when at least one masker was presented on the same side as the target. The largest masking release (MR) was observed when all maskers were on the opposite side of the target. The data could be accounted for using a binaural extension of the sEPSM model [Jørgensen and Dau, 2011, J. Acoust. Soc. Am. 130(3), 1475–1487], which uses a short-term equalization–cancellation process to model binaural unmasking. The modeling results suggest that, in these conditions, explicit top-down processing, such as streaming, is not required and that the MR could be fully accounted for by bottom-up processes alone. However, independent access to the noisy speech and the noise alone by the model could be considered as implicit streaming and should therefore be taken into account when considering “bottom-up” models. Corresponding author: Alexandre Chabot-Leclerc ([email protected])

P.5 – Wed 26 Aug, 17:00-19:00 Long-term changes in music perception in Korean cochlear-implant listeners Yang-Sun Cho*, Sung Hwa Hong - Department of Otolaryngology, Sungkyunkwan University, Samsung Medical Center, Seoul, South Korea The purpose of this study was to assess long-term post-implant changes in music perception in cochlear implant (CI) listeners. The music perception ability of 27 participants (5 men, 22 women) was evaluated with the Korean version of the Clinical Assessment of Music Perception test, which consists of pitch discrimination, melody identification, and timbre identification. Also, a questionnaire was used to quantify listening habits and level of musical experience. Mean postoperative durations at the first and second tests were 12.8 and 30.9 months, respectively. Participants were divided into two groups, good or poor performance in the first test, with reference to the average score on each measure. In the good performance group, pitch discrimination at the second test showed no difference from the first test (p=0.462), but in the poor performance group the pitch discrimination score improved significantly (p=0.006). The second test results of the good performance group were still better than those of the poor performance group (p=0.002). In the melody identification test, the two groups showed no change at the second test. The timbre test showed the same pattern as the pitch test: the poor performance group had improved at the second test (p=0.029). Scores for listening habit and level of musical experience significantly decreased postoperatively (p=0.06 and p