Hearing loss in other languages

Do people who do not speak English experience hearing loss differently than English speakers? The answer is no. All humans experience hearing loss the same way. Then why should you keep reading? To learn some really fascinating things about hearing technology. Though hearing loss happens the same way to all people, through heredity, loud noise exposure, damaging medications called ototoxics, and aging, how hearing aids can and need to be programmed for different languages is a growing technological ability that was never possible before the digital revolution and the development of very sophisticated software algorithms. “There is no inherent reason why the vocal output of an English speaker should be any different from a person speaking Chinese. However, the speech recognition is based on which sounds are linguistically distinctive or important in that language. The frequency band importance of a Chinese speaker has greater value in the lower frequencies than for English because Chinese relies more on pitch changes in the lower-frequency vowels.” (Dr. Marshall Chasin, The Hearing Review, October 1, 2008)

Even if we have never traveled around the world, TV and the internet have brought the languages of the world to our ears. Some languages are guttural, some tonal, some more nasal, and so on. Hearing loss in any language means missing the important frequency cues that distinguish the words of the world's languages. From that same article quoted above, here is an overview of the hearing aid programming changes that can help a person hear better in his or her native language. It is a very technical paragraph, but wade through it and I will summarize it for you.

“In general, one can say that if nasals are linguistically more important (distinctive) in a language (eg, Portuguese), then more gain should be specified in the 125-2000 Hz region where nasals have their greatest energy spectrographically. The same frequency region is important for tonal languages and timed languages since the tone or the time-lengthening (or morae) is manifested on the lower frequency vowels and nasals. In languages where palatalization is important (eg, Russian), the important frequency region is from 3000 to 3500 Hz, and in languages where retroflexion is important (eg, Mandarin Chinese), the important frequency region is from 2700 to 3000 Hz. In SOV languages (eg, Japanese and Hindi), more gain at low intensity input levels for WDRC (Wide Dynamic Range Compression) should be specified to ensure audibility of sentence final postpositions. In Arabic (Semitic) languages, there are many high-frequency consonants (velars, uvulars, and pharyngeal fricatives) that indicate a need for more high-frequency gain than would be specified for an “English program.”

What Dr. Chasin is saying is that different speech sounds need different emphasis through the hearing aids for different kinds of languages. Amazingly, we live in a time when hearing aid technology can do that and more. Just like smartphones have built-in language translators, hearing aids are appearing in the marketplace that can translate other languages in real time, in your ears, as you hear them spoken. Maybe by now you are asking, “What does all of this mean for me, someone who doesn’t get too far away from home here in beautiful Central Pennsylvania?” I wanted you to ponder just how complex languages can be for the hearing impaired and what strides have been made in hearing aid technology. Whatever your hearing loss, or your language, hearing aids can help you hear so much better.

If you have the symptoms of hearing loss, let a professional help you find out why. A hearing professional will help you sort out the technology level that meets your needs and your budget, and will answer your questions about your hearing.


Jeffrey L. Bayliff, NBC-HIS, is owner, Hear the Birds Hearing Aid Center, Lock Haven, PA.