My father has hearing loss and it's bad enough that not only does he use hearing aids, he is constantly turning them up until they feed back and start ringing. He doesn't hear the ringing, but I do, and I have some hearing damage of my own. And he wonders why his hearing aids go through batteries so fast.
Their description of how hearing loss works gives me some ideas on how I can help my father manage his hearing loss better than just constantly buying batteries.
However, I do have one complaint with the article, and that's their (mis)use of terminology, specifically, dynamic range. Dynamic range is not, as they claim, the range of frequencies one can hear, from lowest to highest (e.g. 20Hz-20kHz). That's bandwidth. What dynamic range is, is the ratio of quietest to loudest sounds possible, often expressed in dB.[1]
For example, as they mention, human hearing has about 120dB of dynamic range. An audio CD can encode a dynamic range of 96dB. The 24-bit files professional audio studios work with can represent up to 144dB of dynamic range.
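As a quick sanity check on those numbers, the theoretical dynamic range of linear PCM works out to about 6.02 dB per bit; a small sketch (function name is mine, not from the article):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM at a given bit depth:
    the ratio between the largest and smallest representable amplitude,
    expressed in dB. Works out to roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# 16-bit CD audio -> ~96 dB, 24-bit studio files -> ~144 dB
```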
Perhaps it's a pedantic distinction, but using already existing terms for what you mean to say is less likely to cause confusion than misusing one that means something else.
It would be like an article about a camera talking about its fast f/1.4 shutter speed.
http://en.wikipedia.org/wiki/Lens_speed
"Lens speed refers to the maximum aperture diameter, or minimum f-number, of a photographic lens. A lens with a larger maximum aperture (that is, a smaller minimum f-number) is called a "fast lens" because it delivers more light intensity (illuminance) to the focal plane, achieving the same exposure with a faster shutter speed."
1. I will build it myself and figure out the details, to hell with the audiologists and their racket.
2. I will carry the microphone and amplifier in my shirt pocket. No more feedback.
Ironically, with the advent of personal electronics, everybody wears some sort of gizmo on their body, so I think we could just persuade the elderly that hearing aids don't need to be invisible any more.
Also, I found that the best way to get rid of feedback was to get an ear mold rather than using an "open-fit" mold. This creates a clear separation between the speaker and the microphone and pretty much solves the problem in my experience.
EDIT: I noticed late that you addressed the cosmetic issue in your post. I don't see the elderly changing but our generation just might.
The frequency content of a sound is directional, and more so as the frequency increases. As a test, listen for a difference in the highs with your (computer/stereo/home theater/whatever) speakers both aligned with your ears, and not. Due to this directionality, a microphone in your shirt pocket would pick up a sound differently than one near your ear.
The brain does some very fancy and clever tricks based on the differences in the timing and phase of a sound as it arrives in both ears to determine things like relative position and distance. Having the hearing aid mics receive sound similarly to your ears should make it easier for the brain to continue to do these nifty tricks.
Otherwise, yeah, put the amp and other circuitry in your shirt pocket or wherever else is convenient. It should be a lot easier and cheaper to fit an amplifier and multi-band compressor in your shirt pocket, than in a tiny sliver of plastic that has to fit behind the earlobe.
This is how hearing aids used to be:
http://www.hearingaidmuseum.com/gallery/Transistor%20(Body)/...
That won't necessarily fix the problem. Feedback comes from bad electronic/DSP design. Physical mic/speaker positioning makes certain solutions harder, but in practical designs it's not (usually) the limiting factor.
Phones of all kinds, Skype, etc include adaptive echo/feedback cancellation already. It's a well-understood technology - adaptive cancellation has been used since the 1960s - although the fact that it exists is maybe not as well known as it could be.
(Protip - if Skype starts feeding back, you can often reset the adaptive filter by clapping once loudly.)
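For the curious, the core of such an adaptive canceller can be sketched in a few lines. This is a toy LMS filter, not how Skype or any real hearing aid implements it; all names and parameters are illustrative:

```python
import numpy as np

def lms_cancel(reference, received, taps=8, mu=0.05):
    """Toy adaptive feedback canceller (LMS): learn the path from the
    loudspeaker signal `reference` to the microphone signal `received`,
    then subtract the predicted feedback, leaving the wanted signal."""
    w = np.zeros(taps)                     # adaptive filter weights
    out = np.zeros(len(received))
    for n in range(taps, len(received)):
        x = reference[n - taps:n][::-1]    # most recent speaker samples
        y_hat = w @ x                      # predicted feedback at the mic
        e = received[n] - y_hat            # residual after cancellation
        w += mu * e * x                    # LMS weight update
        out[n] = e
    return out
```

With a white-noise reference and an echo-only mic signal, the residual collapses toward zero once the filter converges.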
As for hearing aids - the first papers mentioning multiband compression date from the 1980s, so there's nothing new here, except maybe a lack of research.
My mother's hearing is pretty bad now, and I had to talk a professional audiologist through tuning her aids for maximum intelligibility. I couldn't fault his personal skills, but the consonant/vowel heuristic he'd learned in training was oversimplified and not giving good results.
He'd basically set up a phone curve filter, but in fact you need some low-mid for good intelligibility, especially on male voices. Once he dialed that back in everyone was happy.
Thing is, we needed three one hour sessions to get it in the ballpark. The real problem with aids isn't the technology, it's the fact that setting up a good prescription is really difficult and time-consuming - even more so for elderly people who may have problems describing what they're hearing.
This may also be a sign that the hearing aid wasn't fitted properly — or the shape of your father's ear has changed a bit since it was fitted. Poor fit makes it much more likely for this to happen (coz speaker and mike have a more open channel between 'em, and more ambient noise gets in so folk turn 'em up more). (My dad had similar problems until he got a new ear canal mould taken.)
It may also be a sign that your dad's hearing loss has changed since the hearing aid was chosen, and it needs adjusting, or he needs a different type.
Feedback is usually a sign that something needs fixing ;-)
What I mean to say is that hearing aids are a piece of technology that is tuned (pimped?) so much that there is very little buffer.
Today, my new hearing aids detect feedback and compensate, but of course that will eliminate sounds I need to hear, as well.
>Dynamic range is the difference between the loudest and quietest sound you can hear.
Here's more info: https://www.apple.com/accessibility/ios/hearing-aids/
That said, if I had an older hearing aid or didn't have this one, I'd definitely use this app. They are spot on about hearing loss and how it's more than just a volume thing. In fact, most of hearing loss is really an understanding thing. I can hear your voice just fine -- I just can't hear 100% of it, so the words don't make sense to me right away.
It was one of the most fascinating things I've ever heard: it turns out that not only does the cochlea perform a Fourier transform of the sounds we're hearing, it can also selectively amplify some frequencies, by vibrating the very same hairs that detect the sounds.
Sometimes the mechanism that amplifies some sounds goes wrong, which is why old dogs sometimes seem to emit a high-pitched sound from their ears; it is also the cause of some forms of tinnitus.
If you have some time to kill do go read the wikipedia pages of the cochlea and hair cells, it's really fascinating stuff!
[1] http://www.rockefeller.edu/research/faculty/labheads/JamesHu...
Or do you only hear it from dogs?
I'm profoundly deaf. This is a technical term classifying the degree of hearing loss; to give you a sense of where this fits, the typical classification range is mild, moderate, severe, profound, total.
Between a combination of hearing aids and lip-reading, I've done a reasonable job of integrating into a hearing society. Not perfect, but ok.
I've often wished for a different approach to correcting hearing. It crystallized for me after I read this article by Jon Udell: http://blog.jonudell.net/2014/12/09/why-shouting-wont-help-y...
In that article, what Jon found was that his mom would hear best if you spoke at a low to medium volume close to her ear - this worked better than any shouting at a greater distance could accomplish.
And it should be easy for you to simulate - get a friend to talk to you from 50' away - you can still hear them, but there's some detail loss that wouldn't happen if they're 3' away.
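The distance part is easy to quantify: under the free-field inverse-square law, level drops about 6 dB per doubling of distance, so 3 feet versus 50 feet is a difference of roughly 24 dB. A quick back-of-envelope helper (names are mine; real rooms with reflections lose less than this):

```python
import math

def level_drop_db(near, far):
    """Free-field inverse-square law: SPL falls 6 dB for each doubling
    of distance from a point source (room reflections ignored)."""
    return 20 * math.log10(far / near)

# level_drop_db(3, 50) -> ~24.4 dB quieter at 50 feet than at 3 feet
```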
I still benefit - a lot - from MBC, but if someone could come up with a way to make the incoming sound sound as if it were right beside me, man, that would really help me understand people clearly.
One non-technical solution, which people use to ensure that deaf people can understand you clearly, is to enunciate consonants audibly. An example of this is the word "red" - it becomes "erREDdead". I don't know if there's a name for this so I can't point you to a page describing how to extra-enunciate all the letters. As useful as it is, people speaking to me like that always makes me feel like I'm dumb, because they sound dumb saying it. Clearly I have issues :-)
But if it helps you hear, I think it's great!
That article you linked to was interesting. Thanks.
That way a person can judge the improvement that MBC gives to a person with hearing loss, instead of just judging the reduction of quality to that of a person with perfect hearing.
But again, excellent article!
If you talk to her and then turn on a tap to get a glass of water: conversation over. The fridge motor comes on: conversation over. Any non-verbal sound is noise that obscures all words to her. "What?" is the response to nearly everything; every word from anyone's mouth has to be repeated twice except in a dead silent room. She listens to the TV at level 20 and it's very draining for everyone around her.
But she won't get a hearing aid! She's 70 years old but refuses to even discuss it. It's odd: if you tell a person who can't see that they may need glasses, that's OK, but if you tell a person hard of hearing that they may need a hearing aid, it's like you said the most obnoxious thing you could ever say to anyone.
I really hope this taboo goes away as new generations get more and more comfortable with augmenting their senses.
Is there a way of highlighting how often she says "what?" like a tally chart? I would try that with my mum to get a message across. She'd likely be deeply offended, but I think the message would get across.
It's probably partly due to hearing loss and partly just habit, because I know that if I say something like "Are you going to go for your walk now or later?" she says "What?", and when I repeat "Are you go.." she interrupts as if she knew all along what I had said. Ugh! Although context helps in many situations.
I also find she mistakenly interrupts ongoing conversations: if two people are discussing something, she breaks in with a new topic since she can't hear them. She also has a hard time with conversational flow, as on a phone conference call where everyone interrupts each other because they can't follow the flow of the conversation.
http://www.ft.com/intl/cms/s/2/7bf03be0-94de-11da-9f39-00007...
Apple eventually won and the case was dismissed, but the lower maximum value followed from that lawsuit.
This renders my iPhone almost useless as an iPod replacement when travelling with big headphones, it's just not enough juice for a train or bus ride (not to mention a flight).
I find it especially bothersome because on older, non-digital radios, I was able to lightly tune the volume to somewhere just on the threshold of audible, which is the perfect level for falling asleep to.
why would you try to use the speaker if there's noise around you? I mean, why wouldn't you - like - put it up against your ear instead?
- Impromptu "conference call" with a colleague physically next to you and another one remote.
- Need to be typing to take minutes, retrieve information relevant to the call, etc.
- Too many calls already and arm is tired of holding the phone and listening.
Use a pocket battery-powered amp and a small portable loudspeaker? That'll get you both more gain and better sound than the tinny built-in speaker.
http://i.imgur.com/vKn7oTf.png
Audio compression, especially when using psychoacoustic principles, helps by lowering the noise of the unwanted sounds (e.g. "probably not a human voice" or "not a bear" in this case) and increasing certain frequencies for a person's particular hearing range so they can "see" the image better.
I recall reading that vowels are easier to hear than consonants or maybe it is vice versa? "Hello how are you today" may seem like "Hll hw r tdy" which to the hearing impaired person may seem like "How am I tidy?" or something totally incomprehensible but their brain makes up something close (incorrectly) by filling in the blanks.
The Monty Python sketch "I'd like to buy a hearing aid" feels like what I go through daily when trying to communicate with my mother.
I showed it to my mother, who thought it was funny. Sometimes, when she thinks she knows what I said but it's not even close, it's just like the sketch.
An MBC uses intelligent design instead of a one-size-fits-all method. With the right data about your hearing pattern, it can mash the full sound into your range so that you get all the information you need.
Audio engineer here. That is patently untrue. MBC is a super-useful technique and is indeed helpful for mitigating hearing loss in relatively transparent fashion, but it does not and cannot bring sounds from outside someone's audible hearing range back within it. It will dynamically rebalance incoming audio in inverse proportion to the degree of hearing loss within a set of frequency ranges, but many kinds of sensorineural hearing loss involve the death of cilia cells (the tiny hairs that vibrate at particular frequencies, much like the bins of an FFT) which can result in a total loss of perception at or above certain frequencies.
http://en.wikipedia.org/wiki/Sensorineural_hearing_loss
To 'mash the full sound into your range' requires a technique known as frequency shifting, but that's problematic because it destroys the harmonic relationships of the incoming material and sounds disorienting, at best.
In any case, I think the illustration of the bear on the tricycle is absurdly simplistic and makes me wonder to what degree the app designers really grasp the underlying concept. A much more appropriate parallel would have been to show an image with a severe Gaussian blur, which more closely parallels the actual experience of hearing loss in terms of both empirical measurement (higher frequencies tend to be more severely attenuated in cases of induced hearing loss) and subjective experience (blurring hinders edge detection, which is analogous to transient detection in audio, and which has a large role in speech intelligibility).
http://en.wikipedia.org/wiki/Gaussian_blur
If you're struggling with hearing loss, then you should really, really consult an audiologist, work out the basis of your hearing loss (which is sometimes as simple as impacted earwax), and work out a treatment strategy. If you're suffering from degenerative hearing loss then listening to overly-compressed music could actually accelerate it, and listening on headphones or earbuds (many of which bias the sound for increased impact) could also contribute to the problem. It's a truism in the pro audio world that most people are awful at self-measurement and tend to over-equalize in the absence of proper experimental control protocols.
I apologize for the rather negative tone of the post; I appreciate the people at SoundFocus are trying to provide people with something useful and helpful at minimal cost, by leveraging the pretty good audio hardware in their phone. However, hearing loss tends to be a one-way thing, and I think that offering a product to that market without a clinician on the team is a bad idea. There's a lot more to being an 'audio ninja' than understanding the fundamentals of DSP.
MBC can "mash the full sound into your range" if "range" means dB range at each freq. band. But since the author previously (ill)defined dynamic range as a frequency-related term, the reader reads that passage and thinks he's referring to frequency shifting instead.
Now, I want to go test out what it would be like to 'compress' frequencies. Something like a notch filter that shifts nearby frequencies around the target frequency away into regions above and below. It adds noise, essentially, within the compressed range, but maybe it's tolerable and is useful for someone with a narrow band hearing loss. It could potentially be interesting musically.
Maybe such a filter exists, but I am not familiar with it.
I haven't tried using this for precision stuff - over a small range it might well improve intelligibility at the expense of only minor distortion. I tend to reach for it when I want to give sounds an extra weird dimension, it sounds somewhat orthogonal to the normal harmonic distributions we're familiar with.
This is exactly right. Basically, you separate the audio into arbitrary frequency bands, and then apply compression to each band to control its volume independent of what is going on in the rest of the spectrum.
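That band-split-then-compress idea can be sketched minimally. This is a toy static compressor (made-up band edges, threshold, and ratio; real MBCs smooth gain over time rather than applying one gain per band), not any product's actual algorithm:

```python
import numpy as np

def multiband_compress(x, sr, bands, threshold_db=-30.0, ratio=4.0):
    """Toy multiband compressor: split `x` into frequency bands with FFT
    masks, apply a static gain reduction to any band whose RMS level
    exceeds `threshold_db`, and sum the bands back together."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    out = np.zeros_like(x)
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(X * mask, n=len(x))   # isolate this band
        rms = np.sqrt(np.mean(band ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        gain_db = 0.0
        if level_db > threshold_db:               # compress above threshold
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        out += band * 10 ** (gain_db / 20.0)
    return out
```

A loud tone in one band gets turned down without touching quiet content in the other bands, which is the whole point over a single-band compressor.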
I was incredibly frustrated by reading the article, since their explanation of multiband compression was incredibly misleading. I get what they're doing and why multiband is helpful (it sounds like they're basically bringing up the volume in the parts of the spectrum where the user's hearing is less sensitive than healthy hearing would be), but that was a poor explanation of how multiband compression works.
It's a great layman's explanation, but if you have a better one I'd love to see it.
In the soundcloud samples, if I can't hear anything above a certain frequency, making them louder isn't going to help. You can drop the frequency of those things, but my guess is that it's going to sound pretty ugly. It would be interesting to listen to a sample that has everything above a certain frequency pitch-shifted downwards.
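One crude way to audition that idea is to remap FFT bins above a cutoff downward. This toy sketch ignores phase and will sound rough (it exists to show the mechanics; cutoff and factor are arbitrary), and it is not what any hearing aid ships:

```python
import numpy as np

def lower_highs(x, sr, cutoff=4000.0, factor=0.5):
    """Crude frequency lowering: leave content below `cutoff` alone and
    squeeze the spectrum above it toward the cutoff by `factor`.
    Phases are not handled carefully - this is only an audition toy."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    Y = np.zeros_like(X)
    cut = int(np.searchsorted(freqs, cutoff))
    Y[:cut] = X[:cut]                         # pass the lows through
    for i in range(cut, len(X)):
        j = cut + int((i - cut) * factor)     # remap bin i downward
        if j < len(Y):
            Y[j] += X[i]
    return np.fft.irfft(Y, n=len(x))
```

As the thread predicts, the result destroys harmonic relationships, so expect it to sound ugly on music.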
You raise some great points - it is indeed impossible to use an MBC to bring sounds back into a user's range of hearing if they have lost all sensitivity at that particular frequency. However, the loss of hearing at a particular frequency is not binary - it tends to start with a reduction in dynamic range at that frequency, as the cilia start to get worn out / destroyed.
So if you have loss @ 3 KHz, you don't often completely lose all hearing, but your dynamic range which normally is 0dB -> 100 dB (over-simplification here) might now be 30 dB -> 100 dB.
What an MBC will do here is compress the range at that frequency band, so your 100dB of range is now 70dB of range.
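As a toy illustration of that squeeze, using the simplified numbers above, a linear remapping of the normal 0-100 dB input window into a 30-100 dB residual window might look like this (function and parameter names are mine):

```python
def map_to_residual_range(level_db, in_lo=0.0, in_hi=100.0,
                          out_lo=30.0, out_hi=100.0):
    """Linearly squeeze the normal 0-100 dB input range into the
    listener's remaining 30-100 dB window at one frequency band."""
    frac = (level_db - in_lo) / (in_hi - in_lo)
    return out_lo + frac * (out_hi - out_lo)
```

So a whisper at 0 dB comes out at the 30 dB threshold of audibility for that band, while a 100 dB sound stays at 100 dB, and everything in between is proportionally compressed.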
I might be wrong on this, but I recall a hearing specialist advising me not to use earbuds at all, or at least to limit use to a maximum of 1 hour at a time. Dynamic headphones, such as the good old Superlux HD 681, would be "better" in the long term. (Not trying to advertise that headphone; it's just one of the few that is cheap, good, and that I can rewire myself and even add a plug to so I can easily buy new aux cables.)
However, I cannot wear my headphones longer than 6 hours without it getting annoying. And running with big headphones is a big no, but then again, I'm nowhere near running longer than 30 minutes in a row.
Does anyone have their own thoughts on this topic?
The issue isn't necessarily about all "earbuds," the issue is that many earbuds (including the ones included with Apple iOS products) don't seal the ear canal very well, so a listener is exposed to outside sounds in addition to sounds from the player. Since the outside sounds have a tendency to mask the sounds coming from the earbuds, the listener will often turn up the volume to better hear the audio material and therefore be exposed to SPL's that can cause hearing damage over long exposure times.
The advice to use something like the Superlux HD 681 is that circumaural headphones offer some (not a lot, but some) shielding from outside noise, so a user won't be tempted to increase the volume level as much. Active noise canceling headsets and in-ear-monitors (like Etymotics-brand) provide better sound isolation so that users can keep the volume at more moderate levels.
For those wondering which earbuds do provide decent isolation, there are, as you stated, the pretty expensive "in-ear-monitors", but you can also go for cheaper earbuds based on isolating memory foam like the JVC marshmallows, which are pretty cheap and provide decent isolation.
I've also used reusable earplugs (I spent about £15) for all gigs, nightclubs etc. Sometimes I use them in noisy pubs.
> However I cannot wear my headphone longer than 6 hours without it getting annoying.
That's a really long time. Wouldn't it be best to reduce it?
The main charity for deaf people in the UK, which seems to have renamed itself from the "Royal National Institute for the Deaf" to "Action on Hearing Loss", has a campaign called [Don't lose the music](http://www.dontlosethemusic.org.uk/).
And 6 hours is rare; it's the max, but on average I do go above the suggested 1 hour.
One thing that works really well for preserving your hearing is taking breaks - this is especially true of loud concerts (step out for a bathroom break once an hour), but also holds true if you're going to listen to headphones at high volumes for many hours straight.
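As a rough guide to how long is "too long" at a given level, the NIOSH recommended exposure limit allows 8 hours at 85 dBA and halves the safe duration for every 3 dB above that; a quick calculator (function name is mine):

```python
def allowed_exposure_hours(level_dba):
    """NIOSH recommended exposure limit: 8 hours at 85 dBA, with the
    safe duration halving for every 3 dB increase (3 dB exchange rate)."""
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

# 85 dBA -> 8 h, 88 dBA -> 4 h, 94 dBA -> 1 h
```

By this rule, a 100 dBA concert hits the daily limit in about 15 minutes, which is why stepping out once an hour matters.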
Their rule of thumb is if you are incapable of hearing and understanding what people are telling you without taking out your earbuds, they're too loud.
I tried this out, and found that I'm actually fine with the volume bar at maybe 30% of where I used to keep it. Basically, put your earbuds in, play some music, and try to talk to someone. If you're unable to, then you might be playing your music way too loud.
(For people using earbuds to drown workplace noise out, I'd suggest an ambient noise program like noiz.io. You can train your brain to filter ambient background noise, and it's better than trying to 1-up the sounds in your office.)
High quality isolating earbuds should make it pretty difficult to tell what other people are saying even with no music playing! Moving to a mid-range pair of Etymotic noise isolating earbuds was an absolute revelation for me and allowed me to use significantly lower volumes when listening to music. I can't recommend high quality isolating headphones/earbuds enough.
They deliver way too much sound in narrow frequency ranges, usually the mid-range, say 1-4 kHz, while not delivering enough bass, and it's a mixed bag higher up through 10 kHz+.
People will turn it up until they are hearing everything, and they do very significant damage to those "hot spot" frequency ranges, despite the overall perception of volume seeming reasonable to them.
Add overly compressed music and/or crappy audio output, and the need to drive it loud happens to nearly everybody.
There will be a whole generation of people, some of whom we are already seeing struggle with this, requiring adaptive sound options earlier in life than is typical.
Quiet is better than loud. The more sound isolation your phones provide from the outside world, the quieter you can play your music and enjoy it.
If you're sitting with headphones on for six hours, maybe you should get up a few times.
I'd say it's more from the pushing on the sides of my head, not so much from the volume of the music. But you are right, and I have been thinking about stopping listening to music through headphones when working, and using speakers.
> If you're sitting with headphones on for six hours, maybe you should get up a few times.
I have to say that my current working desk and rhythm is pretty much perfect as far as I know. My eyes never get tired, the screens are a good distance away from me, and I don't have to look down. I have a decent computer chair, and I take breaks about every hour, to go outside or get a new cup of coffee (but I do that with my headphones/earbuds in).
Heck, no employee can offer a better working situation if you ask me, I'm all for the working remote - office not required-idea! :P
To summarize what I've read so far: this promotional article about SoundFocus was clearly not written with help from a professional within the hearing aid industry, nor from someone with clinical experience. The author appears to be good with language - probably an engineer who drops in technical terms as if he knows what he is talking about.
I find this article very misleading and not a help for the hearing disabled or their relatives. It reminds me of a very useful course I once followed: 'Physiology of the ear for physicists'. It would be good if the author or developers found something similar.
I realize that my post breathes some arrogance, and of course it is easy to burn something down. But yes, I know better. And yes, I could have written this article to market SoundFocus properly (in a similar style if you like) with only useful and correct information.
Maybe I should... ?
Cheers -a professional-
Besides things already mentioned by others, talking about dead regions, upwards spread of masking, and perhaps temporal scatter and tuning curves would have made it a lot more juicy.
One flaw: time and volume are NOT self-explanatory. Think about recruitment and such. Another flaw: multi-band compression does not do what the picture of the bear with the missing bicycle suggests. That visual example fits much better for a person with a dead region, for whom frequency compression is applied. And this is not a one-size-fits-all method: different techniques are available for this particular phenomenon (there is frequency shifting as well).
Anyway... let's mention a positive side. I appreciate the attempt of communicating on the topic and it was a good try to make things clearer for some. Better luck next time.
Happy bro?
Work beside a machine humming at a particular frequency and you will lose that frequency, even if the sound doesn't seem loud at the time. And simply jamming in a pair of earplugs doesn't make you immune. They have limits.
> As you look at the waveform, the problem should become apparent. Sound is a 3-dimensional construct, but we can only represent 2 dimensions on a textbook or a monitor. In the waveform representation, we see Time on the x-axis plotted against Volume on the y-axis.
The two axes in most waveform plots are sound pressure and time, not volume and time. The fact that the waveform depicted is nearly symmetrical reflected across the x axis should hint that this is the case.
> Dynamic range is the range of frequencies and volumes that are audible to the human ear
But reading on, it looks like it is used in the correct sense, albeit a specialised one. The article discusses dynamic ranges per frequency. Talking about multi-band compression confirms that.
So what is the explanation of why they can't tolerate certain loud noises? I feel like the article was going to address that aspect of hearing loss as well but never did.
I've noticed as I age that noise which I would formerly have disregarded without thought has become an irritant - that is, I can't/won't tolerate it - not because it's physically uncomfortable, but because my wetware apparently can no longer do the signal processing that earlier made such noise irrelevant to auditory comprehension.
The article says:
>Well, if you get hearing damage at a specific frequency, you’ll start to lose sensitivity to the quiet sounds at this frequency. However, your sensitivity to loud sounds remains the same.
If their sensitivity to loud sounds remained the same on the one hand, why would they be unable to tolerate certain sounds on the other? Seems contradictory.
Just like the choppy Adobe Flash Player on OS X!
Is anyone else as irked by the author's choice of the word "dimensions" as I am? I can't read past it. Wouldn't "factors" be a better fit?
[1] Technically, frequency is a function of time too (and timbre a function of the interaction of multiple frequencies and envelope changes, another function of time) but these are all independent uses of time.
The article is extremely muddled from a technical point of view. When dealing with perceptions it is extremely important to distinguish physics and physiology. In optics we have radiometric (physical) vs photometric (perceived) values: https://en.wikipedia.org/wiki/Photometry_%28optics%29#Photom...
It appears in the article they are doing some kind of implicit averaging over the ear's response function at each frequency, which may make sense in terms of perceptions but makes very little sense in terms of physics.
A much better visual analog would be a blurred photograph rather than a cropped one. "Turning up the volume" simply increases the brightness of the images, which doesn't do a damned thing to reduce the blurring.
One thing that people with normal hearing don't get is how much information is in the high frequencies, which are where the most loss normally occurs, although there are also "notch" losses that happen to people whose ears are routinely subject to loud noises in narrow bands.
We tend to think of "high frequency" sounds in terms of single notes, but in speech the high frequencies are most important in the unvoiced consonants, the "s" and "th" sounds and similar. Losing the high frequencies blurs the edges of speech, often making the shape of it unrecognizable. Frequency-dependent enhancement sharpens the edges and brings it back into useful focus.
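The blur analogy can be made concrete in code: a moving average is a basic low-pass filter, and it smears a sharp transient the way a blur smears an image edge (the signal and window width here are arbitrary illustrations):

```python
import numpy as np

def moving_average(x, width):
    """A moving average is a simple low-pass filter - the audio analog
    of an image blur: it smears sharp transients (edges) over `width`
    samples instead of letting them change instantly."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")
```

Run it on a step signal (a stand-in for a consonant onset) and the instantaneous 0-to-1 jump becomes a gentle ramp, which is roughly what high-frequency loss does to the "edges" of speech.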