For some reason, this strikes me as particularly awful. Not that police don't want to be filmed; that's predictably repugnant for its own clear reasons. It's that they had no problem ripping a prosthesis off someone before even bothering to try to understand it, simply because it looked different. What's next, wrestling old folk to the ground and ripping out their hearing aids because they might be recording devices? Will there be an unwritten "normalcy code" that disabled people will have to follow to avoid assault?
You're lying to yourself if you honestly believe that doesn't exist today.
I don't have it myself, but did study music with someone who had it, and since then it's always fascinated me.
Somewhat related: my main research area is actually in sonification (representing data through non-speech sound) - imagine listening to changes in the stock market through changes in pitch, or loudness, or tempo. We can use sonification for the visually impaired, communicating data and patterns in new places, as this guy has done. But we can also use it to revolutionize how we interact with computers - we can be mobile, multitasking, visually overloaded, and still process data through sonification. IMO a potentially revolutionary technology!
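To make the stock-market example concrete, here's a minimal sketch of parameter-mapping sonification in Python. The 220–880 Hz band and the sample price series are arbitrary assumptions for illustration; a real system would feed these pitches to a synthesizer.

```python
# Minimal parameter-mapping sonification sketch: map each data point
# to a pitch in an assumed comfortable band (220-880 Hz).
def value_to_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map value in [lo, hi] to a frequency in [f_min, f_max]."""
    t = (value - lo) / (hi - lo)          # normalize to 0..1
    return f_min + t * (f_max - f_min)

prices = [100, 105, 98, 120, 110]          # hypothetical daily closes
lo, hi = min(prices), max(prices)
pitches = [value_to_pitch(p, lo, hi) for p in prices]
```

A rising price series then becomes a rising melody; loudness or tempo could encode a second variable such as trading volume.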
The field was pioneered by Paul Bach-y-Rita (http://en.wikipedia.org/wiki/Paul_Bach-y-Rita), who most notably invented a setup that allowed blind people to "see" via a camera connected to a vibrating grid attached to their backs, effectively substituting visual with haptic input.
In a nutshell, there is nothing intrinsically "visual" about neurons in the visual cortex, nor are neurons in, e.g., the auditory cortex exclusively tuned to sound - the brain is plastic enough to "make sense" of a new type of input signal, which typically takes a couple of weeks.
My co-founder Peter König at EyeQuant.com - a neuroscience professor at the University of Osnabrueck - is working on similar projects with his feelspace group, where they created a compass belt that vibrates wherever north is, taking sensory substitution a step further by effectively creating a new sensory modality of direction (Wired article: http://www.wired.com/wired/archive/15.04/esp.html)
As an excellent philosophical take on this I would recommend Alva Noe's "Action in Perception": http://www.amazon.com/dp/0262140888/
Also, I really want to get one of those compass belts. It seems like an incredible experience. I wonder how it feels to not have it on, though. Losing a sense can't be the nicest experience.
More sensitive EM field detection could be vaguely useful, but to be useful it would probably require a bit of pre-processing (e.g. scaling the entire frequency range of the currently used EM spectrum into a range we can hear, see or feel) and maybe some protocol-specific hardware (decoding radio, video, wireless etc).
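Here's a sketch of that pre-processing step in Python. The band limits (1 MHz to 300 GHz for the EM side, 20 Hz to 20 kHz for the audible side) are arbitrary assumptions; the point is that the compression has to be logarithmic, since a linear mapping would squash almost the entire spectrum into the top of the audio band.

```python
import math

# Compress a wide EM frequency band logarithmically into the audible band.
# Band limits below are illustrative assumptions, not fixed constants.
EM_LO, EM_HI = 1e6, 300e9       # assumed EM band of interest, Hz
AUD_LO, AUD_HI = 20.0, 20000.0  # nominal audible range, Hz

def em_to_audible(f_em):
    """Log-scale an EM frequency into the audible range."""
    t = (math.log10(f_em) - math.log10(EM_LO)) / \
        (math.log10(EM_HI) - math.log10(EM_LO))   # 0..1 position in EM band
    return AUD_LO * (AUD_HI / AUD_LO) ** t        # log-interpolate in audio band
```

Octaves in the EM band then map to equal pitch intervals, which fits how we hear; protocol-specific decoding (radio, WiFi, etc.) would still need dedicated hardware as noted above.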
Other interesting candidates for extra-human senses beyond just increasing the range and sensitivity of existing senses would be EM/light polarization (insects can see polarization in the sky, so they can navigate by the sun's direction even when it's hidden by clouds) and magnetic field direction (which exists in bacteria, invertebrates, and birds, and may exist in some mammals.)
He is limited to hearing one note at a time, therefore he can only perceive one color at a time?
Am I missing the point?
But, this is for someone who literally could not perceive color at all but can still see. If he waves it around he can probably quickly tell that the wall is blue or white and then focus on extracting details from things he finds interesting.
Taking on board what you said, a system could match sound volume to direction in order to "see" at a higher resolution.
I.e. it would superimpose all colors in range, with those closer to the center having a higher amplitude. So the colors directly ahead would be played loudest and peripheral colors played quietly.
Or perhaps this is how it works already....
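The center-weighted mixing idea above could be sketched like this, with loudness falling off with angular distance from the center of view. The Gaussian falloff and the 10-degree width are arbitrary assumptions for illustration.

```python
import math

# Relative loudness for a color seen at a given angle off-center.
# Gaussian falloff; sigma of 10 degrees is an assumed tuning parameter.
def loudness(angle_from_center_deg, sigma_deg=10.0):
    return math.exp(-(angle_from_center_deg ** 2) / (2 * sigma_deg ** 2))
```

Colors dead ahead get full amplitude, while a color 30 degrees off-center would be nearly inaudible, mimicking foveal vs. peripheral vision.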
Roughly how many different notes could you hear at a time?
One way of doing this would be to map from the electromagnetic wavelength of the color to a corresponding audible frequency.
The audible range is something like 32 to 32768 Hz. Assuming a speed of sound in air of 343 m/s, this translates to a wavelength range of about 0.010468 to 10.71875 m. This can then be mapped onto the visible spectrum of 390 to 750 nm.
light_range = (750 * 10^-9) - (390 * 10^-9)
sound_range = 10.71875 - 0.010468
light_wavelength = (light_range * ((343 / sound_freq) - 0.010468) / sound_range) + (390 * 10^-9)
Here's something to get you started.
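As a runnable sketch, here's that mapping in Python, using 343 m/s and the 32–32768 Hz audible bounds. The direction of the mapping (highest pitch → shortest visible wavelength) is an arbitrary choice; flipping it works just as well.

```python
SPEED_OF_SOUND = 343.0           # m/s in air (assumed)
F_MIN, F_MAX = 32.0, 32768.0     # assumed audible range, Hz
L_MIN, L_MAX = 390e-9, 750e-9    # visible spectrum, m

# Sound wavelength bounds implied by the audible range.
S_MIN = SPEED_OF_SOUND / F_MAX   # ~0.010468 m (highest pitch)
S_MAX = SPEED_OF_SOUND / F_MIN   # 10.71875 m (lowest pitch)

def sound_freq_to_light_wavelength(sound_freq):
    """Map an audible frequency to a visible wavelength, linear in wavelength."""
    s = SPEED_OF_SOUND / sound_freq        # sound wavelength, m
    t = (s - S_MIN) / (S_MAX - S_MIN)      # 0 at highest pitch, 1 at lowest
    return L_MIN + t * (L_MAX - L_MIN)
```

So 32768 Hz maps to 390 nm (violet end) and 32 Hz to 750 nm (red end). A perceptually better mapping might work in log-frequency rather than linearly in wavelength, since pitch perception is logarithmic.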
Not gonna like though, I thought this was going to be about http://en.wikipedia.org/wiki/Aphex_Twin
Definitely going to like. Inadvertently said the opposite of what I meant.
I hadn't considered artificial melding of the senses like this. It came out of left field, so the name of the first (only?) Brit I know of with synesthesia popped into my head.
Why?