Smyth Research Realiser A8 Processor and Headphones
Soon after, John Koss and David Clark introduced dynamic stereo headphones with high-fidelity aspirations. These were more comfortable, and had a less distorted, more balanced sound, but that made the shortcomings of headphones only more apparent. Why, even with the great RCA Living Stereo LPs, was the sound always inside my head? Played through my Jensen TF3B speakers, these recordings gave me the weight and presence of an orchestra in a space. When I switched to the 'phones, there was notable clarity, but everything was coming from a hard, knotty singularity slightly north of my pituitary gland. Turning up the volume increased the internal pressure until I thought my head would explode. Even solo instruments seemed unnaturally concentrated in that same hard knot. On top of that, every time I moved or turned, the entire Chicago Symphony Orchestra turned with me.
Since then, I've tried many other 'phones and earbuds (the latter I find physically intolerable), and have heard increasing advances in harmonic accuracy, resolution of detail, and lowered distortion. I even own a pair of Grado SR-60s, but they're in their box, and the box is covered in dust. No headphones have ever succeeded in re-creating the acoustic context of a performance nearly as well as a decent stereo system. And compared to multichannel? Fuggedaboudit!
Of course, there are binaural recordings, which are made by placing tiny microphone capsules in the ear canals of a dummy head (Kunstkopf), and these go a long way toward competing with standard stereo recordings played through speakers. However, few Kunstkopf recordings have been made, and fewer still with worthwhile musical content. Nonetheless, they demonstrate that, just as room acoustics influence a speaker-based system, our own uniquely shaped heads and ears influence what we hear.
This head-related transfer function (HRTF), which means that the same sound will be heard quite differently depending on the direction of its source, is based on a number of mechanisms that are not entirely independent. One is that the two ears hear slightly different versions of any sound not directly on the midline or median plane (a vertical plane bisecting the head into two halves), due to their lateral position and the presence of the head between them. The differences in amplitude, frequency response, and phase are analyzed by the brain to determine the source of the sound in the horizontal plane. Sound from the left stereo speaker is heard by both ears, and can thus be localized to, say, 35° left of center. Feed that left signal into the left earphone and it is heard at 90° left of center. Mix some of it into the right earphone and the perceived image moves closer to the midline, but it stays on the line between the ears rather than moving out in front. That's why, with headphones, one can hear a soundstage that spreads continuously from left to right but never forward, up, or back, unless there's some digital signal processing (DSP) afoot.
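For readers who like to see the numbers behind the interaural cue just described, here is a minimal sketch using Woodworth's classic spherical-head approximation of the interaural time difference. The head radius and speed of sound are assumed textbook averages, not measurements of any particular listener, and a real HRTF of course involves level and spectral differences as well.

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius, in meters
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the interaural
    time difference (ITD) for a distant source at the given azimuth
    (0 = straight ahead on the midline, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source on the midline produces no ITD; the farther off-axis,
# the larger the delay, up to roughly 0.65 ms at 90 degrees.
for azimuth in (0, 35, 90):
    print(f"{azimuth:3d} deg: {itd_seconds(azimuth) * 1e6:6.1f} microseconds")
```

The brain resolves differences of tens of microseconds on this scale, which is why even small asymmetries between a generic HRTF and your own are audible.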
Another mechanism depends on the shape of the pinna, the visible part of the external ear. Generally, there is a small protrusion just in front of the opening of the ear canal; above it begins a much larger, helical structure that arches back around to end, at the bottom, in the ear lobe. The overall shape of the pinna comprises many parts that, from person to person, greatly differ in size, shape, and thickness. A sound must pass over, around, or through these structures to enter the ear canal and, depending on the direction from which it arrives, its frequency response and phase will be modified in different ways. You can test this by cupping your hands behind, above, or in front of your ears, to hear exaggerations of these effects. The shapes of the pinnae contribute greatly to our ability to distinguish sounds originating above from those below, and those in front from those behind. Inject the sounds directly into the ear canal by earbud and such localizations are impossible, DSP trickery aside.
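A simplified way to see how the pinna encodes direction: when the direct sound sums with a single reflection off a pinna structure, the result is a comb filter whose notch frequencies depend on the delay, and the delay depends on the arrival direction. The sketch below uses hypothetical round-number path-length differences, chosen only to land the notches in the treble range where pinna cues actually occur; a real pinna produces several direction-dependent reflections, not one.

```python
def comb_notch_frequencies(delay_s: float, n: int = 3) -> list:
    """Frequencies cancelled when a direct sound is summed with one
    equal-amplitude reflection arriving delay_s later: the notches
    fall at odd multiples of 1 / (2 * delay)."""
    return [(2 * k + 1) / (2.0 * delay_s) for k in range(n)]

SPEED_OF_SOUND = 343.0  # m/s

# Hypothetical extra path lengths for the reflected sound at two
# arrival directions; a longer reflection path means a longer delay
# and therefore lower notch frequencies.
for label, extra_path_m in (("overhead", 0.015), ("in front", 0.025)):
    delay = extra_path_m / SPEED_OF_SOUND
    first_notch = comb_notch_frequencies(delay, 1)[0]
    print(f"{label}: first notch near {first_notch / 1000:.1f} kHz")
```

Because the exact geometry differs from ear to ear, the notch pattern is personal, which is why a generic binaural recording never quite matches what your own pinnae would have done.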
Binaural recordings listened to through headphones can simulate what the Kunstkopf heard, but here's the rub: It ain't what you would have heard, because your HRTF is the result of the unique configurations of your pinnae, the distance between your ears, the size and shape of your head, and even what's going on between your ears. By that last I don't mean your thought processes, but the anatomy of your cranial bones and their many resonant cavities. Thus, even binaural recordings are a generalized spatial representation of what the microphones picked up. Considering the acuteness of our sound-locating abilities, binaural recordings are analogous to looking at the world through someone else's corrective lenses.
Smyth Research Realiser A8
Stephen and Mike Smyth, late of DTS, founded Smyth Research in 2004 to market products based on the Smyth Virtual Surround (SVS) algorithm, which uses DSP to simulate the hearing mechanisms needed for full spatial perception, so that normally recorded audio material can be listened to over headphones. Their description of the SVS technology can be read at www.smyth-research.com/technology.html. Their product is the Smyth Realiser A8 Package, which costs $3360 and includes the Realiser A8 processor box, headphones, and all accessories.