NwAvGuy's Headphone Amp Measurement Recommendations Page 2

4 - GAIN: Guideline: Headphone and source dependent. Most manufacturers specify gain but many people don't realize it can change with the load impedance. Figuring out how much gain you need involves knowing the output of your source, how loud you want to listen, and the sensitivity/impedance of your headphones. It’s also usually desirable to have some excess gain beyond the minimum requirement. Here are the basics:

  • Optimal Gain – Ideally you want just enough gain. An amp with too much gain will increase noise, cramp the volume control's range, and possibly increase the risk of headphone damage and channel balance problems.
  • Headphone Requirements – See the Output Power section. The HD600, for example, needs at least 2.3 Vrms.
  • Source Output – If it’s not listed for your source, most battery powered sources output 0.5 – 1.0 Vrms, USB DACs 1.0 – 1.5 Vrms, and home gear is typically 2.0 – 2.5 Vrms.
  • Excess Gain – It’s useful to have 3 – 6dB of extra gain available to boost the volume of “quiet” recordings. Multiplying by 1.4 or 2.0 gives 3 or 6dB of excess gain respectively.
  • Calculating Required Gain – The rough formula is: (Vout / Vin) * ExcessGainFactor. With the HD600 using a 1 Vrms source with 6dB excess gain you get (2.3 / 1) * 2 = 4.6X. To convert that to dB it’s 20 * log10(4.6) = 13dB. If the amp has a significant output impedance things get more complex. For more see: All About Gain
  • Measurement – Gain is simply actual Vout / Vin at maximum volume, using loaded voltages with 1Khz at 0 dBu of output. It should be specified as a ratio (e.g. 5X) and in dB for at least the minimum and maximum load impedance.
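
The gain arithmetic above can be sketched in a few lines of Python (the function and parameter names here are my own, not from the article):

```python
import math

def required_gain(v_out_rms, v_source_rms, excess_db=6.0):
    """Minimum amp gain (as a ratio and in dB) for a target headphone voltage.

    v_out_rms:    voltage the headphones need (e.g. 2.3 V for an HD600)
    v_source_rms: source output voltage (e.g. 1.0 V for a typical portable source)
    excess_db:    extra headroom for quiet recordings (3 - 6 dB is typical)
    """
    excess_factor = 10 ** (excess_db / 20)       # 3 dB -> ~1.4x, 6 dB -> ~2.0x
    ratio = (v_out_rms / v_source_rms) * excess_factor
    gain_db = 20 * math.log10(ratio)
    return ratio, gain_db

ratio, db = required_gain(2.3, 1.0, excess_db=6.0)
print(f"{ratio:.1f}X ({db:.0f} dB)")  # 4.6X (13 dB), matching the HD600 example
```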

5 - DISTORTION: Guideline: Less Than 0.05%. Distortion is anything that alters the music. Some believe certain kinds of distortion are pleasant and argue recording artists add distortion so amplifiers should too. A technique called audio differencing allows listening to only the distortion produced by an amplifier. Few describe distortion as pleasant when heard by itself. And there’s a big difference between applying controlled amounts of distortion to a single instrument, such as an electric guitar, versus uniformly and involuntarily applying it to everything you listen to. It’s like an artist using a dab of green to enhance a painting versus being forced to always view the world through green tinted glasses. Just as clear eyeglasses allow seeing the world more accurately, a low distortion amplifier allows hearing music more accurately. Here’s how distortion is typically measured:

  • THD+N – THD is Total Harmonic Distortion and it’s a measure of the unwanted harmonics generated from an essentially perfect sine wave. The “+N” means plus noise and everything else that doesn’t belong.
  • Loads – It’s critical to measure distortion using a proper load that simulates real use. Unloaded measurements are far more flattering but useless. Lower impedance loads generally increase distortion.
  • Sweeps – THD+N is more revealing when you measure it at multiple levels and frequencies. Graphing THD+N vs output at 1Khz reveals the maximum output of a device and can be done at several different loads. Graphing THD+N vs frequency reveals how a device performs across the audio spectrum at a constant level such as 775 mV.
  • IMD – Inter-Modulation Distortion measures how two or more tones interact and can reveal additional sources of distortion. SMPTE and CCIF measure low and high frequency IMD respectively. The IMD spectrum graphs are often more revealing than just the raw numbers. Look for the height and number of distortion products (especially above -80dB), “spread” at the base of the signals, and any rise in the overall noise floor. Ideally the graphs use the same reference as the noise spectrum (i.e. dBu).
  • Guideline – Distortion of 0.5% means the added crud is only about -45dB below the music. Using a volume control calibrated in dB you’ll find that’s easily audible. As mentioned above, noise needs to be -85dB below the signal to be inaudible which works out to only 0.005% THD+N. But music masks distortion so 0.01% (-80dB) is considered acceptable. Around 0.05% things become more questionable. That's -66dB below the signal which is roughly the noise level playing decent vinyl. Anything over 0.05% is arguably problematic. The ear is most sensitive to distortion from 100hz to 10Khz. For more see Music vs Sine Waves.
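
The percentage-to-dB conversions in the guideline above follow directly from 20·log10; a small Python helper (my own naming) reproduces them. Note the article rounds a couple of figures: 0.5% is strictly -46dB and 0.005% is -86dB.

```python
import math

def thd_percent_to_db(percent):
    """Convert a THD+N percentage to dB relative to the signal."""
    return 20 * math.log10(percent / 100)

# The distortion levels discussed in the guideline
for pct in (0.5, 0.05, 0.01, 0.005):
    print(f"{pct}% -> {thd_percent_to_db(pct):.0f} dB")
```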

6 - FREQUENCY AND PHASE RESPONSE: Guideline: +/- 0.5dB and 1 degree. These can be graphed together:

  • Frequency Response – This is mainly a concern with tube amps, single-ended amps, and capacitor coupled outputs. All usually roll off the deep bass into lower impedances and a few corrupt the highs as well. Generally the amplifier standard for “flat enough” is less than 1dB total variation from 20hz to 20Khz and it’s critical to test with the lowest realistic impedance. Response specified without dB limits and load is nearly meaningless.
  • Phase Response - This is a sensitive indicator of what's happening outside the audio band and can alter the “soundstage” and spatial perceptions. The error should be under 1 degree from 10hz to 10Khz where spatial information is most critical.

7 - CHANNEL BALANCE: Guideline: under 1dB. Analog volume controls create channel balance error, especially at low settings. The worst case difference from -45dB to full volume is the most revealing measurement and should be under 1dB.

8 - CROSSTALK: Guideline: -60dB or better. The 3-wire headphone jack and volume control usually dominate crosstalk performance in amplifiers using a conventional ground, but other gear may show degraded crosstalk due to less optimal ground designs. Crosstalk should ideally be measured with the loads connected directly to the ground terminal of the headphone plug, using a low and a high impedance load at half volume. Much below -60dB, acoustic crosstalk dominates.

9 - SQUARE WAVE RESPONSE AND SLEW RATE: Guideline: 2 V/us. The fastest possible slew rate from CD audio is 0.2 V/us per volt of RMS output. Even formats like SACD and 24/192 are extremely unlikely to need higher slew rates as that would require a near 0dBFS signal above 20Khz. Headphone gear normally has an output of 10 Vrms or less, hence a slew rate of 2 V/us will cover all requirements. A fast rise-time 10Khz square wave driving an amplifier to just under clipping into 600 ohms can be used to measure the slew rate with a 60+ MHz oscilloscope. A low level 10Khz square wave, driving a reactive load, is a good indicator of amplifier stability and compensation. There should be minimal overshoot or ringing with square corners. These measurements cannot be properly done with an audio analyzer or RMAA.
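
The 0.2 V/us-per-volt figure follows from the peak derivative of a full-scale 20Khz sine: dV/dt of Vpeak·sin(2πft) peaks at 2πf·Vpeak. A quick Python check (function name is mine):

```python
import math

def max_slew_rate(v_rms, f_hz=20_000):
    """Peak slew rate, in V/us, of a full-scale sine wave at f_hz.

    The derivative of V_peak * sin(2*pi*f*t) has a maximum of 2*pi*f*V_peak.
    """
    v_peak = v_rms * math.sqrt(2)
    return 2 * math.pi * f_hz * v_peak / 1e6   # convert V/s to V/us

print(f"{max_slew_rate(1.0):.2f} V/us per Vrms")    # ~0.18, i.e. the 0.2 figure
print(f"{max_slew_rate(10.0):.2f} V/us at 10 Vrms") # ~1.78, covered by 2 V/us
```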

SUMMARY: The above measurements paint a reasonably complete picture of amplifier performance and allow for making solid comparisons. The closer an amplifier comes to meeting all the guidelines the more accurate and transparent it will be.

Recommended Page Layout for Headphone Amp Measurements

Editor's End Note - I want to again thank NwAvGuy; this information has been very helpful in getting me started with my headphone amp measurement program. I very much look forward to the day when your mock-up above turns into dozens of data sheets for both commercial and DIY headphone amps. Thanks for putting so much time and effort into this summary explanation, the reference links, and the in-depth supporting information on your blog. It's quite a contribution, thanks!

COMMENTS
LFF's picture

Very nice article! Very nice!

Extremely informative and to the point. Great job NwAvGuy!

Reticuli's picture

I knew it! NWAVGUY is a member of Daft Punk!

Limp's picture

Absolutely stellar.
Two of the driving forces of awesomeness in the hobby getting together.

khaos's picture

Clear, concise, with links for a more in-depth study. Thanks!

sgrossklass's picture

Nice article overall, just a minor goof when it comes to phase response: Within 1° between 10 Hz and 10 kHz? Methinks there's a couple of zeros missing there. 10° and 100 Hz would make a lot more sense.

Personally, I would also like to see a 1 kHz distortion spectrum. Not a terribly big fan of THD+N either, it just doesn't correlate too well with hearing impressions. (Hence the former.) Maybe it doesn't have to be the fancy GedLee metric (which is kinda hard to evaluate in practice), but some kind of weighting would definitely be advantageous (maybe D. E. L. Shorter's from 1950).

sgrossklass's picture

Oh, and measuring nominal power at 1% THD is reasonably silly in something claiming to be hi-fi these days. In many concepts this will be at the onset of clipping already (beyond the THD "knee"), which means they'll sound audibly bad, while oldschool concepts with little feedback may still be acceptable - with places swapped at a little less power. If you want to go all DIN 45500, at least keep in mind that this norm called for power to be sustained for 30 minutes or so.

SpaceTimeMorph's picture

... if the THD graphs vs. output power are given you have the option of taking the cusp preceding the upturn in THD as the max power output. For headphone stuff, THD @ 1% is the standard (as silly as it is to do so), so quoting that number still provides a good comparison point among different equipment.

sgrossklass's picture

This topic just sprang to mind as one of the little things that can ruin your day. Few amps would have enough DC offset to upset linearity in very sensitive cans, but it should definitely be checked (in all gain stages at normal volume settings, plus a defined load of like 100..300 ohms for AC-coupled outputs so as not to be fooled by coupling cap leakage currents). If a headphone barely needs 10 mVrms to output 90 dB SPL with noteworthy amounts of distortion (and there are a few among those measured, mostly IEMs), 10 mV of offset is anything but negligible. Some insensitive 600 ohmers wouldn't care one bit, of course. For rating DC offset, I'd use 10-dB steps: <3 mV (good), <10 mV (OK), <30 mV (meh). I hope you have a good, well-calibrated multimeter...

Power-on/off noises are a closely related area. While truly dangerous voltage spikes have been the exception, they can nonetheless be quite annoying in practical use.

NwAvGuy's picture

...for all the encouragement!

@sgrossklass, you are correct about the phase but just one zero is missing. Thanks for catching that and I let Tyll know. It should read 100hz - 10Khz for 1 degree of error. Even AC coupled amps (with direct coupled outputs) can manage < 1 degree at 100hz. The idea is to keep the phase error low in the range where spatial information is most important. But it's not an exact science as there is considerable debate over how audible phase error is. But I would consider 10 degrees at 100hz fairly marginal. Such an amp would have significant frequency response error in the deep bass.

1% THD is indeed the benchmark standard for maximum power and it is the onset of clipping in most amps. As SpaceTimeMorph mentioned, what's most important is to apply the same standard across the board so amps can be fairly compared. By graphing THD+N vs output you also get to see the distortion at lower levels so if you want to know the max output at 0.05% it's there. Setting the standard much lower than 1% would cause a lot of single-ended and tube amps to fail the test completely (they often have distortion approaching 1% at any output level) and require using a different standard making comparison difficult. Manufacturers and other reviewers also typically use 1%.

I agree it's worth checking DC offset with a DMM and at least subjectively checking for turn on/off transients. Neither really impacts the sound quality of an amp and Tyll wanted to keep the results page to the essentials. If there's room, the DC offset could be added but it's really more of a pass/fail sort of measurement. I would agree anything over 30 mV deserves a red flag.

SullivanG's picture

Is it worth mentioning the polarity? (absolute phase) I am sensitive to this, for low frequency waveforms. Thanks for the great write-up.

13mh13's picture

It seems the Guy's been bad-mouthin' Tyll and the so-called "subjective" community over on his Blog. He -- and his small gang of supporters -- don't raise any worthy claims -- just keep repeating/rehashing the same ol' line that a select group of narrow-minded "objectivists" have been pushing for years. If they REALLY had meritorious, science-supported claims, the hi-fi world may give them more than 2sec air-time.
On a more interesting note, just saw this on Head-Fi re a possible IP rip-off by NwAvGuy. If true, then "may the best man win."

Colin Shaw's picture

A couple of comments on phase and output impedance, motivated by a change from measuring specs on some designs I have been working on to mostly just listening to them and trying to understand the reason for what I perceive to be good qualities and improvements.


The output impedance of the amplifier is definitely a concern. Let's talk in terms of damping factor, or the relative value of load impedance inclusive of mechanical properties to the output impedance of the amplifier. In my experience the specifics of it are quite dependent on the type of load, motivated in great part by work with open baffle loudspeakers. A very light, high sensitivity driver can sound fantastic for natural tone music if it is used in a system with a relatively low damping factor, the reason being that there is more expression of resonance. This, however, does not seem to be a general maxim, but rather a fairly special case. Also, depending on the type of amplifier (Zen type amplifiers as a prototypical case here), a difficult region for reproduction, particularly with higher impedance headphones, is where the driver is near resonance and the impedance is much higher than nominal. I suppose the point of my comment is that while there may be some rules of thumb, it is not a one size fits all issue, and it depends on the load, the amplifier and the way that the two of them work with what you are listening to.


On the issue of phase I have a complex comment. The ability of an amplifier to accurately track the signal has more to do with bandwidth than is generally appreciated, in my opinion. Our ears are amazing devices that have been honed over many generations of selection to allow us to escape wild cats and whatnot, and have an amazing ability to discriminate phase for that purpose. Moreover, they are well suited to doing so in a frequency dependent manner. What I have found is that amplifiers with enormous bandwidth tend to sound (to me) more accurate and more enjoyable. Much of the reason for my concern stems from considering jitter in digital sources, where a very small temporal error results in diminished accuracy. I decided to try to apply this concept to amplifiers and see what happens, and the result (to me) is that with high bandwidth (some of the amps are above 1MHz), the higher audible frequencies become much more detailed and pure sounding. There is an issue of whether or not you can hear the frequency, and there is an issue of whether or not you can tell that the phasing of the signal aligns in a way that sounds most sincere and true to your perception of the source. Any amplifier has some limitation on upper frequency response, but by pushing it out there as far as you can, the phase limitations in the audible portion are minimized. I can't claim to have any expert knowledge or rule of thumb ideas on this issue, but I can claim that I tend to prefer the higher bandwidth amplifiers, and I believe this has a great deal to do with accurate phasing.

Tyll Hertsens's picture

Thanks for the comment. Makes me think I need to try to make my frequency response plot go up much higher than 20kHz. I'll work on that.

MGbert's picture

I'm an engineer, but not an electrical engineer.  I can run formulas, though, so I'm wondering if you could help me out since I'm not sure how to interpret some of these measurements.

I just got a Parasound Zdac and my headphones are a vintage pair of 1980 AKG K240 Sextett's.  The Zdac makes them sound great, but the volume needs to be cranked up near max (9 out of 10) for the volume to hit about 90 dB.  Your measurement sheet has the AKG, at 90 dB, as requiring 893 mV, 1.27 mW, and 630 ohms at 1K Hz.  Doing the math, that equates to a current of 1.42 milli-amps*.  Which of these parameters define how the Zdac will behave with other headphones?  If I were to get a pair of AKG Q701's, will the fact that they would need 5.29 milliamps to get to 90 dB (based on 318 mV and 1.68 mW at 60 ohms, also from your measurement sheet) mean the Zdac will never get them to 90 dB based on available current and or wattage, or does the fact that the Q701's need less voltage at 90 dB than the K240's mean the Q701's will do fine?  Thanks in advance.

MGbert

* Math being Amperage = SQRT(Wattage/Ohms) at 90 dB
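
[Editor: the arithmetic quoted in this comment can be reproduced from I = SQRT(P/R); here's a quick Python check using the figures exactly as quoted above (the dictionary layout and names are mine):]

```python
import math

# Voltage, power, and impedance at 90 dB SPL, as quoted in the comment above
headphones = {
    "AKG K240 Sextett": {"p_w": 1.27e-3, "ohms": 630},
    "AKG Q701":         {"p_w": 1.68e-3, "ohms": 60},
}

for name, h in headphones.items():
    # I = sqrt(P/R), converted from amps to milliamps
    i_ma = math.sqrt(h["p_w"] / h["ohms"]) * 1000
    print(f"{name}: {i_ma:.2f} mA at 90 dB SPL")
```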
