Testing the Audibility of Break-in Effects

Subjectivist: "Man, I got my headphones last week and they're breaking in nicely."
Objectivist: "Yer nuts, dude, it's your head breaking in to the sound of your new headphones."
Subjectivist: "Leave me alone, troll, take your objectivism to 'Sound Science.' We have the minds of Gods and poets, and don't need your weights and measures to know what we know."
Objectivist: "What can I say to someone who's their own placebo?"
Subjectivist: "Break-in exists ... I've heard it ... I stamp my feet three times and you will go away."
Objectivist: "Lol ... you couldn't blind test your way out of a paper bag!"

And so it goes.

Let's try to clear a bit of this up, eh?

Break-In Testing So Far
If you've been following along, you'll know I have three brand new Quincy Jones Q701 headphones (green, black, and white) that I'm using to see if we can measure the effects of break-in.

I did the first exploratory break-in test on the green pair some time ago, in which we saw some changes and learned what we might look for in subsequent testing.

I designed a second more complex test based on what we learned on the first test, and we saw the changes over time more clearly.

Now, we have an avenue to do a break-in test on the third pair and really run it through the wringer. But first, I thought it would be good to do a subjective test to see if I could hear the difference between a brand new pair and one that's been broken in considerably.

Subjective Break-In Testing
The green Q701 that was used in the first test has been on my bench playing pink noise at about 90dB for well beyond 1000 hours at this point. The white Q701 remains sealed, brand spanking new in its box. I thought it was important for our exploration into break-in to find out if we could hear some difference between the two.

So, I called my buddy Brian (screen name "NA Blur" on Head-Fi) here in Bozeman, and arranged a time for him to come over and help me out with the test. I made some score sheets, colored a coin, and set up the gear on my dining room table.

The Test
This was a single blind test where I did not know which headphone was being placed on my head, but Brian did know. (In double blind tests, Brian wouldn't know which was which either.) In order to take some influence out of Brian's hands, a coin was flipped at the beginning of each test that would indicate which headphone he would place on my head. At no time during the test would Brian indicate whether I was guessing correctly or not, and I would not know the score until the end of each trial.
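For readers who want to see the mechanics, the trial bookkeeping described above, coin flip, hidden placement, score revealed only at the end, can be sketched in a few lines (function names are my own, purely illustrative):

```python
import random

def run_single_blind_trial(n_presentations, listener_guess, rng=None):
    """Bookkeeping for a single-blind trial: the tester flips a coin to
    pick which headphone goes on the listener's head, records the guess,
    and reveals the tally only at the end."""
    rng = rng or random.Random()
    score = 0
    for _ in range(n_presentations):
        actual = rng.choice(["new", "broken-in"])  # the coin flip
        guess = listener_guess(actual)             # listener hears it, then guesses
        score += (guess == actual)                 # tally stays hidden until the end
    return score

# A listener who reliably hears the difference scores perfectly,
# while a pure guesser lands near 50% on average.
golden_ears = lambda actual: actual  # always identifies what's on their head
```

A perfect listener scores 19 of 19 here; the interesting cases, of course, are the ones in between.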

I did notice that the music playing computer in front of me did provide some reflections in which I could potentially see the headphones on my head, so the screen was tilted way back to prevent seeing any reflection. As Brian was putting headphones on my head, there was no way to see the headphones, or feel the difference between the headphones. I had no idea at all which headphone was on my head.

Before testing began, I listened to both headphones to try to perceive what the differences were between the two. I thought it was fairly clear that the broken-in pair was smoother sounding than the new pair. Brian did the same and thought he heard a difference as well.

Then we began testing. I went first and had a somewhat difficult time. I was using a Tiger Okoshi track with a strong trumpet solo that I knew could sound harsh if not well reproduced. As we progressed through this test, Brian said maybe I should try another track ... which was a bit of a hint that I wasn't doing so well, but we were on our first trial so I figured I'd switch the music I was using mid-stream. I switched to a Pinback driving rock track that was very dense with sound, and which could sound harsh and pinched when poorly presented.

By the time we finished 19 guesses, I called it quits and found that I had gotten 13 out of 19 correct. That's better than chance (a pure guesser would do that well only about 8% of the time, just shy of conventional statistical significance), but I thought I could do better.
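As a sanity check on that score, the chance of a pure guesser getting 13 or more of 19 right can be computed with a one-sided binomial tail sum (a quick sketch; the function name is mine):

```python
from math import comb

def one_sided_binomial_p(correct, trials, p_chance=0.5):
    """Probability of getting at least `correct` of `trials` right
    by pure guessing (one-sided binomial tail)."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(correct, trials + 1)
    )

p = one_sided_binomial_p(13, 19)  # ≈ 0.084
```

It works out to roughly 0.084: better than coin-flipping would suggest, though just shy of the conventional 0.05 cutoff.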

Brian tried it as well. At this point I hadn't told him which headphone was the broken-in pair, and he didn't want to know. He also took only a very short (probably too short) listen before starting his trial. When he was done, he had gotten about 65% wrong. We think he had gotten mixed up about which color headphone made which sound. It seemed evident he was hearing differences, but misidentifying which headphone was which.

Then I sat down for my second trial with a pretty good sense of what I was listening for and what music to play.

Blind testing is not easy. Even when there is a fairly clear difference it can be quite disorienting not knowing whether you are guessing correctly or not. Opportunity is rife for self-doubt, anxiety, and second-guessing oneself. But I've done a fair bit of blind testing of prototype amplifiers, so I knew what I was in for. As the second test progressed, I relaxed and relied on my previous experience, and used a technique I think works very well for this sort of thing.

I don't actually try to listen for the sound for a problem, or differences in sound. I relax and listen to the music normally, as if for enjoyment, then I pay attention to and monitor how I feel about the music. So I'm not trying to be critically aware of the sound as much as I'm critically aware of my reaction to the sound. It's worked very well for me in the past.

COMMENTS
Tyll Hertsens:
The problem is that it's not like a drug trial, where the effect you are looking for is an involuntary change in symptoms. In this case, I have to make a conscious choice: is this A or B? If I hadn't listened to the headphones at all, the first test result would be meaningless, as it would be a totally random guess. My first series of tests was still somewhat like that, as I was listening to a tune that wasn't as good at highlighting the differences. Again, I was simply showing I could fairly easily tell the difference between cans, so hearing them before the test did nothing but give me a starting point to correlate the sound with a particular pair.
bikermanlax:

I too was skeptical about break-in, agreeing that much of the difference was just learning to hear the sound of the new headphones.

The last pair I bought I didn't bother to burn in at all. Based on Tyll's recommendation I bought some 1350s (after having a horrible experience with their doppelganger, the T50). I noticed a certain dissonance in the middle range of many classical piano sonatas. The dissonance was there in many different recordings and vintages. I simply stopped listening to piano sonatas, as the rest of the sound was excellent.

After somewhere between 20 and 50 hours of play, I stumbled back into a piano sonata and the dissonance was now gone. I went back and listened to the same recordings that had been a problem when the phones were new. Great sound now. Burn-in? Seems to be the best explanation to me.

xnor:
The question is whether the drivers changed or whether you simply got used to the new sound signature, which to me seems the more likely explanation in this case, and in many other burn-in reports.
pbarach:

If this were actually a true double-blind test, then neither Tyll nor Brian would know which set of cans had been burned in. Unfortunately, Brian knew (and I know that because Tyll said which ones had been burned in at the outset of the video). So it's a SINGLE-blind test.

Tyll Hertsens:
And I've said as much. Still interesting though, eh?
AstralStorm:

Now, why haven't you added a measurement of both headphones after the test? You do have the test rig.
If there is an audible difference, there must be a measurable one as well. (Yes, the ears are very precise; 0.5 dB in frequency response is enough for a trained listener like yourself.)

Also, was the trial count preset, or "until we get bored or find a significant difference"?
Tests without a preset trial count have far lower statistical power: a Bayesian analysis with an uninformative prior (0.5) would have to be used instead of a simple Bernoulli-trial calculation, because the normality assumption behind the usual significance metrics (e.g. ANOVA or Fisher's test) doesn't apply.
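AstralStorm's point about preset trial counts is easy to demonstrate with a small simulation (my own sketch, not from the article): if you keep checking the running p-value and stop the moment it dips below 0.05, a pure guesser gets flagged as "significant" far more often than 5% of the time.

```python
import random
from math import comb

def tail_p(correct, n):
    """One-sided binomial tail: chance of >= `correct` right out of `n` guesses."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2**n

def false_positive_rates(n_experiments=2000, max_trials=30, alpha=0.05, seed=1):
    """Compare a preset-length test with 'stop as soon as p < alpha' peeking,
    for a listener who is purely guessing (so any 'significance' is spurious)."""
    rng = random.Random(seed)
    seq_hits = fixed_hits = 0
    for _ in range(n_experiments):
        correct = 0
        stopped = False
        for n in range(1, max_trials + 1):
            correct += rng.random() < 0.5          # coin-flip guess
            if not stopped and tail_p(correct, n) < alpha:
                stopped = True                     # peeker declares significance early
        seq_hits += stopped
        fixed_hits += tail_p(correct, max_trials) < alpha
    return seq_hits / n_experiments, fixed_hits / n_experiments

seq_rate, fixed_rate = false_positive_rates()
# seq_rate comes out well above alpha; fixed_rate stays near it.
```

The fixed-length test holds its false-positive rate near the nominal alpha; the peeking procedure does not, which is why the trial count should be set before the test starts.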
--

I'd suspect the earpads do break in, as they're pretty loose mechanical parts subject to a lot of load, but likely not the driver. The test for the latter is simple: first measure before "burn-in", then again after playing the music of your choice for x hours.
I'd bet (quite a lot of money, even) that you won't even be able to measure the difference (beyond typical placement variation, so do a bunch of measurements).

I've found a similar phenomenon with IEMs: the driver doesn't change one bit, but the tips change a lot. They get softer and fit better, especially foam ones.

udauda:

The data you acquired from the test are certainly interesting, but I wonder whether they are meaningful in an objective way. Your pal was not blinded, and there was a long delay while your pal was removing and replacing the Q701s on your head. (ITU recommendations call for seamless, immediate switching among the A-B-X chain during a subjective assessment test, not to mention that double-blinding and level-matching are mandatory.)

Here's a better way to execute this kind of test:
1. Record two different kinds of test material through a head-and-torso simulator wearing the new Q701 and the broken-in Q701.
2. Apply diffuse-field compensation to the recorded material (with professional-quality DAW software).
3. Run the material through double-blind ABX comparison software (e.g. foobar2000) on one of those flat diffuse-field headphones.
4. Try to be as comfortable as you can while listening.
5. Post the result!

This will certainly remove all kinds of distracting test variables and keep the test result objectively valid.
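For what it's worth, the decision logic behind the ABX comparison udauda recommends is simple enough to sketch (hypothetical names, my own sketch; a real tool like foobar2000's ABX component also handles playback, switching, and logging):

```python
import random

def abx_session(a, b, listener, n_trials=16, seed=None):
    """Minimal ABX scoring: each trial, X is secretly A or B; the
    listener compares X against both references and names the match."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        x_is_a = rng.random() < 0.5        # hidden assignment, new coin flip per trial
        x = a if x_is_a else b
        answer = listener(a, b, x)         # listener returns "A" or "B"
        correct += answer == ("A" if x_is_a else "B")
    return correct

# A listener who truly hears the difference matches X every time;
# a guesser converges on n_trials / 2.
golden_ears = lambda a, b, x: "A" if x == a else "B"
```

The score is then fed into exactly the kind of binomial significance calculation discussed earlier, which is why a preset `n_trials` matters here too.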

Tyll Hertsens:
I dunno man, I'd rather hear the real thing. I think I can deal with it.
udauda:

"I'd rather hear the real thing."

And that's the kind of notion J. Atkinson of Stereophile takes whenever someone challenges him with a blind test :) I think we can be more scientifically rational than that.

Your data won't carry any objective meaning if the test isn't conducted with formality. What I recommended was merely one (it's called Binaural Room Scanning) of many existing binaural recording techniques; even Harman Kardon uses the same kind of approach when measuring room acoustics and the acoustic properties of automobiles. The results were so good, they were even able to convince Toyota with them!

And maybe you can take this idea to another level, such as by sharing the recorded Q701 material with users and having them post their own ABX results. Once enough data samples are gathered, you may be able to come up with something quite meaningful scientifically. (But you must first prove that the two Q701s were sonically identical before breaking in one of them.)

SoulSyde:

Tyll,

I loved this article and the included video. Well done. I think you put a provocative argument to rest. I'm a firm believer in burn-in (break-in) with regard to dynamic drivers and tubes. The effects, I believe, are less noticeable with solid-state op-amp-driven amps (discrete designs excluded), BA headphones, and DACs. But I'll leave that be.

Any thoughts on doing a "double blind" cable test? I would really enjoy your thoughts and opinions on this subject.


