Launch Monitors and Evidence Based Medicine
I was recently at the golf course working with a clubfitter to select a driver optimal for my game. We went through lots of different clubheads and shafts, hitting each on a very advanced radar system that precisely measures launch characteristics and ballflight. I was struck by how quickly he moved through different ideas, having me hit each variation only a few times before moving on to something else. Having fit clubs for many tour professionals, the gentleman I was working with clearly knew what he was doing – but at the same time I was struck by how little he understood the mathematics of what was going on, and I wondered if his advice was really as valid as he thought it was.
We were trying to hit a certain launch characteristic – about 2400 RPM of backspin with a launch angle of about 12 degrees. The kicker was that any given swing could vary quite a bit from the next – by at least 500 RPM of spin and 1–1.5 degrees of launch. Being a stats geek, I immediately realized that with this much variance in the sample groups, there was no way that one or two samples (swings) could identify a true difference between clubs. Sure, I might really nail one with one particular club, but without a big series of shots that would only be an anecdote, not real data. By the end, we had settled on a particular shaft and head, and it did seem to launch the ball quite a bit farther and higher than my previous club. But I wasn't really sure whether, despite all the technology and experience involved in the process, we had done anything more advanced than picking a bunch of clubs off the rack and seeing which one felt best. Unless we hit enough shots with each club to overcome the variance in individual swings, all the radar was doing was putting a number on what I could already feel. It better described the anecdotal experience, but it wasn't actually identifying a real pattern. I pointed this out to the clubfitter, but he claimed he could tell the difference after just one swing. It seemed to me that he was cherry-picking the few swings that fit his preconception of the truth.
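To put a rough number on the problem: here is a quick simulation with made-up but plausible figures (a true 200 RPM gap between two clubs, and swing-to-swing noise of 250 RPM around the 2400 RPM target – both hypothetical, chosen only to match the scale of variation I saw). It estimates how often averaging a given number of swings per club actually picks the genuinely better club.

```python
import random
import statistics

def pick_better_club(n_swings, true_gap=200.0, swing_sd=250.0,
                     trials=10_000, seed=1):
    """Simulate many fitting sessions. Club A truly spins `true_gap` RPM
    lower than club B (lower is "better" for this fit), but every swing
    is noisy with standard deviation `swing_sd`. Return the fraction of
    sessions in which the club with the lower *average* spin over
    `n_swings` swings is, in fact, club A."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        a = statistics.mean(rng.gauss(2400.0, swing_sd)
                            for _ in range(n_swings))
        b = statistics.mean(rng.gauss(2400.0 + true_gap, swing_sd)
                            for _ in range(n_swings))
        if a < b:  # the truly better club also looked better
            correct += 1
    return correct / trials
```

With these assumed numbers, two swings per club identify the truly better club only around four times in five – so roughly one fitting in five would crown the wrong club even when a real difference exists – while twenty swings per club get it right almost every time.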
In a lot of ways, we do the same thing with our medical experiences.
Docs worry about uterine rupture in VBACs even though it is very rare. Midwives seem to hate misoprostol despite scads of data suggesting no increase in adverse outcomes. Despite the data, we fixate on the few anecdotal experiences that had an emotional impact on us.
We all like to think that we are evidence based, but how often are we really? I often find myself defending my points with studies that agree with me while avoiding the studies that don't. I see the same behavior in my colleagues. Oftentimes we attack the methodology of studies whose results we don't like, and feel more academic for doing so. We even chortle at how foolish some researcher was for putting together a study so poorly, coming up with an answer that seemed so obviously wrong (listen to the AO podcast and you'll hear Paul and me do exactly that on a regular basis). But would we have attacked the study as hard if its answer had agreed with what we already thought?
Sometimes I even see two docs fighting over a point, using the same study to prove completely opposite conclusions. Each one takes a small piece of the study and claims that bit is the most important part.
While at some point this is all natural, and perhaps part of the scientific process, at times it gives me pause about evidence based medicine in general. I find myself asking whether all this research really advances what we do if people are just going to re-interpret the data based on what they already believe. I also find myself thinking about the most potent learning experiences of my career and realizing that, to a one, these were not discoveries of new data but poignant anecdotes involving sick patients, difficult surgeries, or great teachers. Each such experience was an N of 1, and yet that series of N-of-1 experiences has contributed far more to who I am as a physician than the hundreds of N-of-1000 studies I have read.
I've tried to be completely evidence based at certain points, but I always eventually run into a situation where the evidence just doesn't seem to fit. At that point I've been faced with a choice: go with what the data says is right, or go with what seems correct in the specific case. I think the latter is often the more correct path. Given the way statistical analysis averages away outlying datapoints, one would expect there to be individual clinical situations that do not follow the data. Understanding this, it behooves one to try to recognize the situations where the data isn't going to fit, and when one's anecdotal experience might better direct one's course. Sometimes these deviations are heralded by an alarm bell in one's mind that seems to scream "SOMETHING IS DIFFERENT." I think one has to listen to such alarms.
Ultimately, we respond to the experiences of our past. Some disparage this, and attack those who do as not being scientists. There is some truth to this, and some do take it too far. Some ignore clear directions in the data because of their personal experiences, and are probably missing out on a better way of practicing. But for most docs, a large catalogue of anecdotal experiences is one of their greatest strengths.
Strict adherence to evidence based medicine seems a good idea in the sterile field of a thought experiment, but doesn’t really seem to work in practice. There are too many times when the data doesn’t fit. There are too many outliers that have been systematically eliminated from the data.
But can one take this too far? Some docs are so experienced that they no longer consult the data at all. It isn't that they don't believe in data; it's that they truly have seen almost everything and have something personal to draw upon in nearly any situation. I have worked with several docs like this, and they are quite impressive. One liked to say that his actions were justified by his decades of "unpublished data." We younglings like to snicker at how oblivious these greyhairs are to the literature and how out of touch they seem, but when we're in a bind they are the ones we call for advice – and usually they know just what to do.
Despite his obliviousness to the statistical insignificance of his observations, my master clubfitter made me a driver that was better than anything I had ever hit. In the end, performance is what matters – and oftentimes a deviation from or even ignorance of the data is what we need to get there.