
Launch Monitors and Evidence Based Medicine

I was recently at the golf course working with a clubfitter on selecting a driver that was optimal for my game. We went through lots of different clubheads and shafts, hitting each on a very advanced radar system that precisely measures launch characteristics and ballflight. I was struck by how quickly he was moving through different ideas, having me hit each variation only a few times before moving on to something else. Having fit clubs for many tour professionals, the gentleman I was working with clearly knew what he was doing, but at the same time I was struck by how little he understood the mathematics of what was going on, and wondered if his advice was really as valid as he thought it was.

We were trying to hit a certain launch characteristic: about 2400 RPM of backspin with about 12 degrees of launch angle. The kicker was that any given swing could produce quite a bit of variation, at least 500 RPM of spin and 1-1.5 degrees of launch. Being a stats geek, I immediately realized that with this amount of variance in the sample groups, there was no way that 1-2 samples (swings) could really identify a true difference between clubs. Sure, I might really nail one with one particular club, but without a big series of shots that would only be an anecdote, not real data. By the end, we had settled on a particular shaft and head, and it did seem to launch the ball quite a bit further and higher than my previous club. But in the end, I wasn't really sure whether, despite all the technology and experience involved in this process, we had really done anything more advanced than picking a bunch of different clubs off the rack and seeing which one felt best. Unless we hit enough with each club to overcome the variance in individual swings, all the radar was doing was putting a number on what I could already feel. It better described the anecdotal experience, but wasn't actually identifying a real pattern. I pointed this issue out to the clubfitter, but he claimed he could tell the difference after just one swing. It seemed more to me that he was cherry-picking the few swings that he thought represented his conception of truth more than the rest.
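For the curious, the intuition above can be put on a napkin. Using the numbers from the fitting session as rough assumptions (a swing-to-swing spin standard deviation of ~500 RPM) and supposing two clubs truly differ by 200 RPM on average, a standard two-sample power calculation shows roughly how many swings per club it would take to reliably tell them apart. This is a sketch under those assumed numbers, not a claim about any particular launch monitor:

```python
import math

# Assumed numbers, not measured data:
sd = 500.0          # RPM, swing-to-swing standard deviation within one club
true_diff = 200.0   # RPM, hypothetical true average difference between two clubs

# Swings per club for ~80% power at a 5% two-sided significance level,
# using the standard two-sample formula n = 2 * ((z_a + z_b) * sd / diff)^2,
# where z_a = 1.96 (alpha = 0.05) and z_b = 0.84 (power = 0.80).
z_alpha, z_beta = 1.96, 0.84
n_per_club = 2 * ((z_alpha + z_beta) * sd / true_diff) ** 2
print(math.ceil(n_per_club))  # -> 98 swings per club
```

Under these assumptions you would need on the order of a hundred swings per club, not one or two, which is the whole point: a handful of swings is an anecdote with a number attached.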

In a lot of ways, we do the same thing with our medical experiences.

Docs worry about uterine ruptures in VBACs even though they are very rare. Midwives seem to hate misoprostol despite scads of data to suggest no increase in adverse outcomes. Despite the data, we fixate on the few anecdotal experiences that had an emotional impact on us.

We all like to think that we are evidence based, but how often are we really? I often find myself defending my points with studies that agree with me, while tending to avoid the studies that don't. I see the same behavior in my colleagues. Oftentimes we attack the methodology of the studies whose results we don't like, and feel more academic for doing so. We even chortle at how foolish some researcher was for putting together a study so poorly, coming up with an answer that seemed so obviously wrong (listen to the AO podcast and you'll hear Paul and me do exactly that on a regular basis.) But would we have attacked the study as hard if the answer had agreed with what we already thought?

Sometimes I even see two docs fighting over a point, using the same study to prove completely opposite points. Each one takes a small piece of a study and claims that bit is the most important piece.

While to some extent this is all natural, and perhaps part of the scientific process, at times it gives me pause about evidence based medicine in general. I find myself asking whether all this research really advances what we do if people are just going to re-interpret the data based on what they already believe. I also find myself thinking about the most potent learning experiences of my career and realizing that, to a one, these were not the discovery of new data, but poignant anecdotes involving sick patients, difficult surgeries, or great teachers. Each such experience was an N of 1, and yet that series of N-of-1 experiences has contributed far more to who I am as a physician than the hundreds of N-of-1000 studies I have read.

I've tried to be completely evidence based at certain points, but I always eventually run into a situation where the evidence just doesn't seem to fit. At that point I've been faced with a choice: go with what the data says is right, or go with what seems correct in the specific case. I think the latter is often the more correct path. Given the way that statistical methods eliminate outlying datapoints, one would expect that there would be individual clinical situations that do not follow the data. Understanding this, it behooves one to try to see those situations where the data isn't going to fit, and when one's anecdotal experience might better direct one's course. Sometimes these deviations are heralded by an alarm bell in one's mind that seems to scream "SOMETHING IS DIFFERENT". I think one has to listen to such alarms.

Ultimately, we respond to the experiences of our past. Some disparage this, and attack those who do as not being scientists. There is some truth to this, and some do take it too far. Some ignore clear directions in the data because of their personal experiences, and are probably missing out on a better way of practicing. But for most docs, a large catalogue of anecdotal experiences is one of their greatest strengths.

Strict adherence to evidence based medicine seems a good idea in the sterile field of a thought experiment, but doesn’t really seem to work in practice. There are too many times when the data doesn’t fit. There are too many outliers that have been systematically eliminated from the data.

But can one take this too far? Some docs are so experienced that they no longer consider data at all. It isn't that they don't believe in data. It's that they truly have seen almost everything, and have something personal to draw upon in nearly any situation. I have worked with several docs like this, and they are quite impressive. One liked to say that his actions were justified by his decades of "unpublished data." We younglings like to snicker at how oblivious these greyhairs are to the literature and how out of touch they seem to be, but if we're in a bind they are the ones we call for advice, and usually they know just what to do.

Despite his obliviousness to the statistical insignificance of his observations, my master clubfitter made me a driver that was better than anything I had ever hit. In the end, performance is what matters – and oftentimes a deviation from or even ignorance of the data is what we need to get there.

  1. April 6, 2011 at 8:04 am

    Get back to me when you win the open.

  2. April 6, 2011 at 10:27 am

    Great post. As normal humans, we’re horribly bad at math and science. Thinking in Bayesian or statistical terms isn’t intuitive.

  3. S. Marsh
    April 6, 2011 at 12:03 pm

    Nicholas, lovely post with a good analogy. Thanks for this reflection on the way we think.

  4. April 7, 2011 at 5:22 am

    I LOVE this article. The phrase by itself is worth the read: “There are too many outliers that have been systematically eliminated from the data.” This is one of my most fervent complaints about “the Data.” In helping a woman have a baby or any other way that people aid others there is one important point to remember. What if you run into an outlier? You have to be able to think on your feet and use the correct method for that particular case.

    Nicolas seems to grasp that and I applaud it. It seems to me that too many physicians only look to the data, and only the data that they approve of. I believe in the Outliers. That is often where the truth lies.

  5. April 13, 2011 at 4:30 am

    Nice article i have always seen that we get lots of hints from our past.

  6. April 14, 2011 at 1:53 pm

    Can’t decide if that was actual praise or spam for a hotel in Mumbai….

  7. Bee
    April 15, 2011 at 4:36 pm

    I enjoyed that reflective post very much, especially this part; “I find myself asking whether or not all this research really advances what we do if people are just going to re-interpret the data based on what they already believe. I also find myself thinking about the most potent learning experiences of my career and realizing that to a one these were not the discovery of new data, but poignant anecdotes involving sick patients, difficult surgeries, or great teachers. Each such experience was an N of 1, and yet those series of N1 experiences have contributed far more to who I am as a physician than hundreds of the N1000 studies I have read.”

  8. April 24, 2011 at 3:24 am

    Well said, Nicholas. You are an honest academician and as an experienced “grayhair” I can say with certainty that I do not need a paper to prove that is not always the case. I don’t need a study to tell me we are all biased or that it is safer to cross the street with a green light. Thank you for your eloquent prose and common sense. A delight to read. What brand of driver was it anyway?

  9. April 24, 2011 at 4:32 am

    Thanks for reading Stuart!

    Brand isn’t the issue. It’s about shaft characteristics to achieve a given launch. Fujikura Motore F3 S for me, with a TaylorMade R11 head because I like the color and that I can open up the face without losing loft.

  10. April 26, 2011 at 10:12 pm

    You make a great point about the balance between using each individual’s “unpublished data” and the published evidence…it’s something that comes up so often in various fields of medicine/health care. You know there’s a “right” and a “wrong” way but darn it, this one time the “wrong” way works so right!

    But how do we set the balance? Unpublished data can just be learning a particular set of habits and changing them only in the face of a very powerful N1. You argue very persuasively for the evidence for delayed cord clamping – what would you say to an OB who says “In decades of experience, I have never had a problem with immediate cord clamping and I don’t plan to change”? Not a criticism of your thoughts, but more a pondering on how the “unpublished data” approach can be adaptive and also maladaptive…

  11. MomTFH
    May 10, 2011 at 10:26 am

    Great article.

    And, to add an n=1 experience, the midwife I just did a rotation with taught the ob/gyn to use misoprostol for 2nd trimester IUFDs.

  12. antonette
    August 8, 2011 at 12:16 pm

    I am 29years old I got my tubes tied and now I really wanted another child please help me to know what can be done

    • August 8, 2011 at 3:57 pm

      I can’t give specific medical advice, but a person in your situation has two options:

      1) tubal reversal surgery
      2) in vitro fertilization

      In vitro is usually the best option if one wants only 1 more child. If one wants more than one, tubal reversal may be a better option, though in some cases it is not possible. Reversal success is very dependent on the method used to do the sterilization, and how much healthy fallopian tube remains.

      In some cases insurance coverage dictates which way one goes. One should seek the consultation of a reproductive endocrinologist (a physician who specializes in infertility) for this issue.
