On The Criticism of Papers We Don’t Agree With, and How Endometriosis is More Common Than Many Doctors Think
As academics, we are all about evidence based medicine. We practice 100% based on what we read, and often change our method of practice when new evidence comes out that shows us the right path. Right?
Of course not…. In reality, most of us slowly integrate new data into our practice over many years, and only when there is a preponderance of evidence do we really change from what we previously thought was right. Until that time, we hold up the papers we like and attack the methodologies of papers we don’t.
For example, in the mid 2000s the Women’s Health Initiative trial was published, which suggested that postmenopausal hormone replacement was not as good a thing as we previously thought. When it was published, those that didn’t think HRT was good said SEE LOOK I WAS RIGHT, and those that were pro-hormone viciously and publicly attacked the paper as being methodologically flawed (which it was.) The Term Breech Trial got the same welcome after showing that it probably isn’t a great idea to deliver babies vaginally when they are in the breech position. Just like WHI, when it was published the doctors that didn’t like (or didn’t know how to do) vaginal breech delivery said SEE LOOK I WAS RIGHT, while the pro-breech camp viciously and publicly attacked the paper as being methodologically flawed (which it was.) And so it goes, over and over again, with every major publication.
Over the years I have seen this so many times that to some extent my faith in evidence based medicine is shaken. Not because the fundamental methodology of scientific inquiry is flawed, but because the humans meant to be informed by the science tend to just agree with the papers they like and attack the ones they don’t, and in the end they rarely change their practice based on what they read.
But since I am also a human and subject to the same cognitive biases and weaknesses as the rest of us, today I find myself compelled to attack the methodology of a paper I don’t agree with, because I think its conclusion is wrong.
This month in the Green Journal, Mowers et al published “Prevalence of Endometriosis During Abdominal or Laparoscopic Hysterectomy for Chronic Pelvic Pain”, in which they conclude that “Fewer than 25% of women undergoing hysterectomy for chronic pelvic pain have endometriosis at the time of surgery.” This is a result I was shocked by, as in my practice 80-90% of women for whom I do laparoscopy for pelvic pain have pathologically proven endometriosis. In fact, in three years of practice at Emory I can only remember a single patient who I scoped for suspected endometriosis and didn’t find it.
So when you read a paper that has a surprising conclusion not consistent with your experience, you dig into the materials and methods and find out how the authors came to their conclusions. And in this case, we find out why the authors report such a low rate of endometriosis. The study was a retrospective analysis of women having hysterectomy for the diagnosis of pelvic pain at a large sampling of Michigan hospitals over 18 months of time. Cases were considered positive for endometriosis if:
- the surgeon documented in the operative note that there was endometriosis or
- there was evidence of endometriosis on the path report.
So are we surprised that they found so little? Not at all. I mean these criteria are just foolish. In my practice, based primarily on the care of women with endometriosis, I have operated on hundreds of women who were “negative at laparoscopy for endometriosis” by other surgeons only to find significant pathologically proven disease at the time of my surgery. The truth is that most surgeons are not familiar with the varied ways that endometriosis can look, and they are not going to call it correctly in many cases. Depending on a surgeon to note peritoneal endometriosis in the operative report is also foolish, since in many cases, even if they see it, they might not document it after taking the uterus out.
The pathologic criteria are just as bad. Endometriosis is a disease of the peritoneum, not a disease of the uterus. So if the case is a hysterectomy, and the surgeon is not someone who has advanced skills in endometriosis work, the pathologist is just going to get a uterus and possibly the tubes and ovaries, without any of the surrounding peritoneum. This methodology then limits their positive cases to cases where there are implants or endometriomas on the ovaries, or where there are serosal implants on the uterus, which is only a relatively small subset of the ways endometriosis can present.
In my view, these criteria would predictably and grossly underestimate the number of women who have the disease in the setting of pelvic pain, unless all the surgeries were done by surgeons with significant experience with the disease, which given that the cases were done in hospitals all over Michigan, they almost certainly weren’t.
The authors dutifully note the possibility of false negatives in their “study weaknesses” section, as they are supposed to do. But sometimes, study weaknesses are so strong that acknowledging them just isn’t enough. Sometimes the weaknesses are so important that they fundamentally undermine the validity of the conclusion. I believe that is so in this case. And that is a shame, because I think a lot of readers will read this article and be misinformed about the prevalence of endometriosis in women with pelvic pain, which is ultimately a harmful thing to the prospects of women with the disease. Women who present with severe dysmenorrhea and pelvic pain, as well as the variety of other ways the disease can present, deserve to be taken seriously and appropriately managed. A world view that most pelvic pain is from no discernible physical source is not helpful in that.
When I was a younger academic, I took a course by David Grimes and Kenneth Schulz on research methodology. Many of us have taken the same course. It’s paid for by an organization that really wants doctors to do good research and understand how to read the literature, and I am thankful that they paid for my education at this course as well. One of the things I learned in this course is how one should read a paper:
Step 1 – Is it a topic I care about?
No -> don’t read it
Yes -> read it
Step 2 – Read the materials and methods
They are appropriate and valid -> read the conclusion and be informed by it
They are flawed -> go to the next paper because the conclusion, whatever it is, is not validly supported by the paper and it really doesn’t matter what it is.
This is truly the right way to do it, because why would we care about a conclusion formed from invalid research methodology?
But that is not how most of us do it. We read conclusions first, and if they agree with what we already thought we feel good about how smart we were, and if they disagree with what we already thought we dig into the materials and methods to find out how the dumb authors screwed up the study.
Unfortunately, I am not always any better than the rest. With this paper, I knew I was interested in the topic, so I skimmed the paper and read the conclusion. When I realized how much I disagreed with the conclusion, I pored over the materials and methods and found out how flawed they were. So I’m just as bad. And so here I am attacking a paper I don’t agree with rather than being informed by the wonder of evidence based medicine.
Sorry Dr. Grimes!