Trial and too many errors
‘Evidence-based’ medicine can be built on opinion and selective evidence
IN THE pursuit of safe, effective, and cost-effective medicine, evidence is king. The medical profession believes that treatments should be studied in clinical trials, then compared against other regimens, before deciding what’s best for patients. Congress and the Obama administration have strongly endorsed this process by allocating money to evaluate the most effective treatments, in order to spare the public, and insurers, the burden of unnecessary procedures.
One of the linchpins of this system is the vast network of medical associations that issue evidence-based guidelines to physicians. These organizations take a thorough look at the evidence available for or against a treatment and issue recommendations that help doctors make decisions. It’s a massive enterprise: the government’s National Guideline Clearinghouse currently lists 2,360 guideline documents released by 267 different medical organizations.
But evidence-based medicine is only as strong as the evidence used to support it. The stark reality is that evidence can be weak, biased, or even fraudulent. More guidelines stem from expert opinion than from solid clinical trial evidence. And many physicians who write these recommendations have financial ties to drug companies — with no widely accepted policy to correct for conflicts of interest.
There is nothing wrong with the underlying principle of teams of eminent physicians assessing the medical literature when evaluating treatments. But there’s a systemic lack of transparency in the guideline-setting process, which means that low-quality data gets portrayed as much better evidence than it really is. This wobbly process for determining treatment guidelines carries the fate not only of ObamaCare, but of millions of patients who take medicines and undergo procedures thinking, often erroneously, that there’s a proven track record behind the medical decision made on their behalf.
PERHAPS NO recommendations carry more weight — and the burden of more life-and-death decisions — than those in cardiology, issued jointly by the American College of Cardiology and the American Heart Association. Of the 2008 guidelines, involving over 2,700 treatment recommendations, only 11 percent were backed by strong evidence from multiple randomized clinical trials, according to a 2009 study by Duke University researchers published in the Journal of the American Medical Association. Meanwhile, 48 percent of the recommendations were based on the personal opinions of experts in the field.
A similar pattern holds across the medical landscape. An analysis in January of the treatment guidelines from the Infectious Diseases Society of America found that only 14 percent of the more than 4,000 recommendations were based on strong scientific evidence, while 55 percent were based on the opinions of experts.
When doctors are offering their own opinions, there is plenty of room for confusion, as exemplified by the Endocrine Society’s 2008 guideline recommending against testing for certain genetic markers, one of them called apoB, to help detect risk of heart disease. That clashed with a consensus statement issued by the American Diabetes Association and the American College of Cardiology Foundation that same year supporting the apoB test.
And sometimes, when treatment recommendations are based on weak science, they turn out to be misguided. A powerful example comes from the Infectious Diseases Society of America’s guidelines for pneumonia. In its 2003 guidelines, the society recommended that patients coming into the hospital with suspected pneumonia should be treated with antibiotics within four hours of arrival, rather than the eight hours advised until then.
To back up the recommendation, the society cited one study, unpublished at the time. The study wasn’t a randomized clinical trial — the gold standard in terms of evidence quality — but rather an analysis of old Medicare data. Still, the society highlighted the study for pointing the way to “improved outcomes.’’
Soon after, a national panel of pneumonia experts convened by the Centers for Medicare and Medicaid Services recommended that the government start tracking hospitals to determine if they were administering antibiotics within four hours of admission. CMS and the Joint Commission on Accreditation of Healthcare Organizations adopted the four-hour rule as a performance standard nationwide in 2004.
But it wasn’t long before reports began to surface of emergency rooms over-diagnosing patients with pneumonia and giving them antibiotics just to comply with the four-hour rule. Studies also came out showing no survival advantage from giving the antibiotic earlier. By 2007, the government announced that it would stop using the four-hour performance measure in its tracking of hospitals. In the interim, many thousands of patients were affected, and money was wasted on countless unnecessary doses of antibiotics. Moreover, the Medicare study’s senior author was one of the six men in charge of writing the society’s pneumonia guidelines — and he, along with two other members of that same panel, also sat on the national pneumonia panel. Allowing the same experts to sit on multiple treatment panels, rubber-stamping their own recommendations, violates any system of checks and balances.
THOMAS FILE, head of the infectious disease section at the Northeastern Ohio Universities College of Medicine, who has been involved in setting guidelines for the Infectious Diseases Society of America, says the society has moved to shore up its internal controls in recent years. “We’re much more accountable, as we should be, to make sure our guidelines are better,’’ he said. For example, one new rule applied to a recent set of guidelines was that no recommendation is issued unless there is over 80 percent consensus among committee members. Guidelines will also spell out more clearly how much disagreement there was among panel members when issuing a recommendation, so physicians can better appreciate how solid the underlying evidence was judged to be.
That’s further than many medical organizations have gone, and sets a standard for transparency. The bottom line is that expert opinion, as well intentioned as it might be, shouldn’t masquerade as evidence-based medicine.
But what about when the opinions of the experts themselves are clouded by financial ties to companies making the drugs under consideration? More than half of 170 panel members responsible for revisions to the Diagnostic and Statistical Manual of Mental Disorders had associations with the pharmaceutical industry, University of Massachusetts researchers found in a 2006 study. In areas where drugs are the treatment of choice, such as mood disorders and schizophrenia, all guideline panel members had financial links to industry. A study of three major guideline documents from the American Psychiatric Association found that 90 percent of the work-group members had financial ties to drug manufacturers whose products were considered or included in the guidelines.
Doctors with ties to companies making the drugs under consideration shouldn’t be assessing them. It’s a basic ethical principle, but it hasn’t been widely embraced by guideline-setting organizations, which assume the problem is fixed by allowing panel members to disclose their financial ties or asking them to recuse themselves from voting. The truth is — and companies know this — that having a “thought leader’’ speak on behalf of a treatment is influential to his or her peers, even if disclosures have been made.
ANOTHER MAJOR threat to the pursuit of evidence-based medicine is that of poor data packaged as high-quality evidence. Drug manufacturers tend to be the sponsors of the largest and most expensive randomized clinical trials. As such, they are the gatekeepers of a significant percentage of the gold-standard medical evidence used to determine the best treatments.
Yet troubling examples have surfaced of companies keeping “adverse events’’ data — such as painful side effects — hidden, or of publishing only studies that are positive for a drug, while burying the negative results. For example, researchers from the University of California San Francisco looked at the raw clinical-trial data submitted by drug companies to the Food and Drug Administration for all new drugs approved in 2001 and 2002 and compared it with the papers later published in medical journals. There were some distressing discrepancies. Almost half of the negative outcomes found in data submitted to the FDA were excluded from the published papers. The statistical significance of some results was reinterpreted between what was submitted to the FDA and what was published, and several conclusions were changed in the published papers to favor the drug.
Recently, a team of reviewers from the Cochrane Collaboration, a prestigious international non-profit that issues evidence-based reports, found themselves retracting their conclusions on the efficacy of the influenza drug Tamiflu, after being unable to verify the claims of a major study looking at the effects of the drug. The Cochrane team hit roadblocks when trying to access the original data from the manufacturer, and it found 10 serious adverse effects that had not been reported in published articles on the drug’s trials.
Evidence-based medicine functions on the assumption that strictly analyzing all evidence available for a treatment leads to an accurate judgment of the treatment’s effect. When negative studies are not published, or when they are published in a way that makes them seem positive, or when adverse outcomes are omitted from the medical literature, the “evidence” is no longer a reflection of a treatment’s real effect but a skewed, artificially positive illusion.
Since the published papers are beyond the purview of the FDA, there is no regulatory oversight of this trend. And medical organizations don’t police the studies, either. Ultimately, the journals themselves need to be a lot stricter about what they publish. They could ask, as a condition of publication, that companies deposit raw data in public databases that can be mined and verified.
Journals won’t take easily to such a watchdog role, because they are dependent on the industry to survive. In addition to advertising, sales from reprints are a significant source of revenue for journals. Publishing a major study can mean up to $1 million in reprint sales — much of that coming from the companies themselves, which buy copies to distribute to doctors.
Anyone doubting the true impact of this uneasy relationship should consider the testimony of Richard Horton, editor of the prestigious medical journal The Lancet, to the British House of Commons in 2004. Horton described one instance in which his journal criticized a study under review because it was over-interpreting its results and minimizing the impact of adverse reactions. The journal got a call from the company, saying: “Stop. Pull Back. Stop being overly critical because if you carry on like this we are going to pull the paper, and if we pull the paper that means no reprint income for the journal,’’ he said.
EVIDENCE-BASED medicine got a major boost with the American Recovery and Reinvestment Act of 2009, which allocated $1.1 billion for funding so-called Comparative Effectiveness Research, whose purpose is “to improve health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers.’’ As it embarks on this effort, the government should demand availability of all raw data underlying the major clinical studies being evaluated. Special committees could be set up to verify conclusions and make sure they are supported by the raw data. Studies found to be inconsistent should not be included in the analyses that will determine major health-care decisions.
This nation’s health care system is like a poorly planned building that has gotten bigger and more sprawling over time, with wings added haphazardly, sometimes on shaky foundations. Much of the structure is supported by pillars constructed with evidence that looked shiny and sturdy, like the very best marble. But too many of those columns have turned out to be hollow cardboard phonies.
Unless drastic changes are made, it’s only a matter of time before the entire building collapses.
Sylvia Pagán Westphal is a frequent contributor to the Globe editorial pages. Her e-mail address is email@example.com