How to Deal with False Research Findings

The Science of Getting It Wrong: How to Deal with False Research Findings by JR Minkel adds to our recent spate of posts on drawing faulty conclusions from data (such as: Correlation is Not Causation, Cancer Deaths – Declining Trend?, Seeing Patterns Where None Exists, Karl Popper Webcast).

In his widely read 2005 PLoS Medicine paper, Ioannidis, a clinical and molecular epidemiologist, attempted to explain why medical researchers must so frequently reverse past claims. In the past few years alone, researchers have had to backtrack on the health benefits of low-fat, high-fiber diets and on the value and safety of hormone replacement therapy, as well as the arthritis drug Vioxx, which was pulled from the market after being found to cause heart attacks and strokes in high-risk patients.

Using simple statistics, and no data from published research, Ioannidis argued that the results of large, randomized clinical trials (the gold standard of human research) are likely to be wrong 15 percent of the time, and that smaller, less rigorous studies likely fare even worse.
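The kind of back-of-the-envelope reasoning Ioannidis used can be sketched with his positive predictive value (PPV) formula: the chance that a statistically significant finding reflects a real effect, given the pre-study odds that the tested relationship is true, the false-positive rate, and the study's power. The parameter values below are illustrative assumptions, not figures taken from the paper.

```python
def ppv(R, alpha=0.05, power=0.8):
    """Probability that a 'significant' result is actually true.

    R      -- pre-study odds that the tested relationship is real
    alpha  -- false-positive rate (significance threshold)
    power  -- probability of detecting a real effect (1 - beta)
    """
    true_positives = power * R   # real effects correctly flagged
    false_positives = alpha      # null effects flagged by chance
    return true_positives / (true_positives + false_positives)

# A well-powered trial of a fairly plausible hypothesis (1:1 odds):
print(round(ppv(R=1.0), 3))              # -> 0.941, i.e. ~6% wrong
# A small, underpowered study of a long-shot hypothesis:
print(round(ppv(R=0.1, power=0.2), 3))   # -> 0.286, mostly wrong
```

Note how quickly the reliability of a "significant" result collapses once studies are underpowered and the hypotheses being tested are unlikely to begin with.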

Among the most likely reasons for mistakes, he says: a lack of coordination among researchers, and biases such as a tendency to publish only results that mesh with what they expected or hoped to find. Interestingly, Ioannidis predicted that more researchers in a field are not necessarily better, especially if they are overly competitive and furtive, like the fractured U.S. intelligence community, which failed to share information that might have prevented the September 11, 2001, terrorist strikes on the World Trade Center and the Pentagon.

But Ioannidis left out one twist: The odds that a finding is correct increase every time new research replicates the same result, according to a study published in the current PLoS Medicine.