The Science of Getting It Wrong: How to Deal with False Research Findings by JR Minkel adds to our recent spate of posts on drawing faulty conclusions from data (such as: Correlation is Not Causation, Cancer Deaths – Declining Trend?, Seeing Patterns Where None Exists, Karl Popper Webcast).
Using simple statistics, without data about published research, Ioannidis argued that the results of large, randomized clinical trials—the gold standard of human research—were likely to be wrong 15 percent of the time, and that smaller, less rigorous studies were likely to fare even worse.
Among the most likely reasons for mistakes, he says, are a lack of coordination among researchers and biases such as tending to publish only results that mesh with what they expected or hoped to find. Interestingly, Ioannidis predicted that more researchers in a field are not necessarily better—especially if they are overly competitive and furtive, like the fractured U.S. intelligence community, which failed to share information that might have prevented the September 11, 2001, terrorist strikes on the World Trade Center and the Pentagon.
But Ioannidis left out one twist: The odds that a finding is correct increase every time new research replicates the same result, according to a study published in the current PLoS Medicine.
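The intuition behind that twist can be sketched with a simple Bayesian update: each independent positive replication multiplies the odds that the finding is real. The numbers below (a 50 percent prior, 80 percent statistical power, a 5 percent false-positive rate) are illustrative assumptions for this sketch, not figures from the PLoS Medicine study.

```python
def update_after_positive(prior, power=0.8, alpha=0.05):
    """Posterior probability a hypothesis is true, given one more
    positive (statistically significant) result.

    prior: probability the hypothesis is true before this result
    power: chance a true effect yields a positive result
    alpha: chance a null effect yields a (false) positive result
    """
    true_positive = prior * power
    false_positive = (1 - prior) * alpha
    return true_positive / (true_positive + false_positive)

# Start from even odds and apply three successive replications.
p = 0.5
for study in range(1, 4):
    p = update_after_positive(p)
    print(f"After replication {study}: P(finding is true) = {p:.3f}")
```

Each pass raises the posterior (here from 0.5 to roughly 0.94 after the first positive result), which is the mechanism behind the claim that replication steadily strengthens a finding.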
