I have written before about statistics and various traps people often fall into when examining data (Statistics Insights for Scientists and Engineers, Data Can’t Lie – But People Can be Fooled, Correlation is Not Causation, Simpson’s Paradox). I have also posted about systemic reasons why medical studies present misleading results (Why Most Published Research Findings Are False, How to Deal with False Research Findings, Medical Study Integrity (or Lack Thereof), Surprising New Diabetes Data). This post collects some discussion of the topic from several blogs and studies.
HIV Vaccines, p values, and Proof by David Rind
if vaccine were no better than placebo we would expect to see a difference as large or larger than the one seen in this trial only 4 in 100 times. This is distinctly different from saying that there is a 96% chance that this result is correct, which is how many people wrongly interpret such a p value.
…
So, the modestly positive result found in the trial must be weighed against our prior belief that such a vaccine would fail. Had the vaccine been dramatically protective, giving us much stronger evidence of efficacy, our prior doubts would be more likely to give way in the face of high quality evidence of benefit.
…
While the actual analysis the investigators decided to make primary would be completely appropriate had it been specified up front, it now suffers under the concern of showing marginal significance after three bites at the statistical apple; these three bites have to adversely affect our belief in the importance of that p value. And, it’s not so obvious why they would have reported this result rather than excluding those 7 patients from the per protocol analysis and making that the primary analysis; there might have been yet a fourth analysis that could have been reported had it shown that all-important p value below 0.05.
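To make these points concrete, here is a minimal Bayesian sketch in Python (my illustration, not Rind's). All of the numbers are assumptions chosen for illustration: a skeptical 10% prior that the vaccine works, a 50% chance of seeing a result like this one if it does work, and the reported p value of 0.04 standing in, roughly, for the chance of seeing such a result if the vaccine is useless.

```python
# Illustrative assumptions only (not figures from the trial): a skeptical
# prior, an assumed chance of a result like this if the vaccine works, and
# the p value used as a rough stand-in for the false-positive probability.
prior_works = 0.10          # prior belief that the vaccine is effective
p_result_if_works = 0.50    # chance of a result like this if it works
p_result_if_useless = 0.04  # chance of a result like this if it is useless

posterior = (p_result_if_works * prior_works) / (
    p_result_if_works * prior_works
    + p_result_if_useless * (1 - prior_works)
)
print(f"P(vaccine works | this result) ~ {posterior:.2f}")  # about 0.58
# Far from the 96% certainty a naive reading of the p value suggests; a
# dramatically protective result (much smaller p, higher assumed power)
# would push the posterior much closer to 1.
```

The "three bites at the statistical apple" concern can also be simulated. The sketch below treats the three analyses as independent for simplicity; analyses of the same trial data are correlated, which softens the inflation but does not remove it.

```python
import numpy as np

rng = np.random.default_rng(1)
sims, analyses = 100_000, 3

# Under the null hypothesis each analysis yields a p value uniform on
# [0, 1]; report the smallest of the three and see how often it clears 0.05.
p_values = rng.uniform(size=(sims, analyses))
family_error = np.mean(p_values.min(axis=1) < 0.05)

print(f"Chance at least one of {analyses} analyses gives p < 0.05: "
      f"{family_error:.3f}")  # about 0.14, not 0.05
```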
How to Avoid Commonly Encountered Limitations of Published Clinical Trials by Sanjay Kaul, MD and George A. Diamond, MD
Trials often employ composite end points that, although they enable assessment of nonfatal events and improve trial efficiency and statistical precision, entail a number of shortcomings that can potentially undermine the scientific validity of the conclusions drawn from these trials. Finally, clinical trials often employ extensive subgroup analysis. However, lack of attention to proper methods can lead to chance findings that might misinform research and result in suboptimal practice.
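The subgroup problem is easy to demonstrate with a quick simulation (again my sketch, not the authors'; the arm sizes and the 20 subgroups are arbitrary assumptions). In a trial where the treatment truly does nothing, slicing the data into enough subgroups will usually turn up at least one that looks "significant":

```python
import numpy as np
from math import erf, sqrt

def two_sided_p(a, b):
    """Two-sample z test on means, normal approximation."""
    se = sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

rng = np.random.default_rng(2)
sims, n, subgroups = 2_000, 2_000, 20
trials_with_hit = 0
for _ in range(sims):
    treat = rng.normal(0, 1, n)               # outcomes, treatment arm (no true effect)
    control = rng.normal(0, 1, n)             # outcomes, control arm
    labels_t = rng.integers(0, subgroups, n)  # e.g. age/sex/site strata
    labels_c = rng.integers(0, subgroups, n)
    ps = [two_sided_p(treat[labels_t == g], control[labels_c == g])
          for g in range(subgroups)]
    if min(ps) < 0.05:
        trials_with_hit += 1

print(f"Null trials with at least one 'significant' subgroup: "
      f"{trials_with_hit / sims:.2f}")        # roughly 1 - 0.95**20, about 0.64
```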
Why Most Published Research Findings Are False by John P. A. Ioannidis