While we typically believe the results of clinical trials to be objective, this is often not the case; the interpretation of results reflects the worldview of the investigator (1). There are numerous “threats” to the integrity of the data that we, as readers, need to be aware of. Some are obvious and banal: we are invested in our research and wish to present the best picture of what we have discovered; we wish to enhance our career prospects (for promotion or for grant funding) through publication. Others are more complex and deep-seated: we may have obtained our funds from the industry in which our study is positioned, or we may have ceded control of publication to industry representatives. We have seen public accountings of problems arising from these challenges. But even if we are clear-eyed about all of the above, we can still be misled by research findings. The Users’ Guides lists eight guides to avoid being so misled.
1. Read only the methods and results; bypass the discussion section. The discussion may suggest inferences that differ from those a reader would draw simply by reading the methods and results alone. This arises because authors may demonstrate greater enthusiasm for the interpretation of the results, particularly when funded by for-profit organizations.
2. Read the abstract reported in pre-appraised resources. A number of secondary-source publications exist in which experienced clinicians and methodologists prepare structured abstracts of new research. These provide the important data and outcomes, as well as clinical interpretations of use to practicing physicians. A notable example is the ACP Journal Club.
3. Beware large treatment effects in trials with only a few events. Exercise caution when you see large treatment effects (such as a relative risk reduction of 50% or more) from studies with few participants (such as 100 or fewer). Large effects are neither common nor plausible, because a therapy can usually address only one or two mechanisms in a multi-causal disease process. Look for a large number of studies, or a large number of events, before adopting a new therapy that may be costly or risky.
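To see why a few events make a large effect untrustworthy, consider a hypothetical small trial (the numbers below are invented purely for illustration): even an apparent 50% relative risk reduction carries an enormous confidence interval when only a handful of events occurred. A minimal sketch using the standard log-normal (Katz) approximation for the confidence interval of a relative risk:

```python
import math

def rr_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """95% CI for a relative risk via the log-normal (Katz) approximation:
    SE(ln RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    low = math.exp(math.log(rr) - z * se)
    high = math.exp(math.log(rr) + z * se)
    return rr, low, high

# Hypothetical trial: 3/50 events on treatment vs 6/50 on control,
# i.e. an apparent 50% relative risk reduction (RR = 0.50)...
rr, low, high = rr_ci(3, 50, 6, 50)
# ...but with so few events the interval stretches from a large benefit
# to possible harm (it comfortably crosses RR = 1.0).
```

The point is not the exact formula but the behavior: the standard error is dominated by the reciprocals of the event counts, so with three and six events the interval is far too wide to justify adopting a costly or risky therapy.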
4. Beware faulty comparators. This concerns what is being compared with what. A faulty comparison is, for example, against a placebo when an effective agent is available, or against a more toxic agent when a less toxic one is available. Note what the agent under study is being compared with, for that choice can skew results in favor of the study intervention.
5. Beware misleading claims of equivalence. In drug research, for example, a new drug is often compared to a widely used drug in order to claim that the new drug has some non-therapeutic benefit, such as less frequent dosing. Here, look at how wide the 95% confidence interval is: a wider interval is worse than a narrower one. Also check whether the new drug is being compared to an older drug that is itself only marginally better than placebo.
6. Beware small treatment effects and extrapolation to low-risk patients. Researchers may (knowingly or not) use a number of strategies to create the perception of a large treatment effect. A classic example is reporting a relative risk reduction rather than an absolute risk reduction: a relative risk reduction of 50% might mean an absolute reduction from 2% to 1%, or from 50% to 25%. Other strategies exist as well, such as emphasizing statistical significance in lieu of clinical significance.
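The arithmetic above is easy to check. A minimal sketch contrasting the same 50% relative risk reduction in low-risk and high-risk patients (the 2%-to-1% and 50%-to-25% figures are taken from the example in the text; the number needed to treat is simply the reciprocal of the absolute risk reduction):

```python
def effect_measures(risk_control, risk_treatment):
    """Express one treatment effect three ways: relative risk reduction,
    absolute risk reduction, and number needed to treat."""
    arr = risk_control - risk_treatment  # absolute risk reduction
    rrr = arr / risk_control             # relative risk reduction
    nnt = 1 / arr                        # number needed to treat
    return rrr, arr, nnt

# The same "50% relative risk reduction"...
low_risk = effect_measures(0.02, 0.01)   # baseline risk 2% -> 1%
high_risk = effect_measures(0.50, 0.25)  # baseline risk 50% -> 25%
# ...hides a 25-fold difference in absolute benefit:
# low-risk patients:  ARR = 1 percentage point,  NNT = 100
# high-risk patients: ARR = 25 percentage points, NNT = 4
```

Reporting only the relative figure makes the low-risk scenario look exactly as impressive as the high-risk one, which is precisely the perception this guide warns against.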
7. Beware uneven emphasis on benefits and harms. Many trials neglect any reporting of harm, and even those that report harms almost never give event rates for both the treatment and control groups.
8. Wait for the overall results to emerge; do not rush. Today, information moves at light speed, and once results are out, many people rush to adopt them. We get a huge publicity splash with the first large trial, only to see radically different findings years later. Think Vioxx.
We should be cautious and canny in our reading of the trials literature. The above is a guide to help interpret findings in light of potential biases, whether they were introduced consciously or unconsciously.
1. Montori V, Ioannidis J, Jaeschke R, et al. Dealing with misleading presentations of clinical trial results. In: Guyatt G, Rennie D, Meade MO, Cook DJ, eds. Users’ Guides to the Medical Literature. 2nd ed. New York, NY: McGraw-Hill Medical; 2008:301-315.