How to Interpret the Results of Clinical Trials
BOSTON—The interpretation of clinical trial results can stray from the data in many ways. Creating spin (ie, stressing an experimental treatment’s advantages) may or may not be the intention of the researchers or of people who write press releases, but clinicians evaluating the results should not be distracted from the key characteristics of a meaningful trial. They can use several strategies to keep the facts in focus, according to a researcher.
“Here are some words that should put you on alert: ‘revolutionary,’ ‘groundbreaking,’ and ‘first-line.’ It is time to be cautious when you are hearing the spin and the results at the same time,” said Elizabeth W. Loder, MD, MPH, Professor of Neurology at Harvard Medical School in Boston. At the 59th Annual Scientific Meeting of the American Headache Society, Dr. Loder spoke about migraine prevention trials, but she allowed that her remarks are relevant to any clinical trial.
Guidelines Aim to Increase Objectivity
The potential for overinterpretation, misinterpretation, or misleading interpretation of trial results was greatly reduced in 2005, when the International Committee of Medical Journal Editors agreed that trials accepted for publication should be registered, with their methodology defined, before study initiation. Establishing the trial design and primary end points in advance makes selective reporting and data manipulation more difficult. The approach, however, does not eliminate the potential for spin, said Dr. Loder. “The trial registrations on sites like ClinicalTrials.gov are easy to find, and it is worth looking back to compare what was registered to what was reported. There can be some surprises,” Dr. Loder explained.
One potential surprise may be a discrepancy between the prespecified outcomes and the outcomes that the researchers stress at the conclusion of the study. The peer-review process of a high-quality journal limits claims based on secondary outcomes, but press releases do not have similar constraints. In addition, favorable reporting on outcomes that did not appear in the trial registration should arouse suspicion. “It is fair to include data on outcomes that were not prespecified, but they should be flagged. These are hypothesis-generating and should not be given the same weight as those prespecified,” Dr. Loder explained.
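The check Dr. Loder recommends, comparing the outcomes emphasized in a report against those listed in the registration, amounts to a simple set comparison. The sketch below is a minimal illustration only; the outcome names are invented, and matching outcomes by normalized strings is an assumption, since real registrations such as those on ClinicalTrials.gov describe outcomes in richer structured fields.

```python
def flag_unregistered_outcomes(registered, reported):
    """Return reported outcomes that do not appear in the trial registration.

    Both arguments are lists of outcome descriptions as plain strings
    (an assumption for illustration; real registry entries are structured).
    """
    # Normalize case and whitespace so trivial formatting differences
    # do not hide a genuine discrepancy.
    prespecified = {o.strip().lower() for o in registered}
    return [o for o in reported if o.strip().lower() not in prespecified]


# Hypothetical example: outcomes listed in a registration vs. outcomes
# highlighted in a published report or press release.
registered = ["Change in monthly migraine days", "50% responder rate"]
reported = ["Change in monthly migraine days", "Quality-of-life score"]

print(flag_unregistered_outcomes(registered, reported))
# ['Quality-of-life score']
```

A flagged outcome is not necessarily illegitimate; as the quote above notes, it is hypothesis-generating and should simply carry less weight than a prespecified end point.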
Guidelines to improve the objectivity of data gathered and reported for trials are growing increasingly rigorous, according to Dr. Loder. For headache prevention trials, the International Headache Society has issued specific recommendations about trial conduct and the measurement of end points. Although Dr. Loder conceded that strict constraints may make reports of trial results formulaic or tedious, the consistency of the formula, which progresses from an introduction through methods, results, discussion, and conclusions, makes the findings easier to interpret and to place into context.
Data Should Guide Interpretation of Results
A paper’s discussion section may cloud the reader’s understanding of the trial’s findings, Dr. Loder cautioned. In a properly reported study, the results section confines itself to the facts. In the discussion section, interpretation of the facts varies with perspective, according to Dr. Loder. The authors’ perception of relative benefit following a favorable outcome or of the burden of an adverse event is subjective. The potential for intentional or unintentional spin is substantial.
“Examples of spin include focusing on an outcome [that] the trial was not designed to study, focusing on subgroups rather than [on] the overall population, and downplaying adverse safety data,” explained Dr. Loder. Dr. Loder cited several studies that compared reader reaction to abstracts with and without spin. The studies showed that spin was persuasive. Moreover, Dr. Loder noted that spin in abstracts is typically passed on in press releases, news stories, and other accounts of the studies.
One strategy for remaining circumspect about new data is to consult one of many watchdog organizations that monitor clinical data and evaluate data collection and analysis. One such organization is HealthNewsReview.org, which has an editorial team that routinely critiques claims made about drugs, devices, vitamins, and surgical procedures. According to Dr. Loder, the website has examined migraine therapies and provided a perspective that was fully independent of the trials’ sponsors and authors, and sometimes at odds with the prevailing view.
Pure objectivity may not appeal to those who want to draw attention to their research, and spin can be hard to resist for researchers seeking an engaging narrative. Whether or not those who focus on the most favorable findings of a trial are conscious of their disservice to scientific inquiry, spin has been found repeatedly in systematic reviews of study data. Dr. Loder cited one study that found spin in 47% of 498 press releases on scientific articles.
“There were various types of spin, but 19% of the press releases failed to acknowledge that the primary end point was not statistically significant,” Dr. Loder noted. When abstracts that provided the basis for the press releases were analyzed, 40% were found to contain spin.