As the rigor of COVID-19 research comes under increasing scrutiny, a deep dive into contemporary trials of invasive cardiovascular interventions finds intricate ties with industry and the art of spin on full display.
After examining 216 randomized, controlled trials published in the past decade, researchers found that more than half (53.2%) were commercially funded. In 18.3% of these trials, the sponsor was involved with the trial conduct and reporting.
Commercially sponsored trials were significantly more likely to report results that favored the experimental therapy than trials without commercial sponsorship (64.3% vs. 48.5%; P = .02).
The association remained statistically significant after adjustment for differences in trial characteristics (exponentiated regression coefficient, 2.80; 95% confidence interval, 1.09-7.18; P = .03), the authors reported.
“To make this clear, this is not an attack on industry-sponsored trials,” study author and cardiac surgeon Mario Gaudino, MD, of New York–Presbyterian and Weill Cornell Medical Center, New York, said in an interview. “Because industry has more money, they have the best trialists, the best research organization. So they generally do a pretty good trial; they’re larger, they have a higher Fragility Index, which means they’re more solid.
“And, most importantly, more than half of the trials were sponsored by industry,” he said. “So without industry, there wouldn’t be half the research in that 10-year period we explored.”
Previous research in this and other fields has shown that trials supported by for-profit organizations are more likely to report positive findings. The explanations often focus on bias and differential quality in how the trials were designed and reported.
In the present analysis, however, the authors found no difference between trials with and without industry funding in terms of estimated treatment effect, length of follow-up, use of composite or clinically significant outcomes, or outcome modification, compared with the published protocol.
Part of the explanation may be that industry-sponsored trials more often used a noninferiority design (26.1% vs. 14.9%) and had a higher loss of patients to follow-up (median, 1.0% vs. 0.1%), Dr. Gaudino said. “But I think more, in general, it’s not so much a difference in the measurable characteristics of the trial. It’s the selection of the sites that participate, the patient population that is targeted that makes the trial very likely to get the result that industry would like to see.”
“Just think of the differences in the transcatheter MitraClip results between MITRA-FR and COAPT – basically they were related to the fact they enrolled different patients,” he said.
The analysis included 216 randomized, controlled trials of coronary, vascular, and structural interventional cardiology procedures and of cardiac and vascular surgery, published from January 2008 to May 31, 2019. Most were multicenter trials (78.7%); 58% originated in Europe, 12% in North America, and 10.6% in Asia.
One in six trials (16.2%) were not prospectively registered before the start of enrollment, and at least one major discrepancy existed between the registered and published primary outcome in 38% of registered trials.
“If you don’t register the trial then you can make all the changes you want to the protocol up until the moment you publish,” Dr. Gaudino observed. “There really is no rational justification for not registering a trial.”
Overall, the trials were not particularly robust, he noted. Among the 62 trials in which the Fragility Index could be calculated, a median of only five patients experiencing a different outcome would have changed a commercially sponsored trial's statistically significant result to nonsignificant. For noncommercially sponsored trials, that median was 4.5, and in four trials a change in outcome for just one patient would have flipped statistical significance.
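The Fragility Index cited here counts the minimum number of patients whose outcome would have to change to turn a statistically significant trial result nonsignificant. A minimal sketch of the conventional calculation, assuming the standard approach (Fisher's exact test on the 2x2 outcome table, flipping non-events to events in the arm with fewer events; the function name and example counts below are hypothetical, not from the study):

```python
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Minimum number of patients whose outcome must flip (non-event
    to event, in the arm with fewer events) before a significant
    Fisher's exact test on the 2x2 table becomes nonsignificant."""
    # Work with group A as the arm with fewer events.
    if events_a > events_b:
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    _, p = fisher_exact([[events_a, n_a - events_a],
                         [events_b, n_b - events_b]])
    if p >= alpha:
        return 0  # already nonsignificant
    flips = 0
    while p < alpha and events_a < n_a:
        events_a += 1  # convert one non-event into an event
        flips += 1
        _, p = fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])
    return flips
```

A small index means the finding hinges on the outcomes of a handful of patients, which is why a median of five (or 4.5) in these trials is read as a sign of limited robustness.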
“This finding is concerning given the substantial role that [randomized, controlled trials] results play in federal device approvals, payer criteria, and clinical consensus guidelines,” the authors wrote.
The authors also looked for interpretation bias in the trials. In the 84 trials with nonsignificant differences in the primary outcomes, 65.5% contained spin, such as focusing on statistically significant secondary outcomes or interpreting nonsignificant primary outcomes as showing treatment equivalence or comparable effectiveness. Spin was present in 80.6% of the trials with commercial sponsorship and in 54.2% without (P = .02) – a finding that remained significant after trial differences were controlled for (beta, 4.64; 95% CI, 1.05-20.54; P = .04).