The conundrum of cost-effectiveness

Drs. Udeh and Udeh attempt to highlight the “straw man” nature of my argument and the inaccuracies of my piece, but they ultimately disprove none of my claims.

Regarding vertebroplasty—a procedure that never worked better than a sham one—the authors do not fault the cost-effectiveness analysis for getting it wrong, but rather early clinical studies that provided false confidence. Yet, as a matter of fact, both were wrong. Cost-effectiveness analyses cannot be excused because they are based on faulty assumptions or poor data. This is precisely the reason they should be faulted. If incorrect cost-effectiveness analyses cannot be blamed because clinical data are flawed, can incorrect clinical research blame its shortcomings on promising preclinical data?

Cost-effectiveness analyses continue to be published regarding interventions that lack even a single randomized controlled trial showing efficacy, despite the authors’ assertion that no one would do that. Favorable cost profiles have been found for diverse, unproven interventions such as transarterial chemoembolization,1 surgical laminectomy,2 and rosiglitazone (Avandia).3 Udeh and Udeh hold an untenable position, arguing that such analyses are ridiculous and would not be performed (such as a study of antibiotics to treat the common cold), while dismissing counterexamples (vertebroplasty), contending they are moot. The fact is that flawed cost-effectiveness studies are performed. They are often in error, and they distort our discussions of funding and approval.

Regarding exemestane (Aromasin), the authors miss the distinction between disease-specific death and overall mortality. Often, therapies lower the death rate from a particular disease but do not increase the overall survival rate. Typically, in these situations, we attribute the discrepancy to a lack of power, but an alternative hypothesis is that some death rates (eg, from cancer) decrease, while others (eg, from cardiovascular disease) increase, resulting in no net benefit. My comment regarding primary prevention studies is that unless the overall mortality rate is improved, one may continue to wonder if this phenomenon—trading death—is occurring. As a result, cost-effectiveness analyses performed on these data may reach false conclusions. The authors’ fatalistic interpretation of my comments is not what I intended and is itself much more like a straw man.

Lastly, some of the difficulties in reconciling costs from randomized trials and actual clinical practice would be improved if clinical trials included participants who were more like the patients who would ultimately use the therapy. Such pragmatic trials would be a boon to the validity of research science4 and the accuracy of cost-effectiveness studies. I doubt that decision analytic modeling alone can overcome the problems I highlight. Two decades ago, we learned—from cost-effectiveness studies of autologous bone marrow transplantation in breast cancer—that decision analysis could not overcome major deficits in evidence.5 Autologous bone marrow transplantation is cost-effective—well, assuming it works.

We need cost-effectiveness studies to help us prioritize among countless emerging medical practices. However, we also need those analyses to be accurate. The examples I highlighted show common ways we err. The two rules I propose in my original commentary6 are not obvious to all, and they continue to be ignored. As such, cost-effectiveness still resembles comparing apples and oranges.
