Chiedozie I. Udeh, MD, MHLTHEC
Department of General Anesthesiology, Cleveland Clinic
Belinda L. Udeh, PhD, MPH
Department of Outcomes Research, Cleveland Clinic
Address: Belinda L. Udeh, PhD, MPH, Department of Outcomes Research, P77, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail: Udehb@ccf.org
Health care delivery is perennially resource-constrained, perhaps never more so than in these times of severe economic distress. Yet the introduction of new medical technologies and therapies (some of dubious benefit) continues unabated. Consequently, the search for how best to deploy limited health care resources continues to engender much interest.
In that light, the recent commentary on cost-effectiveness studies by Dr. Vinay Prasad in the June 2012 issue of this journal,1 which attempted to highlight some of the pitfalls of such studies, is commendable. Unfortunately, the comments, which focus largely on the methodology of cost-effectiveness studies, amount to little more than a straw-man argument. To the less well-informed reader, the commentary might appear to be an indictment of cost-effectiveness research.
It is thus crucial to correct those potentially misleading comments and to point out that recommendations for the proper conduct of cost-effectiveness studies were published as far back as 1996 by the Panel on Cost-effectiveness in Health and Medicine.2 This panel was convened by the US Public Health Service and included members with demonstrated expertise in cost-effectiveness analysis, clinical medicine, ethics, and health outcomes measurement. The recommendations addressed all the issues raised in the commentary and more, and are well worth a read, as they enable readers to understand how to conduct these studies, how to judge the quality of these studies, and how the findings might be applied.2 Nonetheless, it is worthwhile to address the logical inaccuracies in the specific examples in the commentary.
IF A TREATMENT IS INEFFECTIVE, IT IS COST-INEFFECTIVE TOO
First, the author discusses the case of vertebroplasty for osteoporotic vertebral fractures. Vertebroplasty had previously been estimated to be cost-effective relative to 12 months of medical therapy. However, a subsequent clinical study found it was no better than a sham procedure, thus setting up the uncomfortable possibility that a sham procedure is more cost-effective than both vertebroplasty and medical therapy.
This can hardly be blamed on the earlier cost-effectiveness study. If a given therapy does not achieve the desired outcomes for the condition being treated, then that therapy ought not to be used at all for that condition. In that context, a cost-effectiveness study is moot from the outset, as the therapy of interest is simply not effective. To take a broader analogy, why would anyone conduct a cost-effectiveness study of antibiotics for the treatment of the common cold? Indeed, the vertebroplasty example merely highlights the limitations of the original clinical studies that erroneously deemed the procedure effective for osteoporotic vertebral fractures.
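This point can be stated in the standard notation of such analyses. A cost-effectiveness comparison reports an incremental cost-effectiveness ratio (ICER); as a sketch, with generic labels rather than figures from the original studies:

\[
\mathrm{ICER} = \frac{C_{\text{new therapy}} - C_{\text{comparator}}}{E_{\text{new therapy}} - E_{\text{comparator}}}
\]

where \(C\) denotes cost and \(E\) denotes effectiveness (for example, life-years or quality-adjusted life-years gained). If the incremental effectiveness in the denominator turns out to be zero or negative, as the sham-controlled trial implied for vertebroplasty, the ratio is undefined or the therapy is simply dominated by its comparator; that is the formal sense in which the analysis is rendered moot.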
The possibility that a sham procedure might be more cost-effective than vertebroplasty or medical therapy is unsettling only to the extent that one has a pro-intervention bias for all diseases. Perhaps the lesson is that none of the current therapies for this condition is useful, and that until a truly beneficial therapy emerges, patients may be best served by doing nothing. To paraphrase one of the author's rather obvious recommendations, knowing that a therapy is efficacious (toward achieving our desired end point, whatever that may be) should be a prerequisite to adopting it into clinical practice, let alone determining its cost-effectiveness.
Furthermore, cost-effectiveness studies by their nature cannot and should not be static but need to be adjusted over time. For all analyses, it is anticipated that future amendments will be required to adjust for changes in effectiveness (including the disproving of efficacy), changes in relevant strategies available, changes in cost, and changes in population parameters.
WE ALL DIE EVENTUALLY
Second, using the example of exemestane (Aromasin) for primary prevention of breast cancer in postmenopausal women, the author raises issues about how to determine the net benefit of preventive therapies in terms of deaths avoided or life-years gained. The particular concern is the extent to which the benefit of deaths avoided by exemestane is negated by deaths from other, non-breast-cancer-related diseases. The implication is that using exemestane to prevent death from breast cancer may be pointless, as those women would eventually die of other causes anyway.
But is that not the case for every preventive or therapeutic intervention? Curing bacterial pneumonia with antibiotics saves patients who will nonetheless eventually die of some other cause. Does this make the use of antibiotics for bacterial pneumonia cost-ineffective? No. The point is that life ultimately ends in death, but along the spectrum of life we use various interventions to prolong life and improve its quality for as long as is meaningfully possible, whether by preventing some diseases or by treating others.
Thus, the implicit assumption ab initio is that prevention or treatment of any particular disease is intrinsically a desirable proposition on its own merits and deserving of some expense of resources. As such, for any given disease, the cost-effectiveness of preventive or therapeutic measures must necessarily be confined to deaths avoided and life-years gained (or other such suitable measures) that are directly attributable to that disease process or to side effects of the particular therapy. Expanding beyond that scope would lead to the absurd conclusion that no intervention could ever be cost-effective, because we all eventually die.
The author of the commentary replies.