Inside the operating room—balancing the risks and benefits of new surgical procedures
How should we introduce and evaluate new procedures?
By Joel D. Cooper, MD
Time magazine published an article in 1995 titled “Are Surgeons Too Creative?” that asked whether operations should be regulated the way medications are.1 The piece featured two patients. One, a patient with emphysema who underwent lung volume reduction surgery at our institution during the early days of this procedure, had a good outcome. The other was a neurosurgical patient who had a bad outcome.
The public is somewhat sympathetic to this article’s premise, which can be viewed as a call to require a similar level of evidence for surgical procedures as for new drugs. This sympathy arises from the expense of new technologies, pressure from payors to control costs and increase profits, hospital budget constraints, and the reality of increasingly well-informed patients.
Yet there are distinct differences between drugs and surgery. A new drug does not change over time. A new drug is associated with a variable biologic response whose assessment often requires large numbers of patients and considerable follow-up. And a new drug may manifest unforeseen late side effects and toxicities far removed from the time of initial use. In contrast, none of these characteristics applies to surgical procedures. A surgical intervention changes over time as the technique and experience evolve and as refinements are made in patient selection and in pre- and postoperative management. With this evolution comes a change in risk over time. Patient selection for surgery is as much an art as a science; each patient requires assessment of both the potential benefits and risks of the procedure, which argues against offering an operation by prescription. Moreover, with surgery, the facilities and the operator’s skill and experience levels vary from one center to another.
INTRODUCTION OF NEW PROCEDURES: COVERAGE VS VALIDATION
Introduction of a new surgical procedure depends on the nature of the procedure and the other interventions that may be available for the condition. In assessing how new procedures should be introduced, I believe we need to distinguish between coverage and validation. Coverage—ie, payment for the procedure—is an economic issue, whereas validation involves an ethical and scientific evaluation of the role of the procedure.
Coverage by an insurer should rest on at least a theoretical justification and a presumption of benefit. For instance, the rationale behind a heart transplant for a patient with a failing heart is obvious. Coverage generally requires preliminary evidence of efficacy, possibly in an animal model, although no animal models may exist for some conditions. Most important, a different standard for initial coverage should apply when no alternative therapy exists for a condition that is severe, debilitating, and potentially life-threatening; if a new procedure treats a condition for which a standard therapy already exists, the bar for coverage must be higher. Finally, coverage in all cases should require ongoing reassessment of the procedure.
In contrast, validation is a scientific analysis of results over time, including long-term results, and can be accomplished by well-controlled case series, particularly if the benefit is both frequent and substantial and especially if no alternative therapy exists. Randomized clinical trials are the gold standard when they are appropriate, but they are not always applicable.
A 1996 study by Majeed et al2 provides a good example of validation-oriented surgical research. In this blinded trial, 200 patients scheduled for cholecystectomy were randomized to either laparoscopic or open (small-incision) procedures. The study found no differences between the groups in hospital stay, postprocedure pain, or recovery. In an accompanying commentary,3 Lancet editor Richard Horton praised the design and conduct of the study, noting that it was very much the exception in surgical research, which he argued was preoccupied with case series. Horton offered the following speculation about this preoccupation:
Perhaps many surgeons do not see randomised trials as a feasible strategy to resolve questions about surgical management. Cynics might even claim that the personal attributes that go to make a successful surgeon differ from those needed for collaborative multicentre research.3
IS THE ‘SURGICAL SCIENTIST’ AN OXYMORON?
Barnaby Reeves, writing in The Lancet 3 years later, offered a more diplomatic take on the difficulty of evaluating surgical procedures:
What makes a surgical technique new is not always easy to define because surgical procedures generally evolve in small steps, which makes it difficult to decide when a procedure has changed sufficiently to justify formal evaluation.4
Reeves went on to argue that evaluating a technique too early may preclude acceptance, since it may not yet have evolved sufficiently and surgeons may not have mastered it; conversely, evaluating it too late may render the exercise moot, since the technique may already be established and withholding it may be deemed unethical. He also noted that surgical evaluation is complicated by the possibility that some surgeons have better mastery of—and therefore better outcomes with—one procedure while other surgeons have better mastery and outcomes with an alternative procedure.4
These concerns were well captured by the late Dr. Judah Folkman, whom I once heard say, “When a basic scientist is informed that another investigator cannot reproduce his work, it has a chilling effect; for the surgeon, however, it is a source of pride.”