Public reporting and pay-for-performance programs in perioperative medicine

Are they meeting their goals?
ABSTRACT

Public reporting and pay-for-performance reimbursement are two strategies designed to stimulate hospital quality improvement. Information about the quality of hospital care (including surgical volumes and staffing, process-based measures, and mortality and other outcomes) is compiled on various Web sites, giving the public a means of comparing providers. While public reporting has been shown to foster quality-improvement activities by hospitals, its effects on clinical outcomes are less certain. Meanwhile, consumers’ awareness and use of publicly available hospital and provider quality data have been low but appear to be increasing.

KEY POINTS

  • Public reporting programs have expanded in recent years, driven by national policy imperatives to improve safety, increased demands for transparency, patient “consumerism,” and the growth of information technology.
  • Hospital-based pay-for-performance programs have had only a minor impact on quality so far, possibly because financial incentives have been small and much of the programs’ potential benefit may be preempted by existing public reporting efforts.
  • These programs have considerable potential to accelerate improvement in quality but are limited by a need for more-nuanced process measures and better risk-adjustment methods.
  • These programs may lead to unintended consequences such as misuse or overuse of measured services, “cherry-picking” of low-risk patients, or misclassification of providers.
  • Continued growth of the Internet and social-networking sites will likely enhance and change the way patients use and share information about the quality of health care.

CONTROVERSIES AND CHALLENGES

Many issues continue to surround public reporting and pay-for-performance programs:

  • Are the measures used to evaluate health care systems suitable and evidence-based? Do they truly reflect the quality of care that providers are giving?
  • Do the programs encourage “teaching to the test” rather than stimulating real and comprehensive improvement? Do they make the system prone to misuse or overuse of measured services?
  • How much of the variation in hospital outcomes can be explained by the current process-of-care measures?
  • Should quality be measured by outcomes or processes? Outcomes matter more to patients, but they require risk adjustment to ensure valid comparisons, and risk adjustment can be difficult and expensive to conduct.
  • How much is chance a factor in apparent performance differences between hospitals?
  • How much is patient selection a factor? Might public reporting lead to “cherry-picking” of low-risk patients and thereby reduce access to care for other patients?

Unidirectional measures can lead to misuse, overuse

In 2003, the Infectious Diseases Society of America updated its guidelines on community-acquired pneumonia to recommend that patients receive antibiotics within 4 hours of hospital admission. This recommendation was widely adopted as an incentive-linked performance measure by CMS and other third-party payers. Kanwar et al studied the impact of this guideline-based incentive in a pre/post study at one large teaching hospital.20 They found that significantly more patients received antibiotics in a timely fashion in 2005 (after publication of the guidelines) than in 2003 (before the guidelines), but that almost one-third of patients receiving antibiotics in 2005 had normal chest radiographs and thus were not appropriate candidates for therapy. Moreover, significantly fewer patients in 2005 had a final diagnosis of pneumonia at discharge, and there was no difference between the two periods in rates of mortality or ICU transfer. The researchers concluded that linking the quality indicator of early antibiotic use to financial incentives may lead to misdiagnosis of pneumonia and inappropriate antibiotic use.

Of course, antibiotic timing is not the only quality measure subject to overuse or misuse; other measures pose similar risks, including prophylaxis for deep vein thrombosis, glycemic control measures, and target immunization rates.

More-nuanced measures needed

We must also consider how well reported quality measures actually reflect our objectives. For example, an evaluation of 962 hospitals’ performance in managing acute MI found that the publicly reported core process measures for acute MI (beta-blocker and aspirin at admission and discharge, ACE inhibitor at discharge, smoking-cessation counseling, timely reperfusion) together explained only 6% of the variance among the hospitals in risk-adjusted 30-day mortality.21 This underscores how complicated the factors affecting mortality are, and how existing process measures have only begun to scratch the surface.

How much of a role does chance play?

Another issue is the role of chance and our limited power to detect real differences in outcomes, as illustrated by an analysis by Dimick et al of all discharges from a nationally representative sample of nearly 1,000 hospitals.22 The objective was to determine whether the seven operations for which mortality is advocated as a quality indicator by the Agency for Healthcare Research and Quality are performed often enough to reliably identify hospitals with increased mortality rates. The researchers found that only for one of the seven procedures—CABG—is there sufficient caseload over a 3-year period at the majority of US hospitals to accurately detect a mortality rate twice the national average.
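The power problem that Dimick et al describe can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative (it is not the authors’ methodology): it uses a normal-approximation one-sided test, an assumed 4% baseline mortality rate, and hypothetical 3-year caseloads to show how quickly the ability to detect a doubled mortality rate erodes as caseload shrinks.

```python
# Illustrative power calculation (normal approximation), not the method
# used by Dimick et al. The 4% baseline mortality rate and the caseload
# figures below are hypothetical values chosen for illustration.
from math import sqrt, erf


def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def power_to_detect_doubling(n_cases: int,
                             baseline_rate: float = 0.04,
                             alpha_z: float = 1.6449) -> float:
    """Power of a one-sided z-test (5% significance) to flag a hospital
    whose true mortality is double the baseline, given n_cases."""
    p0, p1 = baseline_rate, 2.0 * baseline_rate
    # A hospital is flagged "worse" only if its observed rate exceeds
    # this threshold under the null (baseline) distribution:
    threshold = p0 + alpha_z * sqrt(p0 * (1.0 - p0) / n_cases)
    # Probability of exceeding the threshold when the true rate is p1:
    se1 = sqrt(p1 * (1.0 - p1) / n_cases)
    return 1.0 - normal_cdf((threshold - p1) / se1)


if __name__ == "__main__":
    for n in (50, 150, 450):  # hypothetical 3-year caseloads
        print(f"{n:4d} cases -> power {power_to_detect_doubling(n):.2f}")
```

With these assumed numbers, a hospital performing 50 cases over 3 years has well under a 50% chance of being flagged even when its true mortality is twice the national average; only the high-volume caseloads typical of procedures like CABG give adequate power.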

Although CMS is highly committed to public reporting, the comparative mortality data available on its Hospital Compare Web site are not very useful for driving consumer choice or motivating hospitals to improve. For example, of the nearly 4,500 US hospitals that reported data on 30-day mortality from MI, only 17 hospitals were considered to be better than the national average and only 7 were considered worse than the national average.4

CASE REVISITED: LESSONS FROM THE UMASS MEMORIAL EXPERIENCE

Returning to our case study, what can the UMass Memorial experience teach us, and how well does it reflect the literature about the usefulness of public reporting?

Did public reporting accelerate quality improvement efforts? Yes. Reporting led to the suspension of cardiac surgery and substantive reorganization, which is consistent with the literature.

Was the mortality reduction typical? No. An optimist would say that the drastic actions spurred by the media coverage had strong effects. A skeptic might say that perhaps UMass Memorial did some “cherry-picking” of patients, or that they got better at coding procedures in a way that reflected more favorably on the hospital.

Were the declines in patient volumes predictable? No. So far, the data suggest that public reporting has its greatest effects on providers rather than on institutions. This may change, however, with the introduction of tiered copayments, whereby patients are asked to pay more if they get their care from lower-rated institutions.

Would financial incentives have accelerated improvement? It is too early to tell. The evidence for pay-for-performance programs is limited, and the benefits demonstrated so far have been modest. But in many ways the alternative is worse: our current system of financing and paying for hospital care offers no financial incentives to hospitals for investing in the personnel or systems required to achieve better outcomes—and instead rewards (through supplemental payments) adverse outcomes.

Did prospective patients have a right to know? Despite the limitations of public reporting, one of the most compelling arguments in its favor is that patients at UMass Memorial had the right to know about the program’s outcomes. This alone may ultimately justify the expense and efforts involved. Transparency and accountability are core values of open democratic societies, and US society relies on public reporting in many other realms: the National Highway Traffic Safety Administration publicizes crash test ratings, the Securities and Exchange Commission enforces public reporting by financial institutions, and the Federal Aviation Administration reports on airline safety, timeliness of flights, and lost baggage rates.

FUTURE DIRECTIONS

In the future, we can expect more measurement and reporting of health care factors that patients care most about, such as clinical outcomes and the patient experience. It is likely that public reporting and pay-for-performance programs will address a broader range of conditions and comprise a larger number of measures. CMS has outlined plans to increase the number of publicly reported measures to more than 70 by 2010 and more than 100 by 2011. My hope is that this expansion of data, along with improved data synthesis and presentation, will foster greater use of publicly reported data. Further, the continued evolution of the Web and social networking sites is very likely to enhance public awareness of hospital performance and change the ways in which patients use these data.