Public reporting and pay-for-performance programs in perioperative medicine
ABSTRACT
Public reporting and pay-for-performance reimbursement are two strategies designed to stimulate hospital quality improvement. Information about the quality of hospital care (including surgical volumes and staffing, process-based measures, and mortality and other outcomes) is compiled on various Web sites, giving the public the means to compare providers. While public reporting has been shown to foster quality-improvement activities by hospitals, its effects on clinical outcomes are less certain. Likewise, consumers’ awareness and use of publicly available hospital and provider quality data have been low, although both appear to be increasing.
KEY POINTS
- Public reporting programs have expanded in recent years, driven by national policy imperatives to improve safety, increased demands for transparency, patient “consumerism,” and the growth of information technology.
- Hospital-based pay-for-performance programs have had only a minor impact on quality so far, possibly because financial incentives have been small and much of the programs’ potential benefit may be preempted by existing public reporting efforts.
- These programs have considerable potential to accelerate improvement in quality but are limited by a need for more-nuanced process measures and better risk-adjustment methods.
- These programs may lead to unintended consequences such as misuse or overuse of measured services, “cherry-picking” of low-risk patients, or misclassification of providers.
- Continued growth of the Internet and social-networking sites will likely enhance and change the way patients use and share information about the quality of health care.
IS THERE EVIDENCE OF BENEFIT?
A consistent effect in spurring quality-improvement efforts
Nearly a dozen published studies have evaluated whether public reporting stimulates quality-improvement activities, and the results have shown fairly consistently that it does. A 2003 study by Hibbard et al is representative.7 This survey-based investigation compared the number of quality-improvement activities in cardiac and obstetric care undertaken by 24 Wisconsin hospitals included in an existing public reporting system with the number undertaken by 98 other Wisconsin hospitals that received either a private report on their own quality performance (without the information being made public) or no quality report at all. The hospitals that participated in public reporting engaged in significantly more quality-improvement activities in both of the clinical areas assessed than did the hospitals receiving private reporting or no reporting.
A mixed effect on patient outcomes
In contrast, the data on whether public reporting improves patient outcomes have so far been mixed. A 2008 systematic review of the literature identified 11 studies that addressed this issue: five studies found that public reporting had a positive effect on patient outcomes, while six studies demonstrated a negative effect or no effect.8 Unfortunately, the methodological quality of most studies was poor: most were before-and-after comparisons without controls.
One of the positive studies in this review examined the effects of New York State’s pioneering institution of provider-specific mortality reporting (provider profiling) for coronary artery bypass grafting (CABG) in 1989.9 The analysis found that between 1987 and 1992 (during which time provider profiling was instituted), unadjusted 30-day mortality rates following bypass surgery declined significantly more among New York Medicare patients (a 33% reduction) than among Medicare patients nationwide (a 19% reduction) (P < .001).
In contrast, a time-series study from Cleveland Health Quality Choice (CHQC)—an early and innovative public reporting program—exemplifies a case in which public reporting of hospital performance had no discernible effect.10 The study examined trends in 30-day mortality across a range of conditions over a 6-year period for 30 hospitals in the Cleveland area participating in a public reporting system. It found that the hospitals that started out in the worst-performing groups (based on baseline mortality rates) showed no significant change in mortality over time.
DOES PUBLIC REPORTING AFFECT PATIENT CHOICES?
How a high-profile bypass patient chooses a hospital
When former President Bill Clinton developed chest pain and shortness of breath in 2004, he was seen at a small community hospital in Westchester County, N.Y., and then transferred to New York-Presbyterian Hospital/Columbia University Medical Center for bypass surgery.11 Although one would think President Clinton would have chosen the best hospital for CABG in New York, Presbyterian/Columbia’s risk-adjusted mortality rate for CABG was actually about twice the average for New York hospitals and one of the worst in the state, according to the most recent “report card” for New York hospitals available at the time.12
Why did President Clinton choose the hospital he did? Chances are that he, like most other patients, did not base his decision on publicly reported data. His choice probably was heavily influenced by the normal referral patterns of the community hospital where he was first seen.
Surveys show low patient use of data on quality...
The question raised by President Clinton’s case has been formally studied. In 1996, Schneider and Epstein surveyed patients who had recently undergone CABG in Pennsylvania (where surgeon- and hospital-specific mortality rates for cardiac surgery are publicly available) and found that fewer than 1% of patients said that provider ratings had a moderate or major impact on their choice of provider.13
The Kaiser Family Foundation regularly surveys the public about its knowledge and use of publicly available hospital comparison data. In the latest Kaiser survey, conducted in 2008,14 41% of respondents said they believe there are “big differences” in quality among their local hospitals, yet 59% said they would choose a hospital that is familiar to them rather than a higher-rated facility. These findings may be explained, in part, by a lack of awareness that data on hospital quality are available: only 7% of survey participants said they had seen and used information comparing the quality of hospitals to make health care decisions in the prior year, and only 6% said they had seen and used information comparing physicians.
...But a trend toward greater acceptance
Although consumers’ use of publicly reported quality data remains low, their recognition of the value of such data has grown over time. Kaiser has conducted similar public surveys dating back to 1996, and the period from 1996 to 2008 saw a substantial decrease (from 72% to 59%) in the percentage of Americans who would choose a hospital based on familiarity more than on quality ratings. Similarly, the percentage of Americans who would prefer a surgeon with high quality ratings over a surgeon who has treated friends or family more than doubled from 1996 (20%) to 2008 (47%).14
What effect on market share?
Few studies have examined the effect of public reporting on hospital market share.
Schneider and Epstein surveyed cardiologists in Pennsylvania in 1995 and found that 87% of them said the state’s public reporting of surgeon- and hospital-specific mortality rates for CABG had no influence or minimal influence on their referral recommendations.15
Similarly, a review of New York State’s public reporting system for CABG 15 years after its launch found that hospital performance was not associated with subsequent changes in market share, even among the hospitals with the highest mortality rate in a given year.16 Interestingly, however, the review also showed that surgeons in the bottom performance quartile were four times as likely as other surgeons to leave practice in the year following a poor report, one of the most striking effects of provider profiling reported to date.
PAY-FOR-PERFORMANCE PROGRAMS
Evidence on the impact of pay-for-performance programs in the hospital setting is even more limited than that for public reporting.
Some evidence has come from the CMS/Premier Hospital Quality Incentive Demonstration, a pay-for-performance collaboration between the Centers for Medicare and Medicaid Services (CMS) and Premier, Inc., a nationwide alliance of hospitals that promotes best practices.17 Under the demonstration, hospitals that rank in the top quintile or top decile of performance receive a 1% or 2% Medicare payment bonus, respectively, in five clinical focus areas: cardiac surgery, hip and knee surgery, pneumonia, heart failure, and acute myocardial infarction (MI). Performance ratings are based primarily on process measures, along with a few clinical outcome measures. Results from the first 21 months of the demonstration showed consistent improvement in the hospitals’ composite quality scores in each of the five clinical areas.17
It is important to recognize, however, that this improvement occurred against the backdrop of broad national adoption of public reporting of hospital quality data, which makes it difficult to tease out how much of the improvement was truly attributable to pay-for-performance, especially in the absence of a control group.
To address this question, my colleagues and I evaluated adherence to quality measures over a 2-year period at 613 hospitals participating in a national public reporting initiative,18 including 207 hospitals that simultaneously took part in the CMS/Premier Hospital Quality Incentive Demonstration’s pay-for-performance program described above. We found that the hospitals participating in both public reporting and the pay-for-performance initiative achieved only modestly greater improvements in quality than did the hospitals engaged solely in public reporting; the difference amounted to only about a 1% improvement in process measures per year.
In another controlled study, Glickman et al compared quality improvement in the management of acute MI between 54 hospitals in a CMS pay-for-performance pilot project and 446 control hospitals without pay-for-performance incentives.19 The pay-for-performance hospitals achieved significantly greater improvement than control hospitals on two of six process-of-care measures (use of aspirin at discharge and smoking-cessation counseling) but not on the composite process-of-care measure, and the two groups did not differ significantly in improvement in in-hospital mortality.
Why have the effects of pay-for-performance initiatives so far been so limited? It may be that the bonuses are too small and that public reporting is already effective at stimulating quality improvement, so that the incremental benefit of adding financial incentives is small. In the case of my group’s study,18 another possible factor was that the hospitals’ baseline performance on the quality measures assessed was already high—approaching or exceeding 90% on 5 of the 10 measures—thereby limiting our power to detect differences between the groups.