
Examining the Utility of 30-day Readmission Rates and Hospital Profiling in the Veterans Health Administration

Journal of Hospital Medicine. 2019 May;14(5):266-271. Published online first February 20, 2019. doi: 10.12788/jhm.3155

BACKGROUND: The Veterans Health Administration (VA) reports hospital-specific 30-day risk-standardized readmission rates (RSRRs) using models derived from the Centers for Medicare and Medicaid Services (CMS).

OBJECTIVE: The aim of this study was to examine and describe the interfacility variability of 30-day RSRRs for acute myocardial infarction (AMI), heart failure (HF), and pneumonia in order to assess the measure's utility for VA quality improvement and hospital comparison.

RESEARCH DESIGN: A retrospective analysis of VA and Medicare claims data using one year (2012) and three years (2010-2012) of data, reflecting their respective uses for quality improvement and for hospital comparison.

SUBJECTS: This study included 3,571 patients hospitalized for AMI at 56 hospitals, 10,609 patients hospitalized for HF at 102 hospitals, and 10,191 patients hospitalized for pneumonia at 106 hospitals.

MEASURES: Hospital-specific 30-day RSRRs for AMI, HF, and pneumonia hospitalizations were calculated using hierarchical generalized linear models.
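For orientation, the CMS approach standardizes each hospital's readmission experience as the ratio of model-predicted to model-expected readmissions, multiplied by the national observed rate. The following is a minimal illustrative sketch of that standardization step in Python; the patient-level data are simulated and the hierarchical-model coefficients are assumed, so the names and values are hypothetical and do not reflect the study's actual models or data.

```python
import numpy as np
from scipy.special import expit  # inverse-logit link

rng = np.random.default_rng(0)

# --- Hypothetical inputs (illustrative only) ---------------------------------
# Patient-level linear predictors (x_i * beta) from the fixed-effect, case-mix
# portion of a hierarchical logistic model, grouped by hospital.
n_hospitals = 50
admissions = rng.integers(25, 120, size=n_hospitals)          # >=25 episodes each
risk_scores = [rng.normal(0.0, 0.6, size=n) for n in admissions]

mu = -1.4                                                      # average intercept
alpha = rng.normal(0.0, 0.15, size=n_hospitals)                # hospital random intercepts
observed = [rng.binomial(1, expit(mu + a + x)) for a, x in zip(alpha, risk_scores)]

# National observed 30-day readmission rate across all index admissions.
national_rate = np.concatenate(observed).mean()

# --- CMS-style risk standardization -------------------------------------------
rsrr = np.empty(n_hospitals)
for j in range(n_hospitals):
    # "Predicted": expected readmissions using this hospital's own intercept.
    predicted = expit(mu + alpha[j] + risk_scores[j]).sum()
    # "Expected": the same patients treated at an average hospital.
    expected = expit(mu + risk_scores[j]).sum()
    rsrr[j] = (predicted / expected) * national_rate

print(f"National rate: {national_rate:.3f}")
print(f"RSRR range across hospitals: {rsrr.min():.3f} to {rsrr.max():.3f}")
```

In this formulation, a hospital whose random intercept sits above the national average has predicted readmissions exceeding expected readmissions and therefore an RSRR above the national rate.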

RESULTS: Of 164 eligible VA hospitals, 56 (34%), 102 (62%), and 106 (64%) met CMS criteria for inclusion in the AMI, HF, and pneumonia cohorts, respectively. Using 2012 data, we found that two hospitals (2%) had HF RSRRs worse than the national average (based on 95% CIs), whereas no hospital demonstrated a worse-than-average RSRR for AMI or pneumonia. After increasing the number of facility admissions by combining three years of data, we found that four hospitals (range: 3.5%-5.3%) had RSRRs worse than the national average (based on 95% CIs) for all three conditions.

CONCLUSIONS: The CMS-derived 30-day readmission measure may not be useful for distinguishing VA interfacility performance or for driving quality improvement, given the low facility-level volume of such readmissions.

© 2019 Society of Hospital Medicine

DISCUSSION

We found that the CMS-derived 30-day risk-standardized readmission metric for AMI, HF, and pneumonia showed little variation among VA hospitals. Low institutional 30-day readmission volume appears to be a fundamental limitation, one that requires multiple years of data before the metric becomes clinically meaningful. As the largest integrated healthcare system in the United States, the VA relies upon, and makes large-scale programmatic decisions based on, such performance data. The inability to detect meaningful interhospital variation in a timely manner suggests that the CMS-derived 30-day RSRR may not be a sensitive metric for distinguishing facility performance or driving quality improvement initiatives within the VA.

First, we found it notable that among the 146 VA medical centers available for analysis,15 only between 38% and 77% of hospitals qualified for evaluation under the CMS-based participation criteria, which exclude institutions with fewer than 25 episodes per year. Although this low degree of qualification was most pronounced when using one year of data (range: 38%-72%), it did not improve dramatically when we combined three years of data (range: 52%-77%). These findings highlight the population and systems differences between the CMS and VA populations16 and further support the idea that CMS-derived models may not be optimized for use in the VA healthcare system.
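To make the participation criterion concrete, the short sketch below illustrates how a 25-episode minimum might be applied to a one-year versus a pooled three-year window. The admission counts are simulated, and the assumption that the threshold is applied to the pooled cohort when years are combined is ours, inferred from the higher qualification rates reported above; the resulting percentages are purely illustrative of the mechanism, not of the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated annual condition-specific admission counts for 146 hospitals,
# 2010-2012 (illustrative only; skewed toward low volumes).
annual_counts = rng.poisson(lam=20, size=(146, 3))

MIN_EPISODES = 25  # CMS participation threshold

# One-year evaluation: qualify on the single most recent year (2012).
qualifies_1yr = annual_counts[:, 2] >= MIN_EPISODES

# Three-year evaluation: assume the threshold applies to the pooled 2010-2012 cohort.
qualifies_3yr = annual_counts.sum(axis=1) >= MIN_EPISODES

print(f"Qualify with 1 year of data:  {qualifies_1yr.mean():.0%}")
print(f"Qualify with 3 years of data: {qualifies_3yr.mean():.0%}")
```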

Our findings are particularly relevant within the VA given the quarterly frequency with which these data are reported in the VA SAIL scorecard.2 The VA designed SAIL for internal benchmarking to spotlight the successful strategies of top-performing institutions and to promote high-quality, value-based care. Our analysis using one year of data, the minimum required by the CMS models, showed that quarterly feedback (ie, three months of data) may not be informative or useful, given that few hospitals are able to differentiate themselves from the mean once 95% CIs are considered. Although the capacity to distinguish between high and low performers does improve when hospital admissions are combined over three years, this is not a reasonable timeline for institutions to wait for quality comparisons. Furthermore, although the VA does present its data on CMS's Hospital Compare website using three years of combined data, the variability and distribution of such results are not supplied.3
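The "differentiation from the mean" described above generally requires a hospital's entire interval estimate to fall on one side of the national rate. The sketch below illustrates that flagging logic with simulated RSRRs and crude binomial-style interval widths (not the study's estimates); because low per-hospital volumes widen the intervals, few or no hospitals end up being flagged.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

national_rate = 0.20                                        # hypothetical national 30-day rate
n_hospitals = 100
volumes = rng.integers(25, 150, size=n_hospitals)           # index admissions per hospital
rsrr = rng.normal(national_rate, 0.015, size=n_hospitals)   # tightly clustered RSRRs

# Crude binomial-style standard errors: smaller hospitals get wider intervals
# because their estimates are less precise.
se = np.sqrt(rsrr * (1 - rsrr) / volumes)
z = stats.norm.ppf(0.975)
lower, upper = rsrr - z * se, rsrr + z * se

worse = lower > national_rate    # entire 95% interval above the national rate
better = upper < national_rate   # entire 95% interval below the national rate

print(f"Flagged worse than the national rate:  {worse.sum()} of {n_hospitals}")
print(f"Flagged better than the national rate: {better.sum()} of {n_hospitals}")
```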

This lack of discriminability raises concerns about the ability to compare hospital performance between low- and high-volume institutions. Although these models function well in CMS settings with large patient volumes, in which greater variability exists,5 they lose their capacity to discriminate when applied to low-volume settings such as the VA. Given that many US hospitals are small community hospitals with low patient volumes,17 this issue likely extends to other non-VA settings. Although our study focused on the VA, others have compared the variation and distribution of these measures between VA and non-VA settings. For example, Nuti et al. explored differences in 30-day RSRRs among patients hospitalized with AMI, HF, and pneumonia and similarly showed little variation, narrow distributions, and few outliers in the VA setting compared with the non-VA setting. For institutions with small patient volumes, including the VA, a focus on high-volume services, outcomes, and measures (eg, blood pressure control, medication reconciliation) may offer more discriminability between high- and low-performing facilities. For example, Patel et al. found that VA process measures in patients with HF (eg, beta-blocker and ACE-inhibitor use) can serve as valid quality measures, as they exhibited consistent reliability over time and validity against adjusted mortality rates, whereas the 30-day RSRR did not.18

Our findings may have substantial financial, resource, and policy implications. Automatically adopting and reporting, within the VA, measures developed for the Medicare program may not be a good use of VA resources. In addition, facilities may react to these reported outcomes by expending local resources and finances on interventions to improve a performance outcome that is statistically no different from that of the vast majority of comparator facilities. Such events have been highlighted in the public media, which has pointed out that small changes in quality, or statistical errors themselves, can have large ramifications within the VA's hospital rating system.19

These findings may also add to the discussion of whether public reporting of health and quality outcomes improves patient care. Since CMS began publicly reporting RSRRs in 2009, these rates have fallen for all three examined conditions (AMI, HF, and pneumonia),7,20,21 in addition to several other health outcomes.17 Although recent studies have suggested that these decreases have been driven by the CMS-sponsored Hospital Readmissions Reduction Program (HRRP),22 others have suggested that the findings are consistent with ongoing secular trends toward decreased readmissions and may not be completely explained by public reporting alone.23 Moreover, prior work has found that readmissions may be strongly influenced by factors external to the hospital setting, such as patients' social demographics (eg, household income, social isolation), that are not currently captured in risk-prediction models.24 Given the small variability observed in our data, public reporting within the VA is unlikely to be beneficial, as only a small number of facilities are outliers based on RSRR.

Our study has several limitations. First, although we adapted the CMS model to the VA, we did not include gender in the model because >99% of all patient admissions were male. Second, we assessed only the three medical conditions tracked by both CMS and the VA during this period; these outcomes may not be representative of other aspects of care and cannot be generalized to other medical conditions. Finally, more contemporary data could yield different results, although we note that no large-scale structural or policy changes addressing readmission rates have been implemented within the VA since our study period.

The results of this study suggest that the CMS-derived 30-day risk-standardized readmission metric for AMI, HF, and pneumonia may not have the capacity to properly detect interfacility variance and thus may not be an optimal quality indicator within the VA. As the VA and other healthcare systems continually strive to improve the quality of care they provide, they will require more accurate and timely metrics with which to index their performance.
