Safety Huddle Intervention for Reducing Physiologic Monitor Alarms: A Hybrid Effectiveness-Implementation Cluster Randomized Trial

Journal of Hospital Medicine. 2018 September;13(9):609-615. Published online first February 28, 2018. doi: 10.12788/jhm.2956

BACKGROUND: Monitor alarms occur frequently but rarely warrant intervention.

OBJECTIVE: This study aimed to determine whether a safety huddle-based intervention reduces unit-level alarm rates or the alarm rates of individual patients whose alarms are discussed, and to evaluate implementation outcomes.

DESIGN: Unit-level, cluster randomized, hybrid effectiveness-implementation trial with a secondary patient-level analysis.

SETTING: Children’s hospital.

PATIENTS: Unit-level: all patients hospitalized on 4 control (n = 4177) and 4 intervention (n = 7131) units between June 15, 2015 and May 8, 2016. Patient-level: 425 patients on randomly selected dates postimplementation.

INTERVENTION: Structured safety huddle review of alarm data from the patients on each unit with the most alarms, with a discussion of ways to reduce alarms.

MEASUREMENTS: Unit-level: change in unit-level alarm rates between baseline and postimplementation periods in intervention versus control units. Patient-level: change in individual patients’ alarm rates between the 24 hours leading up to huddles and the 24 hours after huddles in patients who were discussed versus not discussed in huddles.

RESULTS: Alarm data informed 580 huddle discussions. In unit-level analysis, intervention units had 2 fewer alarms/patient-day (95% CI: 7 fewer to 6 more, P = .50) compared with control units. In patient-level analysis, patients discussed in huddles had 97 fewer alarms/patient-day (95% CI: 52–138 fewer, P < .001) in the posthuddle period compared with patients not discussed in huddles. Implementation outcome analysis revealed a low intervention dose of 0.85 patients/unit/day.

CONCLUSIONS: Safety huddle-based alarm discussions did not influence unit-level alarm rates due to low intervention dose but were effective in reducing alarms for individual children.

TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT02458872. https://clinicaltrials.gov/ct2/show/NCT02458872

© 2018 Society of Hospital Medicine

Study Periods

The study had 3 periods, as shown in Supplementary Figure 2: (1) a 16-week baseline data collection period, (2) a phased implementation period during which we spent 2-8 weeks implementing the intervention on each of the 4 intervention units in sequence, and (3) a 16-week postimplementation data collection period.

Outcomes

The primary effectiveness outcome was the change in unit-level alarms per patient-day between the baseline and postimplementation periods in intervention versus control units, with all patients on the units included. The secondary effectiveness outcome (analyzed using the embedded cohort design) was the change in individual patient-level alarms between the 24 hours before a huddle and the 24 hours after it in patients who were discussed in huddles versus those who were not.

Implementation outcomes included adoption and fidelity measures. To measure adoption (defined as “intention to try” the intervention),16 we measured the frequency of discussions attended by patients’ nurses and physicians. We evaluated 3 elements of fidelity: adherence, dose, and quality of delivery.17 We measured adherence as the incorporation of alarm discussion into huddles when there were eligible patients to discuss. We measured dose as the average number of patients discussed on each unit per calendar day during the postimplementation period. We measured quality of delivery as the extent to which changes to monitoring that were agreed upon in the huddles were made at the bedside.

Safety Measures

To surveil for unintended consequences of reduced monitoring, we screened the hospital’s rapid response and code blue team database weekly for any events in patients previously discussed in huddles that occurred between huddle and hospital discharge. We reviewed charts to determine if the events were related to the intervention.

Randomization

Prior to randomization, the 8 units were divided into pairs based on participation in hospital-wide Joint Commission alarm management activities, use of alarm middleware that relayed detailed alarm information to nurses’ mobile phones, and baseline alarm rates. One unit in each pair was randomized to intervention and the other to control by coin flip.
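The pair-then-coin-flip scheme above can be sketched as follows. This is a conceptual illustration only; the unit names, the pairings, and the random seed are hypothetical, not from the trial:

```python
import random

# Hypothetical matched pairs of units (names illustrative), matched on
# Joint Commission alarm activities, middleware use, and baseline alarm rates.
pairs = [("UnitA", "UnitB"), ("UnitC", "UnitD"),
         ("UnitE", "UnitF"), ("UnitG", "UnitH")]

def randomize_pairs(pairs, rng):
    """Within each matched pair, assign one unit to intervention and the
    other to control by a simulated coin flip."""
    assignment = {}
    for first, second in pairs:
        if rng.random() < 0.5:  # "heads": first unit gets the intervention
            assignment[first], assignment[second] = "intervention", "control"
        else:
            assignment[first], assignment[second] = "control", "intervention"
    return assignment

arms = randomize_pairs(pairs, random.Random(0))
```

Pair-matching before the coin flip guarantees balance on the matching factors with only 8 units, where simple randomization could easily produce imbalanced arms.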

Data Collection

We used Research Electronic Data Capture (REDCap)18 database tools.

Data for Unit-Level Analyses

We captured all alarms occurring on the study units during the study period using data from BedMasterEx. We obtained census data accurate to the hour from the Clinical Data Warehouse.

Data Captured in All Huddles

During each huddle, we collected the number of patients whose alarms were discussed, patient characteristics, presence of nurses and physicians, and monitoring changes agreed upon. We then followed up 4 hours later to determine if changes were made at the bedside by examining monitor settings.

Data Captured Only During Intensive Data Collection Days

We randomly selected 1 day during each of the 16 weeks of the postimplementation period to obtain additional patient-level data. On each intensive data collection day, we identified for data collection the 4 monitored patients on each intervention and control unit with the most high-acuity alarms in the 4 hours before huddles, regardless of whether these patients were later discussed in huddles. On these dates, a member of the research team reviewed each patient’s alarm counts in 4-hour blocks during the 24 hours before and after the huddle. Because huddles did not occur at the same time every day (they ranged between 10:00 and 13:00), we operationally set the huddle time to 12:00 for all units.
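The 4-hour block bookkeeping around the operational 12:00 huddle time could be implemented along these lines; the calendar date, the alarm timestamps, and the `block_index` helper are all hypothetical:

```python
import math
from collections import Counter
from datetime import datetime, timedelta

# Operational huddle time set to 12:00; the calendar date is illustrative.
HUDDLE = datetime(2016, 3, 1, 12, 0)

def block_index(alarm_time, huddle=HUDDLE):
    """Return the 4-hour block an alarm falls in relative to the huddle:
    -6..-1 for the 24 hours before, 0..5 for the 24 hours after,
    or None if the alarm is outside the 48-hour window."""
    hours = (alarm_time - huddle) / timedelta(hours=1)
    if -24 <= hours < 24:
        return math.floor(hours / 4)
    return None

# Hypothetical alarm times, expressed as hour offsets from the huddle
alarms = [HUDDLE + timedelta(hours=h) for h in (-23.5, -3, -1, 0.5, 2, 2, 9)]
counts = Counter(block_index(t) for t in alarms)
```

Counting alarms per block rather than per exact timestamp matches the chart-review procedure described above, where counts were abstracted in 4-hour blocks.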

Data Analysis

We used Stata/SE 14.2 for all analyses.

Unit-Level Alarm Rates

To compare unit-level rates, we performed an interrupted time series analysis using segmented (piecewise) regression to evaluate the impact of the intervention.19,20 We used a multivariable generalized estimating equation model with the negative binomial distribution21 and clustering by unit. We bootstrapped the model and generated percentile-based 95% confidence intervals. We then used the model to estimate the alarm rate difference in differences between the baseline data collection period and the postimplementation data collection period for intervention versus control units.
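The trial's model was a segmented negative binomial GEE fit in Stata; as a conceptual illustration of the difference-in-differences quantity that model estimates, the calculation can be sketched with made-up counts (all numbers below are hypothetical, not the trial's data):

```python
def rate(alarms, patient_days):
    """Alarms per patient-day."""
    return alarms / patient_days

def difference_in_differences(intervention, control):
    """Each arm maps 'baseline'/'post' to (alarm count, patient-days).
    Returns the post-minus-baseline rate change in the intervention arm
    minus the corresponding change in the control arm."""
    change_int = rate(*intervention["post"]) - rate(*intervention["baseline"])
    change_ctl = rate(*control["post"]) - rate(*control["baseline"])
    return change_int - change_ctl

# Illustrative counts only: intervention falls 50 -> 48 alarms/patient-day,
# control rises 40 -> 41, giving a DiD of -3 alarms/patient-day.
intervention = {"baseline": (50_000, 1_000), "post": (48_000, 1_000)}
control = {"baseline": (40_000, 1_000), "post": (41_000, 1_000)}
did = difference_in_differences(intervention, control)
```

Subtracting the control arm's change nets out secular trends shared by all units, which is why the abstract reports the intervention effect as a between-arm difference rather than a simple before-after change.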

Patient-Level Alarm Rates

In contrast to the unit-level analysis, we used an embedded cohort design to model the change in individual patients’ alarms between the 24 hours before huddles and the 24 hours after huddles in patients who were discussed in huddles versus those who were not. The analysis was restricted to the patients included in intensive data collection days. We performed bootstrapped linear regression and generated percentile-based 95% confidence intervals, using the difference in 4-hour block alarm rates between the pre- and posthuddle periods as the outcome. We clustered within patients and stratified by unit and preceding alarm rate. We modeled the alarm rate difference between the 24-hour prehuddle and 24-hour posthuddle periods for huddled and nonhuddled patients, as well as the difference in differences between exposure groups.
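A percentile-based bootstrap confidence interval of the kind used here can be sketched with the standard library. In this simplified sketch, resampling is over patients (the clustering unit), the statistic is the mean per-patient change, and the per-patient alarm-rate changes are invented for illustration:

```python
import random
import statistics

def bootstrap_percentile_ci(values, stat=statistics.mean,
                            n_boot=2000, alpha=0.05, seed=0):
    """Percentile-based bootstrap CI: resample the clusters (patients)
    with replacement, recompute the statistic, and take the empirical
    alpha/2 and 1-alpha/2 percentiles of the bootstrap distribution."""
    rng = random.Random(seed)
    boot_stats = sorted(
        stat([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = boot_stats[int((alpha / 2) * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical posthuddle-minus-prehuddle changes in alarms/day, one per patient
deltas = [-120, -80, -95, -60, -110, -40, -130, -70, -90, -100]
ci = bootstrap_percentile_ci(deltas)
```

Percentile bootstrap intervals make no normality assumption about the outcome, which is useful for alarm-count differences that are typically skewed.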