Standardized attending rounds to improve the patient experience: A pragmatic cluster randomized controlled trial
Background
At academic medical centers, attending rounds (AR) serve to coordinate patient care and educate trainees, yet variably involve patients.
Objective
To determine the impact of standardized bedside AR on patient satisfaction with rounds.
Design
Cluster randomized controlled trial.
Setting
500-bed urban, quaternary care hospital.
Patients
1200 patients admitted to the medicine service.
Intervention
Teams in the intervention arm received training to adhere to 5 AR practices: 1) pre-rounds huddle; 2) bedside rounds; 3) nurse integration; 4) real-time order entry; 5) whiteboard updates. Control arm teams continued usual rounding practices.
Measurements
Trained observers audited rounds to assess adherence to recommended AR practices and surveyed patients following AR. The primary outcome was patient satisfaction with AR. Secondary outcomes were perceived and actual AR duration, and attending and trainee satisfaction.
Results
We observed 241 (70.1%) and 264 (76.7%) AR in the intervention and control arms, respectively, which included 1855 and 1903 patient rounding encounters. Using a 5-point Likert scale, patients in the intervention arm reported increased satisfaction with AR (4.49 vs. 4.25; P = 0.01) and felt more cared for by their medicine team (4.54 vs. 4.36; P = 0.03). Although the intervention shortened the duration of AR by 8 minutes on average (143 vs. 151 minutes; P = 0.052), trainees perceived intervention AR as lasting longer and reported lower satisfaction with intervention AR.
Conclusions
Medicine teams can adopt a standardized, patient-centered, time-saving rounding model that leads to increased patient satisfaction with AR and the perception that patients are more cared for by their medicine team. Journal of Hospital Medicine 2017;12:143-149. © 2017 Society of Hospital Medicine
This trial, which was approved by the University of California, San Francisco Committee on Human Research (UCSF CHR) and was registered with ClinicalTrials.gov (NCT01931553), was classified under Quality Improvement and did not require informed consent of patients or providers.
Intervention Description
We conducted a cluster randomized trial to evaluate the impact of a bundled set of 5 AR practice recommendations, adapted from published work,26 on patient experience, as well as on attending and trainee satisfaction: 1) huddling to establish the rounding schedule and priorities; 2) conducting bedside rounds; 3) integrating bedside nurses; 4) completing real-time order entry using bedside computers; 5) updating the whiteboard in each patient’s room with care plan information.
At the beginning of each month, study investigators (Nader Najafi and Bradley Monash) led a 1.5-hour workshop to train attending physicians and trainees allocated to the intervention arm on the recommended AR practices. Participants also received informational handouts to be referenced during AR. Attending physicians and trainees randomized to the control arm continued usual rounding practices. Control teams were notified that there would be observers on rounds but were not informed of the study aims.
Randomization and Team Assignments
The medicine service was divided into 2 arms, each comprising 4 teams. Using a coin flip, Cluster 1 (Teams A, B, C, and D) was randomized to the intervention, and Cluster 2 (Teams E, F, G, and H) was randomized to the control. This design was pragmatically chosen to ensure that 1 team from each arm would admit patients daily. Allocation concealment of attending physicians and trainees was not possible given the nature of the intervention. Patients were blinded to study arm allocation.
Measures and Outcomes
Adherence to Practice Recommendations
Thirty premedical students served as volunteer AR auditors. Each auditor received orientation and training in data collection techniques during a single 2-hour workshop. The auditors, blinded to study arm allocation, independently observed morning AR during weekdays and recorded the completion of the following elements as a dichotomous (yes/no) outcome: pre-rounds huddle, participation of nurse in AR, real-time order entry, and whiteboard use. They recorded the duration of AR per day for each team (minutes) and the rounding model for each patient rounding encounter during AR (bedside, hallway, or card flip).23 Bedside rounds were defined as presentation and discussion of the patient care plan in the presence of the patient. Hallway rounds were defined as presentation and discussion of the patient care plan partially outside the patient’s room and partially in the presence of the patient. Card-flip rounds were defined as presentation and discussion of the patient care plan entirely outside of the patient’s room without the team seeing the patient together. Two auditors simultaneously observed a random subset of patient-rounding encounters to evaluate inter-rater reliability, and the concordance between auditor observations was good (Pearson correlation = 0.66).27
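The inter-rater reliability check above can be sketched in Python (the trial's analyses were run in SAS, so this is illustrative only; the ratings below are made-up stand-ins, not study data). On paired dichotomous ratings, the Pearson correlation reduces to the phi coefficient, a simple measure of auditor concordance.

```python
import numpy as np
from scipy import stats

# Paired auditor ratings of the same rounding encounters
# (1 = practice element observed, 0 = not). Illustrative data only.
auditor_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0])
auditor_b = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1])

# Pearson correlation between the two raters' binary observations;
# values near 1 indicate high concordance.
r, p = stats.pearsonr(auditor_a, auditor_b)
print(round(r, 2))
```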
Patient-Related Outcomes
The primary outcome was patient satisfaction with AR, assessed using a survey adapted from published work.12,14,28,29 Patients were approached to complete the questionnaire after they had experienced at least 1 AR. Patients were excluded if they were non-English-speaking, unavailable (eg, off the unit for testing or treatment), in isolation, or had impaired mental status. For patients admitted multiple times during the study period, only the first questionnaire was used. Survey questions included patient involvement in decision-making, quality of communication between patient and medicine team, and the perception that the medicine team cared about the patient. Patients were asked to state their level of agreement with each item on a 5-point Likert scale. We obtained data on patient demographics from administrative datasets.
Healthcare Provider Outcomes
Attending physicians and trainees on service for at least 7 consecutive days were sent an electronic survey, adapted from published work.25,30 Questions assessed satisfaction with AR, perceived value of bedside rounds, and extent of patient and nursing involvement. Level of agreement with each item was captured on a continuous scale, either from 0 (strongly disagree) to 100 (strongly agree) or from 0 (far too little) to 100 (far too much), with 50 equating to “about right.” Attending physicians and trainees were also asked to estimate the average duration of AR (in minutes).
Statistical Analyses
Analyses were blinded to study arm allocation and followed intention-to-treat principles. One attending physician crossed over from intervention to control arm; patient surveys associated with this attending (n = 4) were excluded to avoid contamination. No trainees crossed over.
Demographic and clinical characteristics of patients who completed the survey are reported (Appendix). To compare patient satisfaction scores, we used a random-effects regression model to account for correlation among responses within teams within randomized clusters, defining teams by attending physician. As this correlation was negligible and not statistically significant, we did not adjust ordinary linear regression models for clustering. Given observed differences in patient characteristics, we adjusted for a number of covariates (eg, age, gender, insurance payer, race, marital status, trial group arm).
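The modeling strategy above can be sketched as follows. This is an illustrative Python translation (the trial used SAS), with synthetic data and hypothetical column names: fit a random-intercept model grouped by team to check the within-team correlation, then fall back to covariate-adjusted ordinary least squares when that correlation is negligible.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the patient survey data; all column names
# and values are illustrative, not the trial's.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "satisfaction": rng.normal(4.4, 0.6, n).clip(1, 5),
    "arm": rng.integers(0, 2, n),             # 1 = intervention, 0 = control
    "age": rng.normal(60, 15, n),
    "team": rng.choice(list("ABCDEFGH"), n),  # team defined by attending
})

# Random-effects model: random intercept per team accounts for
# correlation among responses within teams.
mixed = smf.mixedlm("satisfaction ~ arm + age", df, groups=df["team"]).fit()

# If within-team correlation is negligible (as observed in the trial),
# a plain covariate-adjusted OLS model suffices.
ols = smf.ols("satisfaction ~ arm + age", df).fit()
print(ols.params["arm"])
```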
We conducted simple linear regression for attending and trainee satisfaction comparisons between arms, adjusting only for trainee type (eg, resident, intern, and medical student).
We compared the frequency with which intervention and control teams adhered to the 5 recommended AR practices using chi-square tests. We used independent Student’s t tests to compare total duration of AR by teams within each arm, as well as mean time spent per patient.
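The two comparisons above can be sketched with SciPy (again an illustrative Python stand-in for the SAS analyses, using invented counts and durations): a chi-square test on a 2x2 adherence table, and an independent-samples t test on per-day AR durations.

```python
import numpy as np
from scipy import stats

# Adherence to one recommended practice (eg, pre-rounds huddle):
# rounds where the practice was / was not observed, by arm.
# Counts are invented for illustration.
adherence = np.array([[200, 41],    # intervention: yes, no
                      [60, 204]])   # control: yes, no
chi2, p_chi, dof, expected = stats.chi2_contingency(adherence)

# Total AR duration per team-day (minutes), by arm; synthetic values
# centered on the durations reported in the Results.
rng = np.random.default_rng(1)
intervention_min = rng.normal(143, 30, 100)
control_min = rng.normal(151, 30, 100)
t, p_t = stats.ttest_ind(intervention_min, control_min)

print(f"chi-square p = {p_chi:.3g}; t-test p = {p_t:.3g}")
```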
This trial had a fixed number of arms (n = 2), each of fixed size (n = 600), based on the average monthly inpatient census on the medicine service. With this fixed sample size, 80% power, and α = 0.05, the trial could detect a 0.16 difference in patient satisfaction scores between groups.
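The minimum detectable difference can be reproduced with a standard two-sample power calculation. Treating the quoted 0.16 as a standardized difference (Cohen's d) is an assumption on my part; the sketch below solves for the detectable effect size given the fixed group size.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the minimum detectable standardized difference with
# n = 600 per arm, alpha = 0.05, and power = 0.80 (two-sided test).
detectable = TTestIndPower().solve_power(
    nobs1=600, alpha=0.05, power=0.80, ratio=1.0
)
print(round(detectable, 2))
```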
All analyses were conducted using SAS® v 9.4 (SAS Institute, Inc., Cary, NC).