
Documentation of Clinical Reasoning in Admission Notes of Hospitalists: Validation of the CRANAPL Assessment Rubric

Journal of Hospital Medicine. 2019 December;14(12):746-753. Published online first June 11, 2019. DOI: 10.12788/jhm.3233

OBJECTIVE: To establish a metric for evaluating hospitalists’ documentation of clinical reasoning in admission notes.
STUDY DESIGN: Retrospective study.
SETTING: Admissions from 2014 to 2017 at three hospitals in Maryland.
PARTICIPANTS: Hospitalist physicians.
MEASUREMENTS: A subset of patients admitted with fever, syncope/dizziness, or abdominal pain was randomly selected. The nine-item Clinical Reasoning in Admission Note Assessment & PLan (CRANAPL) tool was developed to assess the comprehensiveness of clinical reasoning documented in the assessment and plans (A&Ps) of admission notes. Two authors scored all A&Ps using this tool. A&Ps were also scored with global clinical reasoning and global readability/clarity measures. All data were deidentified prior to scoring.
RESULTS: The 285 admission notes that were evaluated were authored by 120 hospitalists. The mean total CRANAPL score given by both raters was 6.4 (standard deviation [SD] 2.2). The intraclass correlation measuring interrater reliability for the total CRANAPL score was 0.83 (95% CI, 0.76-0.87). Associations between the CRANAPL total score and the global clinical reasoning and global readability/clarity measures were statistically significant (P < .001). Notes from the academic hospitals had higher CRANAPL scores (7.4 [SD 2.0] and 6.6 [SD 2.1]) than those from the community hospital (5.2 [SD 1.9]), P < .001.
CONCLUSIONS: This study represents the first step toward characterizing clinical reasoning documentation in hospital medicine. With some validity evidence established for the CRANAPL tool, it may be possible to assess the documentation of clinical reasoning by hospitalists.
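The interrater reliability reported above is an intraclass correlation (ICC) between two raters' total scores. As an illustration only, and not the authors' analysis code, the following minimal sketch shows how a two-rater ICC with a 95% confidence interval can be computed from a long-format table of scores, assuming the pingouin library and invented placeholder data.

```python
# Illustrative sketch of a two-rater intraclass correlation on total scores.
# Placeholder data; this is not the study's dataset or analysis code.
import pandas as pd
import pingouin as pg

# Long format: one row per (note, rater) pair with that rater's total score.
scores = pd.DataFrame({
    "note":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater": ["A", "B"] * 5,
    "total": [6, 7, 9, 9, 4, 5, 11, 10, 7, 7],
})

icc = pg.intraclass_corr(data=scores, targets="note", raters="rater", ratings="total")
# pingouin reports several ICC forms with 95% confidence intervals;
# which row applies depends on the chosen model (e.g., two-way random effects).
print(icc[["Type", "ICC", "CI95%"]])
```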

© 2019 Society of Hospital Medicine

The notes of physicians working for the hospitalist groups at each of the three hospitals were the focus of the analysis in this study.

Development of the Documentation Assessment Rubric

A team was assembled to develop the Clinical Reasoning in Admission Note Assessment & PLan (CRANAPL) tool. The CRANAPL was designed to assess the comprehensiveness and thoughtfulness of the clinical reasoning documented in the A&P sections of the notes of patients who were admitted to the hospital with an acute illness. Validity evidence for CRANAPL was summarized on the basis of Messick’s unified validity framework by using four of the five sources of validity: content, response process, internal structure, and relations to other variables.17

Content Validity

The development team consisted of members who have an average of 10 years of clinical experience in hospital medicine; have studied clinical excellence and clinical reasoning; and have expertise in feedback, assessment, and professional development.18-22 The team’s development of the CRANAPL tool was informed by a review of the clinical reasoning literature, with particular attention paid to the standards and competencies outlined by the Liaison Committee on Medical Education, the Association of American Medical Colleges, the Accreditation Council for Graduate Medical Education, the Internal Medicine Milestone Project, and the Society of Hospital Medicine.23-26 Each of these bodies considers diagnostic reasoning and its impact on clinical decision-making to be a core competency. Several works heavily influenced the CRANAPL tool’s development: Baker’s Interpretive Summary, Differential Diagnosis, Explanation of Reasoning, And Alternatives (IDEA) assessment tool;14 King’s Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric;15 and three other studies related to diagnostic reasoning.16,27,28 These manuscripts and other works substantively informed the preliminary behaviorally based anchors that formed the initial foundation for the tool under development. The CRANAPL tool was shown to colleagues at other institutions who are leaders in clinical reasoning and was presented at academic conferences of our institution’s Division of General Internal Medicine and Division of Hospital Medicine. Feedback resulted in iterative revisions. These methods established content validity evidence for the CRANAPL tool.

Response Process Validity

While refining the CRANAPL tool, several of the authors pilot-tested earlier iterations on admission notes that were excluded from the study sample. Weaknesses of, and sources of confusion with, specific items were addressed by scoring 10 A&Ps individually and then comparing the data captured with the tool. This cycle was repeated three times to iteratively enhance and finalize the CRANAPL tool. On several occasions while two authors were piloting the near-final CRANAPL tool, a third author interviewed each of them about reactivity as they assessed individual items, using probes to explore how their own clinical documentation practices were being considered when scoring the notes. The reasonable and thoughtful answers the two authors gave as they explained and justified the scores they selected during pilot testing provided response process validity evidence.
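To make the item-level comparison concrete, the sketch below tallies per-item exact agreement between two raters across a set of pilot A&Ps, the kind of summary that could flag items causing confusion. The data and function name are invented for illustration and do not reproduce the authors’ procedure.

```python
# Minimal sketch: compare two raters' pilot scores item by item and report the
# fraction of notes on which they agreed exactly. Invented placeholder data.
from collections import defaultdict


def item_agreement(rater_a: list[dict], rater_b: list[dict]) -> dict[str, float]:
    """Fraction of pilot notes on which the two raters gave identical scores, per item."""
    matches: dict[str, int] = defaultdict(int)
    for scores_a, scores_b in zip(rater_a, rater_b):
        for item, value in scores_a.items():
            matches[item] += int(value == scores_b.get(item))
    return {item: count / len(rater_a) for item, count in matches.items()}


# Example with two invented pilot notes scored on two items:
rater_a = [{"leading_diagnosis": 2, "uncertainty": 1}, {"leading_diagnosis": 1, "uncertainty": 0}]
rater_b = [{"leading_diagnosis": 2, "uncertainty": 0}, {"leading_diagnosis": 1, "uncertainty": 0}]
print(item_agreement(rater_a, rater_b))  # {'leading_diagnosis': 1.0, 'uncertainty': 0.5}
```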

Finalizing the CRANAPL Tool

The nine-item CRANAPL tool includes elements for problem representation, leading diagnosis, uncertainty, differential diagnosis, plans for diagnosis and treatment, estimated length of stay (LOS), potential for upgrade in status to a higher level of care, and consideration of disposition. Although the final three items are not core clinical reasoning domains in the medical education literature, they represent clinical judgments that are especially relevant for the delivery of high-quality and cost-effective care to hospitalized patients. Given that the probabilities and estimations of these three elements evolve over the course of any hospitalization on the basis of test results and response to therapy, the documentation of initial expectations on these fronts can facilitate distributed cognition, with all individuals becoming wiser from shared insights.10 The tool uses two- and three-point rating scales, with each numerical score clearly defined by specific written criteria (total score range: 0-14; Appendix).
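As a concrete illustration of this scoring scheme, the sketch below encodes one possible split of two- and three-point items that sums to the 0-14 range. The item names are paraphrased from the list above, and the per-item caps are assumptions made only so the maximum totals 14; the actual anchors and point values are defined in the Appendix.

```python
# Illustrative CRANAPL-style scoring. Item names are paraphrased and per-item
# caps are assumed for illustration; the published rubric defines the real anchors.

# Assumed maximum points per item (which items use the 0-2 vs. 0-1 scale is not
# specified here; this split is chosen only so the maximum total equals 14).
CRANAPL_ITEMS = {
    "problem_representation": 2,
    "leading_diagnosis": 2,
    "uncertainty_acknowledged": 1,
    "differential_diagnosis": 2,
    "plan_for_diagnosis": 2,
    "plan_for_treatment": 2,
    "estimated_length_of_stay": 1,
    "upgrade_potential": 1,
    "disposition": 1,
}


def total_cranapl(scores: dict[str, int]) -> int:
    """Validate one rater's item scores against the assumed caps and return the total."""
    total = 0
    for item, cap in CRANAPL_ITEMS.items():
        value = scores.get(item, 0)
        if not 0 <= value <= cap:
            raise ValueError(f"{item} must be between 0 and {cap}, got {value}")
        total += value
    return total


if __name__ == "__main__":
    fully_documented = dict(CRANAPL_ITEMS)  # every item at its assumed maximum
    print(total_cranapl(fully_documented))  # -> 14
```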
