AEM Education and Training 28: Multi-institutional Implementation of the National Clinical Assessment Tool in Emergency Medicine: Data From the First Year of Use

Welcome to the twenty-eighth episode of the AEM Education and Training Podcast, a FOAMed podcast collaboration between the Academic Emergency Medicine Education and Training Journal and Brown Emergency Medicine.

Find this podcast series on iTunes here.

DISCUSSING (OPEN ACCESS THROUGH AUGUST 3, 2021; CLICK ON TITLE TO ACCESS)

Multi-institutional Implementation of the National Clinical Assessment Tool in Emergency Medicine: Data From the First Year of Use. Katherine Hiller, MD, MPH; Julianna Jung, MD; Luan Lawson, MD; Rebecca Riddell, MS; Doug Franzen, MD, MEd

LISTEN NOW: INTERVIEW WITH AUTHORS


Douglas Franzen, MD, MEd, FACEP

Associate Professor, Department of Emergency Medicine

Associate Program Director, Emergency Medicine Residency Program


Julianna Jung, MD, FACEP

Associate Professor of Emergency Medicine, Johns Hopkins University School of Medicine

Director of Medical Student Education, Department of Emergency Medicine

Associate Director, Johns Hopkins Medicine Simulation Center

Abstract

Objectives

Uniformly training physicians to provide safe, high-quality care requires reliable assessment tools to ensure learner competency. The consensus-derived National Clinical Assessment Tool in Emergency Medicine (NCAT-EM) has been adopted by clerkships across the country. Analysis of large-scale deidentified data from a consortium of users is reported.

Methods

Thirteen sites entered data into a Web-based platform, resulting in over 6,400 discrete NCAT-EM assessments from 748 students and 704 assessors. Reliability and internal consistency analyses, as well as factorial analysis of variance for hypothesis generation, were performed.
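As a concrete illustration of what the internal consistency analysis involves, here is a minimal sketch of Cronbach's alpha computed over item-level rating-scale scores. The item column names and toy data below are hypothetical, for illustration only; they are not drawn from the NCAT-EM dataset, and the study's actual analysis pipeline is not reproduced here.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a table of item scores
    (rows = assessments, columns = rating-scale items)."""
    items = items.dropna()                     # complete cases only
    k = items.shape[1]                         # number of items
    item_var = items.var(ddof=1).sum()         # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical rating-scale items scored 1-5 on six assessments.
ratings = pd.DataFrame({
    "focused_history": [3, 4, 4, 5, 3, 4],
    "physical_exam":   [3, 4, 5, 5, 3, 4],
    "differential":    [2, 4, 4, 5, 3, 3],
    "management_plan": [3, 3, 4, 5, 3, 4],
})
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```

Values above 0.8, as reported at each site in the study, are conventionally read as good internal consistency among the scale's items.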

Results

All categories on the NCAT-EM rating scales and professionalism subdomains were used. Clinical rating scale and global assessment scores were positively skewed, similar to other assessments commonly used in emergency medicine (EM). Professionalism lapses were noted in <1% of assessments. Cronbach's alpha was >0.8 at each site; however, interinstitutional variability was significant. M4 students scored higher than M3 students, and EM-bound students scored higher than non–EM-bound students. There were site-specific differences based on the number of prior EM rotations, but no overall association. Scores differed by assessor faculty rank and by resident training year, but not by years in practice. There were site-specific differences based on student sex, but no overall difference.

Conclusions

To our knowledge, this is the first large-scale multi-institutional implementation of a single clinical assessment tool. This study demonstrates the feasibility of a unified approach to clinical assessment across multiple diverse sites. Challenges remain in determining appropriate score distributions and improving consistency in scoring between sites.