Developing, evaluating and validating a scoring rubric for written case reports
Peggy R. Cyr1, Kahsi A. Smith2, India L. Broyles3 and Christina T. Holt1
1Department of Family Medicine, Maine Medical Center, Portland, Maine, USA
2Center for Outcomes Research and Evaluation, Maine Medical Center, Portland, Maine, USA
3Department of Medical Education, University of New England, College of Osteopathic Medicine, Biddeford, Maine, USA
Submitted: 28/06/2013; Accepted: 03/01/2014; Published: 01/02/2014
Int J Med Educ. 2014; 5:18-23; doi: 10.5116/ijme.52c6.d7ef
© 2014 Peggy R. Cyr et al. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use of the work provided the original work is properly cited. http://creativecommons.org/licenses/by/3.0
Abstract
Objectives: The purpose of this study was to evaluate Family Medicine Clerkship students' writing skills using an anchored scoring rubric. We report on the assessment of the current scoring rubric (SR) used to grade medical students' written case description papers (CDP), describe the development of a revised SR, including an examination of scoring consistency among faculty raters, and report on student feedback regarding the SR revisions and the written CDP.
Methods: Five faculty members scored a total of eighty-three written CDP using both the Original SR (OSR) and the Revised SR1 (RSR1) during the 2009-2010 academic year.
Results: Faculty inter-rater reliability increased overall when the RSR1 was used. Additionally, a subset analysis revealed that the five faculty members who used the Revised SR2 (RSR2) showed high inter-rater reliability when scoring this subset of papers, as measured by the intra-class correlation coefficient (ICC = 0.93, p < 0.001).
Conclusions: Findings from this research have implications for medical education by highlighting the importance of assessing and developing reliable evaluation tools for medical students' writing projects.