100 – The Impact of Trait-Anxiety and Perceived Stress on Student Well-Being in a New Three-Year Medical Degree Curriculum
Robert Treat, Diane Brown, Matthew Tews, Kristina Kaljo, Jennifer Janowitz, Dawn Bragg, and William J. Hueston
Medical College of Wisconsin
PURPOSE: The first class of a new three-year medical degree program recently completed their first year of training. Student well-being in this accelerated program is a potential concern since students may have limited time to relax and decompress. Given the anticipated pressures and time constraints on these students, it is necessary to assess how anxiety and stress impact student well-being.
The purpose of this study is to compare the relationships among student trait-anxiety, perceived stress, and well-being between a new three-year campus and a traditional four-year campus.
METHODS: In 2015/16, 60/230 first-year medical students voluntarily completed the following self-reported surveys: the 40-item State-Trait Anxiety Inventory for Adults (1=not at all/4=very much so), the 10-item Global Measure of Perceived Stress (1=never/5=very often), and the 30-item Trait Emotional Intelligence Survey (1=completely disagree/7=completely agree). Independent t-tests and Cohen's d assessed mean score differences and effect sizes, respectively. Multivariate linear regression assessed relational strength between student trait-anxiety and stress with well-being scores. Analyses were generated with IBM SPSS 24.0. This research was IRB approved.
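For readers who want to reproduce this kind of group comparison outside SPSS, the following Python sketch shows an independent t-test and a pooled-SD Cohen's d. The score vectors and variable names are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    # pooled-standard-deviation effect size for two independent groups
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

# hypothetical mean well-being scores for students at each campus
three_year = np.array([5.8, 6.1, 5.5, 6.3, 5.9, 6.0, 5.7])
four_year  = np.array([5.2, 5.6, 5.1, 5.7, 5.4, 5.3, 5.5])

t, p = stats.ttest_ind(three_year, four_year)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(three_year, four_year):.2f}")
```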
RESULTS: Lower scores in trait-anxiety (d=.33/p <.188) and stress (d=.29/p <.166), and higher scores of well-being (d=.43/p <.096), were reported for the three-year campus. Well-being was negatively correlated with trait-anxiety (r=-.74/p <.001) and stress (r=-.58/p <.001) for both campuses. Being calm was the best trait-anxiety predictor of well-being for the three-year campus (R²=.54/p <.001), whereas being satisfied was the best predictor for the four-year campus (R²=.58/p <.001). Handling personal problems was the best stress predictor of well-being for the four-year campus (R²=.42/p <.001). No stress predictors were identified for the three-year campus.
CONCLUSIONS: Medical students are exposed to stresses such as demanding course schedules, complex exams, and varying teaching methodologies. This study shows that student well-being was higher at the three-year campus and was associated with lower trait-anxiety and perceived stress. Calmness was the best trait-anxiety indicator of well-being for students in the accelerated three-year program.
101 – Medical student quality of life: How bad does it really get?
Lauren Sheidler and Hanin Rashid
Rutgers Robert Wood Johnson Medical School
PURPOSE: Medical student QoL may impact academic and clinical performance, yet little is known about QoL during Step 1 dedicated study time. The purpose of this study is to assess (a) medical student QoL compared to a normal population, (b) change in QoL from the fall semester to dedicated study time, and (c) the impact of QoL on Step 1 score.
METHODS: Three surveys were administered to M2 students throughout 2015-2016. The first two surveys used the Q-LES-Q-18, an 18-item validated questionnaire measuring physical health, subjective feelings, leisure time activities, and social relationships on a 5-point Likert scale (Ritsner et al., 2005), to compare QoL before and during Step 1 study time. The third survey asked for Step 1 score range. Change in QoL was assessed using a paired t-test and correlated to Step 1 performance using linear regression. QoL was compared to healthy controls using a Welch's t-test.
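A minimal sketch of the paired and Welch's t-tests described above, using invented Q-LES-Q-18 means rather than the authors' data:

```python
import numpy as np
from scipy import stats

# hypothetical Q-LES-Q-18 means (1-5 scale) for the same students at two time points
fall_qol      = np.array([3.9, 3.7, 4.0, 3.6, 3.8, 3.9, 3.7])
dedicated_qol = np.array([3.0, 2.8, 3.1, 2.7, 2.9, 3.0, 2.8])

# within-subject change from the fall semester to dedicated Step 1 study time
t_paired, p_paired = stats.ttest_rel(fall_qol, dedicated_qol)

# comparison against an independent healthy-control sample, unequal variances (Welch)
controls = np.array([4.2, 4.0, 4.3, 4.1, 3.9, 4.2, 4.0, 4.1])
t_welch, p_welch = stats.ttest_ind(fall_qol, controls, equal_var=False)

print(f"paired: t = {t_paired:.2f}, p = {p_paired:.3f}")
print(f"Welch:  t = {t_welch:.2f}, p = {p_welch:.3f}")
```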
RESULTS: Medical student QoL during Step 1 dedicated time (mean=2.9) was lower than that of patients with chronic mental illness (mean=3.4) (p <0.0001). Medical student QoL during the regular school year (mean=3.8) was lower than that of healthy controls (mean=4.1) (p <0.0001). Overall QoL decreased by 0.91 (p <0.017), physical decreased by 0.67 (p <0.008), subjective feelings decreased by 0.93 (p <0.024) and leisure decreased by 1.07 (p <0.03). QoL did not predict Step 1 score.
CONCLUSIONS: Previous research suggests that medical student QoL is an independent predictor of suicidal ideation and contributes to burnout (Dyrbye et al., 2008); therefore, it is concerning that students' QoL was below that of those battling chronic mental illness. Given the extreme decline in QoL during the dedicated Step 1 study period, it is essential that faculty, counselors, and administrators create programs to raise student awareness of strategies for sustaining QoL during this critical period of medical training.
102 – SUPPORTING MASTERY LEARNING: CONVERSION OF MID-COURSE ASSESSMENTS FROM SCORED TO PURELY FORMATIVE
Janet E. Lindsley, Laura Sells, Mark M. Metzstein, and Jorie M. Colbert-Getz
University of Utah School of Medicine
PURPOSE: In order to promote student self-efficacy and move our culture from performance-based to mastery-based, the University of Utah School of Medicine is analyzing the effects of replacing summative quizzes with formative assessments in the pre-clerkship curriculum, a change that began in fall of 2016. Using Kirkpatrick's evaluation framework, we investigated students' satisfaction and learning outcomes to determine the impact of this change, starting with one 9-week foundational science course.
METHODS: We compared performance on the three quizzes and final MCQ examination between MS2s in 2015 (N = 101-102) and MS2s in 2016 (N = 114-118) with Mann-Whitney U tests. Students' satisfaction with the change was measured by the percentage of MS2s in 2016 agreeing with an end-of-course evaluation question: "The formative quiz structure enhanced my mastery of course content. Why or why not?" The narrative comments were analyzed using grounded theory. To determine if the change to very high stakes final exams caused increased pathologic stress in students, psychological services usage data were also queried.
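The cohort comparison described above can be illustrated with a Mann-Whitney U test in Python; the score vectors below are hypothetical placeholders, not the course data.

```python
from scipy import stats

# hypothetical quiz percentages for the 2015 and 2016 MS2 cohorts
scores_2015 = [82, 79, 85, 77, 88, 81, 76, 90, 84, 80]
scores_2016 = [78, 76, 83, 74, 85, 77, 75, 87, 80, 79]

u, p = stats.mannwhitneyu(scores_2015, scores_2016, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```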
RESULTS: There were no significant performance differences between 2015 and 2016 MS2s on quizzes 1, 3 and the final exam. On quiz 2, MS2s in 2015 did perform significantly better than MS2s in 2016, 82% vs. 78%, p = 0.011. The majority of students (75%) reported that the formative quizzes enhanced their mastery of course content. The number of psychological sessions requested by MS2s during the course dropped by 38% in 2016 compared to 2015.
CONCLUSION: Switching to formative-only mid-course assessments within one integrated MS2 course did not decrease performance on two of the quizzes or the final exam, and the majority of students were satisfied with the change. The decreased usage of psychological services is consistent with students reporting reduced stress. Similar analysis of other courses in the pre-clerkship curriculum is ongoing.
103 – Evaluating the effectiveness of cognitive integration of clinical skills and basic science in a medical school curriculum
Izumi Watanabe, Robin Ovitsh, and Lee Eisner
SUNY Downstate College of Medicine
PURPOSE: Integration of basic science and clinical knowledge is prevalent in medical school curricula, theorizing that creating relationships between basic science and clinical knowledge improves diagnostic reasoning skills and material retention. Previous studies demonstrate benefits of cognitive integration in isolated experimental settings, where integration and study subject recruitment occurred independently of educational curricula, solely for study purposes. We aim to implement an integration method that 1) can be incorporated into a current medical school curriculum, and 2) is robust in its level of basic science and clinical knowledge integration.
METHODS: This study focuses on purposefully integrated interdisciplinary sessions that teach the HEENT and cranial nerve exams. Students will be allocated to two groups. Clinicians and anatomy faculty will collaborate to teach the experimental group the exam maneuvers and relevant basic science concepts, focusing on causal mechanisms of pathologies and simultaneously reviewing a PowerPoint of relevant anatomy. The control group will learn the exam maneuvers from clinicians only and will receive the same PowerPoint later, post-initial assessment. Assessments for both groups will occur as a post-session test and questions built into routine end-of-unit summative examinations. All students will be assessed again 8 months later at the start of their neurology/psychiatry unit.
RESULTS: We will adapt tools utilized in previous literature to test the efficacy of the above methods to achieve cognitive integration: diagnostic accuracy multiple choice questions based on clinical vignettes, matching questions assessing recall of clinical features for specific pathologies, and diagnostic justification essays. These data will be presented.
CONCLUSIONS: The demands on faculty resources to achieve cognitive integration are challenging for any medical school, but it is essential that studies of the efficacy of cognitive integration of clinical skills and basic science be implemented within realistic contexts. Our approach and results could be applied to all organ and systems-based learning in UME.
104 – TEST-ENHANCED LEARNING IN HEALTH PROFESSIONS EDUCATION: A SYSTEMATIC REVIEW
Michael Green, Judy Spak, and Jeremy Moeller
Yale School of Medicine
PURPOSE: Assessment directly enhances learning through the retrieval effect, or test-enhanced learning (TEL). Studies in cognitive psychology demonstrate that students who engage in attempts to recall information show better learning, retention, and transfer than students who simply study the same material. We systematically reviewed TEL in health professions education.
METHODS: We searched 13 databases, manually searched 14 medical education journals, and screened reference lists of and articles citing the captured articles. We included controlled studies of TEL interventions that reported at least one learning outcome. To isolate the retrieval effect, we identified a smaller subset of studies in which the controls studied the same material that was tested in the cases. Two raters independently screened articles for inclusion and abstracted information from included articles. To allow comparisons among heterogeneous studies, we determined Cohen's effect size for the TEL interventions.
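Because the included studies reported results in different metrics, effect sizes can be computed from group summary statistics. A hedged sketch of that calculation, with invented means, SDs, and group sizes:

```python
import math

def cohens_d_from_summary(m1, sd1, n1, m2, sd2, n2):
    # standardized mean difference from reported group means, SDs, and sizes
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical retention-test summary: tested group vs. restudy control
d = cohens_d_from_summary(m1=78.0, sd1=10.0, n1=40, m2=72.0, sd2=11.0, n2=42)
print(f"d = {d:.2f}")
```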
RESULTS: 4,326 of the initial 4,342 articles were excluded, leaving 16 studies for review. Kappas for the inclusion process ranged from 0.79 to 0.88. The testing interventions included short answer questions (9), multiple choice questions (4), checklists for simulation (2), checklists for standardized patient encounters (1), and key features questions (1). All 6 of the studies that measured immediate effects (within 1 week) demonstrated the superiority of testing (ES 0.35 to 0.76). Of the 13 studies that examined delayed retention, 10 showed superiority of TEL (ES 0.18 to 0.93). Short answer questions produced better retention than multiple-choice questions in one study.
CONCLUSIONS: TEL demonstrates consistent and robust effects across many health professions. The effectiveness of TEL extends beyond knowledge assessed by examinations to clinical applications, such as interpreting radiographs, CPR simulation, standardized patient encounters, and clinical reasoning. Educators should consider TEL interventions to enhance recall and retention. Further studies should examine different types of TEL interventions and learning outcome assessments.
105 – THE EFFECT OF FORMATIVE USAGE ON SUMMATIVE GRADES FOR PRECLINICAL MEDICAL STUDENTS
Cristina Sorrento, Demitrios Dedousis, Riccardo Bianchi, and Bonnie Granat
College of Medicine, SUNY Downstate Medical Center
PURPOSE: Tremendous time and effort are invested in developing medical school curricula and accompanying tools. Furthermore, the proliferation of learning materials outside of the curriculum has opened the possibility that students can succeed without the use of curricular tools. This study investigates whether the use of one such curricular tool, weekly formative exams, has a positive effect on student outcomes as measured by unit summative scores.
METHODS: This study analyzed formative and summative data for 2 classes of preclinical medical students (N=373) at Downstate COM. Linear regression was used to examine the correlation between formative and summative scores. Students were then split into two groups based on whether a student scored, on average, above or below the guess rate on formatives: a group that took formatives as intended and a group that did not (i.e., those who guessed randomly). Linear regression was again used within each group to examine the relationship between formative and summative scores.
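A minimal sketch of the grouping-and-regression procedure described above, assuming a hypothetical chance-level (guess-rate) threshold and simulated scores rather than the Downstate data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated per-student mean formative (%) and summative (%) scores
formative = rng.uniform(10, 95, size=200)
summative = 55 + 0.3 * formative + rng.normal(0, 8, size=200)

GUESS_RATE = 20.0  # assumed chance-level score; the actual cutoff depends on the exams

for label, mask in [("above guess rate", formative > GUESS_RATE),
                    ("at/below guess rate", formative <= GUESS_RATE)]:
    if mask.sum() > 2:
        fit = stats.linregress(formative[mask], summative[mask])
        print(f"{label}: n = {mask.sum()}, r = {fit.rvalue:.2f}, slope = {fit.slope:.2f}")
```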
RESULTS: Preliminary results showed a correlation of r = .43 between formative and summative scores, suggesting that use of formatives as intended by faculty is associated with higher summative scores. Analysis of the data further showed that 71% of students achieved scores of >60% despite formatives being effectively optional.
CONCLUSION: This analysis can be used to hone tool usage within the curriculum. For instance, preliminary results from this study suggest that adding a minimum score to weekly formatives could increase student learning as measured by summative scores. Students would be encouraged to take formatives more seriously and therefore benefit more from their use. Students scoring below the minimum score could be obliged to review the exam with faculty members.
106 – PREDICTION OF EXAM PERFORMANCE IN THE ANATOMY COURSE BASED ON QUIZ RESULTS
Iuliia Zhuravlova
Trinity School of Medicine, St. Vincent and the Grenadines
PURPOSE: The purpose of this study is to investigate the correlation between quiz and exam performance in the Anatomy course in order to facilitate learning of the subject. The presence of a correlation will underscore the need for students to start early, active preparation for the exams, and will help to identify students who are likely to perform poorly.
METHODS: Anatomy quizzes are usually given 1 week prior to the exam and cover the material of the entire block. The purpose of giving quizzes prior to exams is to orient the students to the material, including the depth of information that might be tested in the exam, and to delineate the aspects of the material that a student might have missed in preparation for the quiz.
We analyzed data collected over a period of 2 years, including the quiz and exam results of 267 students in the Anatomy course. The data were analyzed in SPSS 15.0. Pearson's correlation analysis was followed by linear regression analysis.
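The correlation-then-regression workflow described above can be sketched as follows; the quiz and exam values are hypothetical (the authors used SPSS 15.0):

```python
import numpy as np
from scipy import stats

# hypothetical quiz and exam percentages for a handful of students
quiz = np.array([35., 42., 50., 47., 60., 38., 55., 45., 52., 40.])
exam = np.array([48., 52., 58., 55., 64., 50., 60., 54., 59., 51.])

r, p = stats.pearsonr(quiz, exam)
fit = stats.linregress(quiz, exam)

# predicted exam score for a student who scores 65% on the quiz
predicted = fit.intercept + fit.slope * 65
print(f"r = {r:.3f} (p = {p:.3f}); slope = {fit.slope:.3f}; predicted exam = {predicted:.1f}%")
```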
RESULTS: Analysis identified a weak positive linear correlation between quiz and exam performance (r = +0.357), which was statistically significant (p <0.001). Regression analysis was also statistically significant (p <0.001). We estimated that the exam score increases by 0.318 percentage points for each additional percentage point scored on a quiz above the quiz average of 47.325%.
CONCLUSION: Normally, if a student is well prepared for the quiz, the student has another week to review the material for better performance in the exam. If a student is unprepared by the time of the quiz, this will also be reflected in low exam performance. There is some increase in the exam score compared to the quiz, but usually there is no significant improvement in performance.
107 – A CROSS-SECTIONAL STUDY OF AFFECTIVE AND COGNITIVE EMPATHY OF THE OSTEOPATHIC CLASSES OF 2017-2020
Bruce Newton and Zachary Vaskalis
Campbell University School of Osteopathic Medicine
PURPOSE: These cross-sectional data will determine if cognitive and vicarious empathy scores change across the first three years of undergraduate osteopathic medical education; and how these scores correlate to sex and specialty choice.
METHODS: The CUSOM classes of 2017-2020 voluntarily took the Balanced Emotional Empathy Scale (BEES) and the Jefferson Scale of Physician Empathy (JSPE) surveys (n = 497/629). Students indicated their sex and specialty choice. Specialties were broken into "Core" and "Non-Core" groups, with Family Medicine, Internal Medicine, Ob/Gyn, Pediatrics and Psychiatry representing Core specialties. The other 18 specialties were Non-Core, e.g., Surgery, Radiology, Emergency Medicine. Scores were analyzed using SPSS.
RESULTS:
- With the exception of M2 JSPE scores, mean female BEES & JSPE Scores were significantly higher than mean male scores across all four years and across Core vs. Non-Core.
- There was no significant change in BEES or JSPE scores for Core women across all four years, or for JSPE scores for Core men across all four years.
- Non-Core men BEES scores were significantly lower for M1-M3 and M1-M4 comparisons; and JSPE scores were significantly lower for M1-M4, M2-M4 and M3-M4 comparisons.
CONCLUSIONS: For the 2017-2020 classes, there are no significant changes in empathy scores for men or women selecting Core specialties, or for women selecting Non-Core specialties. There were significant drops in Non-Core male BEES scores after finishing the M2 year, and in JSPE scores after finishing the M3 year. These data are markedly different from allopathic data, where there are dramatic declines in BEES scores after completing the first and third years of medical school (Acad. Med. 83:244-249, 2008).
108 – Do Medical Students’ Lecture Attendance and Study Habits Correlate with Exam Performance or Long-Term Retention of the Basic Sciences?
Stephen D. Schneid, Chris Armour, and Hal Pashler
University of California, San Diego (UCSD) Skaggs School of Pharmacy and Pharmaceutical Sciences and School of Medicine and Division of Biological Sciences
PURPOSE: Medical students are exposed to a large body of basic science concepts and facts in their preclinical years. While most students generally perform well on examinations and pass their courses, we do not know how much their lecture attendance or study habits affect their performance on exams or long-term retention of the basic sciences.
METHODS: In the spring of 2014, seventy-nine medical students ending their first year provided responses to a general survey regarding lecture attendance and study habits. In the fall of 2014, twenty-seven medical students entering their second year volunteered to participate in a study of long-term retention of the basic sciences by taking a "retention exam" after a delay of five to eleven months. They were unaware that they would be retaking multiple-choice questions (MCQs) from four of their first-year course final exams. They also filled out a survey regarding lecture attendance and study habits, similar to the one from spring 2014, and the responses were correlated with their initial performance and long-term retention.
RESULTS: The overall mean performance on the original 60 MCQs was 82.8% (SD = 13.1%) and fell to 50.1% (SD = 19.2%), with a mean retention of 54% (SD = 11.7%). The retention for individual students ranged from 34% to 80%. There was no correlation between any of the study habits or self-reported lecture attendance and either a student's original score or retention.
CONCLUSIONS: The results of this study highlight, as previous studies have done, that medical students forget a significant amount of basic science content between the first and second year of medical school. The novel contribution of this study to the long-term retention literature is that students' lecture attendance, other self-reported study habits, and even initial performance were not correlated with long-term retention.
109 – EXPERIENCES WITH A MCQ EXAM THAT WAS STRUCTURALLY ASKEW
Thejodhar Pulakunta, Ali Alkhawaji, and Gary Allen
Dalhousie University
PURPOSE: Multiple choice question (MCQ) exams are a staple of medical education. Their advantages and disadvantages have been debated in the literature, but their popularity is indisputable. We describe our experience with an MCQ exam that had a glitch in its structure.
METHODS: During the transition from a paper-based exam to an online exam, there was a buzz of excitement over how the questions would now be automatically shuffled and even the answer options would be randomized, reducing the possibility of cheating on the exams. However, due to an error in the settings, the 50-question exam was delivered so that every candidate had option C as the correct answer for every question. It was a high-stakes exam worth 25% of the final pass/fail grade for the course. The exam contained a mixture of question types, including recall, double-jump, and triple-jump formats.
RESULTS: Of the 80 students who took the exam, only a handful noticed the glitch during the exam and asked if something was wrong. There was no significant change in the class average, and the grade distribution followed the classic bell-shaped curve. Eleven students' grades showed a sudden, statistically significant boost, but the remaining students' grades showed no significant change compared with their grades on exams both before and after this one.
CONCLUSION: The incident reiterates the advantages of the MCQ format. It is an inherently standardized exam, and grades achieved on such exams are robust. The design of the questions is what characterizes the quality of the exam. Since the exams in this course traditionally had a reputation for being challenging and tough, that reputation may have weighed on the minds of the candidates, which could be a source of bias.
110 – Analysis of testing with multiple choice and open-ended questions: Outcomes-based observation in an anatomy course
Cheryl Melovitz-Vasan, David DeFouw, Bart Holland, and Nagaswami Vasan
Cooper Medical School of Rowan University and Rutgers New Jersey Medical School
PURPOSE: The purpose of the study is to compare performance on multiple choice question (MCQ) and open-ended question (OEQ) examinations administered to two different groups of incoming first-year medical students in a preparatory anatomy course.
METHODS: The students dissected and studied thorax and abdomen anatomy. In this study, two identical tests, each containing 30 questions, were used. In the first test, students were presented with clinical vignettes (OEQ) and a short answer was required. Since the class size was small, one instructor hand-graded all the answer sheets using a previously generated key. The second test was an MCQ examination with a "Scantron" answer sheet, on which students bubbled in circles for the selected answers, and was computer graded.
The students were given 45 minutes to complete each exam, and during a 5-minute break between examinations they were not permitted to consult each other or review any notes. The clinical vignettes and answers were reviewed by two anatomists and a clinician for accuracy. Mean scores from the OEQ and MCQ examinations were analyzed by a paired t-test.
RESULTS: In the thorax module, combined mean score for groups 1 and 2 on the MCQ examination (70.3 ± 10.7) was significantly greater (p <0.01) than the combined performance on the OEQ examination (58.4 ± 10.5). The combined mean scores on the OEQ and MCQ examinations were improved in the abdomen module but unlike the thorax module, the combined MCQ exam performance (80.9 ± 10.0) was not significantly greater than the combined OEQ exam performance (72.00 ± 14).
CONCLUSION: Performance on the abdomen OEQ examination was significantly improved in comparison to the thorax OEQ examination. Since the OEQ format created more challenging expectations, it appeared to motivate the students to develop and adopt an effective strategy for in-depth learning (conceptual understanding rather than memorization/surface learning).
111 – EARLY DETECTION AND FEEDBACK IMPROVE STUDENTS' SUCCESS IN THE NATIONAL LICENSE EXAMINATION
Sutheera Sangsiri and Pisit Wattanaruangkowit
Division of Pharmacology, Department of Preclinical Sciences, and Department of Radiology, Thammasat University
PURPOSE: Approximately 20% of our third-year medical students failed the comprehensive examination and the national license examination step 1 (NLE1) during the past few years. Our aim is to help students succeed in these two examinations by creating an effective early pretest and identifying failure factors so students can be alerted in advance.
METHOD: A 300-question pretest for all third-year students was created. The students took the pretest 2 months prior to the NLE1, and the total scores were reported to the students. After administering the pretest for the first time, we adjusted the pretest questions and score-reporting style to be more effective based on student feedback. Moreover, one-on-one interviews of students who previously failed the NLE1 were performed in order to identify learning behaviors and inform the current students.
RESULTS: There was a positive correlation among the pretest, comprehensive, and NLE1 scores. Students whose pretest scores were lower than 40% mostly failed the comprehensive exam and the NLE1. Our NLE1 passing percentage increased, but was still below that of the country. We modified the pretest questions based on the NLE1's table of specifications. We also reported the total test score and details of knowledge weaknesses for each student. The interview information was surprising: all seven participants who failed both the comprehensive exam and the NLE1 had graduated from top nationally ranked high schools, and their average GPA in year 1 was higher than 3.5. However, their GPAs declined significantly during years 2 and 3. The contributing factors were identified, and this key information was communicated early to the current second-year students. This year, our passing percentage increased and was above the country's average.
CONCLUSION: Early detection combined with effective feedback, along with advance notification of common learning mistakes, could help students succeed in the comprehensive and national license examinations.
112 – The Teaching Handoff: A better way to evaluate, assess, and give feedback to our residents
Tammy Sonn, Rebecca McAlister, and Carolyn DuFault
Washington University School of Medicine
PURPOSE: Faculty received residents' recent Milestones evaluations and shared written assessments via weekly teaching handoffs to improve their perceptions of their teaching, assessment, and mid-rotation feedback. Milestones were used to scaffold residents and familiarize faculty with these concepts.
METHODS: Faculty on the Benign Gynecology service completed a pre-survey and were then given training on a Milestone-based teaching handoff evaluation form that was used for the weekly evaluation of each resident. All attendings could view each resident's most recent Milestones and all other evaluations in the block prior to theirs. Mid-rotation feedback was given verbally based on the weekly evaluations; residents received all weekly evaluations electronically at the end of the rotation. After one academic year, faculty completed a post-survey.
RESULTS: Pre- and post-surveys were compared; p values are based on the McNemar test. No questions reached statistical significance; however, trends were seen. The teaching handoff increased attendings' comfort in giving feedback (66.7% vs. 90%), though it only helped to focus teaching as expected (83.3% vs. 80%). Attendings felt their ability to scaffold residents was unchanged (69.2% vs. 70%). Although attendings' understanding of clinical examples of Milestones remained the same (61.5% vs. 70%), they could more easily recognize clinical examples in residents' work to allow assignment of Milestone levels (38.5% vs. 89%). After working with Milestones, fewer attendings reported assuming a resident's level of competence based on PGY level (76.9% vs. 40%). Attendings underestimated the amount of time needed to give mid-rotation feedback (16.7% vs. 60%); slightly more felt these sessions were worth the time and effort for resident learning (91.7% vs. 80%).
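For context, the McNemar test used above compares paired dichotomous responses (e.g., agree/disagree on the pre- vs. post-survey). A small illustrative sketch with an invented 2x2 table of paired responses, not the study data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# hypothetical paired responses to one survey item
# rows: pre-survey (agree, disagree); columns: post-survey (agree, disagree)
table = np.array([[7, 1],
                  [4, 1]])

result = mcnemar(table, exact=True)  # exact binomial version, appropriate for small n
print(f"statistic = {result.statistic}, p = {result.pvalue:.3f}")
```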
CONCLUSIONS: This is the first description of a formal trial using scaffolding for teaching and evaluation in graduate medical education. Working with Milestones during routine clinical teaching would help all attendings better recognize the clinical scenarios that exemplify the Milestone outcomes.
113 – Development of an Assessment Rubric for EPA 5: Documenting clinical encounters during a Surgery Clerkship Clinical Skills Exam.
Tess Aulet, Jesse Moore, Cate Nicholas, and Michael Hulme
University of Vermont Medical Center, Clinical Simulation Laboratory at the University of Vermont, and Wake Forest School of Medicine
PURPOSE: The AAMC recently published Entrustable Professional Activities (EPAs) for undergraduate medical education. How the EPAs will be assessed and integrated into the curriculum remains to be determined. Effective documentation is an essential form of communication that promotes quality patient care and coordination of multidisciplinary teams. EPA 5 is "Document a clinical encounter in the patient record." Currently, University of Vermont (UVM) students complete two post-encounter notes during their surgery clerkship clinical skills exam, which are not being utilized for assessment. Our goal was to develop an assessment rubric for EPA 5 to be used during this exam.
METHODS: Using the AAMC EPA Curriculum Developers guide, specific EPA 5 functions and behaviors that could be assessed were identified and mapped to objectives. These served as the blueprint for the rubric. Rater training curriculum materials were developed, and note-writing curriculum materials were standardized across the clerkship. Medical education and clinical experts, as well as students, were involved in rubric review and development. Three raters piloted the rubric by retrospectively reviewing student notes. The rubric was presented to the UVM Clerkship Director's Committee and at the Surgical Education Research Fellowship fall meeting for review.
RESULTS: We developed a rubric that assesses students' documentation of history, physical exam, differential diagnosis, diagnostic justification, workup, and EPA-specific functions. The rubric generates a score that categorizes the note as entrustable, pre-entrustable, or below expectations, and also assigns a global note score. See attached rubric.
CONCLUSION: Assessment rubrics for EPAs are needed to address existing gaps. The development of this rubric demonstrates initial evidence to support content validity. We plan to prospectively apply the rubric to the surgery clerkship clinical skills exam post-encounter notes in order to collect additional validity and reliability evidence. Once validated, we hope to utilize it for feedback and assessment of EPA 5.
115 – INCREMENTAL EXERCISE TO DEVELOP CRITICAL REASONING SKILLS IN NOVICE MEDICAL STUDENTS
Cathy Buddenhagen Wilcox, Mary Lawhon Triano, and Kalman Winston
The Commonwealth Medical College and University of Liverpool Online
PURPOSE: Case-Based Learning has been shown to increase student satisfaction with learning. There is some evidence that it improves critical reasoning skills. However, research in this area is limited. We developed a critical reasoning exercise that increases in complexity and difficulty as students develop, in order to monitor and improve reasoning during the first year of medical school.
METHODS: A critical reasoning exercise with developmentally incremental structured reflections was developed for first-year medical students. Initially, the exercise, based on work by Silvia Mamede et al., required that students read a clinical vignette, generate a problem list with supporting evidence, and state their next steps. This exercise was modified to four steps: identification of key features, grouping of features into categories, stating two hypotheses about the underlying cause or problem, and generating follow-up questions that would help to differentiate between the hypotheses.
RESULTS: First-year medical students felt unprepared for the Mamede-based exercise, originally designed for year-four pre-clinical students at a Brazilian medical school. Their knowledge was insufficient and did not allow them to analyze the information, state the problem, or identify evidence. Our revised exercise, which is administered once per month, increases in complexity over time, and focuses on organizing and analyzing information, is well received and illustrates the students' acquisition of reasoning skills.
CONCLUSION: Most incoming students do not have the skills needed to develop a limited problem list with supporting evidence. However, over the first year, they can organize and analyze information, and develop hypotheses and follow-up questions. The increase in their competence in these areas demonstrates improved critical reasoning appropriate to their level of education.
116 – ASSESSING THE EFFECTIVENESS OF STUDENT PEER FEEDBACK IN UNDERGRADUATE MEDICAL STUDENT ORAL RESEARCH PRESENTATIONS
Tracey A.H. Taylor and Stephanie M. Swanberg
Oakland University William Beaumont School of Medicine
PURPOSE: Accreditation standards require medical schools to include both formative and summative student assessments in the curriculum. Medical schools often utilize peer feedback as a formative exercise for students to practice providing meaningful feedback to their peers; a skill they will need for productive inter-professional teamwork in their future careers. Many studies have investigated medical school peer feedback, but not for student presentations. The purpose of this study was to explore how second-year medical students used peer feedback to revise their oral research presentations across two consecutive research courses at a Midwestern medical school in the United States.
METHODS: All second-year medical students gave oral presentations of research projects in the "Techniques of Effective Scholarly Presentation" course in the winter semester of 2016. The best 15 students, as decided by a panel of expert judges, were invited to present a second time in the spring semester Research Colloquium. All second-year medical students provided feedback to peers at both oral presentation events. Comparative analysis of the student PowerPoint presentation files before and after receiving feedback was conducted. Narrative feedback from both peers and expert judges was coded for themes through constant comparison analysis, a grounded theory method.
RESULTS: Our sample included two sets of 15 PowerPoint files as well as over 720 peer and 122 judge comments that were categorized into eight major themes including presentation skills, visual presentation, and knowledge of topic. Analysis of the data is ongoing but initial results indicate that while students made only minimal changes to their PowerPoint presentations, they may have improved in presentation skills.
CONCLUSIONS: Peer feedback is an easy way to provide formative feedback to students and meet accreditation standards. The ability to effectively communicate research goals and findings is critical in medical school and in medical practice today.
117 – Predictive Value of Comprehensive End of Year One Exam for COMLEX Level 1 Performance
Kristie G. Bridges, Jandy B. Hanna, Predrag Krajacic, Lance Ridpath, Emily R. Thomas, and Raeann L. Carrier
West Virginia School of Osteopathic Medicine
PURPOSE: An End of Year Exam (EOYE) was developed to assess student retention of content delivered during the first year curriculum. The immediate goals of this project included exposing students to board-style questions and providing feedback regarding strengths or weaknesses on specific topic areas. A long-term goal was to develop an early indicator of student performance on COMLEX Level 1.
METHODS: A committee composed of biomedical and clinical faculty developed an exam which was administered in the final course of Year 1. The exam provided integrated, case-based questions derived from the first 10 modules. Exam items were tagged with disciplines and competency domains. Student performance was assessed with psychometric data, and students received performance data based on item tags.
EOYE and COMLEX Level 1 (COMLEX) first-attempt scores were compared to determine if the EOYE was a valid predictor of COMLEX performance. Likewise, the EOYE score was compared to COMLEX passage using logistic regression. Additionally, individual questions (n=141) were examined as predictors of COMLEX passing (odds ratio test) and COMLEX score (Student's t-test).
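A hedged sketch of the logistic-regression step (EOYE score predicting COMLEX passage) with invented scores and outcomes; an odds ratio like the one reported in the results corresponds to the exponentiated slope in a model of this form.

```python
import numpy as np
import statsmodels.api as sm

# hypothetical EOYE percentages and first-attempt COMLEX pass (1) / fail (0) outcomes
eoye   = np.array([55., 62., 70., 48., 80., 66., 74., 58., 85., 52., 77., 69.])
passed = np.array([0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1])

X = sm.add_constant(eoye)
model = sm.Logit(passed, X).fit(disp=0)

odds_ratio = np.exp(model.params[1])            # OR per one-point increase in EOYE score
ci_low, ci_high = np.exp(model.conf_int()[1])   # 95% CI on the odds-ratio scale
print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```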
RESULTS: EOYE and COMLEX scores were significantly correlated (r = 0.53, p <0.01). Likewise, EOYE score significantly predicted COMLEX passage (OR = 1.089, 95% CI [1.04, 1.14], Wald X2 = 13.23, p < 0.01). Individual items strongly predicting COMLEX performance were also identified.
CONCLUSION: The EOYE provided discipline and competency domain feedback to both students and faculty regarding student retention of Year 1 material. Continued refinement of exam items may yield an EOYE that is a valid and reliable predictor of student achievement on COMLEX. This process could be adjusted and implemented at other institutions with similar results.
118 – A Prematriculation Experience to Promote Growth Mindset Formation
Jennifer Montemayor and Amber Heck
Rocky Vista University College of Osteopathic Medicine and University of the Incarnate Word School of Osteopathic Medicine
PURPOSE: Academic mindset refers to student attitudes about learning, and can be categorized as a "fixed" or "growth" mindset. Pre-matriculation courses (PMCs) are offered to students transitioning into medical school to help them cultivate skills for success. Our PMC aims to promote critical self-reflection and a meta-cognitive approach to education. While not specifically designed to target mindset, we hypothesized that the metacognitive topics addressed in our PMC would result in an increased growth mindset.
METHODS: The PMC included topics of educational psychology, basic science, and clinical medicine delivered in standard formats, including lectures and standardized patient experiences. Course content included an introduction to educational psychology and application through vignettes, a logical reasoning exam, test-item training, and a structured clinical examination. Participants (n=18) self-selected to take part in the one-week PMC a week prior to orientation to the first-year medical curriculum. Pre-course and post-course mindset was assessed using the Mindset Assessment Profile (MAP) Tool from Mindset Works Educator Toolkit.
RESULTS: Pre-course assessment revealed that students enter medical school with a high MAP. Analysis of mean pre-course (33.78) and post-course (37.67) MAP revealed a significant increase after participating in the PMC (p <0.001). Spearman's rho correlation analysis revealed a significant positive relationship between pre-MAP values and course grades in the first course in medical school (r=0.623, p=0.006). Additionally, a significant negative correlation exists between pre-MAP values and the difference (delta MAP) between pre- and post-MAP scores (r=-0.556, p=0.016).
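The Spearman correlation reported above can be reproduced in a few lines; the pre-course MAP values and course grades below are hypothetical stand-ins, not the data from the n = 18 participants.

```python
import numpy as np
from scipy import stats

# hypothetical pre-course MAP scores and grades in the first medical school course
pre_map = np.array([30, 34, 36, 28, 40, 33, 38, 31, 35, 37, 29, 41, 32, 36, 34, 39, 27, 35])
grades  = np.array([82, 85, 88, 78, 93, 84, 90, 80, 86, 89, 79, 94, 83, 87, 85, 91, 76, 86])

rho, p = stats.spearmanr(pre_map, grades)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```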
CONCLUSION: Matriculating medical students may already benefit from a growth mindset. This study has demonstrated that, following a PMC aimed at promoting metacognition, student MAP increased significantly, especially for those who started with a lower MAP. In addition, MAP can be used to predict which students will benefit from metacognitive skills training and which students may be at risk in their early medical school career.
119 – ASSESSMENT OF ORAL AND PEER-GRADING FEEDBACK ON ESSAY PERFORMANCE WITHIN A BASIC SCIENCE MEDICAL SCHOOL COURSE
Jenny Fortun, Melissa Ward-Peterson, Dimitrios Ioannou, and Helen G Tempest
Herbert Wertheim College of Medicine, Florida International University
PURPOSE: Essays are part of the summative assessment in our courses. The essay goals are to evaluate students' critical thinking and ability to apply foundational knowledge to novel clinical scenarios. To facilitate the development of these skills and assist students to improve their essay performance, two forms of feedback were introduced: (i) oral presentation of the scoring rubric, and (ii) use of the scoring rubric to grade an anonymized peer's answer.
METHODS: Three essays were administered within one eight-week course; each essay required students to propose diagnostic possibilities (Q1) and diagnostic tests (Q2), interpret test results (Q3), and explain the molecular mechanism of disease/therapy (Q4). Feedback was given three days after exam administration. Q1 and Q3 received feedback (ii) at different times, whereas Q2 and Q4 always received feedback (i). Student performance and perception of feedback were evaluated by quantitative and qualitative methods.
RESULTS: Numerical grades significantly improved with each essay, specifically for Q1 and Q4. The effect size of question performance was higher for Q4 (feedback i) than for Q1 (feedback ii) when comparing the first and last essays. However, no significant differences were observed between the two types of feedback when comparing the same question. A class survey indicated that more than 72% of the class either agreed or strongly agreed that this form of feedback increased their understanding of question/faculty expectations, assisted them in identifying their own knowledge deficits, and helped them feel better prepared for the next essay after grading a peer's answer.
CONCLUSION: Feedback helps students gain confidence in their knowledge and ability to answer critical thinking questions. Although numerical grades improved over time, there were no differences related to the type of feedback used. The differences in effect size for question performance might be attributed to the type of question, together with the early developmental stage of the students, among other factors.
Poster Award Nominee
120 – VALIDATION AND PRELIMINARY RESULTS OF THE VTCSOM QUADRANT INSTRUMENT
Brock Mutcheson and Richard Vari
Virginia Tech Carilion School of Medicine
PURPOSE: The purpose of this study was to understand student perceptions of their proficiency and confidence on eight Virginia Tech Carilion School of Medicine (VTCSOM) institutional goals and objectives.
METHODS: The VTCSOM Quadrant Instrument (Quadrant) was designed to reflect the mapping of VTCSOM Educational Goals and Objectives to the six ACGME Core Competencies (Christianson, et al. Undergraduate Medical Education. Academic Medicine, 2007). Students shared perceptions of their level of proficiency and confidence on 42 items measuring eight different institutional goals. Quadrant was administered to matriculating and graduating VTCSOM students as a component of the program evaluation system. All students completed the survey anonymously. The instrument was validated using confirmatory factor analysis (Jöreskog, Psychometrika, 1969) and an analysis of internal consistency (Cronbach, Educational and Psychological Measurement, 2004). Preliminary descriptive statistics were summarized and compared across two time points (i.e. matriculation and graduation) by graduating class.
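The internal-consistency check reported below (Cronbach's alpha) can be computed directly from an item-response matrix. This sketch uses a handful of invented Likert responses rather than the Quadrant data, and it omits the confirmatory factor analysis step.

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert-style responses
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# hypothetical responses from 6 students on 4 of the 42 instrument items
responses = [[4, 5, 4, 5],
             [3, 4, 3, 4],
             [5, 5, 4, 5],
             [2, 3, 2, 3],
             [4, 4, 5, 4],
             [3, 3, 3, 4]]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```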
RESULTS: The instrument had a strong internal consistency (α = 0.9) and adequate model fit (RMSEA < 0.1). Although the average proficiency scores across all three graduating classes were similar within each time point (i.e. matriculation and graduation), students perceived themselves to be significantly more confident (p < 0.05) and more proficient (p < 0.05) immediately prior to graduating than students immediately prior to the first year of their medical education.
CONCLUSION: This analysis provided empirical evidence that VTCSOM students increased in proficiency and confidence on all eight over-arching VTCSOM institutional goals and objectives. Since students perceived lower proficiency and confidence on the same specific clinical skills across all graduating classes, extra academic support may be needed in these areas.
121 – EDUCATIONAL EFFECT OF ASSESSMENT: COMPARISON OF A SHORT ANSWER CLINICAL REASONING EXAMINATION WITH MULTIPLE-CHOICE QUESTION FORMAT IN AN ORGAN SYSTEMS MODULE
Carla Lupi, Helen G Tempest, Melissa Ward-Peterson, Rodolfo Bonnin, and Steven Ory
Herbert Wertheim College of Medicine, Florida International University
PURPOSE: The medical education literature has not addressed the comparative educational effect of closed-response (multiple-choice) and open-response (short answer/essay) exam formats (Hift 2014). It is increasingly important that medical educators use assessments that drive learners to maximize their learning. This exploratory work aims to compare the student-reported educational effect of these types of testing.
METHODS: For the last 3 years, an open-response summative "diagnostic reasoning examination" (DxRx), consisting of unfolding cases requiring short answers and a final explanation of pathophysiology, has been administered in the Reproductive Systems Module. The Module also uses an NBME summative examination. Each year, questions regarding student preparation processes for these examinations have been included in end-of-course evaluations. Narrative responses were examined qualitatively.
RESULTS: For each year, survey response rates were 47% (n=54), 82% (n=94), and 88% (n=106), respectively. The percentage of respondents who indicated that their study strategy for the DxRx differed from that for the NBME was 81.6%, 84.0%, and 92.4% each year, respectively. A total of 195 of 254 respondents provided comments comparing study strategies between the two formats. We categorized learning effects using 2 of the 6 categories delineated by Cilliers et al. (2012): cognitive processing and resource use. On cognitive processing, 30 students reported re-organizing course content by clinical presentation (rather than disease or basic science discipline) and described developing and/or practicing the generation of differential diagnoses for different clinical presentations. Fourteen students referenced more memorization of "key words" in NBME preparation. Regarding resources, 31 students reported studying the eight case-based discussions within the course and seeking other case-based resources. Nine students offered metacognitive insights pointing to more robust learning from studying for the DxRx.
CONCLUSION: The majority of students reported different approaches to studying for the DxRx. Narrative responses suggest approaches supporting robust learning, the majority through re-organizing material and constructing new knowledge.
122 – ASSESSING HISTOLOGY IN AN INTEGRATED MEDICAL CURRICULUM: DIGITAL SLIDEBOX AND BLOOM’S TAXONOMY
Allison K. Chatterjee and Sarah B. Zahl
Marian University College of Osteopathic Medicine
PURPOSE: At Marian University College of Osteopathic Medicine (MUCOM), histology is integrated into a systems-based curriculum. Students are assessed via quizzes after completing self-directed virtual histology laboratory modules created in Digital Slidebox (DSB). The goals of this study were to 1) determine if a relationship exists between student performance on histology questions, Bloom's Taxonomy (1956) level of the questions, and the type of image used in questions; and 2) determine if performance on DSB and Exam questions differed when comparing the categories in #1.
METHODS: Histology questions from 3 academic years (student N = 482) were classified into appropriate levels of Bloom's Taxonomy using a modified version of the Blooming Anatomy Tool (Thompson & O'Loughlin, 2014). Student performance data were analyzed to determine relationships and differences based on Bloom's Taxonomy levels and 1) different assessment types, 2) histology topic (across multiple assessment types), 3) image type, and 4) course topic (across both histology topic and assessment types).
RESULTS: Preliminary data analysis conducted on 755 multiple-choice histology questions showed that most questions (78%) were utilized in DSB quizzes. Most questions were classified as either Comprehension (31%) or Application (65%); and most questions included light micrographs (79%).
Performance on DSB quizzes (93.3%) far exceeds performance on exams (65.63%). Exam performance averages are higher for questions with light micrographs (68.25%) compared to electron micrographs (53.97%). Exam performance averages are highest on Knowledge (75.08%) questions, as compared to Comprehension (63.40%) and Application (66.68%) questions.
CONCLUSIONS: Low student performance on exam questions may be related to the dilution of histology content throughout the curriculum and students' perceptions regarding the relevance of histology in their medical training.
Adjusting Bloom's Taxonomy levels to more accurately represent the cognitive processes required to answer multiple choice histology questions will provide more consistent and higher quality student performance data.
123 – DESCRIPTIVE ANALYSIS OF MEDICAL STUDENT VISUAL ATTENTIVE TEST TAKING BEHAVIOR USING EYE TRACKING
Johnny Lippincott and Ryan D. Darling
University of Mississippi Medical Center
PURPOSE: Standardized examinations consisting of multiple-choice clinical vignettes play a central role in assessing knowledge and reasoning ability of medical students, residents, and physicians. Many academic institutions and private companies teach testwiseness, the ability to use content-independent strategies and cues to arrive at correct answers. However, empirical characterization of visual attentive behavior underlying these strategies and the use of cues is needed to elucidate the mechanism(s) underlying varying testwiseness abilities. Therefore, we used eye tracking glasses (ETGs) and a computer-based test to describe differences in gaze behavior.
METHODS: Participants were current medical students at the University of Mississippi Medical Center (UMMC). While wearing ETGs, each participant took two 10-minute computer-based examinations (ExamSoft): the first simulating USMLE Step 1 clinical vignettes; the second containing content-free questions intended to gauge testwiseness independent of subject matter. BeGaze software (SensoMotoric Instruments) was used to statistically analyze participants' gaze patterns. Participants completed questionnaires about past test-taking preparation and strategies, test anxiety, and learning styles. Admissions and curriculum data were assessed to substantiate findings in addition to the questionnaire data.
RESULTS: Preliminary results revealed discrete distributions in gaze patterns comprising entry times, dwell times, scan path, and revisits on visual areas of interest: the question stem (in whole and in part, such as the interrogative statement) and answer choices. Further analysis will be conducted to: a) incorporate admissions, curriculum, questionnaire, and past exam performance data into predictive statistical models and b) analyze gaze pattern differences in the context of test performance.
CONCLUSIONS: These data will help identify successful and efficient visual attentive behavior during test-taking. We will continue to characterize distributions of gaze patterns and test-taking with the ultimate goal of a) using these data to understand which visual behaviors are related to good test performance and b) teaching these behaviors to students in order to improve exam performance.
124 – A LONGITUDINAL STUDY OF AFFECTIVE AND COGNITIVE EMPATHY OF AN OSTEOPATHIC CLASS OF 2017
Bruce Newton and Zachary Vaskalis
Campbell University Jerry M. Wallace School of Osteopathic Medicine
PURPOSE: These longitudinal data will determine if cognitive and vicarious empathy scores change across the first three years of osteopathic medical education; and how these scores correlate to sex and specialty choice.
METHODS: The CUSOM 2017 graduating class voluntarily took the Balanced Emotional Empathy Scale (BEES) and the Jefferson Scale of Physician Empathy (JSPE) surveys (n = 124/156). Students indicated their sex and specialty choice. Specialties were divided into "Core" and "Non-Core" groups, with Family Medicine, Internal Medicine, Ob/Gyn, Pediatrics and Psychiatry representing Core specialties. The other 18 specialties were Non-Core, e.g., Surgery, Radiology, Emergency Medicine. Scores were analyzed using SPSS.
RESULTS:
- Mean female BEES & JSPE Scores were significantly higher than mean male scores across all four years (except for the M2 JSPE) and across Core vs. Non-Core.
- There was no significant change in BEES or JSPE scores for Core or Non-Core men or women across all four years.
- Except for Non-Core women, the largest drops occurred after finishing the first clinical (M3) year.
CONCLUSIONS: For the graduating class of 2017, empathy scores remained stable from entrance into osteopathic medical school (M1 scores) through the start of the senior year (M4 scores). Except for Non-Core women's JSPE scores, there is a trend for all scores to decline. These data are markedly different from allopathic data, where there are dramatic declines in BEES scores after completing the first and third years of medical school (Acad. Med. 83:244-249, 2008). The maintenance of affective and cognitive empathy scores suggests these osteopathic students may be better able to establish an empathic bond of trust with patients than the previously studied allopathic students.
125 – PEER TEACHERS WHO ARE TRAINED IN SPECIFIC ACTIVE AND SELF-DIRECTED LEARNING STRATEGIES HAVE A CLEARER ROADMAP FOR IMPROVING LEARNING ENVIRONMENTS
Chloe C. Read, Sarah E. Nguyen, Luke E. Sanders, Geoffrey T. Dorius, David A. Morton, and Jonathan J. Wisco
Brigham Young University and University of Utah School of Medicine
PURPOSE: Peer teaching provides an environment of learning accountability. We developed a pedagogical training program consisting of 12 weekly lessons and two assessments in which lecture (but not lab) peer teachers learned a variety of techniques related to active and self-directed learning. The goal of these trainings, based on Dee Fink's Taxonomy for creating significant learning experiences for students and teachers (Fink, 2003, 2013), was to increase understanding and implementation of effective peer teaching strategies in classroom teaching sessions.
METHODS: At the end of the Fall 2016 semester, when the training program was implemented, we asked both our novice lecture (5/5, 100% response) and lab (7/23, 30% response) peer teachers to complete an IRB-approved reflection through Qualtrics comparing and contrasting their skills as a teacher and as a learner between the beginning and end of the semester. We analyzed responses using a grounded theory approach, beginning with constructing a word cloud to reveal the most frequent words used in responses, which informed subsequent detailed thematic qualitative analysis.
RESULTS: Regarding teaching improvements, all peer teachers expressed a desire to improve student learning experiences, but lecture peer teachers described a clear roadmap to improved student learning that utilized specific active learning techniques for the upcoming semester, such as working with struggling students, leading thought-provoking discussions, creating group activities and games, implementing analogies, facilitating problem- and team-based learning, writing case studies and multiple choice questions, and promoting self-directed learning. Regarding learning improvements, all peer teachers discovered cognitive bridges that integrated their increasing anatomy knowledge with related concepts in other classes.
CONCLUSION: A pedagogical training program for peer teachers benefits all participants in the learning process. We showed that a pedagogical training program helps direct peer teachers to learn and implement specific techniques, such as active learning, which ultimately benefits students.
126 – Increasing feedback comments of student performance on the clinical evaluation form
Rebecca Bellini
Upstate Medical University
PURPOSE: Written feedback comments on clinical evaluation forms are important not only to students while on the clerkship1, but also for the MSPE letters2; however, too many evaluations submitted to the academic office were found to have few or no comments. The lack of comments left many students with little performance feedback and no content to draw upon for narratives.
METHODS: Upstate Medical University is located in the Northeast; average class size is 155 students. Intervention: In 2014, significant changes were made to the clinical evaluation form, including the removal of five separate comment boxes found after each competency domain. Two comment boxes, one for reinforcing feedback and the other for corrective feedback, were placed at the top of the new form, allowing evaluators to see them first. The Surgery clerkship piloted the new form in 2015/16, and a roll-out to all clerkships was approved for the 2016/17 AY. Evaluation comments were pulled by student and re-written in narrative format, then evaluated by total words. "Average word count/student (AWC)" and "total words/block (TW)" were compared prior to and following the intervention.
RESULTS: "AWC" increased 17% to 80% over 14/15, and was up 51% over 14/15 for the first 20 weeks of this year. "TW" increased 10% to 144% over 14/15, and 46% over 14/15 for the first 20 weeks of this year. "Total word count per total student" increased 28% over 14/15. All students in 15/16, and in the first 20 weeks of 16/17, received comments.
CONCLUSIONS: Preliminary findings indicate that a simple evaluation form intervention can have a positive impact on the amount of feedback comments. More comments translate into more performance feedback and narrative content for students.
127 – ASSOCIATION OF NBME PHYSIOLOGY PERFORMANCE WITH CLASS ATTENDANCE, MCAT SCORES, AND UNDERGRADUATE GPA
Raju Panta, Frances Jack, and Mignonette R Sotto
Trinity School of Medicine
PURPOSE: Many faculty members in medical schools encourage their students to attend class regularly, emphasizing that regular class attendance enhances their performance on exams. Some studies have reported that performance on the National Board of Medical Examiners (NBME) examination has a strong positive correlation with class attendance, Medical College Admission Test (MCAT) scores, and undergraduate grade point average (UGPA). Hence, we assessed the association of NBME physiology scores with class attendance (%), MCAT scores, and UGPAs.
METHODS: For this analytical comparative study, 93 medical students who completed two terms of medical physiology at Trinity School of Medicine (TSOM) without a medical leave of absence were selected. They took their first attempt at the NBME physiology examination between summer 2014 and fall 2015. The data were tabulated and descriptive analysis was done. Because the data were non-normally distributed, log and exponential transformations were also taken into account. The correlations of the NBME physiology scores with class attendance (%), MCAT scores, and UGPAs were determined by Spearman's correlation coefficient and multiple regression analysis, using SPSS version 24.
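A hedged sketch of the log-transformed multiple regression described above, with invented attendance, MCAT, and NBME values standing in for the study data:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# hypothetical class attendance (%), MCAT scores, and NBME physiology scores
attendance = np.array([60., 75., 90., 55., 85., 70., 95., 65., 80., 88.])
mcat       = np.array([499., 505., 512., 496., 510., 503., 515., 500., 508., 511.])
nbme       = np.array([62., 68., 78., 58., 75., 66., 82., 64., 72., 77.])

rho, p = stats.spearmanr(attendance, nbme)

# regress Ln(NBME) on Ln(attendance) and MCAT, mirroring the log-transformed model
X = sm.add_constant(np.column_stack([np.log(attendance), mcat]))
fit = sm.OLS(np.log(nbme), X).fit()

print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
print(f"multiple R = {np.sqrt(fit.rsquared):.2f}, F = {fit.fvalue:.1f}")
```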
RESULTS: The NBME physiology scores were significantly correlated with class attendance (%) and MCAT scores. The multiple regression analysis demonstrated that the best predictors of the natural logarithm (Ln) of NBME physiology scores are the Ln of class attendance (%) and MCAT scores, with a multiple correlation coefficient of 0.63, an F value of 29.55, and p < 0.001. Neither the UGPAs nor their Ln showed a significant correlation with the NBME physiology scores or their Ln.
CONCLUSION: Medical students with a higher percentage of class attendance and those with higher MCAT scores have higher NBME physiology scores, irrespective of their UGPAs. Implementation of class attendance policies in medical schools might enhance student performance on board exams.