100 – QUANTIFIABLE PREDICTORS OF STEP 2 CS SUCCESS
Vincent J. Sinatra, Milana Sapozhnikov, Jennifer Grossman, and Bonnie Granat
State University of New York Downstate College of Medicine
PURPOSE Previous literature has focused on predictors of success on USMLE Step 1 and Step 2 Clinical Knowledge (CK). While Step 2 Clinical Skills (CS) remains a mainstay for standardized evaluation of clinical skills, determinants of success on Step 2 CS have not yet been elucidated. We set out to 1) identify quantifiable factors that can predict future performance on Step 2 CS and 2) determine if a medical school curriculum that is competency-based with multi-modal clinical skills assessments provides predictors of students who are at risk of failing Step 2 CS.
METHODS Scores from 342 de-identified SUNY Downstate medical students were collected from pre-clinical, clerkship, and standardized examinations: CS Multiple Choice (CSMC) exams, OSCEs, NBME Clerkship Shelf exams, undergraduate GPA, MCAT, Step 1, Step 2 CK, and Step 2 CS results. Students were divided into two groups based on Step 2 CS performance: "Pass" (n=326) and "Fail" (n=16). An independent two-tailed t-test was performed to compare the two groups. Relationships with a p-value of <0.05 were considered significant.
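To make the statistical comparison concrete, here is a minimal sketch of an independent two-tailed t-test of the kind described above. The group sizes mirror the abstract, but all scores are simulated stand-ins, not the study's data.

```python
# Minimal sketch of the group comparison described above: an independent
# two-tailed t-test between Pass and Fail groups on one predictor (e.g., a
# shelf exam). All scores are simulated; only the test mirrors the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pass_scores = rng.normal(75, 8, size=326)  # hypothetical shelf scores, Pass group
fail_scores = rng.normal(68, 8, size=16)   # hypothetical shelf scores, Fail group

t, p = stats.ttest_ind(pass_scores, fail_scores)  # two-tailed by default
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```

With groups as unbalanced as n=326 versus n=16, Welch's variant (equal_var=False) is often preferred; the abstract does not state which form was used.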
RESULTS There was a significant difference in performance between the two groups on all shelf exams as well as Step 1 and Step 2 CK: medicine (p=0.001), surgery (p=0.007), ob/gyn (p=0.027), pediatrics (p=0.003), psychiatry (p=0.014), neurology (p=0.033), Step 1 (p=0.006), and Step 2 CK (p=0.001). The Pass group also performed better on all OSCE exams and preclinical CSMC exams and had higher undergraduate GPAs; however, these differences did not reach statistical significance.
CONCLUSION Students who passed Step 2 CS scored significantly higher across all core clerkships, and better performance on OSCE exams was associated with passing Step 2 CS. Focusing on early intervention for consistently lower shelf performers may address clinical skills deficits in those at risk of failing Step 2 CS.
101 – The Utilization of Peer Feedback During Collaborative Learning in Medical Education: A Systematic Review
Sarah Lerchenfeldt, Misa Mi, and Marty Eng
Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, and Department of Pharmacy Practice, Cedarville University
PURPOSE Peer evaluation can provide valuable feedback to medical students and increase student confidence as well as the quality of their work. The goal of this systematic review was to examine the utilization, effectiveness, and quality of peer feedback in a collaborative learning environment, specifically in medical education.
METHODS The PRISMA statement for reporting systematic reviews and meta-analyses guided the review process. Level of evidence (Colbert) and type of outcome (Kirkpatrick) were evaluated for each study. Two primary authors reviewed the articles, with a third resolving conflicting assessments.
RESULTS The final review included 31 studies. The majority (n=27) examined student peer feedback in a classroom setting; other settings included clinical and OSCE environments. Peer feedback was evaluated through collaborative learning activities integrated into preclinical courses (n=7) and clerkships (n=2), although many studies lacked clear information on where in the curriculum peer feedback was evaluated. Problem-based learning and team-based learning were the most common collaborative learning settings. Eleven studies reported that students received instruction on how to provide appropriate peer feedback. Seventeen studies evaluated the effect of peer feedback on professionalism; 12 of those evaluated its effectiveness for assessing professionalism, and seven evaluated its use for developing professional behavior. Ten studies examined the effect of peer feedback on student learning, and six examined its role in team dynamics.
CONCLUSIONS This systematic review indicates that peer feedback in a collaborative learning environment may be a reliable assessment of professionalism and may aid in the development of professional behavior. The review suggests directions for further research on the impact of peer feedback, including the effectiveness of instructing students in how to give appropriate feedback.
102 – An innovative self-directed video-based course to improve medical students' note writing
Sarah Scott, Valeriy Kozmenko, Valerie Hearns, and Brian Wallenburg
University of South Dakota Sanford School of Medicine
PURPOSE The high-stakes Objective Structured Clinical Examination (OSCE) is a required test at the University of South Dakota Sanford School of Medicine (USD SSOM) that medical students must pass at the end of Pillar 2. It closely resembles the United States Medical Licensing Examination Step 2 Clinical Skills (USMLE Step 2 CS) national board exam. The OSCE consists of a series of standardized patient (SP) encounters, each followed by writing a patient note. In the past, approximately 75% of Pillar 2 students needed to remediate the note-writing component, and nationwide there is a trend of increasing failure rates on the note-writing portion of the USMLE Step 2 CS exam. USD SSOM developed an Enhanced Patient Note Writer (EPNW) program for students to practice writing notes, but by itself it has not improved students' note-writing skills as tested at the OSCE. We hypothesize that supplementing the EPNW with individualized feedback on note-writing skills will improve student performance on the high-stakes OSCE.
METHODS A collection of video cases was created showing an interaction between a student doctor and an SP. Medical students will watch a video, gather the history and physical exam information, and then use the EPNW to write a patient note within 10 minutes, the time allotted under USMLE Step 2 CS requirements. After a note is completed, a note-grading checklist becomes available. Students will self-grade their notes, and the scores will be saved on the server. Researchers will monitor students' progress during three months of preparation for the OSCE.
RESULTS We expect students' note-writing scores to improve due to the individualized feedback provided by the software.
CONCLUSION An automated system that provides individualized feedback to medical students on their note-writing skills may improve their performance in handling patient documentation and offset the high cost of using SPs and faculty instead.
103 – EVEN FLOW: DETERMINING CHARACTERISTICS OF EFFECTIVE TEAMWORK IN THE OPERATING ROOM WITH FOCUS GROUP INTERVIEWS
Deborah D. Garbee, Laura S. Bonanno, Kathryn E. Kerdolff, and John T. Paige
Louisiana State University Health Sciences Center-School of Nursing in New Orleans and Louisiana State University Health Sciences Center-School of Medicine in New Orleans
PURPOSE: Effective teamwork is essential for safe, quality care in the operating room (OR). In practice, however, OR team function is often hampered by differing perceptions of what constitutes effective teamwork among the various professions. We investigated whether common ground exists among OR personnel regarding the importance of certain team-based competencies.
METHODS: Semi-structured focus group interviews were conducted involving the various members of an interprofessional OR team (surgeons, anesthesiologists, nurse anesthetists, surgical technicians, and circulating nurses). For each focus group, participants were asked questions related to teamwork and team function in the OR. Responses were digitally recorded and transcribed. Qualitative analysis was undertaken by two reviewers who identified major themes related to effective teamwork. Intercoder agreement was employed to confirm findings and themes. Data collection continued until data saturation was obtained.
RESULTS: Three focus groups involving 14 individuals (2 surgeons, 1 anesthesiologist, 8 nurse anesthetists, 2 circulating nurses, and 1 surgical technician) were conducted over a 1-month period in 2017. Four major themes related to effective teamwork emerged from the analysis: (1) smooth flow, (2) united effort, (3) communication, and (4) positive attitude.
CONCLUSION Among the various professions in the OR, agreement regarding effective teamwork centers on the concepts of procedural flow, unified effort, clear communication, and a positive team attitude. These findings will be used to help design a quick assessment tool for formative debriefing in the OR.
Poster Award Nominee
104 – One step ForWard and NO steps back! NBME subject exam scores and transition to an integrated clinical curriculum
Kirstin Nackers, Raquel Tatar, Eileen Cowan, Laura Zakowski, Katharina Stewart, Sarah Ahrens, David Tillman, Laura Jacques, and Shobhina Chheda
UW-Madison SMPH
PURPOSE We used NBME subject exam scores and pass rates to study how moving from a traditional clerkship model to integrated clinical blocks affected student acquisition of medical knowledge. Recent AAMC data show that two-thirds of medical schools are currently undergoing or planning substantial curriculum changes (1). During periods of curricular change, it is desirable to track assessment outcomes to inform leadership of possible negative impacts on students during the transition.
METHODS The Legacy Curriculum at the University of Wisconsin School of Medicine and Public Health followed the standard medical school educational model, including traditional, department-based clinical clerkships. The ForWard Curriculum began in 2016, and its students entered new integrated clinical experiences in January 2018. Our Neurology and Psychiatry clinical experiences are integrated with Internal Medicine in Acute Care; Family Medicine and Ambulatory Internal Medicine are integrated in Chronic and Preventative Care; OBGYN and Pediatric experiences are integrated in Care Across the Life Cycle; and Surgery, Anesthesia, and other procedural specialties are integrated in Surgical and Procedural Care. We compared NBME scores and passing rates between the final cohort of Legacy M3 students and the first cohort of ForWard students for Adult Ambulatory Medicine, Medicine, Neurology, OBGYN, Pediatrics, Psychiatry, and Surgery. Institutional passing thresholds were not changed.
RESULTS NBME scores and passing rates to date for the ForWard cohort demonstrate no statistically significant differences from those of the Legacy cohort at the p<0.05 level. Further monitoring of these data is necessary; however, these results indicate no substantive negative effect on student acquisition of medical knowledge during the transition to the integrated curriculum.
CONCLUSIONS Limitations include a relatively small sample size, as only two cohorts of students were examined. Strengths include the use of the same standardized assessment measures across cohorts.
References: (1) "Curriculum Inventory and Reports (CIR) – Initiatives – AAMC." Association of American Medical Colleges, aamc.org/initiatives/cir/427196/27.html. Accessed 15 October 2018.
105 – Voluntary Audience Response System Self-Assessment Quiz Performance Correlates with Exam Performance in Biochemistry and Genetics
Chantal Richardson and Emmanual Segui
Alabama College of Osteopathic Medicine
PURPOSE Many medical schools use audience response systems to engage students in active learning during didactic sessions. Our study examines medical students' voluntary use of audience-response-system self-assessment quizzes as a checkpoint for comprehension and learning of lecture material presented during a large biochemistry/genetics class. We hypothesized that students would actively participate in the self-assessment quizzes and that performance on the quizzes would correlate with exam performance.
METHODS First-year osteopathic medical students (n=160) were given voluntary audience response self-assessment in-class quizzes during each biochemistry/genetics lecture. The course is divided into five educational blocks, each containing an average of four to seven biochemistry/genetics lectures, and at the end of each block students were assessed with a culminating exam. Participation and performance scores on the self-assessment quizzes during each educational block were recorded via the TOP HAT audience response system. The average quiz performance score was plotted against the average biochemistry/genetics exam performance for each of the five educational blocks.
RESULTS The average voluntary participation rate on the self-assessment quizzes was 81.2%. The average scores on the self-assessment quizzes were statistically similar to the average exam score for each educational block. Using the Pearson correlation, we found that average audience response self-assessment quiz performance was significantly (p=0.41) correlated with average exam performance (r=0.89).
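As an illustration of the block-level correlation reported here, the sketch below computes Pearson's r across five paired block averages; the numbers are invented stand-ins, not the study's values.

```python
# Illustrative Pearson correlation across the five educational blocks:
# average self-assessment quiz score vs. average exam score per block.
# The paired values below are made up for demonstration.
from scipy.stats import pearsonr

avg_quiz = [78.0, 81.5, 80.2, 83.1, 83.3]  # hypothetical block quiz averages (%)
avg_exam = [80.1, 83.0, 81.4, 85.2, 84.9]  # hypothetical block exam averages (%)

r, p = pearsonr(avg_quiz, avg_exam)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Note that with only five block-level pairs, even a large r carries wide uncertainty, which is worth keeping in mind when interpreting the reported coefficient.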
CONCLUSIONS Audience response system self-assessment quizzes allow students to participate actively in the biochemistry/genetics didactics. This study shows that students' performance on voluntary self-assessment in-class quizzes is closely correlated with performance on the corresponding exams. With additional studies, we hope to show that the in-class quizzes can be used to accurately predict how students will perform on exams.
Poster Award Nominee
106 – DO MCQ QUIZ SCORES PREDICT STEP 1 SCORES TO THE SAME DEGREE IF THE QUIZZES ARE NO LONGER PART OF THE COURSE GRADE?
Kathryn Moore, Karly Pippitt, Candace Chow, and Jorie Colbert-Getz
University of Utah School of Medicine
PURPOSE Scores from multiple-choice question (MCQ) assessments share a moderate-to-strong relationship with Step 1 scores. It is unknown if this relationship varies by the stakes of the assessment (i.e., whether the assessment is graded). If scores from graded assessments predict Step 1 scores, but scores from identical but ungraded assessments do not predict Step 1 scores, this may suggest students are not motivated to perform their best for ungraded/formative assessments. If, however, there is no difference in predictive ability, this would provide evidence that assessments offer valuable learning experiences even if ungraded and formative in nature.
METHODS Participants were University of Utah SOM students matriculating in 2014 (N=100) and 2016 (N=131). Students in the graded cohort completed 17 MCQ quizzes across six courses in years 1-2, while students in the ungraded cohort completed 19 MCQ quizzes. Both cohorts completed Step 1 at the end of year 2. Quiz performance contributed 36%-50% of course grades for 2014 matriculants (graded cohort). The testing environment and the majority of quiz questions were the same for the 2016 matriculants, but performance did not contribute to course grades (ungraded cohort). Correlations between quiz and Step 1 scores were computed; linear regression was used to determine whether quiz scores significantly predicted Step 1 scores.
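A minimal sketch of the analysis pipeline described here, correlation followed by simple linear regression with an F-test, is shown below on simulated scores; the variable names and values are assumptions for illustration.

```python
# Sketch of the analysis described above: correlate average quiz scores with
# Step 1 scores, then regress Step 1 on quiz scores and report the F-test.
# Scores are simulated stand-ins, not student records.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
quiz = rng.normal(81, 6, size=100)                      # mean quiz score (%)
step1 = 120 + 1.2 * quiz + rng.normal(0, 10, size=100)  # simulated Step 1 scores

r = np.corrcoef(quiz, step1)[0, 1]
model = sm.OLS(step1, sm.add_constant(quiz)).fit()
print(f"R = {r:.2f}; F(1,{int(model.df_resid)}) = {model.fvalue:.2f}, "
      f"p = {model.f_pvalue:.3g}")
```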
RESULTS Average quiz scores were 81% (SD=6%) for the graded cohort and 80% (SD=6%) for the ungraded cohort. There were strong correlations between quiz and Step 1 scores for the graded cohort (R=0.79) and the ungraded cohort (R=0.63). Quiz scores significantly predicted Step 1 scores for the graded cohort, F(1,98)=165.81, p<0.001, and the ungraded cohort, F(1,129)=84.59, p<0.001.
CONCLUSION Regardless of whether quizzes were used for grading or formative purposes, quiz performance was strongly related to, and predictive of, Step 1 performance. This may suggest that assessments need not count toward course grades in order to motivate students.
107 – THE USE OF REFLECTIVE EPORTFOLIOS AS AN ASSESSMENT TOOL OF LEARNER DEVELOPMENT FROM UNDERGRADUATE THROUGH TO POSTGRADUATE STUDY
Paula Smith and Uzma Tufail-Hanif
University of Edinburgh
PURPOSE The ePortfolio is increasingly utilised in medical school curricula and postgraduate training programmes to monitor the development and improvement of learners. Reflective ePortfolios have been used since 2013/14 in a two-year online part-time ChM in Urology programme at our institution, requiring postgraduate students to learn from professional experiences outwith formal teaching. This year, we introduced a similar assessed ePortfolio to the new BMedSci Surgical Sciences course for undergraduate medical students. Here, we evaluate its perceived effectiveness in developing student autonomy and self-reflection skills.
METHODS In 2017/18, students enrolled on the two courses (n=12 undergraduate and n=12 postgraduate) were asked to critically reflect on their experiences, actions and learning under the headings: Quality Improvement and Patient Care; Literature Evaluation Skills; Research and Experimental Design; Self-Learning Abilities and Habits; and, for ChM only, Teaching Skills. Anonymised, interim student feedback on the process has been gathered using online questionnaires.
RESULTS Eleven students completed the questionnaire (46% response rate); undergraduate and postgraduate responses were essentially comparable. None of the respondents had any previous experience of keeping a portfolio; all 11 indicated it was a new learning experience for them. Encouragingly, no one declared anxiety about revealing their weaknesses. The majority of respondents find the ePortfolio helps them reflect on their approach to study (82%) and identify where they need to improve (55%). Two-thirds (7/11) rate the ePortfolio as "beneficial and challenging". Whilst 27% found it "difficult and confusing", most described it as "valuable" (73%).
CONCLUSION Reflective ePortfolios provide opportunities to give students autonomy and to develop their learning across and between different courses, development they can continue into postgraduate training and beyond. To be effective, teaching staff need to provide detailed guidance on the assessment requirements, such as exemplars, given that ePortfolios are likely to be a novel endeavour for many students.
Poster Award Nominee
108 – A BENCHMARK-ANCHORED PATH FOR SUCCESS ON STEP 1 AND LEVEL 1 LICENSURE EXAMS
Maria Cole, Kerin Fresa, Marcus Bell, Dawn Shell, and Linda Adkison
University of Missouri School of Medicine, Philadelphia College of Osteopathic Medicine, and Trinity Medical Sciences University School of Medicine
PURPOSE Residency program directors increasingly use USMLE Step 1 or COMLEX Level 1 scores as an important screening point for applicants. Low scores and repeated failures can result in poor outcomes for students. Thus, developing data-supported benchmarks that identify student progress and provide evidence of "preparedness" leads to better-prepared students and better individual and institutional outcomes. This study is a collaboration among three medical schools to compare data and develop shared best practices.
METHODS Data were collected for analysis from a US 6-year allopathic medical school, an osteopathic medical school, and a Caribbean allopathic medical school. All schools collect data for students taking USMLE Step 1, and one school collects data for students taking COMLEX Level 1. These data include course performance, NBME discipline exam results, commercial 8-hour simulation exam results, an internal comprehensive exam, and NBME Comprehensive Basic Science Exam results. Data for multiple cohorts were combined, and association studies were performed to determine Pearson's coefficient and its significance.
RESULTS For COMLEX Level 1, specific course performance was associated with Level 1 scores, and COMSAE D, an internal comprehensive exam, and an 8-hour COMBANK exam were correlated with Level 1 scores. For USMLE Step 1, NBME basic science exams and a commercial 8-hour exam were correlated with Step 1 scores, as was the NBME Comprehensive Basic Science Exam. These results led to institutionally determined benchmarks for each student to achieve prior to registering for USMLE Step 1 or COMLEX Level 1, and the benchmarks led to improved student performance in both first-time pass rate and mean score.
CONCLUSION These studies demonstrate the power of shared information and collaboration between schools and support schools' efforts to promote students' success.
109 – PRELIMINARY ANALYSIS OF THE NEW MCAT, STUDENT PROGRESSION, AND USMLE STEP 1 PERFORMANCE
Linda R Adkison
Trinity Medical Sciences University School of Medicine
PURPOSE In 2015, a new Medical College Admission Test (MCAT) was introduced in the United States. Many schools now include more non-cognitive measures of applicants, but the MCAT remains a prominent tool. This study assesses student progression and Step 1 performance in relation to new MCAT scores.
METHODS Data from students matriculating in 2016 were collected: matriculation date, undergraduate grade point average, new MCAT score, clinical clerkship entry date, and first-attempt Step 1 performance. Forty-four of 106 students met the inclusion criteria for the study. MCAT scores were sorted into Upper, Middle, and Lower ranges and compared with data presented at the AAMC Annual Meeting in November 2018. Clerkship start dates were then reviewed for each score range.
RESULTS The majority of Trinity matriculants had MCAT scores in the Middle (495-504) and Lower (472-495) thirds of the score range. US schools admit a majority of students from the Upper third (505-528), whereas Trinity matriculated about 10% of students in this group. Among the 2016 cohorts, 55.8% of students had MCAT scores in the Lower third. There was greater attrition and delay in starting clerkships in this cohort, but students completing Step 1 in all three cohorts had a 100% pass rate; AAMC data demonstrate an 80-98% pass rate. All students in the Upper cohort who took Step 1 started clerkships on time, compared with 90% and 67% of students with Middle and Lower scores, respectively. The AAMC reports that in larger US cohorts, 95%, 93%, and 70% of students in the corresponding groups start clerkships on time.
CONCLUSION Data show that students with low MCAT scores and acceptable grade point averages can be successful in a medical curriculum and pass Step 1 on the first attempt.
Poster Award Nominee
110 – Leveraging Student Perception of Assessment Performance to Support Learning
Brock Mutcheson, Andrew Binks, Renee Leclair, and Emily Holt
VTCSOM
PURPOSE: The overall goal of the study was to determine and refine the efficacy of student-support services intended to improve student performance. For this particular analysis, the research team investigated and described the extent to which student perceptions of their own assessment performance were linked to data aberrations detected using traditional data forensics.
METHODS: The study so far has included 102 first-year medical student observations and seven assessments: three formative and four summative exams. In total, there have been 1090 item-level observations. During exam review sessions, participating first-year medical students indicated, on a questionnaire developed through a literature review conducted by the research team, the type of error they believed they had made. Errors were categorized into two major groups: Type A, a test-taking error (e.g., poor question interpretation, misreading a question), or Type B, a lack of content mastery (i.e., they did not know the answer). Student perceptions were investigated by student demographic and academic performance characteristics and various item-level characteristics. For this analysis, the team focused on data aberrations detected on individual items that an examinee was anticipated to answer correctly based on response patterns but actually answered incorrectly (Meijer 1994). The estimated error rates were then associated with the preliminary summary findings.
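The abstract cites Meijer (1994) for its aberration detection but does not specify the statistic; as a hedged illustration, the sketch below counts a simple Guttman-style aberration, items easier than an examinee's overall ability that the examinee nonetheless missed, on a simulated response matrix.

```python
# Hedged sketch of one simple person-fit style aberration count, in the spirit
# of the detection cited above (Meijer 1994): flag items an examinee missed
# even though the item was easy relative to the examinee's overall ability.
# This is an illustration on simulated data, not the study's forensic method.
import numpy as np

rng = np.random.default_rng(2)
responses = (rng.random((102, 40)) < 0.75).astype(int)  # simulated 0/1 item matrix

item_p = responses.mean(axis=0)   # proportion correct per item (easiness)
ability = responses.mean(axis=1)  # proportion correct per examinee

# An item is "anticipated correct" for an examinee when its easiness exceeds
# the examinee's overall proportion correct; missing such an item is aberrant.
anticipated = item_p[None, :] > ability[:, None]
aberrant = (anticipated & (responses == 0)).sum(axis=1)
print("mean aberrant responses per examinee:", round(aberrant.mean(), 2))
```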
RESULTS: Several data aberrations were identified, and the error rate was found to be significantly correlated with multiple Type A error categories (i.e., poor test-taking) identified by students.
CONCLUSIONS: This analysis provided validity evidence for one intended use of the questionnaire and promising evidence of students' abilities to recognize and explain assessment errors. Moreover, it demonstrated the value of adding an alternative post-hoc measure to the assessment feedback process, with the ultimate goal of improving learning support.
111 – DIDACTIC-YEAR ORAL EXAMINATION GRADES AS PREDICTORS OF PERFORMANCE ON FUTURE WRITTEN LICENSURE EXAMINATIONS
Ellen D. Feld, Patrick C. Auth, Charles Stream, and Daniela Cheles-Livingston
Drexel University
PURPOSE Oral examinations have been used for many decades in medical education. The validity and reliability of traditional oral examinations, in which student and examiner engage in a relatively free-form discussion, have been questioned. Structured oral examinations, in which the examiner asks the student a predetermined set of questions, have been found to be more reliable. Our didactic-year Clinical Medicine I and II courses use structured oral examinations in which each student discusses one disease from a predetermined list and faculty grade using a checklist. We examined the relationship between oral examination grades and subsequent scores on the Physician Assistant National Certifying Examination (PANCE), which students take after graduation.
METHODS We analyzed Clinical Medicine I and II oral examination grades and PANCE scores for the 352 students in the graduating classes of 2013-2017.
RESULTS Both Clinical Medicine I and Clinical Medicine II oral examination grades have a small but significant correlation with PANCE scores.
CONCLUSION Clinical Medicine I and II oral examination grades are weak predictors of PANCE performance. There are several possible reasons for the lack of a strong correlation. Each student receives a randomly selected oral examination topic, so not all students receive the same examination, and any individual student might score differently if given a different topic; the PANCE, by contrast, is a comprehensive examination covering all body systems and medical tasks. An alternative explanation is that oral examinations may evaluate different skills than written tests, such as the ability to perform in a social setting and to think on one's feet.
Poster Award Nominee
112 – Retention of Medical Knowledge across the First Year of Medical School
Carrie Bailes, Mary Caldwell, Renee Chosed, Anna Blenda, and Matthew Tucker
University of South Carolina School of Medicine Greenville
PURPOSE Medical students are tasked with absorbing a vast amount of medical knowledge. Because of this, it is important to assess how much of that knowledge they retain as well as the depth of memory for that information. Here we retested students on a subset of questions from their Molecular and Cellular Foundations of Medicine summative exam 10.5 months after they first sat for the exam. In addition to re-assessing student performance using the same multiple choice format (cued recall), we also tested how well students could answer the questions without seeing the answer choices (free recall).
METHODS Second-year medical students (N=46, 25 female) reported to the same location as the original exam and used the same testing software. Fifty of the 104 questions were selected from the original exam based on whether the item could be answered from memory without seeing the answer choices. Each question stem was displayed with a text box below it to allow free recall of an answer. After an answer was entered, the original answer choices were shown, and students selected the best answer.
RESULTS The students' average on the original exam was 87.2±5.5%. At retest, 10.5 months later, students answered 53.9±9.6% of the items correctly, which was 62% of their original scores. Free recall rates (i.e., recollection without seeing the answer choices) were considerably lower, with students correctly answering 15.8±9.2% of the questions. Initial exam performance did not correlate with retest scores.
CONCLUSIONS Understanding how much information is retained in medical school is important for evaluating teaching effectiveness and the difficulty of course content. Understanding differences in free vs. cued recall may provide insights about how deeply information is memorized. Testing protocols can be developed to compare retention rates between modules and across different time spans.
113 – Learner Assessment 2.0: Embedding Remediation into the Assessment Strategy
Leah Sheridan, Andrea Barresi, Paige Gutheil, and Sheila Chelimo
Ohio University Heritage College of Osteopathic Medicine and Ohio University
PURPOSE: Assessment is a systematic process that involves designing, collecting, interpreting, and using information to ascertain student performance in a learning environment, with the primary goal of improving students' learning and development. Assessment is not separate from the learning process but a crucial part of medical education, and a robust assessment program should strategically and seamlessly embed learner remediation.
METHODS: The Ohio University Heritage College of Osteopathic Medicine launched its novel Pathways to Health and Wellness Curriculum in Fall 2018, emphasizing integration of foundational science and generalist clinical concepts, application to patient care, assessment for learning, and personalization of the educational experience. The first of four semester-long courses, "Wellness," employs integrated assessments that learners use to hone their Osteopathic core competencies for practice over time. Each assessment is designed for learning and creates opportunities for learners to monitor their progression toward specific learning outcomes. This formative assessment strategy encompasses a series of low-stakes quizzes and exams designed to assess learner mastery of new content as well as retention of content from the previous quiz or exam.
RESULTS: For each assessment, faculty evaluated the extent to which students mastered both the new and preceding content to identify learner areas of mastery and for remediation. Each subsequent assessment is then informed by learner performance, personalized to prospective areas for improvement, while concurrently assessing new knowledge and skill.
CONCLUSION: With future cohorts, the Wellness Course aims to personalize exams at the level of the individual learner by offering each learner the opportunity to demonstrate improvement upon past performance. We believe that the merging of assessment and remediation strategies fosters the culture of life-long learning and improvement expected of medical professionals today.
Poster Award Nominee
114 – CAUSES OF VARIATION IN THE PREDICTIVE VALIDITY OF FORMATIVE ASSESSMENTS IN AN ORGAN-SYSTEM-BASED PRECLINICAL CURRICULUM
Jason Booza, Paul Walker, and Matt Jackson
Wayne State University School of Medicine
PURPOSE In 2018, the Wayne State University School of Medicine launched a new preclinical organ-system curriculum with faculty-authored weekly formative assessments coupled to end-of-unit summative exams prepared through the National Board of Medical Examiners' customized assessment system. Preliminary evaluation indicates that while the formative assessments were moderately-to-highly predictive of summative performance, predictive validity varied between low and high performers: formative assessments tended to overpredict for low summative performers and underpredict for high summative performers. While the medical education literature is replete with assessment best practices, there is a paucity of studies addressing variation in learners' performance on formative assessments. Our purpose is to understand causes of variation affecting student learning and assessment and to address this gap in the literature.
METHODS A mixed-methods approach analyzed formative assessment practices routinely captured through ExamSoft. Data were collected from weekly formative assessments completed on or off campus during an open period from Friday afternoon to Sunday evening. Randomization was achieved with a stratified random sample survey of medical students across three levels of summative performance.
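For concreteness, the sketch below shows one way the stratified sampling step could be implemented, assuming a pandas DataFrame with one row per student and a hypothetical performance_level column; both names are illustrative, not taken from the study.

```python
# Sketch of stratified random sampling across three summative performance
# levels, assuming a DataFrame with a hypothetical 'performance_level' column.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "student_id": range(102),
    "performance_level": rng.choice(["low", "mid", "high"], size=102),
})

# Draw the same number of students from each performance stratum.
survey_sample = df.groupby("performance_level").sample(n=10, random_state=0)
print(survey_sample["performance_level"].value_counts())
```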
RESULTS We found that high summative performers took the formative assessments earlier and spent less time preparing. They were also more likely to use the post-exam review and performance summary features of the formative assessments in preparation for the summative exams, and they viewed the system as a valuable part of their learning strategy. Low and mid summative performers completed formative assessments late, and many reported that they did not utilize the post-exam review and performance summary features.
CONCLUSION The predictive validity of formative assessments appears to be affected by preparation strategy, use of the performance summary features, and self-fulfilling views of the formative assessment process. These findings may shape targeted interventions to improve student learning strategies.
115 – Medical Student Optimism: Hopefulness and Confidence about the Future as Impacted by Personality and Gender
Robert Treat PhD, Diane Brown MS, Jeff Fritz PhD, Koenraad De Roo, Amy Prunuske PhD, William J. Hueston MD, Kristina Kaljo PhD, Craig Hanke PhD, and Molly Falk-Steinmetz MS
Medical College of Wisconsin
PURPOSE: Optimism is hopefulness and confidence about one's future. It is an important facet of emotional intelligence¹ and well-being that helps medical students meet academic challenges² and think positively about graduation, residency, and independent practice. As an enduring intrinsic trait, optimism is linked to personal disposition and is therefore shaped by personality.³ The purpose of this study is to analyze the relationship between medical student optimism and personality as moderated by gender.
METHODS: In 2017/18, 205 of 500 M-1/M-2 medical students (106 males/99 females) voluntarily completed two self-reported surveys: (A) the Trait Emotional Intelligence Questionnaire (TEIQue-sf) to measure optimism (scale: 1=completely disagree/7=completely agree), and (B) the Five Factor Personality Inventory (scale: 1=very uncertain/5=very certain). Inter-item reliability was determined with Cronbach's alpha. Differences in mean scores were analyzed with independent t-tests and Cohen's d effect sizes. Pearson correlations (r) and stepwise multivariate linear regressions were used to predict optimism scores from personality. IBM SPSS 24.0 was used for statistical analysis. This research was approved by the institution's IRB.
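Two of the statistics named above, Cronbach's alpha and Cohen's d, are computed in the sketch below on simulated data; the group sizes follow the abstract, but the item responses and scores are invented.

```python
# Sketch of two statistics named above on simulated data: Cronbach's alpha
# for inter-item reliability and Cohen's d for the female/male difference in
# mean optimism. Item counts and scores are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
items = rng.integers(1, 8, size=(205, 8)).astype(float)  # 8 items, 1-7 scale

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

female = rng.normal(5.8, 1.0, size=99)   # hypothetical optimism scores
male = rng.normal(5.4, 1.3, size=106)
pooled_sd = np.sqrt(((len(female) - 1) * female.var(ddof=1)
                     + (len(male) - 1) * male.var(ddof=1))
                    / (len(female) + len(male) - 2))
d = (female.mean() - male.mean()) / pooled_sd
print(f"Cronbach's alpha = {alpha:.2f}, Cohen's d = {d:.2f}")
```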
RESULTS: Optimism (alpha=0.6) mean scores were significantly higher (d=0.3, p<.033) for female students (mean(sd)=5.8(1.0)) than male students (5.4(1.3)). Eighty-four percent of optimism scores were above the instrument's midline score of 4.0. Optimism was significantly (p<.050) correlated with neuroticism (r=-0.5), agreeableness (r=0.3), extroversion (r=0.3), and conscientiousness (r=0.2). In linear regression, female optimism (R²=0.38, p<.001) was predicted by neuroticism (beta=-0.5), extroversion (0.2), and openness (0.2); male optimism (R²=0.51, p<.001) was predicted by neuroticism (beta=-0.3), extroversion (0.2), and agreeableness (0.2).
CONCLUSIONS: Optimism scores were positive for most medical students but significantly higher for female students than male students, suggesting greater hopefulness and confidence in their future. Emotional personality traits such as neuroticism and extroversion directly impacted optimism for all students. However, different cognitive personality traits distinguished the genders: openness to experience supported female students' optimism, whereas agreeableness supported male students' optimism.