News

Reminder: Call for 2024 IAMSE-ScholarRx Student Educational Research Grants

Due January 15, 2024

As a reminder, IAMSE is currently soliciting applications for the IAMSE-ScholarRx Educational Research Grant Program.

Students must be members of IAMSE to be eligible to apply for this grant. Each student will need a faculty mentor to sign off on the proposal, confirming that all policies will be met. Proposals must be accompanied by a letter from an appropriate institutional official confirming that the institution will pay to send the student to the IAMSE meeting in the year following project completion to present the results of the proposed work; the timing of the presentation is flexible so as to be appropriate for the completion of the project. Up to four (4) student grants of up to $2,500 USD each will be awarded.

Applications are to be submitted on the submission page found here by 11:59 PM Eastern Time on January 15, 2024.

All information regarding the IAMSE-ScholarRx Educational Research Grant Program, including the application process, eligibility, proposal format, and evaluation criteria, can be found on the IAMSE website here. A template for the budget can be found here. Note that you may need to open the template in another window to download it. 

Call for 2024 IAMSE-ScholarRx Student Educational Research Grants

Due January 15, 2024

The International Association of Medical Science Educators (IAMSE) is pleased to once again accept applications for the IAMSE-ScholarRx Educational Research Grant Program.

Students must be members of IAMSE to be eligible to apply for this grant. Each student will need a faculty mentor to sign off on the proposal, confirming that all policies will be met. Proposals must be accompanied by a letter from an appropriate institutional official confirming that the institution will pay to send the student to the IAMSE meeting in the year following project completion to present the results of the proposed work; the timing of the presentation is flexible so as to be appropriate for the completion of the project. Up to four (4) student grants of up to $2,500 USD each will be awarded.

Applications are to be submitted on the submission page found here by 11:59 PM Eastern Time on January 15, 2024.

All information regarding the IAMSE-ScholarRx Educational Research Grant Program, including the application process, eligibility, proposal format, and evaluation criteria, can be found on the IAMSE website here. A template for the budget can be found here. Note that you may need to open the template in another window to download it. 

Registration is Now OPEN for the Winter 2024 Webcast Audio Seminar Series

COVID-19, cardiovascular disease, mental health, climate change, poverty, conflict, health care infrastructure, and an aging population are just some of the public health concerns that affect individuals across the globe. Amidst these global health concerns, there is a pressing need for our learners to not only be competent in caring for patients within their local communities but also to be poised for patient care in differing cultures and geographies. How do we train our learners for this immense task? The Winter 2024 IAMSE Webcast Audio Seminar series will explore the intersection of medical and health professions education with global health. Themes will range from the difficulties of life and medical practice in other parts of the world, to the unique challenges faced by migrant physicians, to medical education and scholarship in low-resource countries. The webinar series will help us recognize and appreciate our own biases as well as different perspectives on values and shared global challenges. Global health is the collaborative trans-national research and practice for improving health and achieving equity in health for all. This webinar series will equip health professions educators to train globally minded learners who will provide care for the medically underserved from pole to pole.

One World, One Health
Tackling the Global Health Crisis

Join us for each one-hour session beginning Thursday, January 4 at 12 PM EST. Sessions in the Winter 2024 series include:

  • January 4, 12 PM EST – Global Health Electives: the Good, the Bad, the Ugly presented by Jenny Baenziger
  • January 11, 12 PM EST – Climate Change and Human Health presented by Eugene Richardson
  • January 18, 12 PM EST – Global Approaches to Medical School Regulation: Who Wins? presented by Ahmed Rashid
  • January 25, 12 PM EST – Cultures and Practices for Global Inclusion in Health Professions Education Publishing: What Can Work presented by Anna Cianciolo, Peter de Jong, & Subha Ramani
  • February 1, 12 PM EST – Refracting Lenses – Seeing Women of Colour in Global Health presented by Thirusha Naidu

As always, student members of IAMSE can register for the series for FREE! Email support@iamse.org for details.

Last Call for #IAMSE24 Poster & Oral Abstracts

Deadline December 1, 2023

As a reminder, the call for abstracts for oral and poster presentations is now open for the 28th Annual IAMSE Conference to be held at the Hilton Minneapolis Resort in Minneapolis, Minnesota, USA from June 15-18, 2024. The IAMSE conference offers opportunities for training, development, and mentoring to meet the needs of learners and professionals across the continuum of health professions education.

A few things to note:

  • The first time you enter the site, you will be required to create a user profile. Even if you submitted in previous years, you will need to create a new account.
  • All abstracts for oral and poster presentations must be submitted in the format requested through the online abstract submission site.
  • You may list several authors, but you are limited to one presenter.
  • Once the submission deadline has passed, you may not edit your abstract. This includes adding authors.
  • Once the submission deadline has passed, authors will no longer have access to their abstract submissions.

There is no limit on the number of abstracts you may submit, but it is unlikely that more than two presentations per presenter can be accepted due to scheduling complexities. Abstract acceptance notifications will be sent in March 2024. Please contact support@iamse.org with any questions about your submission.

We hope to see you in Minneapolis next year!

Reminder: #IAMSE24 Poster & Oral Abstracts Now Welcomed!

Deadline December 1, 2023

As a reminder, the call for abstracts for oral and poster presentations is now open for the 28th Annual IAMSE Conference to be held at the Hilton Minneapolis Resort in Minneapolis, Minnesota, USA from June 15-18, 2024. The IAMSE conference offers opportunities for training, development, and mentoring to meet the needs of learners and professionals across the continuum of health professions education.

Students who would like feedback on a draft of their abstract prior to final submission should email it to the Student Professional Development Committee, care of Stefanie Attardi at support@iamse.org, by November 10, 2023.

A few things to note:

  • The first time you enter the site, you will be required to create a user profile. Even if you submitted in previous years, you will need to create a new account.
  • All abstracts for oral and poster presentations must be submitted in the format requested through the online abstract submission site.
  • You may list several authors, but you are limited to one presenter.
  • Once the submission deadline has passed, you may not edit your abstract. This includes adding authors.
  • Once the submission deadline has passed, authors will no longer have access to their abstract submissions.

There is no limit on the number of abstracts you may submit, but it is unlikely that more than two presentations per presenter can be accepted due to scheduling complexities. Abstract acceptance notifications will be sent in March 2024. Please contact support@iamse.org with any questions about your submission.

We hope to see you in Minneapolis next year!

Join us for the IAMSE 2024 Medical Educator Fellowship Program!

The International Association of Medical Science Educators (IAMSE) is pleased to announce that applications for the 2024 Medical Educator Fellowship (MEF) Program are now being accepted! IAMSE is once again offering members and non-members the option of completing the MEF Program 100% virtually, from any location around the globe.

The primary goal of the MEF is to support the development of well-rounded healthcare education scholars through a program of targeted professional development and application of learned concepts to mentored research projects. The program is designed for healthcare educators from all backgrounds who wish to enhance their knowledge and productivity as educational scholars.

Please note that as a prerequisite, applicants are required to have completed the Essential Skills in Medical Education (ESME) Program. For more detailed information about the program, please visit the information on our website at http://www.iamse.org/fellowship-program/.

Applicants for the next cohort will be accepted until December 1, 2023. To submit your application, fill out this application form, then email the completed form along with your ESME Completion Certificate and your CV to support@iamse.org.

If your application is accepted, you will be invited to an online kickoff meeting on December 13, 2023. For questions about the Fellowship or how to apply, please contact support@iamse.org. We thank you for your interest and look forward to supporting you in achieving your professional goals in educational scholarship.

#IAMSE24 Poster & Oral Abstracts Now Welcomed!

Deadline December 1, 2023

The International Association of Medical Science Educators (IAMSE) is pleased to announce the call for abstracts for oral and poster presentations for the 28th Annual IAMSE Conference to be held at the Hilton Minneapolis Resort in Minneapolis, Minnesota, USA from June 15-18, 2024. The IAMSE conference offers opportunities for training, development, and mentoring to meet the needs of learners and professionals across the continuum of health professions education.

Students who would like feedback on a draft of their abstract prior to final submission should email it to the Student Professional Development Committee, care of Stefanie Attardi at support@iamse.org, by November 10, 2023.

A few things to note:

  • The first time you enter the site, you will be required to create a user profile. Even if you submitted in previous years, you will need to create a new account.
  • All abstracts for oral and poster presentations must be submitted in the format requested through the online abstract submission site.
  • You may list several authors, but you are limited to one presenter.
  • Once the submission deadline has passed, you may not edit your abstract. This includes adding authors.
  • Once the submission deadline has passed, authors will no longer have access to their abstract submissions.

There is no limit on the number of abstracts you may submit, but it is unlikely that more than two presentations per presenter can be accepted due to scheduling complexities. Abstract acceptance notifications will be sent in March 2024. Please contact support@iamse.org with any questions about your submission.

We hope to see you in Minneapolis next year!

Say hello to our featured member Ian Murray!

Our association is a robust and diverse set of educators, students, researchers, medical professionals, volunteers, and academics who come from all walks of life and from around the globe. Each month we choose a member to highlight their academic and professional career and see how they are making the best of their membership in IAMSE. This month’s Featured Member is Ian Murray.

Ian V.J. Murray
IAMSE 2023 Virtual Forum Program Chair
Professor of Physiology
Alice Walton School of Medicine (AWSOM), USA

How long have you been a member of IAMSE? 
I was first introduced to IAMSE in 2012 when a colleague suggested we submit our research at the conference. I enjoyed the conference so much that I returned in 2015, where I presented a poster on student attention span during lectures. It was at this conference that I was introduced to educational theories, and I took special note of the cognitive load lecture by Dr. Jimmie Leppink. I have attended all of the IAMSE conferences since!

You are currently Chair of the 2023 Virtual Forum Planning Committee. Tell me a little bit about what that process has been like. What are you most looking forward to during the event? 
It has been a very rewarding experience working with IAMSE, and it facilitated interaction with colleagues, leadership, and administration. It is a pleasure to work with the team to make this conference a reality.

The theme of the Virtual Forum is “Should It Stay or Should It Go? Changing Health Education for Changing Times”. The process of ideation for the theme and subthemes was a collective effort using convergent and divergent thinking. Seeing how the different ideas converged to a common one was satisfying. With health education evolving amid the end of the pandemic and the advancement of artificial intelligence (AI), we aimed to gather international insight on innovations from students and international members. We also aimed to discuss and share what does and does not work, and how medical educators have overcome these challenges and innovated as a result. Additionally, since students are the future of this organization, the student registration fee was intentionally priced at $25.

I am most excited to be a part of this amazing forum and to learn from learners and educators of all walks. We have three amazing 60-minute Ignite Talks, which are unique and allow for direct interaction among participants and with the speaker. We also accepted 60 amazing Lightning Talks, each 14 minutes long. Again, we planned these talks to allow for significant participant interaction with the speaker. Please take time to review the schedule, as we have content on AI, curriculum, teaching, and student-related topics.

Looking at your time with the Association, what have you most enjoyed doing? What are you looking forward to?
I have served on several IAMSE committees, with my first one being the Engage Committee, now renamed the Education and Advocacy Committee (EAC). I always enjoy my interactions and discussions with members in and out of the conference, as well as the new collaborations that have formed. These interactions have led to my connecting with mentors and interacting with members of communities of growth (COG). This has led to exciting and amazing friendships, opportunities, collaborations, and publications. 

I have always been excited to present my medical education research at these conferences. My pivot from wet lab research to medical education research occurred when I moved to the Caribbean and became director of student research. I feel that it is crucial to engage students in research and pair them with suitable mentors. This enables the students to not only experience the research process, but also increase their competitiveness for residency applications. IAMSE is an excellent place for students to make connections and present their research.

Looking back at your time during your graduate studies and early career, if you could give your younger self a piece of advice what would it be?
Reflecting on my career, my best advice applies to students and educators at all levels: identify an effective mentor early on, preferably one external to your own institution. Perspective is influenced by one’s lens. The advantage of a mentor is that they bring their wisdom and personal relationship with the mentee into play to reframe perspectives. I experienced the power of mentoring during an AAMC grant workshop. There, the expert mentor gave an example of how grant feedback was perceived as failure by the applicant but was then reframed as steps for success. I was pleased to see IAMSE develop a mentoring program and a publication on this important topic.

I look forward to meeting you at IAMSE in the future, and please feel free to reach out to me to say “Hello”!


IAMSE Fall 2023 Webcast Audio Seminar Series – Week 4 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. Over these five sessions, the series covered topics including the foundational principles of Artificial Intelligence and Machine Learning, their multiple applications in health science education, and their use in teaching and learning essential biomedical science content.

The fourth session in this series, titled Artificial Intelligence (AI) Tools for Medical Educators, was presented by Drs. Dina Kurzweil, Elizabeth Steinbach, Vincent Capaldi, and Joshua Duncan, and Mr. Sean Baker from the Uniformed Services University of the Health Sciences (USUHS). Dr. Kurzweil is the Director of the Education & Technology Innovation (ETI) Support Office and an Associate Professor of Medicine. She is responsible for the strategic direction of the ETI, including instructional and educational technology support for the faculty. Dr. Steinbach is the Academic Writing Specialist in the newly established writing center at USUHS. She has 20 years of experience teaching and facilitating the learning of academic writing. LTC (P) Vincent F. Capaldi, II, MD is the Vice Chair of Psychiatry (Research) at USUHS and Senior Medical Scientist at the Center for Military Psychiatry and Neuroscience at the Walter Reed Army Institute of Research in Silver Spring, MD. Dr. Capaldi is also the program director of the National Capital Consortium combined Internal Medicine and Psychiatry residency training program and chair of the Biomedical Ethics Committee at Walter Reed National Military Medical Center. Dr. Joshua Duncan is the assistant dean for assessment. He earned his medical degree and MPH from USUHS and is board-certified in pediatrics, preventive medicine, and clinical informatics. Mr. Sean Baker is the chief technology and senior information security officer; he leads a team of 80 technologists supporting the IT needs of USUHS and the entire military health system.

Dr. Kurzweil reviewed the goals and learning outcomes of this webinar presentation:

  • Understand AI terminology
  • Identify AI teaching opportunities
  • Review citation options for AI tool use
  • Explain course policies on the use of generative AI tool(s)
  • Describe two accountability measures for using AI systems
  • List several impacts of using AI for assessment

Dr. Duncan briefly described AI as an intersection of Big Data, Computer Science, and Statistics, and defined AI as a computer performing a task that would typically require human cognition. A subset of AI is Machine Learning (ML), in which machines are programmed with algorithms to perform some of these tasks; this learning can be supervised or unsupervised by human interaction. Supervised learning can include Computer Vision, Natural Language Processing, and Machine Learning, in contrast to Deep Learning, which is unsupervised and mimics human cognition.

Dr. Duncan emphasized that understanding and using AI is becoming a required competency in health care, medical education, and research. He provided several examples, such as using AI for large-database statistical analysis, keyword database searching, clinical algorithms in clinical decision support, and support of clinical thinking and dialogue. One specific example he discussed, with references, was using Natural Language Processing in medical education assessment to evaluate three categories: Processing Trainee Evaluations, Assessing Supervisory Evaluation Techniques, and Assessing Gender Bias.

Dr. Duncan then presented a demonstration of ChatGPT to illustrate its many uses for medical educators. He used ChatGPT to generate the following six topic prompts: Curriculum development, Assessment creation, Teaching, Teaching methodology, Research ideas, and Adaptive teaching.

Using the ChatGPT platform, he provided a prompt for each of the above areas. For curriculum development, he asked ChatGPT to create a 6-week course on medical ethics that included lecture topics, readings, and assessments. In a matter of seconds, the 6-week course was designed. He pointed out that while the course topics and sequence generated by ChatGPT may be only a partial version of the course, they provide the user with a great starting point for creating such a course from scratch. Dr. Duncan emphasized that it is essential to be cautious about all references ChatGPT provides because AI models, as text predictors, can hallucinate, meaning that if they do not have access to real answers, they will make some up. The AI user needs to verify all content and references to ensure they are valid and legitimate. He then demonstrated assessment creation using a detailed ChatGPT prompt to create five NBME-style multiple-choice questions with answer explanations on cardiovascular physiology, suitable for first-year medical student assessment. As in the first ChatGPT demonstration, the five questions were generated with five possible answers each; the correct answer was indicated, and an explanation was given for why this answer was the most accurate choice. Dr. Duncan stated that there is an art to asking good questions (or prompts) so that the output generated is close to what you were looking for or expecting. The prompts that Dr. Duncan used during his demo were one-sentence prompts and can be specific, for example, asking for effective teaching methodologies for imparting clinical skills to medical students. He concluded his presentation by prompting ChatGPT for three research topics in medical education that are currently under-explored and why they are important. Dr. Duncan stated that AI can be an important member of the medical education team by providing the user with a draft that is 80% complete with answers to their prompts.
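To make the "art of prompting" concrete, a small hypothetical helper (not part of any IAMSE or ChatGPT tooling; the function name and parameters are illustrative) shows how the specific elements of Dr. Duncan's demo prompt (question count, question style, audience, and the request for answer explanations) can be assembled into a single detailed prompt:

```python
def build_mcq_prompt(topic: str, n_questions: int = 5,
                     audience: str = "first-year medical students",
                     style: str = "NBME-style") -> str:
    """Assemble a detailed question-writing prompt for a chat-based AI tool.

    The specificity (count, style, audience, answer explanations) mirrors
    the kind of one-sentence prompt described in the demonstration.
    """
    return (
        f"Create {n_questions} {style} multiple-choice questions on {topic}, "
        f"suitable for assessing {audience}. For each question, provide five "
        f"answer options, indicate the correct answer, and explain why it is "
        f"the most accurate choice."
    )

# Example: the cardiovascular physiology prompt from the talk
prompt = build_mcq_prompt("cardiovascular physiology")
```

The point is not the code itself but the habit it encodes: every detail you leave out of the prompt is a detail the model will fill in for you, possibly incorrectly.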

Mr. Sean Baker, in charge of IT security at USUHS, discussed the need for careful compliance when using any AI tool. He stressed the importance of not entering information that has not already been cleared for public release, such as personal data; controlled unclassified information; hiring, performance management, or contract data; student data; evaluations; and Personally Identifiable Information (PII). Mr. Baker then highlighted the need to be aware of the policies at the user’s institution and provided examples of how Generative AI is used at USUHS. He compared using AI to using social media: do not enter anything into an AI tool that you would not post on social media.

Dr. Kurzweil then presented higher education’s need to think critically about user agreements and how we present these agreements to our students and faculty. These policies must be discussed and decided at all levels, from federal and state governments to universities, colleges, and departments, down to individual courses and classrooms. She emphasized that AI will be widely used, and its use will depend on individual institutions’ decisions, especially when it comes to student use in courses and faculty use in the classroom. She pointed out that it is important to clearly state examples of where AI cannot be used, such as requiring all course assignments to be exclusively the student’s own work and specifying that the student cannot use AI applications like Spinbots, DALL-E, or ChatGPT. She also provided examples of when AI use is permitted, such as when the assignment requires a topic or content search strategy or a reference for additional information.

Dr. Kurzweil discussed a 2023 Educause article by McCormack [1], describing use cases clustered around four common work areas for incorporating Generative AI in higher education. They are:

  • Dreaming:  Brainstorming, summarizing information, research, and asking questions.
  • Drudgery: Sending communications, filling out reports, deciding on materials, and gathering information to help develop syllabus reading.
  • Design: Using Large Language Models to create presentations, course materials, and exams.
  • Development: Creating detailed project plans, drafting institutional policies and strategic plans, or producing images and music.

Many AI tools are currently available, and you, as the user, need to decide how best to use them. It is essential to consider how these tools can be used in teaching and what we must do to prepare our learners and faculty to develop their digital fluency. She cautioned that these tools can hallucinate, i.e., make up sources, so you need to check your work. You need to check all citations to be sure they are real and that the information is correct. Dr. Kurzweil emphasized that nothing comes out of these tools that she would take at face value without first verifying the information source.

Dr. Kurzweil then described opportunities to use AI tools to help you teach, including:

  • Altered active real learning
  • Independent thinking and creativity
  • Review of data and articles quickly
  • Overcoming writer’s block
  • Research and Analysis skills
  • Real-time response to questions
  • Tutoring and Practice
  • Creation of Case Studies

She then described several ways to create Curriculum Integration Opportunities with AI in the classroom, including:

  • AI formalized curriculum
  • Introduction to AI concepts
  • Computer literacy and fluency
  • Data Science
  • Hands-on AI tool practice
  • Medical Decision-Making with AI
  • Professional Identity Formation
  • Ethical Decision-Making
  • Computer Science Theory

Dr. Kurzweil presented the application of assessment with AI using seven examples, including:

  • Project-based learning
  • Expectations of draft completeness
  • Rubrics created and applied to student work
  • Annotated references
  • Reflections
  • Using pen and paper in class for initial (draft) work development
  • Testing centers

She then highlighted specific assessment practices impacted by AI, including:

  • Requiring students to work collaboratively
  • Scaffolding assignments
  • Becoming familiar with students’ writing style
  • Making assignments personal, timely, and specific
  • Creating assignments that require higher-level cognitive skills
  • Authentic assessments with Observation and Simulation experiences

Dr. Kurzweil then listed six ways that AI can be incorporated into the medical education curriculum:

  1. Provide medical students with a basic understanding of what AI is and how it works.
  2. Introduce medical students to the principles of Data Science.
  3. Introduce medical students to the use of AI in radiology and pathology.
  4. Teach medical students how AI can be used to analyze patient data and provide treatment recommendations.
  5. Introduce medical students to ethical considerations of AI, such as privacy, bias, and transparency.
  6. Provide medical students with an opportunity to apply their AI foundational knowledge in real-life clinical scenarios.

She then turned the session over to Dr. Steinbach to discuss plagiarism.

Dr. Steinbach focused on our need to be aware of plagiarism involving AI, especially when students use ChatGPT to complete assignments. Many AI detectors use a perplexity score, which measures the randomness of text, and a burstiness score, which measures the variation in perplexity, to differentiate between text composed by humans and text written by AI. She noted that in a paper published in 2023, the software GPTZero correctly classified 99% of human-written articles and 85% of AI-generated content. Educators may be concerned that students are using AI, such as ChatGPT, to generate text for their writing assignments without correctly citing the source of the generated text, which could give them an advantage over students who are not using AI to complete their assignments. Dr. Steinbach stated that ChatGPT-generated writing assignments that focus on students’ reflections or interpretations could pass without being identified by AI detectors. The same can be said for scientific papers and abstracts, where the software was only able to identify 68% of those written by humans. The way to help avoid these issues is to be very clear about the policies and expectations in your course syllabus.
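The perplexity/burstiness idea can be sketched in a few lines of Python. This is a toy illustration, not GPTZero’s actual algorithm: a real detector derives perplexity from a trained language model, whereas here corpus word frequencies stand in for that model (sentences are assumed non-empty):

```python
import math
from collections import Counter

def surprise_scores(sentences):
    """Average per-word 'surprise' (negative log frequency) for each sentence.

    This is a crude stand-in for perplexity: sentences full of common words
    score low, sentences full of rare words score high.
    """
    words = [w for s in sentences for w in s.lower().split()]
    freq = Counter(words)
    total = len(words)
    return [
        sum(-math.log(freq[w] / total) for w in s.lower().split()) / len(s.split())
        for s in sentences
    ]

def burstiness(scores):
    """Variance of per-sentence surprise. Human writing tends to swing between
    plain and surprising sentences; uniformly 'average' text scores low, which
    detectors treat as one signal of machine-generated text."""
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)
```

As the session itself cautions, even real detectors built on this idea only yield probabilities, not proof.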

If you allow your students to use Generative AI in your course assignments, you must be clear on how you want them to cite the AI-generated information. Dr. Steinbach focused on two main style guides, AMA and APA, and their guidance on citing text generated through AI. First, AI tools cannot be listed as authors because they are not human and cannot answer questions about the work produced. Under both style guides, you can describe in the methods section how AI was used; AMA also allows noting it in the acknowledgments section. According to the APA style guide, you can also mention AI in the introduction section. She stated that the APA style guide requires the author to include the prompt and identify the text generated by the AI tool. The AMA style guide is not yet clear in its guidance, nor does it provide any advice on in-text citations.

The last speaker, Dr. Capaldi, emphasized that there is no perfect AI detector, because as the large language models develop and become more sophisticated, the AI detectors tend to lag behind these improvements. The best AI detectors can do is provide the user with a probability score of whether the text was AI-generated. When used as an AI detector, Watson was only able to detect as AI-generated about 60% of what ChatGPT produced. Dr. Capaldi stated it is harder to detect text that has been edited, combined, or paraphrased. He also noted that the probability scores are not perfect either: there can be false positives, and determinations as to whether or not text was generated using AI tools are not definitive. He asked the audience to be careful when using AI detectors because they are not entirely accurate and are not foolproof in the academic setting, since probability scores are not absolute determinations of whether text is or is not AI-generated.

Dr. Kurzweil ended the session by stating that AI in education holds immense promise, but it also comes with responsibility. She asked that we commit to using AI to empower our learners, faculty, and educational institutions, treating AI as a tool and not as a replacement for us as educators. AI needs to be viewed as a partner working with educators to enhance our ability to make education efficient and effective. She stated we need to embrace innovation and digital fluency while upholding the values of equity, privacy, and ethics in education.

References

  1. McCormack, M. Quick Poll Results: Adopting and Adapting to Generative AI in Higher Ed Tech. Educause Research Notes, 2023. https://er.educause.edu/articles/2023/4/educause-quickpoll-results-adopting-and-adapting-to-generative-ai-in-higher-ed-tech



As always, IAMSE Student Members can register for the series for FREE!

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 3 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. Over these five sessions, the series covers topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

The Co-Presenters for the 3rd session are Dr. Michael Paul Cary Jr. and Ms. Sophia Bessias. Dr. Cary is an Associate Professor and Elizabeth C. Clipp Term Chair of Nursing at the Duke University School of Nursing. Ms. Bessias is the evaluation lead for the Algorithm-Based Clinical Decision Support (ABCDS) Oversight program, where she provides operational support and peer review for clinical decision support software proposed for use within the Duke University Health System (DUHS). Ms. Bessias holds master’s degrees in Analytics and in Public Health from NC State University and the University of Copenhagen.

Dr. Cary listed four objectives of the session:

  1. Establishing Context and Recognizing Challenges
  2. Operationalizing Bias Mitigation through AI Governance
  3. Navigating the Terrain of Large Language Models (LLMs)
  4. Equipping Educators for AI-Driven Healthcare Technologies

The session was divided into four sections, each discussing one of the above Session Objectives.

Objective 1: Establishing Context and Recognizing Challenges

Dr. Cary began by sharing the context of the promises and perils of AI in healthcare. AI has the potential to revolutionize healthcare by:

  • Improving patient care and the clinician experience
  • Reducing clinician burnout
  • Increasing operational efficiency
  • Reducing costs

He then highlighted several potential perils that need to be taken into consideration, such as:

  • Non-adoption or over-reliance on AI
  • No impact on outcomes
  • Technical malfunction
  • Violation of government regulations
  • Non-actionable or biased recommendations that could exacerbate health disparities

Dr. Cary posed a fundamental question: “Why is bias in algorithms so important?” He discussed a 2019 study by Obermeyer et al.1 demonstrating that a biased algorithm systematically assigned the same risk score to White and Black patients even though the Black patients had 26.3% more chronic disease, which systematically excluded Black patients from accessing needed care management services. The cause was that the algorithm assigned risk scores based on past healthcare spending, and Black patients tend to have lower spending than White patients for a given level of health. The error resulted from the developers using an incorrect label to predict the outcome of interest, known as label bias. Once the algorithm was corrected, the percentage of Black patients automatically enrolled in the care management program rose from 17.7% to 46.5%.1
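The label bias described here can be illustrated with a deliberately tiny, synthetic sketch. The patients and numbers below are invented, not the study’s data; they show how ranking by a spending proxy and ranking by actual health need can reverse who looks “high risk” when spending and need diverge:

```python
# Synthetic label-bias sketch (illustrative values only): patient B is sicker
# than patient A, but less has historically been spent on B's care.
patients = [
    {"name": "A", "chronic_conditions": 3, "past_spending": 10_000},
    {"name": "B", "chronic_conditions": 5, "past_spending": 6_000},
]

# Biased proxy label: rank by past spending. B looks "lower risk" purely
# because less money was spent on their care, not because they are healthier.
by_spending = sorted(patients, key=lambda p: p["past_spending"], reverse=True)

# Corrected label: rank by a direct measure of health need.
by_need = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)

print([p["name"] for p in by_spending])  # ['A', 'B']: spending proxy favors A
print([p["name"] for p in by_need])      # ['B', 'A']: actual need favors B
```

Swapping the target variable from spending to a direct health measure is, in miniature, the correction the study describes.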

Dr. Cary reviewed four elements of evolving government AI regulation. These include the 2022 FDA final guidance on Software as a Medical Device, which will regulate software and medical devices, including AI-powered devices. There is also the Blueprint for an AI Bill of Rights, which aims to protect individuals from the potential harms of AI, such as label bias and other forms of bias and discrimination. AI regulation is also advancing at the state level, where Attorneys General are beginning to regulate AI: in 2022, the Attorney General of California sent a letter to the CEOs of all hospitals in the state asking for an account of the algorithms being used in their hospitals, what biases those algorithms could carry, and how they plan to mitigate them. Finally, the Department of Health and Human Services (DHHS) announced a proposed rule under Section 1557 of the Patient Protection and Affordable Care Act (PPACA) stating that covered entities (health care systems and providers) must not discriminate against any individual through the use of clinical algorithms in decision-making and must develop a plan to mitigate that possibility. Dr. Cary stated that while this is a huge step forward, the proposed rule needs to go further in specifying what covered entities must do to reduce bias. Still, it did solicit comments on best practices and strategies that can be used to identify bias and minimize discrimination resulting from the use of clinical algorithms.

Dr. Cary and his team determined that the covered entities referenced in Section 1557 of the PPACA would need to know how to examine their clinical algorithms to ensure compliance with the proposed rule. They conducted a scoping review of 109 articles to identify strategies for mitigating bias in clinical algorithms, with a focus on racial and ethnic bias. They summarized a large number of mitigation approaches to inform health systems on reducing bias arising from the use of algorithms in decision-making. While Dr. Cary outlined the literature search, study selection, and data extraction, he could not show or discuss the results of the review before its official publication. He noted that the scoping review results would be published in the October 2023 issue of Health Affairs at www.healthaffairs.org.

Dr. Cary then discussed some of the most pressing challenges facing the use of AI in healthcare. The first is the lack of an “equity lens”: when AI algorithms are trained on biased or unrepresentative data sets, AI-driven decision-making can exacerbate existing healthcare disparities rather than provide equitable care.

The second challenge is the need for AI education and training for healthcare professionals and health professions educators. Very few of us have the necessary AI training, which leaves a gap in the knowledge and skills required for the successful integration of AI in healthcare. As a result, healthcare professionals may struggle to understand the capabilities and limitations of AI tools, leading to a lack of trust, underuse, or improper use. Lastly, there is little to no governance of the design or use of data science and AI tools, which raises ethical and privacy concerns.

Objective 2: Operationalizing AI Governance Principles

Ms. Bessias began her presentation by sharing how Duke AI Health and the Duke University Health System are attempting to overcome some of these challenges. In 2021, the Dean, Chancellor, and Board of Trustees charged Duke Health System leadership with forming a governance framework for any tool that could be used in patient care, specifically any algorithm that could affect patient care directly or indirectly. The outcome of this charge was the formation of the Algorithm-Based Clinical Decision Support (ABCDS) Oversight Committee. ABCDS is a “people, process, and technology” effort that provides governance, evaluation, and monitoring of all algorithms proposed for clinical care and operations at Duke Health. The committee comprises leaders from the health system and the school of medicine, clinical practitioners, regulatory affairs and ethics experts, equity experts, biostatisticians, and data scientists. It takes all of these perspectives working jointly to adequately assess the risks and benefits of using algorithms in health care.

The mission of the ABCDS Oversight Committee is to “guide algorithmic tools through their lifecycle by providing governance, evaluation, and monitoring.” ABCDS has two core functions. The first is registering all electronic algorithms that could impact patient care at Duke Health. The second is evaluating these algorithms as high, medium, or low risk. High-risk algorithms include data-derived decision-making tools, sometimes home-grown and sometimes from vendors; in either case, the process investigates how they were developed and how they are proposed to be used. Medium-risk algorithms are knowledge-based, built on clinical consensus, with clinicians sharing their expertise to create a rubric. Low-risk algorithms include medical standards of care that are well integrated into clinical practice and frequently endorsed by relevant clinical societies. The specific type of risk evaluation used varies depending on the details of any given use case.

Ms. Bessias then took us through a detailed review of the ABCDS Evaluation Framework, which defines the criteria an algorithm must meet at each stage before proceeding to the next. It is based on a software development life cycle process. There are four stages in the evaluation process:

  • Stage 1: Model development
  • Stage 2: Silent evaluation
  • Stage 3: Effectiveness evaluation
  • Stage 4: General deployment

Each of these stages is separated by a formal gate review that evaluates the stage against a series of quality and ethical principles, including transparency and accountability, clinical value and safety, fairness and equity, usability, reliability and adoption, and regulatory compliance. The intention is to ensure that when AI algorithms are deployed, patients see the maximum benefit while any unintended harm is limited. The quality and ethical principles are translated at each gate into specific evaluation criteria and requirements.

Duke AI Health’s goal is to anticipate, prevent, and mitigate algorithmic harms. In 2023, they introduced a new bias mitigation tool to help development teams move from a reactive mode to a more proactive, anticipatory mode of thinking about bias. One of the most critical aspects of their process was linking the algorithm with its implementation: ABCDS tool = algorithm(s) + implementation.

Bias, it turns out, can be introduced anywhere in the life cycle of an algorithm and needs to be considered at each stage. To better understand this, Duke AI Health drew on a 2021 publication by Suresh and Guttag,2 a framework for understanding the sources of harm throughout the machine learning life cycle that illustrates how bias can be introduced. The seven types of bias are societal (historical), label, aggregation, learning, representation, evaluation, and human use, and a template is used to help people identify and address each type. Historical, representation, and label biases are introduced during data generation. Ms. Bessias discussed three of these biases: societal (training data shaped by present and historical inequities and their fundamental causes), label (use of a biased proxy target variable in place of the ideal prediction target), and human use (inconsistent user response to algorithm outputs for different subgroups). She gave an example of each, as well as ways to address and mitigate them.

Objective 3: Navigating the Terrain of Large Language Models (LLMs)

Everyone is thinking about how to navigate the terrain of generative AI in health care, especially large language models. Ms. Bessias addressed how some of these tools and frameworks can be applied to LLMs. Proposed applications of generative AI in healthcare range from low risk to very high risk: generating billing information, drafting administrative communications, automating clinical notes, drafting EHR inbox responses, providing direct medical advice, offering mental health support, and more. There are limitations and ethical considerations as well. First, LLMs are trained to generate plausible output that is not necessarily factual or accurate. Explainability (how the algorithm produces an output) and transparency (accessible communication about the sources of data behind outputs) are a second major consideration. This leads to the ethical question of what happens when an algorithm provides misleading or incorrect information: what options are available to address algorithmic harm, and who has recourse? Another important question concerns access versus impact when considering equity: how are the risks and benefits of generative AI distributed across the population? Ms. Bessias walked through these considerations using automated clinical notes as the example application. She stated that there are many questions and few answers, but all of them must be considered as healthcare moves toward deploying generative AI technologies. To end this section, Ms. Bessias shared a reflection from Dr. Michael Pencina, chief data scientist at Duke Health and Vice Dean for Data Science at Duke University School of Medicine, from an op-ed he wrote on how to handle generative AI:

“Ensure that AI technology serves humans rather than taking over their responsibilities or replacing them. No matter how good an AI is, at some level, humans must be in charge.”

Objective 4: Equipping Educators for AI-Driven Healthcare Technologies

Dr. Cary then discussed the fourth webinar objective, the competencies needed by health care professionals and health care educators, as published by Russell et al. in 2023.3 The data for this publication were collected by interviewing 15 experts across the country, and six competency domains were identified:

  • Basic Knowledge of AI – factors that influence the quality of data
  • Social and Ethical Implications of AI – impact on Justice, Equity, and Ethics
  • Workflow Analysis for AI-based Tools – impact on Workflow 
  • AI-enhanced Clinical Encounters – Safety, Accuracy of AI tools
  • Evidence-based Evaluation of AI-based Tools – Analyze and adapt to changing roles
  • Practice-Based Learning and Improvement Regarding AI-based Tools

By developing these competencies, healthcare professionals can ensure that AI Tools are used to improve the quality and safety of patient care.

For the past year, Dr. Cary and Duke University have partnered with North Carolina Central University, a historically Black university with a deep understanding of the challenges faced by underrepresented, underserved communities. Through this partnership, they developed a proposed set of competencies for identifying and mitigating bias in clinical algorithms.

  1. Trainees should be able to explain what AI/ML algorithms are in the context of healthcare.
  2. Trainees should be able to explain how AI governance and legal frameworks can impact equity.
  3. Trainees will learn ways of detecting and mitigating bias in AI algorithms across the life cycle of these algorithms.

Dr. Cary ended the session by presenting the audience with several training opportunities and resources offered by Duke University, including short courses and workshops, formal programs, and virtual seminar series offered in the fall and spring semesters and open to anyone worldwide. In March 2024, Dr. Cary will present at the first-ever Duke University Symposium on Algorithmic Equity and Fairness in Health.

Lastly, Dr. Cary invited all webinar members to join Duke University in their commitment to advancing health equity and promoting responsible AI through a Call to Action for Transforming Healthcare Together.

References:

  1. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453.
  2. Suresh H, Guttag J. A framework for understanding sources of harm throughout the machine learning life cycle. In: Equity and Access in Algorithms, Mechanisms, and Optimization. 2021:1-9.
  3. Russell RG, et al. Competencies for the use of artificial intelligence-based tools by health care professionals. Academic Medicine. 2023;98(3):348-356.



As always, IAMSE Student Members can register for the series for FREE!

Free Workshop for Students: New Educator and Scholar Training (NEST)

The International Association of Medical Science Educators (IAMSE) invites your students to join a free Professional Development Workshop for Students. Hosted by IAMSE and sponsored by ScholarRx, this highly interactive workshop will provide student participants with an introductory, hands-on experience in applying Kern’s Six-Step model to design a complete education activity with appropriate pedagogic strategies. Students will also explore models of converting medical education design and development into scholarship.

Session: New Educator and Scholar Training
Facilitators: Colleen Croniger, Amber Heck, Tao Le and Elizabeth Schlegel
Date and Time: Saturday, October 7, 2023 from 9:00 AM – 12:00 PM (EDT)
Cost: FREE for students!

After participating in this session, student attendees should be able to:

  • Describe a framework for medical education professional development
  • Discuss and apply principles and best practices for curriculum design, pedagogic strategies, and educational scholarship
  • Identify and synthesize themes that integrate across major domains of medical education professional development.

All students are welcome to attend this free event. If you are not already an IAMSE member, we encourage you to join by clicking here. Membership is not required to register. Non-members will need to present proof of enrollment before the event date. If you have any questions, please reach out to support@iamse.org.

Don’t Miss the IAMSE #VirtualForum23 This October!

We hope you’ve made plans to join medical educators and students from around the world for the second annual IAMSE Virtual Forum! Join IAMSE October 18-20, 2023 as we host lightning talks, ignite talks, and more! This year’s theme is:

Should It Stay or Should It Go?
Changing Health Education for Changing Times

At this event, we will discuss what works and what does not serve educators and students alike when it comes to curriculum reform: what should stay and what should go. Topics will cover how curriculum reform impacts students, curriculum, and teaching, as well as Artificial Intelligence in Health Sciences Education.

Meet the Ignite Speakers

IVF23 Ignite Group
From left: Kimara Ellefson, Holly Gooding and Neil Mehta

Calibrating Our Compass: Flourishing as the North Star for Charting the Way Forward
Wednesday, October 18 from 10:15 AM – 11:15 AM EDT
Presented by Kimara Ellefson, Kern National Network

How Are We Going to Get to The Moon? Developing Operating Principles for Effective Curriculum Change
Thursday, October 19 from 10:15 AM – 11:15 AM EDT
Presented by Holly Gooding, Emory University School of Medicine

Teaching in the Age of Online Resources: Designing Lesson Plans to Enhance the Value of In-Person Classroom Learning
Friday, October 20 from 12:00 PM – 1:00 PM EDT
Presented by Neil Mehta, Cleveland Clinic Lerner College of Medicine of CWRU


View the Lightning Talk Abstracts


Wednesday, October 18
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: Mental Health
Lightning Talks Room 2: Student Success
Lightning Talks Room 3: Student Success
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Social Determinants of Health

Thursday, October 19
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Assessment
Lightning Talks Room 3: Interprofessional Education
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Courses

Friday, October 20
10:15am – 11:15am Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Professional Development
Lightning Talks Room 3: Social Determinants of Health
Lightning Talks Room 4: Research
Lightning Talks Room 5: Other


If you have any questions, comments, or concerns, please let us know at support@iamse.org. Additional forum details and registration can be found at www.iamseforum.org.

We’re looking forward to seeing you in October!