10 tips: Ethical use of artificial intelligence in medical education*
*These 10 tips were compiled by human beings with some editorial help from ChatGPT 3.5.
Since Alan Turing posed his seminal question, “Can machines think?”, in 1950, artificial intelligence (AI) has had an undeniably large impact on healthcare, with over 60,000 articles published on the topic in the last two decades. AI content generation holds significant promise for medical education, offering efficiency gains and personalized learning opportunities among other transformative benefits. AI also raises ethical questions that demand attention from educators, who play a pivotal role in promoting its responsible and sustainable evolution for the benefit of current and future generations of healthcare professionals. These 10 tips aim to raise awareness of some broad ethical challenges posed by AI and how they may be mitigated, harnessing its undisputed potential while upholding ethical standards.
Tip 1: Ensure transparency
When using AI in education, meticulously document and communicate objectives, data sources, and processes; this builds and maintains trust among both educators and learners. Providing concrete examples of transparency measures, such as disclosing when learning materials or feedback have been generated or edited with AI, can further enhance understanding and implementation.
Tip 2: Validate AI output
Rigorous validation of AI-generated educational content is crucial for ensuring its accuracy and alignment with current medical standards. Continuous expert review and user feedback are necessary to maintain content quality and reliability, enhancing the learning experience and fostering confidence in AI’s role in medical education.
Tip 3: Prioritize privacy protection
Privacy protection is a non-negotiable priority, even while striking a balance with required functionality such as personalization. Clear, user-friendly materials explaining data protection measures can alleviate concerns and build trust among faculty and learners. Implement robust data governance frameworks to ensure responsible data management and mitigate risks related to data privacy, security, and ownership.
Tip 4: Ensure informed consent
Informed consent ensures that faculty and learners are aware of, and agree to, the use of AI, supported by clear communication about its role, purpose, and impact.
Tip 5: Avoid bias
Bias in AI algorithms may lead to unfair outcomes or discrimination that is detrimental to the learning experience or to learning opportunities. Mitigate these biases by conducting regular audits and updates so that all learners receive equitable support, promoting fairness and inclusivity.
Tip 6: Train faculty
Integrate AI use and ethics into faculty development programs, combining practical insights with ethical awareness. The aim is both to prepare educators to use AI responsibly themselves and to equip them to instill ethical awareness in their learners.
Tip 7: Educate learners on AI implications
Provide your learners with activities and information that prepare them to be both technically proficient and conscientious users of AI technologies. Emphasize best practices as well as ethical issues, including the implications of AI for their own context.
Tip 8: Promote collaboration and accountability
Encourage collaboration between AI experts, educators, and learners to develop effective, learner-centric AI-enhanced education that leverages diverse perspectives and expertise.
Establish clear lines of accountability for the development, deployment, and outcomes of AI in medical education to promote responsible use and collaborative effort among stakeholders.
Tip 9: Evaluate impact of AI
Regularly evaluate the impact of AI on learning outcomes, student experience, and educator workload to continuously improve the integration of AI and maximize its benefits. Evaluate AI responses continuously for accuracy, reliability, and adherence to ethical standards, with user feedback playing a pivotal role in improving performance. Implementing monitoring tools or dashboards can facilitate real-time feedback on accuracy, potential biases, and other performance issues.
Tip 10: Create regulatory and policy awareness
Enhance awareness of the regulations and standards governing the use of AI in healthcare, especially where sharing patient data could violate protected health information regulations. Collaborate with ethics committees to address legal and ethical challenges effectively and uphold best practices. To prevent academic dishonesty, establish clear guidelines for learners on the appropriate use of AI-generated content.