Lauren Bibin on AI, Simulation and Ethics in Health Ed

Lauren Bibin delivered a talk for the Technology Ethics Initiative on February 18 about the ethics of nursing simulation in general, and AI-powered simulation in particular.

Dr. Lauren Bibin, DNP, CNM, APRN, CNE, CHSE, CHSOS, is the Director of Simulation Education and Innovation at Seattle University. She leads Seattle University’s Clinical Performance Lab, housed at the Swedish Cherry Hill Medical Center, and has worked with simulation technology in healthcare education for over 20 years. Her lunchtime talk at the Technology Ethics Initiative focused on the ethics of healthcare technologies, especially AI-powered ones. While she identifies multiple uses of AI in healthcare education and research, namely personalized learning, the automation of administrative tasks, advances in data analysis and pattern recognition, and improved prediction through algorithmic models, she cautions that the ethical risks are not to be ignored.

Dr. Bibin notes that nursing simulation has been used for centuries; early medical models date back to the 1800s. A breakthrough came in the 1910s with “Mrs. Chase,” a mannequin with realistic features that was nonetheless low fidelity. Fast forward to today, and nursing simulation includes virtual reality (VR), standardized patients (SP), and wearable simulators.

Based on the Healthcare Simulationist Code of Ethics, Dr. Bibin identifies three broad principles to carry out simulation work ethically: integrity, transparency, and respect for dignity and rights. To elaborate, healthcare simulation should be carried out in an honest fashion, honor the principles of fairness, accountability, and transparency, and avoid bias, misrepresentations, and the reinforcement of disparities in access.

The ethical risks of simulation should raise alarm given the long history of demographic underrepresentation. To illustrate the point, Dr. Bibin refers to a sentence she has heard many times: “women have atypical signs of heart attack.” The phrase shows how much the medical profession has internalized the male body and its symptoms as the norm, treating the female body as a deviation from it. She therefore advocates for DEI-based simulation design that reflects the communities nursing students come from and serve. In addition to gender and ethnoracial representation, she pays particular attention to including people with disabilities in design choices.

Dr. Bibin then turns to AI specifically. A major problem with AI-powered healthcare data analysis is that the underlying training data do not reflect population diversity. Beyond the demographic representation problems noted earlier, she points out that much of the available data come from inpatient settings, ignoring the experiences of patients who were never admitted to the hospital. Furthermore, AI-driven simulation scenarios can unintentionally reinforce existing biases if the source data lack diversity in terms of socioeconomic status, geographic location, and healthcare access.

She argues that a hybrid approach that balances AI use with human input is the best way forward. More concretely, she brings up three strategies for progress: diversifying data collection; implementing oversight of AI models to identify and mitigate biases; and ensuring stakeholder engagement to incorporate the perspectives of diverse populations.
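To make the oversight strategy more concrete, the sketch below shows one hypothetical way a simulation team might audit the demographic makeup of a dataset before using it to build AI-driven scenarios. It is not drawn from Dr. Bibin’s talk, and the group labels and benchmark shares are placeholder values for illustration only.

```python
"""Minimal sketch of a dataset representation audit.

A hypothetical illustration, not Dr. Bibin's method: it flags demographic
groups that are underrepresented in a set of records relative to a chosen
benchmark. All labels and benchmark shares below are placeholder values.
"""

from collections import Counter


def audit_representation(records, attribute, benchmarks, tolerance=0.05):
    """Return (group, observed share, expected share) for underrepresented groups.

    records    : list of dicts, each describing one patient record
    attribute  : demographic field to audit, e.g. "sex" or "setting"
    benchmarks : dict mapping group -> expected population share (0..1)
    tolerance  : allowed shortfall before a group is flagged
    """
    counts = Counter(r.get(attribute, "unknown") for r in records)
    total = sum(counts.values())
    flags = []
    for group, expected in benchmarks.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed + tolerance < expected:
            flags.append((group, observed, expected))
    return flags


if __name__ == "__main__":
    # Toy records deliberately skewed toward male inpatients (illustrative only).
    records = (
        [{"sex": "male", "setting": "inpatient"}] * 70
        + [{"sex": "female", "setting": "inpatient"}] * 25
        + [{"sex": "female", "setting": "outpatient"}] * 5
    )
    # Placeholder benchmarks, not real demographic statistics.
    checks = [
        ("sex", {"male": 0.49, "female": 0.51}),
        ("setting", {"inpatient": 0.4, "outpatient": 0.6}),
    ]
    for attr, bench in checks:
        for group, observed, expected in audit_representation(records, attr, bench):
            print(f"{attr}={group}: {observed:.0%} observed vs {expected:.0%} expected")
```

In practice, the benchmarks would come from the actual populations a program serves, and an audit like this would be one input to the human review and stakeholder engagement Dr. Bibin calls for, not a substitute for them.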

As AI becomes increasingly integrated into healthcare simulation, Dr. Bibin stresses the importance of recognizing and addressing biases in the data used to develop simulation experiences. Without deliberate intervention, these biases can perpetuate disparities and limit the effectiveness of training programs intended to prepare future nurses for the realities of diverse patient care. Through ethical, inclusive simulation design and responsible AI implementation, healthcare education can move toward a more equitable and effective future.

Onur Bakiner

February 24, 2025