Tech ethicist on why healthcare isn’t ready for ambitious AI overhaul
Within the frothy, exuberant hype around AI’s ability to solve wide-ranging, formerly intractable problems, Alex John London is taking on the often thankless role of buzzkill.
“As a philosopher and an ethicist, it’s my job to go and say uncomfortable things to people,” said the professor from Carnegie Mellon University. “And then historically, they do things like put us to death.”
London offered his reality check, which he softened with humor, at Seattle University’s annual Ethics and Tech Conference on Thursday. The event this year focused on the intersection of AI and healthcare.
“Right now, the AI ecosystem is full of chaff and noise,” London said. “And that makes it very difficult for us to figure out how to construct systems that are going to do what we need them to do: make health systems safer, more effective, more efficient, [and] more equitable with the kinds of technologies that we have and the kinds of data that we have.”
London is the director of Carnegie Mellon’s Center for Ethics and Policy and chief ethicist at the university’s Block Center for Technology and Society. He is also co-editor of Ethical Issues in Modern Medicine, one of the most widely used medical ethics textbooks.
In his presentation, London ticked through the steps required for building an AI tool to address a medical challenge — and the ways to mess them up.
Select an appropriate healthcare problem to address. This is the most important step and a particularly difficult one, London said, given that the data available for training AI models are frequently misaligned with the health system problems that need to be solved.
Data often come from less-than-ideal sources, namely clinical care records and insurance billing information. There’s a lack of controls among the data and an abundance of bias in the information, which can lead to racist outcomes, among other problems.
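One well-documented route to that kind of skew is using a proxy label, such as past spending, in place of actual clinical need. The sketch below is an illustration of mine, not an example from the talk — the groups, rates, and cost proxy are all hypothetical — but it shows how a model trained on a biased label can under-prioritize patients whose records understate their need:

```python
# Illustrative sketch: a biased proxy label skews a "who needs care" model.
# All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True clinical need is identical across two patient groups.
group = rng.integers(0, 2, n)
need = rng.normal(loc=5.0, scale=1.0, size=n)

# Historical spending (the training label) understates need for group 1,
# e.g. because of unequal access to care -- a label bias, not a need gap.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access + rng.normal(scale=0.5, size=n)

# A model trained to predict cost inherits the bias. Rank patients by the
# proxy and see who lands in the "high-risk" top decile.
top = cost >= np.quantile(cost, 0.9)
for g in (0, 1):
    share = top[group == g].mean()
    print(f"group {g}: {share:.1%} flagged high-risk "
          f"(true need is the same in both groups)")
```

Run as written, nearly all of the top decile comes from the group whose historical spending matched its need, even though need was drawn identically for both groups.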
London cited IBM’s Oncology Expert Advisor project, which aimed to provide diagnostic guidance in cancer care. The effort exemplified a mismatch between ambition and suitable training data, and the costly project was ultimately scrapped in 2016.
Provide interventions, not predictions. AI models are best at predicting what will happen next based on past patterns. But that’s not the goal for medical care, said London. “We don’t want to circle back to things that we’ve done before. We want to try new things in order to make things better.”
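One way to see the gap — an illustration of mine, not an example from the talk — is confounding by historical treatment policy: a model fit to records where sicker patients were treated more often will associate treatment with worse outcomes, even when the treatment helps everyone who receives it. A minimal sketch, with made-up effect sizes:

```python
# Sketch: why predicting outcomes from historical records is not the same
# as estimating the effect of intervening. Effect sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

severity = rng.normal(size=n)
# Historical policy: sicker patients are more likely to be treated.
treated = rng.random(n) < 1 / (1 + np.exp(-2 * severity))
# True causal effect: treatment improves outcomes by a fixed amount.
outcome = -severity + 1.0 * treated + rng.normal(scale=0.5, size=n)

naive_diff = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive treated-vs-untreated gap: {naive_diff:+.2f}  (looks harmful)")
print("true effect of intervening:      +1.00  (actually helpful)")
```

The naive comparison comes out negative because treated patients were sicker to begin with — exactly the sort of past pattern a purely predictive model reproduces.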
Undergo sufficient validation. Before AI tools are deployed, they need to be carefully validated by reliable means. That didn’t happen with an algorithm that the electronic health records company Epic released for predicting potentially deadly cases of sepsis.
Hundreds of hospitals adopted the Epic Sepsis Model, but an independent, peer-reviewed study years after its release found that it failed to identify 67% of patients with sepsis, while producing false alerts on a significant number of patients who never developed it, creating “a large burden of alert fatigue.”
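For context on the arithmetic behind findings like these, here is a small sketch. The counts are invented and sized only to reproduce the 67%-missed figure from the article; the other outputs are hypothetical:

```python
# External-validation arithmetic for an alerting model.
# sensitivity = TP / (TP + FN); a low positive predictive value (PPV)
# is what drives alert fatigue.
def external_validation(tp: int, fp: int, fn: int, tn: int) -> None:
    sensitivity = tp / (tp + fn)        # share of real cases caught
    ppv = tp / (tp + fp)                # share of alerts that are real
    alert_rate = (tp + fp) / (tp + fp + fn + tn)
    print(f"sensitivity: {sensitivity:.0%}  (misses {1 - sensitivity:.0%})")
    print(f"PPV:         {ppv:.0%}  (share of alerts worth acting on)")
    print(f"alert rate:  {alert_rate:.1%} of all patients")

# Hypothetical cohort sized so that 67% of true cases are missed.
external_validation(tp=330, fp=2420, fn=670, tn=26580)
```

With these made-up counts the model misses 67% of cases while only about one alert in eight is real — the combination of low sensitivity and low PPV that makes clinicians tune alerts out.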
Other sepsis prediction tools, trained on datasets that predated COVID-19, began erroneously detecting sepsis everywhere when applied to populations with COVID symptoms.
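That failure mode is a distribution shift between training-era and live data. The article doesn’t describe any tool’s actual monitoring; the sketch below, with a hypothetical vital sign and threshold, shows one simple way such a shift could be flagged before the model’s output is trusted:

```python
# Sketch of a post-deployment drift check: compare the live input
# distribution against the training-era distribution. Feature,
# distributions, and threshold are all hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Pre-pandemic training distribution of a vital sign (respiratory rate).
train_rr = rng.normal(loc=16, scale=3, size=5_000)
# Live population during a COVID wave: shifted toward rapid breathing.
live_rr = rng.normal(loc=22, scale=5, size=5_000)

stat, p = ks_2samp(train_rr, live_rr)
if p < 0.01:
    print(f"drift detected (KS statistic={stat:.2f}): revalidate before use")
```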
From London’s perspective, systemic changes are needed before AI can provide large-scale, positive impacts on healthcare.
“These structural problems are not going to be changed by doing fancy work on your dataset,” he said. “To really make use of AI and get all the value out of AI in healthcare, we have to change health systems, the data that we generate, our ability to learn, the way we deliver health care, and who’s included in our systems.
“Until we do that,” London concluded, “it’s going to be incredibly hard to get the value that we want out of artificial intelligence.”
Some 200 people registered for the event. Attendees included employees of healthcare providers and health tech companies, bioethicists, data analysts, students, investors and academics.
Also presenting on Thursday were Christof Koch, president and chief scientific officer at the Allen Institute for Brain Science, and Dr. Vin Gupta, a pulmonologist at Seattle’s Virginia Mason, chief medical officer of Amazon Pharmacy, and an affiliate assistant professor at the University of Washington.
Other speakers included experts from Seattle Children’s Hospital; Microsoft; the University of Washington’s Institute for Protein Design; Truveta, a Bellevue, Wash.-based startup; and Seattle’s Madrona Venture Group.
Read the article on GeekWire.
Friday, June 28, 2024