How will artificial intelligence change medicine? And what can be done to ensure that change is for the better?
Stanford Medicine turned to some of Stanford University’s guiding lights on matters of AI and ethics for some insight.
These six members of Stanford’s faculty lead an initiative, launched this summer, to address ethical and safety issues surrounding AI innovation. The initiative, RAISE-Health (Responsible AI for Safe and Equitable Health), debuted in June and is sponsored by Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence, or HAI.
Here’s what we asked: 2023 has been a turning point in how we think and talk about artificial intelligence, especially in medicine. Looking to the year ahead, and even beyond, what is in your forecast for AI’s future?
What developments inspire optimism for you? Which issues should be getting more attention?
Fei-Fei Li, PhD, RAISE-Health executive co-sponsor, professor of computer science and Stanford HAI co-director
I am inspired to see the interest from students and researchers here at Stanford in learning about the ethical boundaries, policy implications and the societal implications of AI. At HAI, we are at the forefront of bringing ethical design and human-centered thinking to bear on the development and use of AI: We have hundreds of members of the community benefiting from our various programs. It’s becoming a movement.
One area I think needs more emphasis is public investment in AI. Right now there is a huge asymmetry between public sector and private sector investment in AI. This is not healthy for our country, and it’s not healthy for the ecosystem of AI. We need trusted sources to evaluate and assess this technology — organizations serving a role like the FDA serves for medicine. Public sector investments are well suited to support curiosity-driven and multidisciplinary research, which are so important for discovering new drugs, developing new treatments and understanding the mechanisms of disease. Without investing in the public sector, we will lose these opportunities.
Lloyd Minor, MD, RAISE-Health executive co-sponsor, dean of the School of Medicine, vice president for medical affairs, Stanford University
Without question, AI technologies will soon be embedded in nearly every facet of society — and biomedicine will be no different. What is different, of course, are the stakes involved. Errors in health care and biomedical research can have life-altering consequences. So, as we embrace the potential, we must do our due diligence.
As with any powerful new tool, we must not only develop the knowledge and skills to employ it effectively but also invest in shaping its safe and responsible use. What gives me hope is that experts across fields are proactively coming together to lead this work through initiatives like RAISE-Health.
As I look to the future, I will be paying special attention to the regulatory environment surrounding these new technologies. We’re currently in the “wow” phase of this technology, but very quickly we will need informed and consistent policies to govern AI’s development, use and long-term evaluation. That is the key to ensuring these technologies are not only safe and effective but also help close long-standing health inequities.
Russ Altman, MD, PhD, RAISE-Health co-leader, Kenneth Fong Professor, and professor of bioengineering, of genetics, of medicine and of biomedical data science
AI sometimes gets criticized based on (reasonable) concerns about privacy, fairness and justice. These need to remain front of mind for all AI researchers, to ensure that the tools they create contribute to equity.
However, there is also a huge upside of AI in helping manage biomedical discovery and improve the delivery of clinical care. Large language models may revolutionize our ability to explain to patients their diagnosis, prognosis and treatment — in clear, plain English. And AI technology is going to move from being purely a tool for analyzing biological data sets to a colleague/assistant who can help formulate hypotheses, test them and report on the results. This will catalyze the pace of discovery and translation.
Innovation is also needed to reduce the technology’s power usage and the amount of data needed to achieve good performance. With success in these two areas, there will be a democratization of AI where it is no longer dominated by large, rich tech companies but where it can proliferate and be built and used by a much larger group of diverse users with diverse needs, perspectives and goals.
Sanmi Koyejo, PhD, RAISE-Health co-leader, assistant professor of computer science
Much of what we have seen over the year highlights the benefits and risks of the current era in AI development broadly and in AI deployments in health care. Indeed, the future of AI in health care depends on the decisions we make now. We can repeat the mistakes we have made that led to inequitable and fragile systems, or we can shape AI in health care to positively impact society. I hope we choose the latter.
Meaningful strategies for evaluating AI — what works and what breaks — are crucial components for building trust and positive impact. Toward that end, we have introduced a new evaluation framework focusing on the trustworthiness properties of AI models, like ChatGPT, that generate new data. Evaluation can also help ascertain AI’s abilities. For example, we have shown that some of the claims that large language models are developing emergent properties — in other words, surprising behaviors reminiscent of human intelligence — do not stand up to scrutiny.
Prioritizing human-centric development of AI in health care is also key. Including stakeholders in the design and deployment of the technology and keeping humans in the pipeline will lead to more equitable systems that avoid repeating the biases of our past.
Curtis Langlotz, MD, PhD, RAISE-Health co-leader and professor of radiology, of biomedical informatics research and of biomedical data science
I am most optimistic about how clinical data from multiple sources are converging to power the latest AI breakthroughs. Data from clinic notes, lab values, diagnostic images and genomic tests are coming together. As a radiologist, I am especially excited about how AI can help us reduce medical errors and detect disease at the earliest stages.
We should be paying more attention to the challenges of implementing these amazing technologies in a fair, practical and sustainable way. Because these systems learn from data, any biases inherent in the data are incorporated into the system. And there is no guarantee that clinicians will have the bandwidth to act on insights produced by AI.
I am excited about the implications of ChatGPT, Bard and other large language models for engaging patients and providers. Patients now have ready access to their medical records, and language models can help them understand their health information at a reading level and in a language that is right for them. And training models like ChatGPT on large amounts of patient data may create capabilities that surprise us.
Sylvia Plevritis, PhD, RAISE-Health co-leader, professor of biomedical data science and of radiology and chair of biomedical data science
We are living in unprecedented times. Generative AI models, like ChatGPT, capture the structure of data in ways that were unimaginable just over a year ago — and it’s so much more than knowing how to string words together. Structure is not just in sentences, it’s in everything: DNA, RNA, and amino acid sequences; protein folding; electronic health record entries; and imaging all have structure that can be explored with generative AI.
I work in cancer research. Today’s AI is enabling us to combine data in a way that can predict what clinical event (“CPT code”) will likely be next for a given cancer patient based on their clinical and molecular status and history. This is allowing us to build active learning systems that can simultaneously advance basic science discovery and clinical care.
I see a bright future, and I am not alone. People are generating great ideas about what can be done. But the limit right now is accessibility. This work requires significant computational and data resources that do not exist at most academic institutions. Universities like Stanford are taking this very seriously — how do we create high-performing but lower-cost AI technologies for academic (and broader) settings to empower a scholarly, not only commercial, perspective?