Paging Dr. Algorithm

At Stanford Medicine, AI is becoming part of the curriculum, clinical training and the future of care

Artificial intelligence, which is already changing medicine and patient care, will soon be in all Stanford Medicine classrooms. AI technologies are expected to improve patients’ and doctors’ lives by speeding up breakthroughs in medical science and assisting with health care administrative tasks, giving doctors more time to care for patients.

For these reasons, Stanford Medicine is revamping its curriculum to incorporate AI as both a teaching aid and a practitioner’s tool, according to Reena Thomas, MD, PhD, senior associate dean of medical education and a clinical professor of neuro-oncology.

“Imagine if a student could have access to a teaching tool that they can use any time they want to practice asking the questions that give the information they need to make an accurate diagnosis, and it is just like having a conversation with a patient,” Thomas said.

Starting in the fall 2025 quarter, all students pursuing degrees in medicine and physician assistant studies will learn about and from AI technologies. The curriculum will cover how different types of AI work so that students can evaluate the information these systems provide — which can sometimes be inaccurate or biased — and determine how best to use the technologies when caring for patients.

Teaching the future now

As Stanford Medicine’s AI faculty champion, bioinformatics researcher Jonathan Chen, MD, PhD, is responsible for leading the integration of the AI curriculum into existing instruction and ensuring that the material being taught aligns with best practices for bringing new technology into clinical use.

Chen recalled that, as a medical student, he imagined just this future: While memorizing “100 bugs and 100 drugs,” he knew there had to be a way for humans to use computers to improve health care, he said. Yet, even Chen, an assistant professor of medicine and of biomedical data science who has studied the application of computers and intelligent systems to medicine for over 20 years, was taken aback by the implications of AI for medicine.

“In 2022 when ChatGPT came out and blew up as the fastest-growing internet application in history, I thought, ‘Whoa, this is like when the internet was invented. This is going to change the entire nature of clinical practice, and we cannot ignore it,’” Chen said.

“Everything about education, especially with training and evaluating people based on what they read and write, just got turned upside down, and we have to rethink how to train new students and how to retrain an entire generation of the health care workforce.”

The curriculum is being created by a steering committee and a working group made up of AI researchers and programmers, clinician faculty and students. They have settled on four learning objectives that form the basis of what students will be taught: how different types of AI work, how AI-based tools can be used in clinical practice, ethical and legal implications of using AI in medicine, and how to critically evaluate the information AI provides.

“Unlike how there is a curriculum for teaching how the kidney works, we cannot pin down a fixed curriculum for AI in health care because the technology is a moving target,” Chen said. “We need to have an adaptable framework. Luckily, many of the concepts needed for AI education are foundational, and we are already talking about them.”

Some concepts will be new, while others will build on the existing curriculum. For example, students will learn to assess AI results using the same statistical methods taught for interpreting diagnostic tests and clinical trial results. Even highly accurate diagnostic tests occasionally return incorrect results, and doctors are taught how to interpret those results and use them in patient care.

The same thinking applies to AI in health care because the algorithms can provide information that sounds correct but is actually wrong. The concepts of sensitivity (how well a test correctly identifies people with a condition) and specificity (how well a test identifies people without a condition) can help students evaluate AI, Chen said.
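Those two measures come straight from a test's counts of correct and incorrect calls, and the same arithmetic applies to an AI tool's outputs. As a minimal sketch — using hypothetical counts, not figures from the article:

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# The counts below are illustrative only.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of people WITH the condition the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of people WITHOUT the condition the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical evaluation: 100 patients have the condition, 900 do not.
# The tool flags 90 of the 100 (missing 10) and wrongly flags 45 of the 900.
sens = sensitivity(true_pos=90, false_neg=10)   # 90 / 100 = 0.90
spec = specificity(true_neg=855, false_pos=45)  # 855 / 900 = 0.95
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

A tool with high sensitivity but low specificity, for instance, would rarely miss a sick patient but would raise many false alarms — exactly the kind of trade-off students are already taught to weigh for lab tests.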

Earlier learning opportunities

Aditya Narayan, a fourth-year student in Stanford University’s MD/MBA program and an at-large member of the Association of American Medical Colleges Board of Directors, started medical school years before generative AI took off. He said he truly started to learn medicine during his clinical clerkships — hands-on learning experiences with patients after two years of predominantly classroom learning.

“A great deal of my basic preclinical training did not directly translate to the wards,” he said. “With AI, we can instantly generate patient cases that ground classroom concepts in real-world context and test students’ understanding in real time.”

Getting AI into the classroom could give students opportunities to use what they are learning as they are learning it.

Narayan explained that instead of memorizing possible causes of an abnormally high level of potassium in the blood, students could learn about the condition by interacting with an AI chatbot that immediately produces scenarios that might lead to high potassium. These could include a dialysis patient who missed a treatment, a trauma patient whose muscle tissue breaks down and enters the bloodstream or a child whose adrenal glands don’t make enough hormones.

Such AI-based tools could be a game-changer in the early years of medical school because there would be no limits on how much students could practice, Narayan said.

Traditionally, students have practiced on actors trained to play a patient, called standardized patients. Faculty members supervise and evaluate the encounters.

“Standardized patient scenarios are useful but rarely have the complexity and emotional texture of real patients,” Narayan said. “Having access to tools that simulate clinical encounters with patients would let students get instant feedback and opportunities to justify their reasoning in a low-stakes way. This can also open opportunities to gather more longitudinal data on student outcomes even before they reach higher-stakes hospital environments.”

The Stanford University-developed Clinical Mind AI platform does this, letting students start practicing patient-interviewing skills much earlier.

The app was developed under the direction of Thomas Caruso, MD, PhD, a clinical professor of anesthesiology, perioperative and pain medicine, and Shima Salehi, PhD, an assistant professor in the Graduate School of Education. Marcos Rojas Pino, MD, a doctoral student in education, leads the project.

The platform uses a chatbot, similar to ChatGPT, but rather than typing their questions, students can speak, just as they would in a conversation with a patient. Students take medical histories from simulated patients, and the app gives them real-time feedback so they can learn which questions they should have asked. The purpose of the app is to teach the clinical reasoning skills students will need when interacting with and treating patients.

“Asking relevant questions to get the information you need to arrive at a correct diagnosis is a skill that is developed and honed throughout your time in the early years of practicing clinically,” Thomas said.

The app, like the AI curriculum, is designed to be flexible given the fast pace at which AI is changing.

“Clinical Mind AI is built like a Lego set, so that when AI changes and comes out with a new type of ‘piece,’ we can take a brick off and add in a new one as needed, without changing the user interface,” Caruso explained.

A small group of students in the Pathophysiology Capstone class, the final part of the two-year Practice of Medicine course that all medical students take, recently tried Clinical Mind AI, and Caruso and Rojas Pino have fielded licensing requests for the app from medical schools spanning the globe.

The updated curriculum also includes the use of AI Clinical Coach, an app that listens in as a student presents a patient case to a faculty instructor. It then summarizes the patient’s medical history, symptoms, tests and treatment plan and creates a report that a faculty instructor can use to guide instruction and give feedback to the student.

Led by Sharon Chen, MD, a clinical professor of pediatrics specializing in infectious diseases, researchers developed the tool within Stanford Medicine’s electronic health record system so students can practice evaluating actual patient cases.

“AI Clinical Coach facilitates the learning experience by providing real-time feedback to students and faculty educators about strategies the student is using. It also identifies whether students made the correct diagnosis and used all the relevant information to reach that diagnosis,” Thomas said.

Embracing change

The introduction of any new technology changes what people need to learn, and AI is no different, said Natalie Pageler, MD, chief medical information officer at Stanford Children’s Health and clinical professor of medicine and of pediatrics and clinical informatics. Pageler co-founded the Stanford Clinical Informatics Fellowship, which trains doctors to apply new information technologies to health care.

For example, before electronic medical record systems became widespread at hospitals, every medical resident needed to know how to write treatment instructions for a newly admitted patient — admission orders — from scratch. “This used to be a core skill that all residents did over and over,” Pageler said.

Now, electronic medical records systems have pre-written sets of orders that can be selected and edited as needed. Instead of learning how to write admission orders in their entirety, residents learn about the individual components that make up orders, which gives them a foundation upon which they can evaluate admission orders and revise them when necessary.

Even when technology picks up tasks formerly performed by doctors, it’s still important for students to understand the underlying logic, Pageler said, much as K-12 students master math concepts such as multiplication tables before using a calculator. For example, using Clinical Mind AI and AI Clinical Coach to learn how to interview patients, arrive at a diagnosis and propose a treatment plan will prepare Stanford Medicine students to effectively use AI-based applications like Microsoft’s DAX Copilot, which records entire patient-doctor interactions and drafts visit notes.

AI joins a line of past advances that drastically and immediately improved patient care — vaccines, anesthesia, antibiotics, imaging technologies and even the humble stethoscope, to name a few. And while it is certain that AI will affect medicine, the nature of that impact remains uncertain.

“When the stethoscope was invented two centuries ago, it essentially shifted how clinicians conceptualized themselves — from anatomists to diagnosticians who had privileged access to the hidden language of the body. Now we are at another one of those inflection points,” Narayan said. “AI will change the way we learn to practice human-centered medicine, and we need to be prepared to shape that future or be shaped by it.”

Dong Yao, MD, and Shivam Vedak, MD, both clinical informatics fellows at Stanford Medicine, have created a workshop for health care professionals on how to prompt, or talk to, generative AI models like ChatGPT or Stanford Medicine’s secure GPT, a HIPAA-compliant chatbot that can answer questions and summarize text and files.

They also get into the nitty-gritty of how generative AI works, using language non-computer scientists can understand.

Part one of the workshop is available free of charge at stan.md/Workshop

Kimberlee D'Ardenne

Kimberlee D'Ardenne is a freelance science writer. Contact her at medmag@stanford.edu