Data dealers
Researchers are harnessing millions of de-identified patient records for the ultimate consult
Doctors face all sorts of conundrums. A patient might have troubling symptoms but no clear diagnosis. Medical guidelines may recommend one thing while intuition points to another. And the question that’s perhaps most central: What’s the best treatment for my patient?
When bloodwork, medical literature and one-off case studies don’t turn up an answer, doctors often seek something called a curbside consult, the “phone a friend” lifeline of medicine. Now, there’s a service at Stanford Medicine that does the curbside consult one better.
How? By scaling up — replacing one consult with a thousand.
This is the concept behind the new Clinical Informatics Consult service, which is being spearheaded by biomedical data scientist Nigam Shah, MBBS, PhD. The idea is to draw on patient records from across the country and use them to help answer medical questions.
His team’s technology — part mega-search engine, part powerful data analysis — brings a deluge of data to bear on inquiries too thorny or in the weeds to answer based on established guidelines. Shah hopes the service, while currently available only to Stanford physicians, will one day extend to hospitals and academic centers across the country, even to patients themselves.
At the core of the service is a one-of-a-kind search engine that scours a trove of anonymized health data. Records of lab test results, prescriptions, written medical histories, vital signs, surgeries and more accumulate by the millions, creating a wealth of information for the search engine to sort through. To protect the patients’ privacy, all personal identifiers in their data have been stripped.
A doctor’s query could be as simple as asking how many kids have run a fever in the past year. Or it could be as complicated as asking how an approved drug for a heart condition will interact with a blood pressure medication a patient is already taking. The federal approval process for a new drug tests its safety and efficacy, but whether that drug plays nicely with others often goes unexamined.
“Our idea was to build something that answers these sorts of questions by sifting through millions of patient records and looking for information relevant to each specific case,” said Shah.
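At the simpler end of that spectrum, a count like “how many kids ran a fever in the past year” boils down to filtering a table of de-identified measurements. The sketch below is a minimal illustration in pandas; the file name, column names and fever threshold are hypothetical stand-ins, not the service’s actual schema.

```python
import pandas as pd

# Hypothetical de-identified records: one row per recorded vital sign.
# Assumed columns: patient_id, age_years, measurement, value, recorded_at
records = pd.read_csv("deidentified_vitals.csv", parse_dates=["recorded_at"])

one_year_ago = pd.Timestamp.today() - pd.DateOffset(years=1)

fevers_in_kids = records[
    (records["age_years"] < 18)
    & (records["measurement"] == "body_temperature_c")
    & (records["value"] >= 38.0)              # assumed fever threshold, 38 °C
    & (records["recorded_at"] >= one_year_ago)
]

# Count distinct children, not distinct readings.
print(fevers_in_kids["patient_id"].nunique())
```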
Rethinking the patient consult
Using past records to inform current cases is nothing new or controversial — in fact, the earliest attempts date back to the 1970s. But it took some finesse for Shah and his team to do it on such a large scale.
When Shah first joined Stanford in 2011, one of his colleagues had a tough patient case and thought other medical records of similar cases could help point her in the right direction. So the colleague pulled the files herself, which caused some concern among the medical school’s leaders: Stanford had not yet established guidelines for accessing old patient records, even if it was for a good reason.
“I’d heard about what she did and I started chatting with a couple of colleagues, and we thought, ‘This isn’t something she should have had to do manually in order to help a patient,’” said Shah. And it pointed to a new opportunity: “What if we could treat these questions as a research project and come up with a way to learn from these cases systematically?”
Shah teamed up with Christopher Longhurst, MD, then-chief medical information officer of Lucile Packard Children’s Hospital Stanford, and Robert Harrington, MD, professor and chair of medicine, to create a vision for conducting these big-data consults at scale, drawing on the records of similar past cases.
Over three years, Shah’s team rifled through thousands of records by hand, identifying symptoms, noting timelines, compiling treatment information and more. Then they completed a few trial runs.
Their efforts served as a proof of concept to show that the process could be valuable for treating future patients.
“This service is not for the times when doctors absolutely know what they should be doing, nor is it for the times when they absolutely know what not to do. It’s for the times when we don’t know the answers, when a patient population hasn’t been studied, or a disease condition has a twist that makes it different from what’s been previously reported. And these are often the majority of cases and questions that we see,” said Harrington, the Arthur L. Bloomfield Professor of Medicine.
“The service doesn’t tell physicians what to do; it provides another set of information and data for clinicians to put into the context of everything else they know to better guide decision making.”
Before the Clinical Informatics Consult service became a full-fledged operation offered to Stanford doctors, Shah called the concept the Green Button, he said, explaining: “We thought, ‘Wouldn’t it be great to have a button within the patient electronic health records system that, with one click, could comb through past patient records and spit out a report with data-based conclusions?’”
Over the next several years, the group built on the momentum of the initial test cases, devising strategies to streamline and, in some cases, automate the work needed for each consult. They needed a search engine for patient records: software that could quickly scan millions of records and pull out only those that fit certain criteria.
In true Silicon Valley style, Shah gathered venture backing for a startup, hoping to develop the idea into a company. The team, however, closed shop after two years, still needing time to perfect the search engine software. So Shah invited colleagues still interested in the project to Stanford, where they continued to refine the technology as a research project.
Vladimir Polony, PhD, senior research engineer and the driving force behind the search engine software, came first, followed by instructor of pathology Saurabh Gombar, MD, PhD, and research scientists Alison Callahan, PhD, and Ken Jung, PhD.
Together, they formed the squad that transformed the concept into what it is today. The informatics consult service is technically still research, but after five years of development and approval from the board that oversees Stanford research on human subjects, it opened to all Stanford physicians at the beginning of last year. As far as Shah knows, it’s the first such service at an academic medical center.
“We wanted to make sure that this was a service provided, not a self-serve type of thing,” said Shah. “If you look at the history of medical technology, anytime something complicated enough comes along, a new specialty is born. Doctors don’t do their own MRIs; a radiologist does. We don’t have primary care doctors analyze disease tissues; we have pathologists. So why would we expect a doctor to do their own data digging?”
Gombar is doctors’ first point of contact, working with them to pin down what they want to know. Callahan and Jung then translate each request into code the search engine can understand. Guided by the query, the search engine hunts for medical records with compatible information, whittling down the population until all that remains is data from individuals who, medically, look like the patient in question.
Then the group performs data analysis, drilling into the particulars (such as the time span of recovery or the efficacy of drug A versus drug B), and finally, summarizes the takeaways. The team is usually able to get doctors a full report within 72 hours.
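In spirit, each consult is a cohort comparison: whittle the de-identified records down to patients who medically resemble the case at hand, split them by treatment and compare how they fared. The following sketch, with made-up file and column names rather than the team’s actual code, shows what that filtering and summary step might look like in pandas.

```python
import pandas as pd

# Hypothetical de-identified cohort table, one row per patient.
# Assumed columns: age_years, diagnosis_code, treatment, recovered, days_to_recovery
cohort = pd.read_csv("deidentified_cohort.csv")

# Step 1: whittle the population down to patients who medically resemble the case at hand.
similar = cohort[
    (cohort["diagnosis_code"] == "I42.1")        # hypothetical diagnosis code
    & (cohort["age_years"].between(40, 60))      # hypothetical age band
]

# Step 2: compare outcomes between the two candidate treatments.
summary = (
    similar[similar["treatment"].isin(["drug_A", "drug_B"])]
    .groupby("treatment")
    .agg(
        patients=("treatment", "size"),
        recovery_rate=("recovered", "mean"),
        median_days=("days_to_recovery", "median"),
    )
)
print(summary)
```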
To date, the service has filled more than 130 requests from Stanford doctors across a wide range of specialties, mining data from the Stanford hospitals and from two insurance claims databases that include patient records from across the United States. That national reach is part of what makes the service so powerful: if one patient fares better on drug A, that could be a fluke, but if a thousand patients benefit more from drug A, that’s something to pay attention to, said Shah.
The informatics service is also poised to supplement clinical trials, especially those with narrow, highly specific enrollment criteria. Take a well-studied field like cardiology: About 20% of evidence-based medical guidelines have their foundations in randomized clinical trials, but ask a doctor how often the patient who walks through the door matches the people enrolled in those trials, and the answer is less than a quarter of the time.
In addition, treatments recommended by a clinical trial are not always a sure thing. For instance, a patient could be allergic to the drug or taking another medication that doesn’t mix well with the recommended therapeutic.
“So you’re looking at about a 4% chance that there’s a guideline with trial-based evidence that applies,” said Shah. “By and large, doctors are extrapolating from evidence produced for people who are not like the person in front of them.”
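Those two figures roughly multiply out to Shah’s estimate; a back-of-the-envelope check, with the patient-match rate treated as an assumed round number:

```python
# Back-of-the-envelope reading of the figures above (the match rate is an
# assumed round number; it is only described as "less than a quarter").
guidelines_backed_by_trials = 0.20   # ~20% of guidelines rest on randomized trials
patients_matching_trial = 0.20       # assumed share of patients resembling trial enrollees

print(f"{guidelines_backed_by_trials * patients_matching_trial:.0%}")  # -> 4%
```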
The consult service offers doctors an opportunity to learn from other patients similar to their own.
Pushing the button
When Shah first offered the consulting service at Stanford, he expected to get mostly “drug-A versus drug-B” types of questions. But his team has fielded a variety of inquiries: some seeking treatment guidance, some supporting research, some aimed at streamlining administrative tasks. Most often, doctors want to know how often a particular medical outcome occurs.
In one of the earliest consultations, Matthew Wheeler, MD, assistant professor of cardiovascular medicine, asked the group to investigate the difference between two treatments for a heart condition known as hypertrophic cardiomyopathy. The disease thickens the heart wall, which can block blood flow and hinder the heart’s ability to pump effectively, particularly when the heart rate is elevated. Small clinical trials from the 1960s and 1970s established beta blockers, drugs that slow the rate and force of heart contractions, as the go-to treatment for the condition.
But other studies from the 1970s and 1980s suggested calcium channel blockers might work just as well, if not better. Early trials that favored beta blockers monitored patients for a year, which Wheeler didn’t think was long enough.
“No one’s ever really looked at calcium channel blockers across very large data to see what happens over five to 10 years,” Gombar said. “So we searched it in our data and, actually, when the patient population is large enough and monitored for long enough, the data showed that calcium channel blockers did effectively treat the heart condition.”
In fact, patients who were on calcium channel blockers did better than patients on beta blockers. The finding, Wheeler said, could challenge the current standard of beta blockers as the first-line drug.
Although the data is compelling, consult team members make it clear they’re laying out a summary of what happened to other patients, not making recommendations for medical courses of action. Findings that favor calcium channel blockers don’t mean Wheeler and other cardiologists should abandon beta blockers, for example, but they do point to a new avenue for research and could even prompt a re-evaluation of current guidelines.
In another example, Douglas Blayney, MD, professor of medicine and a cancer specialist, contacted Shah’s team because he wanted to compare two drugs he might prescribe to patients whose tumors have spread to bone, putting them at increased risk for fractures and other bone-related injuries.
Blayney can treat such a patient with a generic drug, or he can use an antibody-based drug, which is more expensive. There’s some evidence that the antibody-based drug works better at protecting patients’ bones from cancer erosion, but it’s not convincing enough to favor it outright. Plus, if the cheaper drug works just as well, that’s a huge advantage for whoever’s paying.
According to the data the service provided, the costlier antibody-based drug was about 20% more effective at protecting breast cancer patients against bone injury.
“That’s a real difference, and bigger than we were expecting,” said Blayney. “Does it mean we will exclusively use the more expensive drug from here on out? Not necessarily, but it’s a clear sign that there’s more investigation to be done.”
Blayney and his colleagues are now proposing a new study to confirm the data analysis. “We would have had to muddle through 900 records to find this information on our own,” he said.
The informatics consult service is stirring interest outside of Stanford, too. “This data-based ‘consult’ idea has been out there for some time, but what makes this project so novel is that last mile that Shah and his group have traversed. It’s turned the idea into something that’s actually operational,” said Kevin Johnson, MD, professor and chair of biomedical informatics at Vanderbilt University Medical Center.
“What I really love about this is that a computer scientist by training merged the traditional MD train of thought — use a consult — with a newfangled data science approach,” said Johnson. “I’ve been watching the project evolve, trying to learn more about it so that we might be able to implement it here at Vanderbilt.”
Shah’s plan is to make that a reality, but not just for Vanderbilt. He’s put together a “playbook” on how to harness the open-source search engine and data analysis software. That way, anyone can access the code, the technology and the step-by-step instructions to run the service themselves.
“Basically there are three things that you need for this to be successful: the data, the software and the people to run the service,” said Shah. “I say, if you’re an academic center and have the data, we’ll give you the technology for free — all you have to do is find the people.”