On the button

Treatments that work for people just like you

Winter 2016

It’s hardly a secret among medical practitioners: For most patients, clear treatment guidelines simply don’t exist.

Take Vera.

She is a 55-year-old woman of Vietnamese descent who has asthma. You’re her doctor, and you’ve just learned she also has high blood pressure. Vera’s case doesn’t fit the data from any clinical trials; there’s no medical literature on hypertension medications for middle-aged, asthmatic Vietnamese-American women.

You want to treat her hypertension, but you have no guidelines. Medications that work great in one ethnic group can work dismally in another. Older people metabolize drugs more slowly than young people. Males and females can respond quite differently to the same drug. And among the many subjects enrolled in all the hypertension clinical trials ever conducted, there have been few, if any, asthmatics, because people with multiple conditions are typically screened out.

Vera is sitting in your exam room now. What do you do?

What if you could get some guidance simply by pressing a virtual button on the computer screen displaying Vera’s electronic medical record? This would trigger a search of millions of other electronic records and, in a matter of minutes, generate a succinct composite summary of the outcomes of 25 or 100 or perhaps 1,000 patients very similar to her — same race, same age, same symptoms, similar lab-test results — who were given various antihypertensive medications. Patients similar to Vera, it turns out, respond especially well to one particular drug — something you likely wouldn’t have guessed on your own.
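To make the idea concrete, here is a minimal sketch, in Python, of what such a look-alike search might boil down to. Everything in it is hypothetical: the field names, the similarity criteria and the in-memory records standing in for millions of EMRs.

```python
# A minimal sketch of a look-alike search; all field names are hypothetical,
# and a real system would query millions of EMRs rather than a Python list.
from statistics import mean

def find_similar(records, patient, max_age_gap=5):
    """Return records matching the patient's sex, ethnicity and conditions,
    within a small age window."""
    return [
        r for r in records
        if r["sex"] == patient["sex"]
        and r["ethnicity"] == patient["ethnicity"]
        and set(patient["conditions"]) <= set(r["conditions"])
        and abs(r["age"] - patient["age"]) <= max_age_gap
    ]

def summarize_outcomes(similar):
    """Group look-alike patients by the antihypertensive they received and
    report how many took each drug and the average drop in blood pressure."""
    by_drug = {}
    for r in similar:
        by_drug.setdefault(r["drug"], []).append(r["bp_reduction"])
    return {drug: (len(vals), mean(vals)) for drug, vals in by_drug.items()}

vera = {"sex": "F", "ethnicity": "Vietnamese", "age": 55,
        "conditions": ["asthma", "hypertension"]}
# records = load_emr_cohort(...)  # hypothetical data source
# print(summarize_outcomes(find_similar(records, vera)))
```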

Vera is a made-up patient, but there are plenty of people who are square pegs in the round hole of clinical-trial results. Scattered throughout millions of electronic medical records, such look-alike cases could point the way to effective treatment options for Vera and others if they could be plucked from the aggregate and formatted for easy interpretation. Some aspects of this approach still need to be worked out, such as assuring patients that their privacy will be protected and making databases compatible across health-care systems, but Stanford medical researchers are tackling those problems. The goal is a seamless system that quickly links physicians to the information they need in order to give their patients the best-validated treatments available.

In 2014, three Stanford Medicine faculty members authored an article in a major health policy journal, Health Affairs, urging action to make this concept a reality. The solution, which they’ve dubbed the Green Button, would revolutionize the practice of medicine by tapping the huge volumes of data lying dormant in the EMRs of millions of patients to tailor treatments to individuals.

The Green Button approach takes advantage of the increasingly routine use of these records and the fast-paced progress taking place in computation and data transmission. It could enable a real-time solution to a big problem: the inadequacy of results from clinical trials — the foundation upon which treatment guidelines are built — for the vast majority of patients. Clinical trials are experiments in which new medications and procedures are tested on people. In order to achieve meaningful results, investigators tend to select participants for trials who are a lot alike in terms of age, sex, ethnicity, medical conditions and treatment history. Yet the average patient walking into a doctor’s office seldom resembles a patient included in those trials.

“Every day I encounter patients for whom we just don’t have the best scientific evidence on how to treat them,” says one of the authors, Christopher Longhurst, MD, who recently stepped down from his position as clinical professor of pediatrics in systems medicine and chief medical information officer for Stanford Children’s Health. Longhurst is now a professor of biomedical informatics and chief information officer at the University of California-San Diego Health Sciences.

In their article, Longhurst, along with Nigam Shah, MBBS, PhD, assistant professor of biomedical informatics research and assistant director of the Stanford Center for Biomedical Informatics Research, and Robert Harrington, MD, professor and chair of medicine, outlined a vision for drawing medical guidance from day-to-day clinical practice in hospitals and doctors’ offices. The idea was to give doctors access to aggregate patient data, right there and then, from a vast collection of EMRs. This near-instant output isn’t a substitute for a clinical trial, but it’s a lot better than nothing — or than resorting to the physician’s own bias-prone memory of one or two previous encounters with similar patients.

“You don’t have to type anything in,” says Shah. “Just press the Green Button.”

From the gold standard to the Green Button

The randomized clinical trial is considered the gold standard of medical research. In a randomized clinical trial, a number of participants are randomly assigned to one of two — sometimes more — groups. One group gets the drug or the procedure being tested; the other is given a placebo or undergoes a sham procedure. Ideally, the study is blinded — patients don’t know which option they’re getting — or even better, double-blinded — the investigators and their assistants don’t know, either. Once the trial’s active phase ends, rigorous statistical analysis determines whether the hypothesis, spelled out in advance of the trial, was borne out.

“It goes without saying that you should use randomized trial evidence when it’s available,” says Harrington, who also holds the Arthur L. Bloomfield Professorship of Medicine. “But a lot of times, it’s not.”

Harrington’s specialty, cardiovascular medicine, exemplifies that generalization. “Remarkably, even in the well-studied field of cardiology, only 19 percent of published guidelines are based on randomized controlled trials,” he and his co-authors wrote in the 2014 Health Affairs paper. And even those trials’ findings apply to fewer than one in five of the actual patients who have the conditions they explored. Shah concurs. “Clinical trials select only a small, artificial subset of the real population,” he says. “A regular, ordinary person who walks into the doctor’s office doesn’t usually fit.”

As a result, “only about 4 percent of the time have you got a clinical-trial-based guideline applicable to the patient facing you right now,” Shah says. The rest of the time, doctors must rely on their own judgment.

Yet even when there is no clinical-trial evidence to guide a doctor’s choice of treatment options for a particular patient, “tons of applicable evidence” are locked away in health systems’ EMRs, Shah says. The inspiration for the Green Button concept was a real-life, real-time data search conducted by Jennifer Frankovich, MD, now a clinical assistant professor of pediatric rheumatology at Stanford. A 13-year-old girl with lupus had been admitted to Lucile Packard Children’s Hospital Stanford with severe kidney and pancreatic inflammation. She was considered at risk for blood clots. While anticoagulants could counteract clotting, they would also increase her risk of bleeding from some procedures likely to be used during her hospital stay. There were no clear clinical-trial-based guidelines on whether to give the girl anticoagulants, and different clinicians had different thoughts about what was advisable.

But owing to a research project she was involved in, Frankovich had access to a Stanford database containing the EMRs of pediatric lupus patients admitted between 2004 and 2009. So she was able to perform an on-the-spot analysis of the outcomes of 98 kids who’d been in situations similar to the one confronting her patient. Within four hours, it was clear to Frankovich that kidney and pancreatic complications put kids with lupus at much higher risk of clotting.
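Her analysis amounted to a risk comparison between two groups of patients. The sketch below shows the shape of that calculation using a standard relative-risk formula; the counts are invented for illustration and are not Frankovich’s actual data.

```python
# Illustrative only: the kind of two-group risk comparison an on-the-spot
# cohort analysis boils down to. The counts below are invented, not the
# actual numbers from Frankovich's 98-patient review.
def relative_risk(exposed_events, exposed_total,
                  unexposed_events, unexposed_total):
    """Risk of the outcome in the exposed group divided by the risk in the
    unexposed group; values well above 1.0 signal elevated risk."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical counts: clots among lupus patients with vs. without kidney
# and pancreatic involvement.
rr = relative_risk(exposed_events=9, exposed_total=30,
                   unexposed_events=3, unexposed_total=68)
print(f"Relative risk of clotting: {rr:.1f}x")  # -> 6.8x
```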

Frankovich and her teammates decided to give the girl anticoagulants right away. The young patient suffered no clotting or other adverse events. Frankovich was the lead author of a 2011 article describing the case; Longhurst was a co-author.

That serendipitous result, says Longhurst, led to a follow-on question: “How can we go about doing this in a purposeful way on a continuing, case-by-case basis?”

With advancing technology, the kind of analysis Frankovich performed can be completed in considerably less than an hour today — soon enough for an outpatient finishing an appointment.

Since then, Stanford researchers including Shah have published numerous studies establishing the power of pooling large volumes of data to derive clinically beneficial results — although not yet in real time as would be necessary for implementing the Green Button approach. The Stanford Center for Population Health Sciences, directed by Mark Cullen, MD, professor of medicine, is putting in place a data library housing the records of some 10 million different patients, purchased from another institution.

These developments are keyed to efforts around precision health, Stanford Medicine’s push to anticipate and prevent disease in the healthy and precisely diagnose and treat disease in the ill. Precision health aims to give researchers and physicians better tools for predicting individual risks for specific diseases, developing approaches to early detection and prevention, and helping clinicians make real-time decisions about the best way to care for particular patients.

But there are several obstacles to putting the Green Button idea into practice.

Surmountable hazards

The stumbling blocks along the road to the Green Button’s realization aren’t primarily technical — the methodologies are available, and the infrastructure is buildable. But the more idiosyncratic your patient’s case is, the larger the initial pool of patient data needs to be. And scaling up presents some challenges.

As Shah puts it: “What if you press the Green Button and nothing happens?” If you can’t access enough records of similar patients to begin with, you’re out of luck.

Assembling that huge data pool gets easier if numerous institutions can be coaxed into contributing to it. The numbers are certainly there: Stanford Health Care alone has close to 2 million patient EMRs. Kaiser Permanente, which has been using EMRs for a decade or more, has 9 million, and the University of California health system has 14 million. The U.S. Department of Veterans Affairs has 20 to 25 years’ worth of longitudinal data on many millions of veterans.

The key lies in integrating these disparate databases to yield valuable, personalized medical insights.

But sharing data between institutions is no simple matter. “Any hospital CEO today would kick you out of the office if you propose data sharing,” Shah says. “That’s rational on their part. Sharing data puts you at risk of leaks, and compromised patient privacy can mean big financial and public-relations pitfalls.”

Federal law guards patients’ privacy, but it doesn’t make the data in their medical records totally off limits. For instance, as Longhurst points out, the law specifically allows the use of patient data for improving quality of care.

Even if the patient-privacy issue turns out to be insurmountable in the short run, there’s a workaround, Shah says: Health systems could share with one another descriptions of the kinds of patients they’re looking for, rather than request raw patient data. Thus, a health system that received a request for information on female middle-aged patients of Vietnamese descent with asthma and high blood pressure would, in accordance with such an arrangement, automatically search its own database and share only statistical summaries of what it found, such as the range of outcomes for certain medications given to this cohort.
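A sketch of how that summary-only exchange might work appears below. The query format, field names and statistics are assumptions for illustration; the point is simply that only aggregate numbers, never raw records, leave each institution.

```python
# A sketch of the summary-only workaround: the requester sends a cohort
# description, each health system runs it locally, and only aggregate
# statistics cross institutional boundaries. All names are hypothetical.
from statistics import mean, stdev

COHORT_QUERY = {
    "sex": "F",
    "ethnicity": "Vietnamese",
    "age_range": (50, 60),
    "conditions": ["asthma", "hypertension"],
}

def local_summary(records, query):
    """Runs inside one institution's firewall; returns per-drug counts and
    summary statistics, never patient-level data."""
    lo, hi = query["age_range"]
    matches = [
        r for r in records
        if r["sex"] == query["sex"]
        and r["ethnicity"] == query["ethnicity"]
        and lo <= r["age"] <= hi
        and set(query["conditions"]) <= set(r["conditions"])
    ]
    by_drug = {}
    for r in matches:
        by_drug.setdefault(r["drug"], []).append(r["bp_reduction"])
    return {
        drug: {"n": len(v), "mean": mean(v),
               "sd": stdev(v) if len(v) > 1 else 0.0}
        for drug, v in by_drug.items()
    }

def pool(summaries):
    """Combine per-institution summaries into overall counts and
    sample-size-weighted means."""
    pooled = {}
    for s in summaries:
        for drug, stats in s.items():
            agg = pooled.setdefault(drug, {"n": 0, "mean": 0.0})
            total = agg["n"] + stats["n"]
            agg["mean"] = (agg["mean"] * agg["n"] + stats["mean"] * stats["n"]) / total
            agg["n"] = total
    return pooled
```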

There’s a third stumbling block. Asked whether the Green Button idea could meet resistance from medical practitioners who object to taking orders from an algorithm, Shah says, “The point is not to outsmart the physician. The point is to tell you the outcomes of the best guesses of 100 of your colleagues. You can choose to interpret or ignore it.”

Some smart money is betting these stumbling blocks can be overcome. Kyron Inc., a Palo Alto-based start-up Shah cofounded in 2013 with technologist Louis Monier, PhD, and Stanford-trained biomedical informaticist Noah Zimmerman, PhD, raised several million dollars and licensed informatics-associated technology from Stanford’s Office of Technology Licensing to do just that. Kyron has since merged with Learning Health Inc., another start-up, which now holds licenses on Stanford intellectual property for de-identifying clinical documents, searching patient records and more.

Build your own randomized trial

“Virtually 100 percent of the 3,000 kids who get diagnosed with cancer every year in the U.S. are in clinical trials,” says Longhurst. “How many adults with cancer are in clinical trials? Maybe 2 or 3 percent — we can’t possibly afford to put 100 percent of adults into trials. So the other 97 percent may be getting treated, but the health-care system isn’t learning anything from their outcomes.” For his part, cardiologist Harrington notes that fewer than 10 percent of heart-attack patients are actually enrolled in a clinical trial.

The Green Button approach may be able to support clinical trials in a way not yet possible. Suppose you’re a doctor, and a patient walks into your office. You take the patient’s history, perform a workup, update the patient’s EMR accordingly and hit the Green Button. As it turns out, there’s not enough data on similar patients to provide decent information on which of two treatment options is best for this patient.

But that’s not the end of it. The Green Button now shifts gears from merely downloading outcomes of other patients to suggesting what Harrington and others have termed “point-of-care randomization”: You give this patient one of the two treatments — call it Treatment A — and the next similar patient who walks through your door (or, more accurately, through any door in your mega-EMR network) gets Treatment B. The similar patient after that one will be prescribed Treatment A, then B, and so on. (Either prescription would be equally ethical because both are within the standard of care.) Keep alternating prescriptions to successive similar patients — while monitoring their responses to minimize the chances of either treatment doing them any harm — and you will have increasingly large cohorts fueling an informed conclusion. A test run of this type of study has already been conducted by a team including Stanford professor of biomedical data science Philip Lavori, PhD, and published in Clinical Trials in 2011.

After agreeing to participate in this trial, patients were randomly assigned one of two insulin protocols for diabetes, both equally appropriate according to current medical knowledge. As the trial progressed, EMR software tracked which of the two approaches was associated with better outcomes.
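A toy version of that workflow is sketched below. It follows the article’s description of alternating assignment between two equally acceptable treatments; the treatment labels, outcome coding and class name are all invented, and a production system, like the one in the Lavori study, would randomize rather than strictly alternate.

```python
# A toy sketch of point-of-care randomization as described above: successive
# similar patients alternate between two equally acceptable treatments, and
# outcomes accumulate into two growing cohorts. Names and the 0/1 outcome
# coding are invented; a real trial would randomize rather than alternate.
import itertools

class PointOfCareTrial:
    def __init__(self, treatments=("A", "B")):
        self._assigner = itertools.cycle(treatments)
        self.outcomes = {t: [] for t in treatments}

    def assign(self):
        """Pick the treatment for the next similar patient."""
        return next(self._assigner)

    def record(self, treatment, outcome):
        """Log the observed outcome (1 = good, 0 = poor) for later comparison."""
        self.outcomes[treatment].append(outcome)

    def success_rates(self):
        return {t: (sum(v) / len(v) if v else None)
                for t, v in self.outcomes.items()}

trial = PointOfCareTrial()
first = trial.assign()   # "A" for this patient
trial.record(first, 1)   # good outcome observed later
second = trial.assign()  # "B" for the next similar patient
trial.record(second, 0)
print(trial.success_rates())  # {'A': 1.0, 'B': 0.0}
```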

“Applied this way, the Green Button will let clinicians learn more from the patients they’re caring for each time they see one of them,” says Shah. “Every patient becomes part of a scientific experiment.”

Meanwhile, Shah continues to push forward with funding from multiple sources, including the National Institutes of Health and, here at Stanford Medicine, the dean’s office’s Biomedical Data Science Initiative. Among his front-burner projects: an improved search engine that will be able to deliver a Green Button head count in less than a millisecond.
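One generic way such speed becomes plausible is sketched below: precompute an inverted index mapping each patient attribute to the set of patient IDs that carry it, so a cohort head count reduces to a set intersection. This is an assumption about the general technique, not a description of Shah’s actual design.

```python
# A generic indexing sketch, not Shah's actual design: map each attribute to
# the set of patient IDs that carry it, so counting a cohort is just a set
# intersection. Production systems typically use compressed bitmaps instead.
from collections import defaultdict

class CohortIndex:
    def __init__(self):
        self.index = defaultdict(set)  # attribute -> patient IDs

    def add(self, patient_id, attributes):
        for a in attributes:
            self.index[a].add(patient_id)

    def count(self, attributes):
        """Number of patients having every one of the given attributes."""
        sets = sorted((self.index[a] for a in attributes), key=len)
        if not sets:
            return 0
        return len(set.intersection(*sets))

idx = CohortIndex()
idx.add(1, {"sex:F", "ethnicity:Vietnamese", "dx:asthma", "dx:hypertension"})
idx.add(2, {"sex:F", "ethnicity:Vietnamese", "dx:asthma"})
print(idx.count({"sex:F", "dx:asthma", "dx:hypertension"}))  # -> 1
```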

Big things often take longer to happen than you expect, but when they do happen, they happen fast.

Bruce Goldman

Bruce Goldman is a science writer in the Office of Communications. Email him at goldmanb@stanford.edu.
