Crunching the image data

Using artificial intelligence to look at biopsies

Letter from the dean

From electronic medical records to personal monitoring to genomics, we are producing more medical data than any one human can analyze.

Lloyd Minor, MD, dean of the Stanford School of Medicine. (Photo by Glenn Matsumura)

Big data is transforming the health care landscape, and according to Stanford Medicine’s inaugural Health Trends Report, that’s both an opportunity and a challenge. To help us turn this flood of data into knowledge, we increasingly turn to machines. But what if there aren’t numbers to crunch, only images to see?

Currently, a computer can’t look at a suspicious mole and tell you whether it’s cancer, or assess whether tumor tissue is adenocarcinoma or squamous cell carcinoma. For those determinations, you need an MD.

Typically, skin cancer is diagnosed visually by a dermatologist, sometimes with the help of a hand-held microscope called a dermatoscope, with careful consideration of the ABCDEs — asymmetry, border, color, diameter and evolving. A diagnosis of cancer is confirmed with a biopsy. In Sebastian Thrun’s lab in the Stanford Artificial Intelligence Laboratory, graduate students Andre Esteva and Brett Kuprel are seeking to upend this model.

Working with Helen Blau, professor of microbiology and immunology, and dermatology faculty from Stanford Medicine, Esteva and Kuprel took a modified version of Google’s reverse image search and fed it only two inputs: images and biopsy-confirmed diagnoses. They did not ask the program to apply the ABCDEs, but rather to figure out on its own what constellation of pixels signifies malignancy.
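
The general recipe the team describes — training an existing image network on nothing but pictures and their confirmed labels — is what machine-learning practitioners call supervised fine-tuning. The sketch below is a hypothetical illustration of that recipe, not the Stanford group’s code: it assumes a PyTorch/torchvision setup, a generic pretrained backbone rather than the Google-derived model the researchers used, and made-up folder names for the images.

```python
# Hypothetical sketch of supervised fine-tuning on images + biopsy-confirmed labels.
# Not the Stanford team's code; paths, backbone and class names are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: lesions/train/benign/*.jpg, lesions/train/malignant/*.jpg
train_data = datasets.ImageFolder("lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a pretrained backbone; replace only the final layer so the network
# learns, from pixels and labels alone, what separates the classes.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one pass over the training images
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point of the sketch is the shape of the approach: no hand-coded diagnostic rules appear anywhere; the only supervision is the label attached to each image.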

Then the team put the algorithm to the test. It passed with flying colors, matching 21 board-certified dermatologists in the accuracy and specificity of its diagnoses of some of the most common and most deadly skin cancers. A machine, with no medical knowledge and no diagnostic criteria, had been taught to accurately distinguish between malignant and benign lesions.

It’s an inspiring sign of what might be possible, and not just in dermatology. Recently, Stanford Medicine faculty, led by geneticist Michael Snyder and radiologist Daniel Rubin, developed a software program to help distinguish between the two most common types of lung cancer, an essential distinction for determining the best treatment plan. Again, the researchers began with only two inputs, the histopathology images and the diagnoses, and then let the program build its own prediction model.

When this algorithm was put to the test, it was more accurate than human pathologists in predicting patient survival times. It had been trained to examine nearly 10,000 characteristics of lung-tumor tissue, far more than any human eye could take in.
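
As a rough illustration of how a large set of image-derived tissue measurements can feed a survival prediction, the snippet below fits a standard Cox proportional-hazards model to a synthetic feature table. Everything in it is a hypothetical stand-in — the feature names, the data, and the choice of the lifelines library — meant only to show the general shape of such a pipeline, not the study’s actual method.

```python
# Hedged sketch: survival modeling from image-derived tissue features.
# Synthetic data and hypothetical column names; not the study's code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_patients, n_features = 200, 50   # the real study used thousands of features

# Hypothetical quantitative features measured from tumor-tissue images.
features = pd.DataFrame(
    rng.normal(size=(n_patients, n_features)),
    columns=[f"tissue_feature_{i}" for i in range(n_features)],
)
features["survival_months"] = rng.exponential(scale=24, size=n_patients)
features["event_observed"] = rng.integers(0, 2, size=n_patients)

# Cox proportional-hazards model: learns how each feature shifts survival risk.
cph = CoxPHFitter(penalizer=0.1)   # regularization helps with many correlated features
cph.fit(features, duration_col="survival_months", event_col="event_observed")

# Higher partial hazard = higher predicted risk (shorter expected survival).
risk_scores = cph.predict_partial_hazard(features)
print(risk_scores.head())
```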

For those in the developing world and in rural areas who don’t have access to human diagnosticians, computer vision offers new hope for prediction, prevention and personalized treatment. For everyone, these developments offer the possibility of faster, more accurate diagnosis.

Artificial intelligence, once an academic discipline, is now on the cusp of transforming health care. A machine may never take the place of the trained eye, but it’s already extending and enhancing human vision, allowing us to see things we never knew were there.

Sincerely,

Lloyd Minor, MD

Carl and Elizabeth Naumann Dean of the School of Medicine

Professor of Otolaryngology-Head & Neck Surgery