We put a lot of trust in our doctors — we listen to their diagnoses, fill their prescriptions, follow their dietary suggestions. Would we do the same for a computer?
As the global market for artificial intelligence (AI) in healthcare surges — expected to rise from $1.3 billion in 2019 to $10 billion by 2024, according to Morgan Stanley — it seems we may have to.
Deep learning is an AI approach modeled on the neural networks of the brain. It can analyze complex layers of information and identify abnormalities or trends in medical images.
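At its core, such a network is a stack of learned transformations that turns pixel values into probabilities for each possible finding. A minimal sketch in NumPy — the layer sizes, random weights, and three-finding output here are purely illustrative; a real diagnostic network is far larger and trained on labeled scans:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy "network": two randomly initialized layers mapping a flattened
# 8x8 grayscale image to scores for three made-up findings.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(64, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def predict(image):
    x = image.reshape(-1)        # flatten pixels into a vector
    h = relu(x @ w1 + b1)        # hidden layer extracts features
    return softmax(h @ w2 + b2)  # probabilities over the 3 findings

probs = predict(rng.random((8, 8)))
print(probs)  # three probabilities that sum to 1
```

Training replaces the random weights with values learned from thousands of example images, which is where the medical expertise actually resides.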
AI and eyes
Pearse Keane, a consultant ophthalmologist at Moorfields Eye Hospital, embraced the potential of deep learning five years ago.
“We’re drowning in the numbers of patients that we have to see, and because of that, there are some people who are losing sight irreversibly as they can’t get seen and treated quickly enough,” he tells CNN Business.
He thought that applying the technology to OCT (optical coherence tomography) scans could help prioritize patients with sight-threatening diseases.
Keane approached DeepMind, a UK-based AI research center owned by Google (GOOGL), and with them developed an algorithm, trained on 14,884 retinal scans, that can return a detailed diagnosis in roughly 30 seconds.
It can detect 50 different eye diseases including glaucoma, diabetic retinopathy and age-related macular degeneration, provide a score and recommend how urgently patients should be referred for care.
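To illustrate how per-disease probabilities might be converted into a referral recommendation, here is a hypothetical triage rule in Python. The disease names, urgency tiers, and threshold are invented for this sketch and are not the actual output format of the DeepMind/Moorfields system:

```python
# Hypothetical mapping from detected conditions to referral urgency.
URGENCY = {"wet AMD": "urgent",
           "diabetic retinopathy": "semi-urgent",
           "drusen": "routine"}

def triage(probabilities, threshold=0.5):
    """Return the most urgent tier among diseases scoring above the threshold."""
    order = ["urgent", "semi-urgent", "routine", "observation only"]
    tiers = [URGENCY[d] for d, p in probabilities.items()
             if p >= threshold and d in URGENCY]
    return min(tiers, key=order.index) if tiers else "observation only"

print(triage({"wet AMD": 0.82, "drusen": 0.60}))  # -> urgent
```

The key idea is that the system does not just label a scan — it ranks how quickly a human specialist needs to see the patient.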
Early results for the system, published in the journal Nature Medicine, show that it has the same level of accuracy as leading specialists, correctly identifying types of eye disease 94.5% of the time.
However, before the technique can be implemented at Moorfields Eye Hospital and beyond, it must pass through the lengthy process of regulatory approval and clinical trials.
“We’re tremendously excited about AI,” says Keane, “but also, we’re kind of cautious. We know that it has huge potential, but there are some ways that it might not work.”
Scientists at Weill Cornell Medicine are also deploying deep learning algorithms to save time, identifying which embryos in an in vitro fertilization (IVF) cycle have the best chance of developing into a healthy pregnancy.
The algorithm, dubbed Stork, analyzes time-lapse images of early-stage embryos and is able to discriminate between poor and good embryo quality. According to the research paper published in NPJ Digital Medicine, it performed with 97% accuracy.
Usually this is a manual process, in which an embryologist sorts through multiple images, assigning a quality score that helps them decide which embryos to implant first.
“Grading of the embryo by a human is very subjective,” Nikica Zaninovic, an embryologist at the Center for Reproductive Medicine at Weill Cornell Medicine, tells CNN Business. “Using AI to grade the embryos means we can do some standardization.”
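One way such standardization works: a fixed rule maps the model's confidence score to a grade, so identical inputs always receive identical grades. This Python sketch uses invented score ranges and labels, not Stork's actual scale:

```python
# Hypothetical standardized grading: the same model score always maps
# to the same grade, unlike subjective human grading. The thresholds
# and grade names are invented for illustration.
def grade_embryo(score):
    """Map a model confidence score in [0, 1] to a quality grade."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.7:
        return "good"
    if score >= 0.4:
        return "fair"
    return "poor"

# Identical inputs yield identical grades -- the standardization
# Zaninovic describes.
print(grade_embryo(0.85), grade_embryo(0.85))  # -> good good
```

A human grader might score the same embryo differently on different days; a deterministic rule like this cannot.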
The tool would also have a positive impact on the process of IVF as a whole. It could improve the success rate, minimize the risk of multiple pregnancies, and help to reduce the cost of the procedure, says Zev Rosenwaks, director of the Center for Reproductive Medicine at Weill Cornell Medicine.
Currently, the tool is only available to embryologists at Weill Cornell Medicine in an experimental setting. Expect it to be in practice more widely “within the next year or two,” says Zaninovic.
Predicting cancer risk
One initiative by MIT's Computer Science and AI Lab can predict from a mammogram whether a patient is likely to develop breast cancer in the future.