Google Achieves Healthcare Breakthrough Using EyePACS Retinal Images

Question: Can “deep learning” (a new type of artificial intelligence) be successfully applied to medical imaging? In other words, can a computer, given enough “practice” examples, learn to detect diabetic retinal disease as well as a board-certified medical specialist can? More specifically, can a computer build its own algorithm for examining images of human retinas and correctly diagnosing diabetic retinopathy (DR) or macular edema?

Answer: Google has just announced that “an algorithm based on deep learning had high sensitivity and specificity for detecting referable diabetic retinopathy.” The study was published in the Journal of the American Medical Association (JAMA) on December 1, 2016.

So, why is this announcement so important to primary care? Because, according to Google’s announcement, “automated grading of diabetic retinopathy has potential benefits such as increasing efficiency and coverage of screening programs; reducing barriers to access; and improving patient outcomes by providing early detection.” Those three benefits resonate loudly in the healthcare safety net, where access to care and screening is a challenge, and achieving better patient outcomes in chronic disease management is always high on the list of priorities in any primary care setting.

The study was led by Lily Peng, MD, PhD, of Google Research, Inc., using retinal images provided by EyePACS as well as sources in France and India. EyePACS (which stands for Eye Picture Archive Communication System) places digital cameras in primary care clinics to image the retinas of diabetic patients and then upload the images to “the cloud” where they are read by certified specialists who render an opinion and recommendation within 24 hours.

In early 2015, more than 600 teams from around the world developed thousands of competing algorithms for detecting diabetic retinopathy in a Kaggle data science competition funded by the California Health Care Foundation and built on EyePACS images. Encouraged by that successful project, Dr. Peng and her colleagues used 128,000 retinal images from EyePACS and Messidor, a French retinal image database, to train a neural network optimized for image classification. Between May and December 2015, a panel of 54 US licensed ophthalmologists and ophthalmology senior residents graded each image three to seven times for diabetic retinopathy, diabetic macular edema, and image gradability. The resulting algorithm was then validated on the EyePACS-1 and Messidor-2 data sets, both graded by at least seven US board-certified ophthalmologists.
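For readers unfamiliar with the supervised-learning recipe behind this work, the sketch below illustrates the train-and-validate workflow in miniature. It is emphatically not Google's model: the real study used a deep convolutional network and 128,000 graded retinal photographs, while this toy replaces them with a single logistic unit and synthetic 16-pixel "images" whose labels are made up on the spot. Only the shape of the process — learn weights from labeled training examples, then measure performance on a held-out validation set — mirrors the study.

```python
import math
import random

random.seed(0)

def make_image(diseased):
    # Synthetic stand-in for a retinal photo: "diseased" images simply
    # have brighter pixels on average. Purely illustrative.
    base = 0.7 if diseased else 0.3
    return [base + random.uniform(-0.2, 0.2) for _ in range(16)], diseased

data = [make_image(i % 2 == 0) for i in range(400)]
random.shuffle(data)
train, validation = data[:300], data[100:]  # held-out set, as in the study
train, validation = data[:300], data[300:]

# One logistic "neuron" in place of a deep network.
weights = [0.0] * 16
bias = 0.0
lr = 0.1

def predict(pixels):
    # Probability the image is "diseased", per the current weights.
    z = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the training set only.
for epoch in range(20):
    for pixels, label in train:
        err = predict(pixels) - (1.0 if label else 0.0)
        bias -= lr * err
        for j in range(16):
            weights[j] -= lr * err * pixels[j]

# Performance is reported on images the model never trained on.
correct = sum((predict(px) > 0.5) == label for px, label in validation)
accuracy = correct / len(validation)
```

Note that no lesion-based features are hand-specified anywhere: the "knowledge" of what distinguishes the two classes lives entirely in the learned weights, which is the property the JAMA authors highlight about their network.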

The EyePACS-1 data set consisted of nearly 10,000 retinal images. The prevalence of referable diabetic retinopathy (RDR), defined as “moderate and worse diabetic retinopathy, referable diabetic macular edema, or both,” was 8 percent of fully gradable images. The Messidor-2 data set had 1,700 images from 874 patients. The prevalence of RDR was 15 percent of fully gradable images. “Use of the algorithm achieved high sensitivities (97.5 percent [EyePACS-1] and 96 percent [Messidor-2]) and specificities (93 percent and 94 percent, respectively) for detecting referable diabetic retinopathy,” according to Google’s announcement.
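A back-of-envelope calculation shows what those operating points would imply for a screening population. The sensitivity, specificity, and prevalence figures below come from the article; the derived positive predictive value (PPV) is our own illustrative Bayes'-rule arithmetic, not a number reported in the study.

```python
def ppv(sensitivity, specificity, prevalence):
    # P(disease | positive test), by Bayes' rule:
    # true positives over all positives in the screened population.
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# EyePACS-1 operating point: 97.5% sensitivity, 93% specificity,
# 8% prevalence of referable diabetic retinopathy (RDR).
eyepacs_ppv = ppv(0.975, 0.93, 0.08)   # ~0.55

# Messidor-2 operating point: 96% sensitivity, 94% specificity,
# 15% RDR prevalence.
messidor_ppv = ppv(0.96, 0.94, 0.15)   # ~0.74
```

In other words, at the EyePACS-1 operating point roughly half of the patients flagged for referral would truly have RDR — a workload consideration that matters in safety-net settings where specialist follow-up capacity is limited.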

In the JAMA article, the authors explain, “These results demonstrate that deep learning neural networks can be trained, using large data sets and without having to specify lesion-based features, to identify diabetic retinopathy or diabetic macular edema in retinal fundus images with high sensitivity and high specificity. This automated system for the detection of diabetic retinopathy offers several advantages, including consistency of interpretation (because a machine will make the same prediction on a specific image every time), high sensitivity and specificity, and near instantaneous reporting of results.”

The authors observe that “further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.”

Why is such a breakthrough significant, and what will it mean for diabetes patients? According to Jorge Cuadros, OD, PhD, CEO of EyePACS, 415 million people worldwide have diabetes, and of those, about 15 percent are at increased risk for vision loss and blindness due to diabetic retinopathy. DR is the leading cause of blindness among working-age adults in most developed countries, yet 90 percent of vision impairment is avoidable through early detection by retinal screening and appropriate treatment.

“The problem,” Dr. Cuadros explained, “is that DR often presents no symptoms until the disease has progressed beyond the point of effective treatment. Regular screening of all diabetic patients is important to detect DR before it’s too late.” Cuadros added that, even with screening and detection, patient adherence to referral recommendations is often the next roadblock to timely treatment. “This algorithm will raise an instantaneous red flag while the patient is still in the clinic, and will hopefully activate that patient to take charge of their eye care and prevent disease progression and blindness.”

Direct application of this algorithm in the primary care setting is still off in the future, of course. But now that the first question has been answered (yes, deep learning can be successfully applied to medical imaging of the human retina), the next step will be to test the hypothesis that immediate feedback, prompting concerned intervention by a trusted provider, will activate diabetic patients to follow through on sight-saving recommendations. “Our ultimate goal is to actually prevent vision loss and blindness, not just to check the box that says we conducted a retinal exam on our diabetic patients,” Cuadros explained.