Retinal diseases are varied, in both their symptoms and severity. Left untreated, some can cause severe vision loss, or even blindness. Proactive treatment and prevention are therefore vital, yet not everybody has equal access to them.
In third-world countries in particular, a critical issue is a lack of public access to specialised medical care. Even if the equipment is there, the expertise required to interpret test results and get the correct diagnosis and prognosis may not be.
For retinal diseases, the problem is acute: accurately diagnosing them and interpreting test results is particularly difficult, especially for inexperienced doctors.
PixelPlex, a development and consulting company focused on blockchain, artificial intelligence (AI) and Internet of Things (IoT) technologies among others, realised that, through its contacts with medical facilities, it might be able to help. “We realised that we could actually use AI for the automatic diagnostical aids,” explains Alex Dolgov, Head of Consulting at PixelPlex.
AIRA, an artificial intelligence retina analyser, is the result. As retinal screening is essentially a pattern recognition problem, AI was the obvious solution. While a neural network will not ultimately diagnose better than a medical professional, AI's undeniable advantage lies in analysing large data sets and quantifying screening results. AIRA can thus assist a doctor in obtaining more accurate data for further work.
PixelPlex was originally given a ‘large array’ of fundus camera images showing various symptoms and anatomical structures of the human eye, such as exudates, haemorrhages, degenerative retinal changes, vessels and optic nerve pathologies. The company added its own data sets from later research, to be used in neural network training.
“We created our own data set, which was quite a challenge, and with the help of qualified medical professionals, created the UI data set that trained the AI to detect various diseases and other defects, just based on photographs taken by a fundus camera,” said Dolgov.
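To give a sense of what such training data looks like in practice, here is a minimal sketch of a fundus-image segmentation dataset in PyTorch. It is an illustration only: the file layout, the class name FundusSegmentationDataset and the label values are assumptions for the example, not PixelPlex's actual pipeline.

```python
# Hypothetical sketch: pair each fundus photograph with a per-pixel annotation
# mask produced by medical professionals. File layout and label values are
# illustrative assumptions, not PixelPlex's actual data pipeline.
from pathlib import Path

import torch
from torch.utils.data import Dataset
from torchvision.io import read_image

# Illustrative label scheme: 0 = background, 1 = exudates,
# 2 = haemorrhages, 3 = vessels, 4 = optic nerve pathology.
NUM_CLASSES = 5

class FundusSegmentationDataset(Dataset):
    """Pairs each fundus photograph with its per-pixel annotation mask."""

    def __init__(self, image_dir: str, mask_dir: str):
        self.image_paths = sorted(Path(image_dir).glob("*.png"))
        self.mask_dir = Path(mask_dir)

    def __len__(self) -> int:
        return len(self.image_paths)

    def __getitem__(self, idx: int):
        image_path = self.image_paths[idx]
        mask_path = self.mask_dir / image_path.name
        image = read_image(str(image_path)).float() / 255.0  # (3, H, W)
        mask = read_image(str(mask_path)).long().squeeze(0)  # (H, W) class IDs
        return image, mask
```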
The model architecture was based on variations of U-Net, a convolutional neural network developed for biomedical image segmentation. These variations included LinkNet, a light deep neural network architecture for semantic segmentation, and Dilated U-Net, which has been used in other initiatives for assessing the risk of cancer in certain organs.
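U-Net itself is an encoder-decoder with skip connections that carry fine spatial detail from the downsampling path to the upsampling path. The short PyTorch sketch below shows the general shape of such a model; it is a generic illustration with arbitrary channel widths and depth, not AIRA's actual architecture.

```python
# Minimal U-Net-style encoder-decoder with skip connections.
# Generic sketch only: depths and channel widths are arbitrary assumptions,
# not AIRA's actual architecture.
import torch
from torch import nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 upsampled + 64 from skip
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 upsampled + 32 from skip
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                  # skip connection 1
        e2 = self.enc2(self.pool(e1))      # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)               # per-pixel class logits
```

The named variations tweak this template: LinkNet uses residual encoder blocks and adds skip features to the decoder rather than concatenating them, which keeps the network light, while Dilated U-Net replaces some convolutions with dilated ones to widen the receptive field.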
Photos taken by the fundus camera are sent to and analysed by the software created by PixelPlex. The trained neural network can identify the information needed to determine a diagnosis, then provide medical staff with that information.
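A hypothetical sketch of that inference step is shown below, assuming a trained segmentation model such as the one above; the function name analyse_fundus_photo and the way findings are summarised (fraction of the image covered by each class) are illustrative assumptions, not AIRA's actual output format.

```python
# Hypothetical inference step: run a trained segmentation model on a fundus
# photo and summarise which findings appear and how much area they cover.
import torch
from torchvision.io import read_image

# Same illustrative label scheme as above.
LABELS = ["background", "exudates", "haemorrhages", "vessels", "optic nerve pathology"]

@torch.no_grad()
def analyse_fundus_photo(model: torch.nn.Module, path: str) -> dict:
    model.eval()
    image = read_image(path).float().unsqueeze(0) / 255.0  # (1, 3, H, W)
    logits = model(image)                                  # (1, C, H, W)
    prediction = logits.argmax(dim=1).squeeze(0)           # (H, W) class IDs
    total_pixels = prediction.numel()
    # Report the fraction of the image covered by each predicted finding,
    # which medical staff could review alongside the original photo.
    return {
        label: (prediction == class_id).sum().item() / total_pixels
        for class_id, label in enumerate(LABELS)
        if class_id != 0
    }
```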
“The source of these images, having them analysed, providing the medical diagnosis based on the photographs, and then creating this data set – that was the biggest challenge,” said Dolgov.
Currently, the solution operates at approximately 85% accuracy. The project is still in active development, however, and the team hopes to increase this to at least 95%. It is also worth noting that prevention is better than cure: PixelPlex notes that AIRA will be able to spot symptoms of disease with a precision a regular physician could not match, and that mathematical models will later be built to enhance the neural network's analysis even further.
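For segmentation models, a single accuracy figure is typically computed over a held-out validation set using metrics such as per-pixel accuracy or a per-class Dice score. The sketch below shows both, assuming predictions and ground-truth annotations as integer class maps; it is a generic illustration, not PixelPlex's evaluation protocol.

```python
# Generic sketch of two common segmentation metrics, not PixelPlex's
# evaluation protocol. Inputs are (H, W) integer class maps.
import torch

def pixel_accuracy(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Fraction of pixels whose predicted class matches the annotation."""
    return (pred == target).float().mean().item()

def dice_score(pred: torch.Tensor, target: torch.Tensor, class_id: int) -> float:
    """Overlap between prediction and annotation for one class, from 0 to 1."""
    p = (pred == class_id)
    t = (target == class_id)
    intersection = (p & t).sum().item()
    denominator = p.sum().item() + t.sum().item()
    return 2.0 * intersection / denominator if denominator else 1.0
```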
You can find out more about AIRA here.
Editor’s note: This article is in association with PixelPlex.
Photo by Amanda Dalbjörn on Unsplash