AI Predicts Glaucoma Progression in High-Risk Individuals

Subject to further refinement with larger numbers of people, this may prove a helpful diagnostic aid for doctors, say researchers

AI (artificial intelligence) trained to recognise red flags in retinal images and clinical information can predict if and when people at high risk of glaucoma, usually referred to as 'glaucoma suspects', will go on to develop the condition, finds research published online in the British Journal of Ophthalmology.

Subject to further refinement with larger numbers of people, this may prove a helpful diagnostic aid for doctors, conclude the researchers.

Recent advances in AI have prompted the design of algorithms to better detect glaucoma progression. But none has so far drawn on clinical features to predict disease progression among people at high risk, point out the researchers.

Glaucoma is one of the leading causes of blindness worldwide. But it's particularly difficult for doctors to know if and when people with suspicious signs of early optic nerve damage, but without the cardinal diagnostic feature of abnormally high internal pressure within the eye (intraocular pressure, or IOP for short), will go on to develop glaucoma and risk losing their sight, they explain.

With a view to using AI to bridge this gap, the researchers reviewed the clinical information for 12,458 eyes with suspicious early signs of glaucoma.

From among these, they focused on 210 eyes that had progressed to glaucoma and 105 that hadn't, all of which had been monitored every 6-12 months for at least 7 years.

They then used red flag signs in retinal images taken during the monitoring period plus 15 key clinical features to produce a set of 'predictive' combinations, which were then fed into 3 machine learning classifiers (algorithms that automatically order or categorise data).

The clinical features included age, sex, IOP, corneal thickness, retinal nerve fibre layer thickness, blood pressure and body mass index (BMI).
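As a rough illustration of this kind of pipeline (not the authors' actual code), the sketch below combines image-derived scores with tabular clinical features and trains three off-the-shelf classifiers. The synthetic feature values and labels, and the choice of scikit-learn models, are assumptions made for the example; the study trained its own deep learning models.

```python
# Minimal sketch, assuming synthetic data: image-derived "red flag" scores
# plus clinical features fed to three simple classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_eyes = 315  # 210 progressed + 105 did not, as reported in the study

# Placeholder inputs: an image-derived score plus a few of the clinical
# features named in the article (values are purely illustrative).
X = np.column_stack([
    rng.normal(0.5, 0.2, n_eyes),   # image "red flag" score (hypothetical)
    rng.normal(16, 3, n_eyes),      # baseline IOP (mmHg)
    rng.normal(80, 10, n_eyes),     # diastolic blood pressure (mmHg)
    rng.normal(90, 12, n_eyes),     # average RNFL thickness (microns)
    rng.normal(55, 10, n_eyes),     # age at baseline (years)
    rng.normal(24, 4, n_eyes),      # body mass index
])
y = rng.integers(0, 2, n_eyes)      # 1 = progressed to glaucoma, 0 = did not

# Three off-the-shelf classifiers standing in for the study's three models.
models = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC()),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```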

All three algorithms performed well, consistently predicting whether, and when, eyes would progress to glaucoma with a high degree of accuracy (91-99%).

The 3 most important predictive clinical features were baseline IOP, diastolic blood pressure (the second number in a blood pressure reading, which measures arterial pressure between heartbeats), and average thickness of the retinal nerve fibre layer.
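One common way to rank which inputs drive a trained model's predictions is permutation importance; the sketch below is an illustrative assumption rather than the ranking method reported in the paper, and it reuses the synthetic features, labels and feature order from the previous example.

```python
# Minimal sketch, assuming X and y from the previous example: rank features
# by how much shuffling each one degrades a fitted random forest's accuracy.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["image_score", "baseline_IOP", "diastolic_BP",
                 "RNFL_thickness", "age", "BMI"]

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=20, random_state=0)

# Print features from most to least important.
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]:15s} importance = {result.importances_mean[i]:.3f}")
```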

The average age of participants at the start of the monitoring period was 55, ranging from 33 to 76. Baseline age didn't emerge as a key predictive factor, but the average age of those who progressed to glaucoma was significantly lower than that of those who didn't, note the researchers.

They acknowledge various limitations to their findings. For example, the AI training results were based on relatively little information, and only those with normal IOP who had not been given any glaucoma treatment over the course of the monitoring were included in the study.

"The current results, thus, demonstrate only that the built model works well for a limited range of patients," they caution.

Nevertheless they conclude: "Our results suggest that [deep learning] models that have been trained on both ocular images and clinical data have a potential to predict disease progression in [glaucoma suspect] patients.

"We believe that with additional training and testing on a larger dataset, our [deep learning] models can be made even better, and that with such models, clinicians would be better equipped to predict individual [glaucoma suspect] patients' respective disease courses."

They add: "Prediction of disease course on an individual-patient basis would help clinicians to present tailored management options to patients with regard to issues such as follow-up duration, starting (or not) of IOP-lowering treatment, and targeting of IOP levels."
