Explainable AI for glaucoma prediction analysis to understand risk factors in treatment planning

Md Sarwar Kamal, Nilanjan Dey, Linkon Chowdhury, Syed Irtija Hasan, KC Santosh

Research output: Contribution to journal › Article › peer-review

42 Citations (Scopus)

Abstract

Glaucoma causes irreversible blindness; in 2020, about 80 million people worldwide had the disease. Existing machine learning (ML) models are limited to predicting glaucoma and leave clinicians, patients, and medical experts unaware of how the data are analyzed and decisions are reached. Explainable artificial intelligence (XAI) and interpretable ML (IML) create opportunities to increase user confidence in the decision-making process. This article proposes XAI and IML models for analyzing glaucoma predictions. The XAI component primarily uses an adaptive neuro-fuzzy inference system (ANFIS) and pixel density analysis (PDA) to provide trustworthy explanations for glaucoma predictions from glaucomatous and healthy images. The IML component uses submodular pick local interpretable model-agnostic explanation (SP-LIME) to explain results coherently; SP-LIME interprets the outputs of a spiking neural network (SNN). On two publicly available datasets, namely fundus (coherence tomography) images of the eyes and clinical medical records of glaucoma patients, our experimental results show that the XAI and IML models provide convincing and coherent decisions for clinicians, medical experts, and patients.
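The SP-LIME step described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical illustration and not the authors' code: a RandomForestClassifier stands in for the paper's SNN, the clinical feature names and synthetic data are placeholders for the real glaucoma records, and the open-source lime package's submodular_pick module supplies the submodular-pick step.

# Hypothetical SP-LIME sketch for a tabular glaucoma dataset.
# A RandomForestClassifier stands in for the paper's spiking neural
# network (SNN); features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer
from lime import submodular_pick

# Placeholder clinical features (the paper's records differ).
feature_names = ["IOP", "cup_disc_ratio", "age", "corneal_thickness"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["healthy", "glaucoma"],
    discretize_continuous=True,
)

# Submodular pick selects a small, diverse set of local explanations
# that together cover the model's behaviour across the dataset.
sp = submodular_pick.SubmodularPick(
    explainer,
    X_test,
    clf.predict_proba,
    sample_size=len(X_test),
    num_features=4,
    num_exps_desired=3,
)
for exp in sp.sp_explanations:
    print(exp.as_list())  # per-feature contributions for each picked case

Each printed explanation lists feature-weight pairs for one representative patient, which is the kind of coherent, case-level justification the abstract attributes to SP-LIME.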
Original language: English
Article number: 2509209
Pages (from-to): 1-9
Number of pages: 9
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 71
DOIs
Publication status: Published - 02 May 2022
