Abstract
Glaucoma causes irreversible blindness. In 2020, about 80 million people worldwide had glaucoma. Existing machine learning (ML) models are limited to glaucoma prediction, leaving clinicians, patients, and medical experts unaware of how the data analysis and decision-making are handled. Explainable artificial intelligence (XAI) and interpretable ML (IML) create opportunities to increase user confidence in the decision-making process. This article proposes XAI and IML models for analyzing glaucoma predictions/results. XAI primarily uses an adaptive neuro-fuzzy inference system (ANFIS) and pixel density analysis (PDA) to provide trustworthy explanations for glaucoma predictions from diseased and healthy eye images. IML uses submodular pick local interpretable model-agnostic explanation (SP-LIME) to explain results coherently. SP-LIME interprets spiking neural network (SNN) results. Using two different publicly available datasets, namely fundus images, i.e., optical coherence tomography images of the eyes, and clinical medical records of glaucoma patients, our experimental results show that the XAI and IML models provide convincing and coherent decisions for clinicians/medical experts and patients.
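The abstract does not detail how SP-LIME selects its explanations, but the general idea behind submodular pick (from the original LIME work) can be sketched: given per-instance explanation weights, greedily choose a small budget of instances whose explanations jointly cover the globally most important features. The code below is a minimal, hypothetical illustration of that selection step in pure Python, not the authors' implementation; the toy weight matrix `W` is invented for demonstration.

```python
import math

def submodular_pick(W, budget):
    """Greedy submodular-pick selection (SP-LIME idea).

    W[i][j] is the absolute explanation weight of feature j for
    instance i; returns `budget` instance indices whose explanations
    jointly cover the globally important features."""
    n, d = len(W), len(W[0])
    # Global feature importance: I_j = sqrt(sum_i |W_ij|)
    importance = [math.sqrt(sum(abs(W[i][j]) for i in range(n)))
                  for j in range(d)]

    def coverage(selected):
        # Total importance of features touched by any selected explanation.
        return sum(importance[j] for j in range(d)
                   if any(abs(W[i][j]) > 0 for i in selected))

    chosen = []
    for _ in range(budget):
        # Pick the instance that adds the most coverage (greedy step).
        best = max((i for i in range(n) if i not in chosen),
                   key=lambda i: coverage(chosen + [i]))
        chosen.append(best)
    return chosen

# Toy explanation weights: 4 instances, 3 features (illustrative only)
W = [[0.9, 0.0, 0.0],
     [0.8, 0.1, 0.0],
     [0.0, 0.0, 0.7],
     [0.0, 0.6, 0.0]]
print(submodular_pick(W, 2))  # → [1, 2]
```

Instance 1 is picked first because its explanation spans two features, and instance 2 is added next because it covers the remaining uncovered feature; this diversity-through-coverage behavior is what lets a handful of local explanations stand in for a global view of the model.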
Original language | English |
---|---|
Article number | 2509209 |
Pages (from-to) | 1-9 |
Number of pages | 9 |
Journal | IEEE Transactions on Instrumentation and Measurement |
Volume | 71 |
DOIs | |
Publication status | Published - 02 May 2022 |