Using explainable and interpretable artificial intelligence methods to identify and understand factors affecting resilience

Research output: Preprint

Abstract

Resilience after a natural disaster is the ability of a community to recover from the disaster and adapt to its effects: the capacity to withstand, adapt to and learn from shocks and stresses in a way that reduces chronic vulnerability and enables inclusive growth. Resilience indicators are the elements that help individuals or systems overcome adversity, and they are critical in many domains. Identifying resilience factors or variables would be valuable for understanding the problems that exist in society and for determining the impact of resilience outcomes. Many studies have examined resilience indicators related to different social and environmental challenges, such as droughts, climate change, floods, disasters, crime, poverty and political imbalances. However, few research works have used data analytics or data mining to identify resilience indicators. In the age of information, there is a need for an automated framework that is comprehensive and easy to understand. Artificial intelligence (AI) and machine learning have great potential, but their decision-making processes can be difficult for domain experts such as social scientists, biologists and policymakers to understand, which hinders their application in identifying resilience indicators. This research examines how well people in the United Kingdom (UK), Afghanistan and Belgium cope with natural disasters, using explainable and interpretable methods and World Bank data to identify key indicators of their well-being and preparedness. It demonstrates two explainable and interpretable AI approaches, namely belief rule-based systems (BRBS) and dynamic probabilistic graphical models (DPGMs), to identify resilience indicators and their effects on personal well-being outcomes. The decision-making processes of both BRBS and DPGMs are explainable and interpretable, following a stepwise analysis to select the indicators. These methods provide interpretable predictions and insights into their operation and reveal valuable knowledge about resilience indicators and their interactions. In addition, this research conducts a sensitivity analysis using two interpretable approaches, local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP). The sensitivity analysis shows that the proposed BRBS and DPGMs provide similar results and identify factors that are valuable for understanding resilience outcomes.
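The abstract does not give implementation details, but a minimal sketch of the kind of LIME/SHAP sensitivity analysis it describes might look as follows. The feature names, synthetic data and random-forest surrogate model below are illustrative placeholders, not the study's World Bank indicators or its BRBS/DPGM models.

import numpy as np
import shap
import lime.lime_tabular
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["income", "education", "health_access", "disaster_exposure"]  # hypothetical indicators
X = rng.random((500, len(feature_names)))                                      # placeholder indicator values
y = X @ np.array([0.5, 0.3, 0.4, -0.6]) + rng.normal(0, 0.05, 500)             # synthetic well-being outcome

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP: average attribution of each indicator to the predicted outcome
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("Mean |SHAP| per indicator:",
      dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))

# LIME: local explanation for a single record
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression")
explanation = lime_explainer.explain_instance(X[0], model.predict, num_features=4)
print(explanation.as_list())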
Original language: English
Pages: 1-19
Number of pages: 19
Volume: Preprint article
DOIs
Publication status: Published - 2023

Publication series

Name: SSRN Electronic Journal
