Publication:
LISA : Enhance the explainability of medical images unifying current XAI techniques

Abstract

This work proposes a unified approach, LISA, for increasing the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to explain the predictions of black-box models. This unified method increases confidence in a black-box model's decisions, allowing such models to be employed in crucial applications under the supervision of human specialists. To illustrate the applicability of the individual XAI techniques and the unified method (LISA), a chest X-ray (CXR) classification model for identifying COVID-19 patients is trained using transfer learning, with an ImageNet-based Inception V2 model as the transfer learning backbone.
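To give a concrete sense of the perturbation-based explanation style that LIME (one of the techniques LISA unifies) relies on, the sketch below is a minimal, hypothetical illustration in plain NumPy, not the authors' code: the image is divided into a grid of patches, random patch subsets are masked out, the black-box model is queried on each perturbed image, and a linear surrogate model is fit to attribute an importance score to each patch.

```python
import numpy as np

def lime_style_explanation(image, predict_fn, grid=4, n_samples=200, seed=0):
    """Minimal LIME-style sketch: per-patch importance for a black-box model.

    Assumptions (illustrative only): `image` is a 2-D array whose height and
    width are divisible by `grid`; `predict_fn` maps an image to a scalar
    score (e.g. the probability of one class).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    n_patches = grid * grid

    # Binary masks: which patches are kept (1) or zeroed out (0) per sample.
    z = rng.integers(0, 2, size=(n_samples, n_patches))
    preds = np.empty(n_samples)
    for i in range(n_samples):
        perturbed = image.copy()
        for p in range(n_patches):
            if z[i, p] == 0:  # mask this patch out
                r, c = divmod(p, grid)
                perturbed[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 0
        preds[i] = predict_fn(perturbed)

    # Fit a linear surrogate: each patch coefficient approximates how much
    # that patch contributes to the black-box score.
    X = np.hstack([z, np.ones((n_samples, 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return coef[:n_patches].reshape(grid, grid)

# Toy stand-in for a classifier: score driven by the top-left quadrant only.
def toy_model(img):
    return img[:8, :8].mean()

img = np.zeros((16, 16))
img[:8, :8] = 1.0
scores = lime_style_explanation(img, toy_model)
# Patches in the top-left quadrant receive the highest importance scores.
```

Real LIME additionally uses superpixel segmentation instead of a fixed grid and a distance-weighted, sparse linear model, but the perturb-query-fit loop above is the core idea.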

Keywords

LISA, explainability, medical images, XAI techniques

Citation

S. H. P. Abeyagunasekera, Y. Perera, K. Chamara, U. Kaushalya, P. Sumathipala and O. Senaweera, "LISA : Enhance the explainability of medical images unifying current XAI techniques," 2022 IEEE 7th International Conference for Convergence in Technology (I2CT), 2022, pp. 1-9, doi: 10.1109/I2CT54291.2022.9824840.
