Authors: Abeyagunasekera, S. H. P.; Perera, Y.; Chamara, K.; Kaushalya, U.; Sumathipala, P.
Date accessioned: 2022-09-08
Date available: 2022-09-08
Date issued: 2022-07-18
Citation: S. H. P. Abeyagunasekera, Y. Perera, K. Chamara, U. Kaushalya, P. Sumathipala and O. Senaweera, "LISA : Enhance the explainability of medical images unifying current XAI techniques," 2022 IEEE 7th International Conference for Convergence in Technology (I2CT), 2022, pp. 1-9, doi: 10.1109/I2CT54291.2022.9824840.
ISBN: 978-1-6654-2168-3
URI: https://rda.sliit.lk/handle/123456789/2978
Abstract: This work proposed a unified approach to increasing the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method, LISA, incorporates multiple techniques, such as Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley value-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions, allowing it to be employed in critical applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Language: en
Keywords: LISA; explainability; medical images; XAI techniques
Title: LISA : Enhance the explainability of medical images unifying current XAI techniques
Type: Article
DOI: 10.1109/I2CT54291.2022.9824840
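The abstract names Integrated Gradients as one of the attribution techniques LISA unifies. As a minimal illustration of that technique only (not the paper's pipeline), the sketch below computes integrated gradients for a toy differentiable scoring function `F`; the function, its analytic gradient, and the step count are all illustrative assumptions, standing in for the paper's Inception V2 network. The completeness axiom (attributions sum to `F(x) - F(baseline)`) can be checked directly.

```python
import numpy as np

# Toy stand-in "model": a differentiable score, NOT the paper's CNN.
def F(x):
    return float(np.sum(x ** 2))

def grad_F(x):
    # Analytic gradient of F; a real model would use autodiff here.
    return 2.0 * x

def integrated_gradients(x, baseline, steps=100):
    """Midpoint Riemann-sum approximation of the path integral
    IG_i(x) = (x_i - b_i) * \int_0^1 dF/dx_i(b + a*(x-b)) da."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_F(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)       # all-zero baseline, as commonly used for images
attr = integrated_gradients(x, baseline)
# Completeness: attr.sum() should equal F(x) - F(baseline)
print(attr, attr.sum(), F(x) - F(baseline))
```

For an image classifier, `x` would be the pixel tensor, `baseline` a black image, and `grad_F` the gradient of the target class logit with respect to the input.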