
Beyond explaining: Opportunities and challenges of XAI-based model improvement

journal contribution
posted on 2023-09-25, 12:11; authored by Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek

Abstract

Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. Despite the development of a multitude of methods to explain the decisions of black-box classifiers in recent years, these tools are seldom used beyond visualization purposes. Only recently have researchers started to employ explanations in practice to actually improve models. This paper offers a comprehensive overview of techniques that apply XAI practically to obtain better ML models, and systematically categorizes these approaches, comparing their respective strengths and weaknesses. We provide a theoretical perspective on these methods, and show empirically, through experiments in both toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning. We further discuss potential caveats and drawbacks of these methods. We conclude that while model improvement based on XAI can have significant beneficial effects, even on complex and not easily quantifiable model properties, these methods need to be applied carefully, since their success can vary depending on a number of factors, such as the model and dataset used, or the employed explanation method.
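To make the class of approaches surveyed here concrete, the sketch below illustrates one common flavor of XAI-based model improvement: augmenting the task loss with a penalty on explanations that assign relevance to regions known to be irrelevant, in the spirit of "right for the right reasons" training. This is a minimal, generic example and not code from the paper; the network architecture, the penalty weight lam, and the irrelevant_mask input are hypothetical placeholders, and PyTorch with simple input-gradient attributions is assumed in place of the various explanation methods the article compares.

import torch
import torch.nn as nn

# Hypothetical small classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
lam = 0.1  # weight of the explanation penalty (hypothetical value)

def train_step(x, y, irrelevant_mask):
    # x: (B, 1, 28, 28) inputs; y: (B,) integer labels;
    # irrelevant_mask: (B, 1, 28, 28) binary mask of regions that should not matter.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = criterion(logits, y)

    # Simple input-gradient explanation: d(true-class logit)/d(input).
    true_class_logits = logits.gather(1, y.unsqueeze(1)).sum()
    (attribution,) = torch.autograd.grad(true_class_logits, x, create_graph=True)

    # Penalize attribution mass falling on regions marked as irrelevant.
    penalty = (attribution * irrelevant_mask).pow(2).mean()

    loss = task_loss + lam * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, the input-gradient attribution could be replaced by other explanation methods; as the abstract notes, the success of such schemes can vary with the model, the dataset, and the explanation method employed.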

History

Journal/Conference/Book title

Information Fusion

Publication date

2022-11-23

Version

  • Published

Sub-Item type

  • Journal article
