Singapore Institute of Technology

Detecting and Mitigating Algorithmic Bias in Binary Classification using Causal Modeling

conference contribution
posted on 2024-10-29, 00:37 authored by Wendy Wan Yee Hui, Wai Kwong Lau

This paper proposes the use of causal modeling to detect and mitigate algorithmic bias. We provide a brief description of causal modeling and a general overview of our approach. Using the Adult dataset, available from the UC Irvine Machine Learning Repository, we develop (1) a prediction model, which is treated as a black box, and (2) a causal model for bias mitigation. We focus on gender bias and the problem of binary classification. We show that gender bias in the prediction model is statistically significant at the 0.05 level, and we demonstrate by cross-validation that the causal model is effective in mitigating it. Furthermore, the overall classification accuracy improves slightly. Our approach is intuitive, easy to use, and can be implemented with existing statistical software tools such as lavaan in R; it therefore enhances explainability and promotes trust.
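To illustrate the kind of significance test the abstract alludes to, here is a minimal, self-contained sketch of one common way to check whether a black-box classifier's positive-prediction rate differs by gender: a two-sided two-proportion z-test at the 0.05 level. This is a generic illustration with hypothetical counts, not the paper's actual procedure or data; the paper's own method uses causal modeling (e.g. with lavaan in R).

```python
import math

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """Two-sided two-proportion z-test: do groups A and B receive
    positive predictions at significantly different rates?"""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (pos_a + pos_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts of positive predictions from a black-box model,
# split by gender group (not taken from the paper)
z, p = two_proportion_z(pos_a=300, n_a=1000, pos_b=110, n_b=500)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Rate difference is statistically significant at the 0.05 level")
```

With these illustrative counts the group rates are 30% vs. 22%, and the test rejects equal rates at the 0.05 level; in the paper's setting such a rejection is what flags the prediction model as gender-biased.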

History

Journal/Conference/Book title

4th International Conference on Computer Communication and Information Systems (CCCIS 2024), Phuket, Thailand, February 27-29, 2024.

Publication date

2024-02-27

Version

  • Post-print

Rights statement

© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
