
Bridging the Gap: Exploring Explainable AI for Interpretable Machine Learning Models in Software Defect Detection

EasyChair Preprint no. 13181

11 pages
Date: May 6, 2024

Abstract

In recent years, the adoption of machine learning (ML) in software defect detection has shown promising results, revolutionizing the way defects are identified and rectified in software development processes. However, the opacity of complex ML models presents a significant challenge, hindering their acceptance in critical domains where interpretability and trust are paramount. Explainable AI (XAI) has emerged as a crucial research area aimed at addressing this challenge by providing insights into the decision-making processes of ML models.


This paper delves into the integration of XAI techniques into interpretable ML models for software defect detection. By elucidating the inner workings of these models, XAI not only enhances their transparency but also enables stakeholders to understand, validate, and refine the detection process. We survey various XAI methods, including feature importance analysis, local and global interpretability techniques, and model-agnostic approaches, exploring their applicability and effectiveness in the context of software defect detection.
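As a concrete illustration of the kind of model-agnostic feature-importance analysis surveyed here, the following minimal sketch (not taken from the paper) applies scikit-learn's permutation importance to a random-forest defect classifier. The feature names (loc, churn, etc.) and the synthetic data are illustrative assumptions; real studies would use defect datasets such as PROMISE.

# Minimal sketch: model-agnostic global explanation of a defect classifier
# via permutation importance (scikit-learn). Data and feature names are
# illustrative placeholders, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for code metrics commonly used in defect prediction.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)
feature_names = ["loc", "cyclomatic_complexity", "churn",
                 "num_developers", "coupling", "comment_ratio"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the score drop:
# a simple, model-agnostic estimate of each feature's global importance.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean,
                     result.importances_std), key=lambda t: -t[1])
for name, mean, std in ranking:
    print(f"{name:24s} {mean:+.3f} +/- {std:.3f}")

Local explanation methods (e.g., LIME or SHAP) would complement this global view by attributing an individual module's predicted defect-proneness to its specific metric values.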

Keyphrases: adoption, machine learning

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:13181,
  author = {Louis Frank and Saleh Mohamed},
  title = {Bridging the Gap: Exploring Explainable AI for Interpretable Machine Learning Models in Software Defect Detection},
  howpublished = {EasyChair Preprint no. 13181},
  year = {EasyChair, 2024}}