
Automated Reasoning for Explainable Artificial Intelligence

5 pages · Published: November 8, 2017

Abstract

Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This short paper is a non-technical position statement that aims to prompt a discussion of the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligence. We suggest that the emergence of the new paradigm of XAI, which stands for eXplainable Artificial Intelligence, is an opportunity for rethinking these relationships, and that XAI may offer a grand challenge for future research on automated reasoning.

Keyphrases: big data, conflict-driven reasoning, explanation

In: Giles Reger and Dmitriy Traytel (editors). ARCADE 2017. 1st International Workshop on Automated Reasoning: Challenges, Applications, Directions, Exemplary Achievements, vol 51, pages 24--28

BibTeX entry:
@inproceedings{ARCADE2017:Automated_Reasoning_for_Explainable,
  author    = {Maria Paola Bonacina},
  title     = {Automated Reasoning for Explainable Artificial Intelligence},
  booktitle = {ARCADE 2017. 1st International Workshop on Automated Reasoning: Challenges, Applications, Directions, Exemplary Achievements},
  editor    = {Giles Reger and Dmitriy Traytel},
  series    = {EPiC Series in Computing},
  volume    = {51},
  pages     = {24--28},
  year      = {2017},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {https://easychair.org/publications/paper/HtJp},
  doi       = {10.29007/4b7h}}