XLoKR 2022: The Third Workshop on Explainable Logic-Based Knowledge Representation
FLoC 2022, Haifa, Israel, July 31, 2022
Conference website | https://sites.google.com/view/xlokr2022 |
Submission link | https://easychair.org/conferences/?conf=xlokr2022 |
Conference program | https://easychair.org/smart-program/FLoC2022/ |
Submission deadline | May 10, 2022 |
XLoKR 2022 is the third workshop in the XLoKR series and aims at bringing together researchers interested in explainable logic-based knowledge representation.
Embedded or cyber-physical systems that interact autonomously with the real world, or with users they are supposed to support, must continuously make decisions based on sensor data, user input, knowledge acquired at runtime, and knowledge provided at design time. To make the behavior of such systems comprehensible, they need to be able to explain their decisions to the user or, after something has gone wrong, to an accident investigator.
While systems that use Machine Learning (ML) to interpret sensor data are very fast and usually quite accurate, their decisions are notoriously hard to explain, though huge efforts are currently being made to overcome this problem. In contrast, decisions made by reasoning about symbolically represented knowledge are in principle easy to explain. For example, if the knowledge is represented in (some fragment of) first-order logic, and a decision is made based on the result of a first-order reasoning process, then one can in principle use a formal proof in an appropriate calculus to explain a positive reasoning result, and a counter-model to explain a negative one. In practice, however, things are not so easy in the symbolic KR setting either. For example, proofs and counter-models may be very large, and thus it may be hard to comprehend why they demonstrate a positive or negative reasoning result, in particular for users that are not experts in logic. Thus, to leverage explainability as an advantage of symbolic KR over ML-based approaches, one needs to ensure that explanations can really be given in a way that is comprehensible to different classes of users (from knowledge engineers to laypersons).
The problem of explaining why a consequence does or does not follow from a given set of axioms has been considered in full first-order theorem proving for at least 40 years, though usually with mathematicians as the intended users. In knowledge representation and reasoning, efforts in this direction are more recent, and have usually been restricted to sub-areas of KR such as AI planning and description logics. The purpose of this workshop is to bring together researchers from different sub-areas of KR and automated deduction who are working on explainability in their respective fields, with the goal of exchanging experiences and approaches.
Submission Guidelines
We invite extended abstracts of 2–5 pages on topics related to explanation in logic-based KR. Papers should be formatted in Springer LNCS style.
List of Topics
- AI planning
- Answer set programming
- Argumentation frameworks
- Automated reasoning
- Causal reasoning
- Constraint programming
- Description logics
- Non-monotonic reasoning
- Probabilistic reasoning
Committees
Program Committee
- Franz Baader
- Sander Beckers
- Bart Bogaerts
- Annemarie Borg
- Stefan Borgwardt
- Gerhard Brewka
- Sarah Alice Gaggl
- Joerg Hoffmann
- Ruth Hoffmann
- Thomas Lukasiewicz
- Pierre Marquis
- Cristian Molinaro
- Rafael Peñaloza
- Nico Potyka
- Francesco Ricca
- Stefan Schlobach
- Zeynep G. Saribatur
- Mohan Sridharan
- Matthias Thimm
- Francesca Toni
- Markus Ulbricht
Organizing Committee
- Franz Baader, TU Dresden
- Bart Bogaerts, Vrije Universiteit Brussel
- Gerd Brewka, University of Leipzig
- Jörg Hoffmann, Saarland University
- Thomas Lukasiewicz, University of Oxford
- Nico Potyka, Imperial College London
- Francesca Toni, Imperial College London
Venue
The workshop will be co-located with FLoC 2022 (https://www.floc2022.org/) in Haifa, Israel.
Contact
All questions about submissions should be emailed to npotyka@imperial.ac.uk.