XA2ITS: Explainable AI for Intelligent Transportation Systems |
Website | https://sites.google.com/usmba.ac.ma/xa2its |
Submission link | https://easychair.org/conferences/?conf=xa2its |
Submission deadline | October 25, 2022 |
Explainable Artificial Intelligence (XAI) is an emergent research field that aims to make the outcomes of AI and deep learning models more human-interpretable without sacrificing performance. It holds the potential to increase public acceptance of and trust in safety-critical systems such as Intelligent Transportation Systems (ITS). This book focuses on XAI in the field of ITS; it aims to compile into a coherent structure the state-of-the-art research and development of explainable and trustworthy ITS. We are seeking chapters that propose approaches using interpretable methods to improve the interpretability of ITS applications. Chapters addressing the ethical and societal implications of XAI in ITS are also solicited.
For more details, kindly refer to the book website.
Submission Guidelines
Academics, researchers, and professionals are invited to contribute chapter(s) to the book. Submissions will be accepted at any point up until the deadline of October 25, 2022. Notification of chapter acceptance is expected on or before November 2022. There are no submission, acceptance, or publication fees for manuscripts submitted to this book publication.
Authors interested in contributing are invited to submit chapter(s) presenting original, high-quality, unpublished results that are not currently under review elsewhere. Original research articles proposing new approaches within the scope of the book are welcome, as are original review articles and surveys.
All submitted manuscripts must be in English and 15-25 pages long, including references and any appendices. Manuscripts must be prepared according to the standard guidelines of the CRC Press chapter format. Authors can use either LaTeX or MS Word to write their chapters; LaTeX is the preferred format, and a LaTeX template is available for download on the book website. Manuscripts written in MS Word should use A4 page format, single column, 11-point Times New Roman font, and 1.5 line spacing. Prospective authors should submit their manuscripts electronically as PDF files through the provided submission link.
All submitted chapters will undergo a single-blind peer-review process. Chapters should not include material (e.g., figures, tables, or charts) owned by other authors; where copyrighted material is reused, it is the authors' responsibility to obtain permission from the original publisher. Papers will be screened for plagiarism before acceptance.
List of Topics
- Explainable models for Autonomous driving systems
- Explainable models for Traffic management and prediction
- Explainable models for Behavioral modeling and interpretation
- Explainable models for Scene perception
- Explainable models for Security, privacy, and safety systems
- Explainable models for Urban and rural transportation management
- Explainable models for Air, road, and rail traffic management
- Explainable models for Ports, waterways, and vessel traffic management
- Explainable models for Green Transportation, Sustainability, and Smart Energy Management
- Interpretable DNN for ITS applications
- Interpretable deep reinforcement learning, inverse reinforcement learning for ITS applications
- Inherently interpretable models by design for ITS applications
- Post-hoc, model-agnostic methods for ITS applications, including but not limited to: SHapley Additive exPlanations (SHAP), Knowledge Graphs, Local Interpretable Model-Agnostic Explanations (LIME), fuzzy logic systems, visualization techniques, feature relevance explanation, and contrastive and counterfactual explanation
- Interpretability-Accuracy Trade-Off in ITS context
- Interpretability metrics and evaluation in ITS applications
- Responsible AI for ITS problems
- Safe AI and Ethical AI in smart transportation
- Explainability impact on social acceptance
- Safety and trust enabled by interpretability
- Liability, Fairness, and algorithmic accountability
- Explainability and Moral dilemma
- Explainability and Standards, laws, and regulations
Editors
- Dr. Amina Adadi, Moulay Ismail University, Morocco
- Dr. Afaf Bouhoute, Sidi Mohamed Ben Abdellah University, Morocco
Publication
The accepted contributions will be published by CRC Press, Taylor & Francis Group. All accepted chapters will be submitted by the publisher for indexing in SCOPUS and Web of Science.
Contact
For any queries contact: afaf.bouhoute@usmba.ac.ma