WAIT-2024: Workshop on Artificial Intelligence Trustworthiness
Almaty, Kazakhstan, April 24-25, 2024
Conference website: https://ainlconf.ru/wait
Submission link: https://easychair.org/conferences/?conf=wait20240
Submission deadline: March 17, 2024
As the use of Artificial Intelligence (AI) continues to increase, it is essential to ensure its safe, secure, and trustworthy use. This workshop aims to provide a platform for experts to share their latest research and developments in AI trustworthiness, identify challenges and opportunities for future work, and foster collaboration and networking among researchers and practitioners in the field.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference.
We invite the submission of papers presenting original, previously unpublished research. We accept research papers (6-11 pages) and full papers (12+ pages) formatted according to the Springer LNCS style. Although Springer offers both LaTeX style files and Word templates, we strongly encourage authors to use LaTeX, especially for texts containing several formulae. Papers must be written in English.
We use a double-blind review scheme: please anonymize your papers when submitting them for initial review.
At least one author of every accepted paper must register for the conference and present the paper, preferably in person, or online.
Authors should submit their papers via the EasyChair system: https://easychair.org/conferences/?conf=wait20240
List of Topics
Scope:
We are interested in papers that address the following topics:
- Methods and principles for the integration of AI in critical products and services in a safe, reliable, and secure way;
- Methods for analyzing datasets to detect labeling anomalies in order to counter attacks on machine learning;
- ML models with certified robustness;
- Model training methods that provide resistance to adversarial attacks;
- Methods for detecting and countering attacks on AI components in intelligent systems;
- Methods for explaining and improving the interpretability of ML models;
- Research on the adversarial robustness of common models, including typical artificial neural network architectures;
- Techniques for building trusted machine learning frameworks and libraries;
- Engineering of innovative industrial products and services integrating AI;
- Large-scale deployment of industrial systems integrating AI;
- Interaction design that builds user confidence in AI-based systems;
- Ethical and societal implications of intelligent system trustworthiness.
Committees
Steering Committee
- Denis Turdakov, ISP RAS
- Ivan Oseledets, AIRI
- Alexander Gasnikov, Innopolis
- Natalia Loukachevitch, MSU
Program Chairs
- Oleg Rogov, AIRI
- Denis Turdakov, ISP RAS
Program Committee
- Aleksandr Lobanov, MAI, MIPT
- Artem Shelmanov, AIRI
- Alexander Rogozin, MIPT
- Mikhail Drobyshevskii, ISP RAS
- Ilya Sochenkov, FRC CSC RAS
- Mikhail Tikhomirov, MSU
- Daniil Chernyshev, MSU
- Ilya Makarov, MISIS
- Maxim Ryndin, ISP RAS
- Konstantin Arkhipenko, ISP RAS
Invited Speakers
- TBA
Publication
Selected papers will be published in the main conference proceedings in the Springer CCIS series, which is indexed by Scopus and Web of Science.
Venue
The workshop is co-located with the 12th Conference on Artificial Intelligence and Natural Language (AINL 2024).
Contact
If you have any questions about the workshop or the submission process, please contact the workshop organizers at wait24@ispras.ru.