RAISA3: The ECAI 2020 Workshop on Robustness of AI Systems Against Adversarial Attacks
Santiago de Compostela, Spain, August 29, 2020
Conference website: https://sites.google.com/view/raisa3-2020
Submission link: https://easychair.org/conferences/?conf=raisa3
Submission deadline: May 15, 2020
The RAISA3 workshop will focus on the robustness of AI systems against adversarial attacks. While most research efforts in adversarial AI investigate attacks and defenses with respect to particular machine learning algorithms, our approach is to explore the impact of adversarial AI at the system architecture level. In this workshop we will discuss adversarial AI attacks that can impact an AI system at each of its processing stages: the input stage of sensors and sources, the data conditioning stage, the training and application of machine learning algorithms, the human-machine teaming stage, and deployment within the mission context. We will additionally discuss attacks against the supporting computing technologies.
The RAISA3 workshop is a full-day event and will include invited keynote speakers working in the research area, as well as a number of relevant presentations selected through a Call for Participation.
In general, adversarial AI attacks against AI systems take three forms: 1) data poisoning attacks inject mislabeled or malicious data points into training sets so that the algorithm learns an incorrect mapping, 2) evasion attacks perturb correctly classified input samples just enough to cause errors in runtime classification, and 3) inversion attacks repeatedly probe trained algorithms with edge-case inputs in order to reveal previously hidden decision boundaries and training data. Protection against adversarial learning attacks includes techniques that cleanse training sets of outliers in order to thwart data poisoning attempts, and methods that sacrifice some up-front algorithm performance in order to be robust to evasion attacks. As AI capabilities become incorporated into everyday life, understanding adversarial attacks, their effects, and relevant mitigation approaches for AI systems becomes of paramount importance.
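To make the second attack form concrete, here is a minimal sketch of an evasion attack in the style of the fast gradient sign method, applied to a hypothetical trained logistic-regression classifier (the weights `w`, `b` and the input point are illustrative assumptions, not from any particular system):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained linear classifier: predicts class 1 if w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def evade(x, y_true, eps):
    """Perturb x by eps in the direction that increases the loss
    (gradient of the logistic loss w.r.t. the input is (p - y) * w)."""
    p = sigmoid(w @ x + b)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([0.4, 0.1])          # correctly classified as class 1
x_adv = evade(x, y_true=1, eps=0.5)
print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The key point is that the perturbation is computed from the model's own gradient, so a visually or statistically small change to the input can be aimed precisely at the decision boundary.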
Central to this methodology is the notion of threat modeling, which will support relevant discourse with respect to potential attacks and mitigations.
The workshop format is structured to encourage a lively exchange of ideas among researchers in AI working on developing techniques to mitigate adversarial attacks on end-to-end AI systems.
Submission Guidelines
Papers should be formatted as shown in the paper template and submitted as a PDF suitable for the arXiv repository (up to 8 pages). Submission and review of papers will be managed via EasyChair.
- Submissions are not anonymized
- Submission due date: May 15, 2020
List of Topics
- AI threat modeling
- Protection against attacks on the end-to-end AI architecture:
  - Data conditioning stage
  - Adversarial machine learning
  - Human-machine teaming stage
  - Cyber attacks against AI hardware and/or software
  - Deployment stage
- Explainable AI
- System lifecycle attacks
- System verification and validation
- System performance metrics, benchmarks and standards
- Protection and detection techniques against black-box, white-box, and gray-box adversarial attacks
- Defenses against training attacks
- Defenses against testing (inference) attacks
- Response and recovery based on:
  - Confidence levels
  - Consequences of action
- AI system confidentiality, integrity, and availability
Committees
Program Committee
- Lujo Bauer, Carnegie Mellon University
- Daniel Clouse, U.S. Department of Defense
- Courtney Corley, Pacific Northwest National Laboratory
- David Cox, MIT-IBM Watson AI Lab
- Stephan Günnemann, Technical University of Munich
- Ariel Herbert-Voss
- Sven Krasser, CrowdStrike
- Brian Lindauer, Duo
- Aleksander Madry, Massachusetts Institute of Technology
- Nick Malyska, MIT Lincoln Laboratory
- Rebecca Zubajlo, Massachusetts Institute of Technology
Organizing Committee
- David R. Martinez (Chair), MIT Lincoln Laboratory
- William W. Streilein (Co-chair), MIT Lincoln Laboratory
- Olivia Brown, MIT Lincoln Laboratory
- Rajmonda Caceres, MIT Lincoln Laboratory
- Eliezer Kanal, CMU Software Engineering Institute
Venue
The RAISA3 workshop is held in conjunction with ECAI 2020.
Santiago de Compostela, Spain
Contact
All questions about submissions should be emailed to: