WAAS 2020: 2020 Workshop on Assured Autonomous Systems
Website: https://www.ieee-security.org/TC/SPW2020/WAAS/
Submission link: https://easychair.org/conferences/?conf=waas2020
Submission deadline: February 3, 2020
Acceptance notification: February 17, 2020
Publication-ready papers due: March 6, 2020
Workshop date: May 21, 2020
The Workshop on Assured Autonomous Systems (WAAS) addresses the gap between theory-heavy autonomous systems and algorithms and the privacy, security, and safety of their real-world implementations. Advances in machine learning and artificial intelligence have shown great promise in automating complex decision-making processes across transportation, critical infrastructure, and cyber infrastructure domains. Practical implementations of these algorithms require significant systems engineering and integration support, especially as they interact with the physical world. This integration is fraught with artificial intelligence (AI) safety, security, and privacy issues.
The primary focus of this workshop is the (1) detection of, (2) response to, and (3) recovery from AI safety, security, and privacy violations in autonomous systems. Key technical challenges include discriminating between application-layer data breaches and benign process noise, responding to breaches and failures in real-time systems, and recovering autonomously from decision-making failures.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Full papers up to six pages describing results on all aspects of AI safety, security, and privacy in autonomous systems. Papers that encourage the discussion and exchange of experimental and theoretical results, novel designs, and works in progress are preferred.
- Work-in-progress papers up to four pages describing ongoing research in the topic areas.
List of Topics
AI Safety:
- Detecting dataset anomalies that lead to unsafe AI decisions
- Engineering trusted AI software architectures
- Status of existing approaches in ensuring AI/ML safety and gaps to be addressed
- AI safety considerations and experience from industry
- Evaluating safety of AI systems according to their potential risks and vulnerabilities
- Resilient, explainable deep learning, and interpretable machine learning
Security and Privacy:
- Detecting dataset anomalies that lead to autonomous system security and privacy violations
- Differential privacy and privacy-preserving learning and generative models
- Adversarial attacks on machine learning and defenses against adversarial attacks
- Theoretical foundations of machine learning security
- Formal verification of machine learning models and systems
- Defining and understanding AI vulnerabilities and exploitable bugs in ML systems
- Improving the resiliency of AI methods and algorithms to various forms of attack
Committees
Program Committee
- Natalia Alexandrov, NASA Langley
- Yair Amir, Johns Hopkins University
- Saurabh Bagchi, Purdue University
- Raheem Beyah, Georgia Institute of Technology
- Yinzhi Cao, Johns Hopkins University
- Anupam Chattopadhyay, Nanyang Technological University, Singapore
- Joel Coffman, United States Air Force Academy
- Misty Davies, NASA Ames Research Center
- David Doria, HERE Technologies
- Abhishek Dubey, Vanderbilt University
- Ashutosh Dutta, Johns Hopkins University Applied Physics Lab
- Mike Hinchey, University of Limerick
- Dezhi Hong, University of California San Diego
- Yan Huang, Indiana University
- John S. Hurley, National Defense University
- Avinash Kalyanaraman, University of Virginia
- Gabor Karsai, Vanderbilt University
- Mykel Kochenderfer, Stanford University
- Xenofon Koutsoukos, Vanderbilt University
- Jose A. Morales, Carnegie Mellon University
- Sirajum Munir, Bosch Research and Technology Center
- William H. Robinson, Vanderbilt University
- Yasser Shoukry, University of Maryland
- Houbing Song, Embry-Riddle Aeronautical University
- Tamim Sookoor, Johns Hopkins University Applied Physics Lab
- Roy Sterritt, Ulster University
- Jeremy Straub, University of North Dakota
- A. Selcuk Uluagac, Florida International University
- Kristen Walcott, University of Colorado
- Louis Whitcomb, Johns Hopkins University
- Paul Wood, Johns Hopkins University Applied Physics Lab
Organizing committee
- Lanier Watkins, Johns Hopkins University & Applied Physics Lab
- Howard Shrobe, MIT Computer Science & Artificial Intelligence Lab
- Chris Rouff, Johns Hopkins University Applied Physics Lab
- Reza Ghanadan, Google
Invited Speakers
- Dr. Sandeep Neema, DARPA I2O
Publication
WAAS 2020 proceedings will be published in conjunction with the IEEE Symposium on Security and Privacy.
Venue
The workshop will be held at the Hyatt Regency in San Francisco, CA.
Contact
All questions about submissions should be emailed to christopher.rouff at jhuapl dot edu.