DeceptAI2021: 2nd International Workshop on Deceptive AI @ IJCAI2021, Montreal, Canada, August 21-23, 2021
Conference website: https://sites.google.com/view/deceptai2021
Submission link: https://easychair.org/conferences/?conf=deceptai2021
Submission deadline: May 14, 2021
There is no dominant theory of deception. The literature treats different aspects and components of deception separately, sometimes offering contradictory evidence and opinions on these components. Emerging AI techniques offer an exciting and novel opportunity to expand our understanding of deception from a computational perspective. However, the design, modelling, and engineering of deceptive machines is non-trivial from conceptual, engineering, scientific, and ethical perspectives.
The aim of DeceptAI is to bring together people from academia, industry, and policy-making to discuss and disseminate the current and future threats, risks, benefits, and challenges of designing deceptive AI. The workshop takes a multidisciplinary approach (Computer Science, Psychology, Sociology, Philosophy & Ethics, Military Studies, Law, etc.) to the following aspects of deceptive AI:
- Behaviour
  - What type of machine behaviour should be considered deceptive?
  - How do we study deceptive behaviour in machines as opposed to humans?
- Reasoning
  - What kind of reasoning mechanisms lie behind deceptive behaviour?
  - What types of reasoning mechanisms are more prone to deception?
- Cognition
  - How does cognition affect deception, and how does deception affect cognition?
  - What function, if any, do agent cognitive architectures play in deception?
- AI, Ethics, & Society
  - How does the ability of machines to deceive influence society?
  - What measures do we need to take to neutralise or mitigate the negative effects of deceptive AI?
- Engineering Principles
  - How should we engineer autonomous agents such that we are able to know why and when they deceive?
  - Why should or shouldn't we engineer or model deceptive machines?
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Full papers (16 pages + references) describing novel work in the area of Deceptive AI.
- Short papers (8 pages + references) describing novel work in the area of Deceptive AI, including work in progress.
- Position papers (2-6 pages) describing research challenges related to Deceptive AI.
Note that the LNCS format has wide margins, so page counts run longer than you might expect: 5 pages in LNCS format are roughly equivalent to 2 pages in IJCAI's format.
All papers will be reviewed by at least two members of the Program Committee. The review process is double-blind, so please remove author names and affiliations. All papers should be formatted in the Springer Lecture Notes in Computer Science (LNCS/LNAI) style and submitted through EasyChair.
LNCS Latex: ftp://ftp.springernature.com/cs-proceeding/llncs/llncs2e.zip
LNCS Word: ftp://ftp.springernature.com/cs-proceeding/llncs/word/splnproc1703.zip
List of Topics (Non-Exhaustive)
- Deceptive Machines
- Multi-Agent Systems and Agent-Based Models
- Trust and Security in AI
- Machine Behaviour
- Argumentation
- Machine Learning
- Explainable AI (XAI)
- Human-Computer (Agent) Interaction (HCI/HAI)
- Human-Robot Interaction (HRI)
- Philosophical, Psychological, and Sociological aspects
- Ethical, Moral, Political, Economical, and Legal aspects
- Storytelling and Narration in AI
- Computational Social Science
- Applications related to Deceptive AI (Cybersecurity, Red Teams, Social Media, Social Engineering, etc.)
Committees
Organizing committee
- Peta Masters, University of Melbourne, Australia
- Stefan Sarkadi, INRIA CNRS, France
- Ben Wright, US Naval Research Laboratory, USA
Contact
All questions about submissions should be emailed to the DeceptAI Chairs at deceptai.organisers@gmail.com. If there is an issue with that address, you can contact Ben Wright at benjamin.wright.ctr@nrl.navy.mil.