Safe RL 2023: Safe RL workshop @ IJCAI 2023
Website: https://sites.google.com/view/safe-rl-2023
Submission link: https://easychair.org/conferences/?conf=saferl2023
Submission deadline: May 8, 2023
Reinforcement learning (RL) is the dominant AI paradigm for learning interactively from the environment in which the AI agent is embedded. While model-free and deep reinforcement learning methods have made significant progress in simulated environments such as Atari and the game of Go, such advancements rest on strong assumptions about the environment and the learning process. The application to physically embodied systems in the real world is significantly more challenging, as the world comes with many unexpected events and is not forgiving -- one cannot rerun the experiment if a catastrophe occurs. Therefore, there is a need for reinforcement learning systems that are robust to unexpected disturbances, avoid the dangerous side-effects that come with trial and error, and satisfy certain constraints. This need has given rise to the growing field of safe reinforcement learning, or Safe RL.
To bring the diverse community of Safe RL researchers together and give them an opportunity to discuss fundamental algorithmic as well as practical insights into safe reinforcement learning, we propose the Safe RL 2023 Workshop. The workshop is proposed for IJCAI 2023, building on the success of the previous Safe RL workshop @ IJCAI 2022. The format remains similar to last year: a combination of invited talks and contributed talks, with opportunities for researchers to interact with the speakers, discuss novel and exciting research, and establish new and fruitful collaborations. We had a great experience last time, and continuing the workshop series will help establish a research community around safe reinforcement learning at the IJCAI venue. The workshop will be a one-day event with 6 invited talks, with the remainder of the schedule filled by contributed talks. The invited speakers will be drawn from the various approaches to safe RL, so as to capture the full interdisciplinary nature of the ongoing research.
The goal of the workshop is to bring together researchers that are working on safe reinforcement learning systems, where safety is defined widely as avoiding self-harm, harm to the environment, significant financial or societal costs, and violations of social, ethical, or legal norms. With this notion of safety in mind, we encourage submissions in extended abstract style on the following topics:
- Definitions of safety
- Incorporating safety, social norms, and user preferences into RL policies
- Safe exploration
- Satisfying safety constraints in non-stationary environments
- Safe off-policy, off-dynamics decision making
- Predicting safety constraint violations
- Interventions to prevent failures when the RL agent is at risk with no safe options left
- Risk-aware and robust decision making
- Application use cases, demonstrations, or problem statements
- Simulation platforms and data sets to support safe RL application use cases, demonstrations, or problem statements
In terms of application areas, we are interested in aerospace, power systems, robotics, cyber-physical systems, safety-critical systems, and others. The call is open to submissions from a variety of disciplines relevant for safe RL, including but not limited to constrained optimisation, control theory, robust optimisation, human-robot interaction, formal methods, industrial robotics, and societal perspectives.
Submission Guidelines
Submissions should be anonymous and use the IJCAI author kit (see https://www.ijcai.org/authors_kit). Each paper submitted should be at most 3 pages in the IJCAI double-column format.
Paper submission will take place through EasyChair. Go to the Safe RL 2023 EasyChair submission site (https://easychair.org/conferences/?conf=saferl2023) and click on "make a new submission" to start your submission.
Authors are welcome to submit supplementary information with details on their implementation; however, reviewers are not required to consult this additional material when assessing the submission.
The workshop allows the submission of papers that are similar to papers concurrently submitted elsewhere, as the aim of the workshop is to get an overview of relevant ongoing work in Safe RL. However, authors should verify that the other venue's publication policy also permits this. Note that accepted papers will be showcased on the workshop website rather than published in formal proceedings, so this should generally not pose a problem.
Double-blind review
Authors are required to submit their paper anonymously. To submit anonymously, all names and affiliations must be removed from the paper. This also involves removing links to pages with personal identifiers (e.g., GitHub code repositories).
Each submission will be reviewed by at least two reviewers, who will assess the submission based on relevance, novelty, impact, and technical soundness. Submissions will be accepted based on this assessment. There will be no rebuttal period.
In-person presentation
For each accepted paper, at least one co-author must present it at the workshop. Like all IJCAI workshops this year, the Safe RL 2023 event is fully in-person, so presentations must be given in person. If the number of submissions is large, only the highest-scoring accepted papers will be presented as contributed talks, while the remaining accepted papers will be presented during poster sessions.
To be able to present, authors must register for the IJCAI 2023 conference.
Committees
Organizing committee
- David Bossens, University of Southampton d(dot)m(dot)bossens(at)soton(dot)ac(dot)uk
- Bettina Koenighofer, TU Graz bettina(dot)koenighofer(at)iaik(dot)tugraz(dot)at
- Sebastian Tschiatschek, University of Vienna sebastian(dot)tschiatschek(at)univie(dot)ac(dot)at
- Anqi Liu, Johns Hopkins University aliu(at)cs(dot)jhu(dot)edu