
What criminal and civil law tells us about Safe RL techniques to generate law-abiding behaviour

EasyChair Preprint no. 4844

8 pages · Date: December 29, 2020

Abstract

Safe Reinforcement Learning (Safe RL) aims to produce constrained policies, with the constraints typically motivated by physical safety. This paper considers the constraints that arise from regulation, that is, legal safety. Without guarantees of safety, autonomous systems or agents (A-bots) trained through RL are expensive or dangerous to train and deploy. Many potential applications of RL involve acting in regulated environments, yet existing research here is thin. Regulations impose behavioural restrictions that can be more complex than those engendered by considerations of physical safety: they are often intertemporal, require planning on the part of the learner, and involve concepts of causality and intent. By examining the typical types of laws present in a regulated arena, this paper identifies design features that the RL learning process should possess in order to generate legally safe, compliant policies.
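
To make the constrained-policy idea concrete, the sketch below shows one common Safe RL recipe, Lagrangian-relaxed Q-learning on a toy gridworld in which visiting a "forbidden" cell stands in for breaking a simple rule. Everything in it (the 2x3 environment, the cost budget COST_LIMIT, the step sizes) is an illustrative assumption, not a method taken from the paper:

import numpy as np

# Toy 2x3 gridworld: reach GOAL while keeping visits to FORBIDDEN (an
# "illegal" state) below a cost budget. All names and numbers here are
# illustrative assumptions, not taken from the paper.
ROWS, COLS = 2, 3
START, GOAL, FORBIDDEN = (0, 0), (0, 2), (0, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
COST_LIMIT = 0.01                              # budget on average per-step cost

def to_idx(s):
    return s[0] * COLS + s[1]

def step(s, a):
    dr, dc = ACTIONS[a]
    s2 = (min(max(s[0] + dr, 0), ROWS - 1), min(max(s[1] + dc, 0), COLS - 1))
    reward = 1.0 if s2 == GOAL else -0.01      # small per-step penalty
    cost = 1.0 if s2 == FORBIDDEN else 0.0     # constraint signal, kept separate
    return s2, reward, cost, s2 == GOAL

Q = np.zeros((ROWS * COLS, len(ACTIONS)))
lam, alpha, gamma, eps = 0.0, 0.1, 0.95, 0.1   # multiplier and learning params
rng = np.random.default_rng(0)

for episode in range(3000):
    s, ep_cost, steps, done = START, 0.0, 0, False
    while not done and steps < 30:
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[to_idx(s)].argmax())
        s2, r, c, done = step(s, a)
        # Lagrangian relaxation: learn on reward minus lam times cost.
        target = (r - lam * c) + gamma * Q[to_idx(s2)].max() * (not done)
        Q[to_idx(s), a] += alpha * (target - Q[to_idx(s), a])
        ep_cost += c
        s, steps = s2, steps + 1
    # Dual ascent: raise lam while average cost exceeds the budget, else decay.
    lam = max(0.0, lam + 0.05 * (ep_cost / steps - COST_LIMIT))

print("multiplier:", round(lam, 3))
print("greedy actions:", ["UDLR"[int(a)] for a in Q.argmax(axis=1)])

The multiplier lam rises whenever the average incurred cost exceeds the budget, and typically settles at a level where the greedy policy detours around the forbidden cell even though the direct route is shorter. The paper's point is that real legal constraints (intertemporal rules, causality, intent) are richer than this per-step cost signal.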

Keyphrases: autonomous agents, causal model, intent, law and AI, safe reinforcement learning, safe RL research, safety robustness and trustworthiness, structural causal model

BibTeX entry
BibTeX has no dedicated entry type for preprints, so the following @booklet entry is a workaround for producing the correct reference:
@booklet{EasyChair:4844,
  author       = {Hal Ashton},
  title        = {What criminal and civil law tells us about Safe RL techniques to generate law-abiding behaviour},
  howpublished = {EasyChair Preprint no. 4844},
  note         = {Published by EasyChair},
  year         = {2020}
}