
When Can My AI Lie?

EasyChair Preprint no. 10063

4 pages
Date: May 10, 2023

Abstract

A considerable amount of research is currently examining the potential for artificial intelligence systems to employ deception for beneficial purposes (e.g., in the education or health sectors). In this paper, we consider the permissibility of deploying an algorithm with adjustments unknown to the algorithm's user in order to counteract that user's biases.

We argue that such adjustments can be necessary in human-AI systems whose decisions impact other humans and whose human user is biased. After illustrating this need through an example in a healthcare setting, we introduce a framework for identifying where the altered system can be implemented. We also discuss the autonomy-related consequences of such algorithms and conclude with some conjectures about how the framework could be employed in various domains.

Keyphrases: autonomy, deceptive artificial intelligence, human biases

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:10063,
  author = {Nandhini Swaminathan and David Danks},
  title = {When Can My AI Lie?},
  howpublished = {EasyChair Preprint no. 10063},
  year = {EasyChair, 2023}}