AIML2020: Artificial Intelligence and Moral Learning @ AISB-20, St Mary’s University, London, UK, April 6-9, 2020
Conference website | https://sites.uci.edu/aiml2020/
Submission link | https://easychair.org/conferences/?conf=aiml20200
Submission deadline | January 10, 2020
Artificial Intelligence and Moral Learning is a one-day symposium at the AISB-20 Annual Convention 2020, which will be held at St Mary’s University, Twickenham, London, 6-9 April 2020. The convention is organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour.
Abstract
The more commonplace AIs become in human life, the more important it will be that human moral intelligence is intelligible to AIs, and vice versa. AIs will need to be “conversant” in ethics of the human sort, with all of its complexity, and make moral decisions in a way that is at least compatible with human moral decision making. Thus, AIs and humans will need to share at least some of the same “moral world.” Given the complexity of human moral life, how might this be achieved? One approach would be to seek a specific account of the many rules and heuristics humans use in making moral decisions. However, it isn’t clear that this can be done, nor which rules would actually produce the outcomes we are looking for. One of the earliest insights in Western moral philosophy, dating back to Aristotle, is that ethics may be uncodifiable: there may be no set of unyielding or exceptionless rules that captures what it takes to be good. If this is correct, perhaps an alternative for developing AIs that humans can recognize as ethical agents, rather than as mere rule-followers, is to cultivate ethics in AIs much as we do in humans: through a process of apprentice-learning and habituation. This symposium seeks to evaluate and compare possible methods for AI moral learning, including AI learning methods that have not yet been applied to the case of moral learning.
Aim
Considering the various ways of facilitating moral learning in AI will require the methodological and theoretical perspectives of computer scientists, philosophers, and cognitive scientists. By bringing together this diversity of disciplinary approaches, the symposium will be an opportunity to examine, in a holistic and interdisciplinary way, how AI technologies can be developed responsibly.
We aim to take an interdisciplinary approach to artificial intelligence, moral learning, and moral decision making. We welcome and encourage theoretical and methodological perspectives from analytic philosophers, phenomenologists, computer scientists, cognitive scientists, psychologists, and others who study this topic.
Keynotes and Invited Talks
Special digital presentation by Alison Gopnik, Professor of Psychology and Philosophy, University of California, Berkeley
Submission Guidelines
Submission is by extended abstract of approximately 700-1000 words. If an abstract is accepted, the completed paper, approximately 3000 words, will be due by March 6, 2020.
List of Topics
We welcome submissions on the topic of artificial intelligence and moral learning, broadly construed. Topics may include, but are certainly not limited to:
- How can we translate what we know about human moral learning into a machine learning problem?
- What are some principles that can ensure that AI systems are accountable to people?
- How can we make AI systems sufficiently morally generalizable (i.e., able to behave robustly in novel ethical situations)?
- In particular, given our current awareness of adversarial inputs, what directions can we pursue to ensure the reliability of moral AI systems in adversarial situations?
- How can we extend to the moral landscape current efforts to make machine learning systems’ behavior intelligible to humans (e.g., visualization of image recognition neural network layers, saliency maps)?
- Different moral frameworks offer different conceptions of moral agency. What does interpreting AI systems as moral agents suggest for the development of moral learning?
- Reinforcement learning seems like a promising avenue for moral learning. What bottlenecks exist in this approach, and how can they be overcome?
- Can an AI system develop character virtues? What would that look like?
- How can we develop systems that can explore their own space of uncertainty and generate “questions” that a human “moral trainer” can answer?
- Is it possible to create AI systems that are morally superior to humans, and what would it mean?
- How might humans and AIs empathize with each other? What would it take for humans to see AIs from a second-personal perspective?
Committees
Program Committee
- Pierre Baldi
- Kyle Stanford
- Jeffrey Helmreich
- David Woodruff Smith
- Nathan Fulton
- Emily Sumner
- Kino Zhao
Organizing Committee
- Rebecca Korf
- Nicholas Smith
- Darby Vickers
Contact
All questions about submissions should be directed to aiml.symposium@gmail.com