Responsible Artificial Agency: A Logical Perspective

Project Description

When should an artificial agent intervene to resolve a dilemma? When should it instead alert its user or a relevant authority? As autonomous systems are deployed in ever more safety-critical applications, in domains such as medicine, engineering, surveillance, transportation, and media, rigorous tools for determining when it is responsible for an agent to act become increasingly urgent.

The aim of our project is to develop logics for reasoning about the conditions under which an autonomous agent should take the responsibility to act. We first use logic as a meta-analytical tool to analyze artificial systems and to formalize the main criteria for assessing when an AI’s intervention is (ir)responsible. Our medium-term aim is to apply this analysis in developing fully formalized, decidable systems that an AI can in principle use in its internal reasoning about its own and others’ actions and their consequences. The long-term goal is to provide implementable tools for designing intelligent agents that act responsibly, having correctly determined if and when their intervention is needed.
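To give a flavor of the formal constraints involved, consider a minimal sketch in a deontic STIT-style language (used here purely for illustration, not as the project’s eventual formalism). Writing $[i\ \mathsf{stit}]\,\varphi$ for “agent $i$ sees to it that $\varphi$”, $\bigcirc$ for obligation, and $\Diamond$ for historical possibility, the classical “ought implies can” principle takes the form

\[ \bigcirc [i\ \mathsf{stit}]\,\varphi \;\rightarrow\; \Diamond [i\ \mathsf{stit}]\,\varphi \]

A logic of responsible intervention would be expected to validate constraints of this kind: an agent can be obligated to intervene only in situations where intervening is genuinely within its power.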

Team