Towards an Epistemological and Ethical ‘Explainable AI’

AI is developing at a rapid pace. One area in which developments have accelerated is computer-assisted decision-making, which involves the automation of human language and argumentation skills. How can we ensure that AI engines, in this field and others, are not black boxes? The approach taken by Russo, Schliesser, and Wagemans is to investigate such systems at two levels simultaneously. One is epistemological: what principles ensure that the output of such engines can be explained in a cognitively feasible way, i.e., at a human level of understanding? The other is ethico-political: what biases and norms are at play in the methodological foundations of software design? Their distinct approach is to treat these questions together rather than separately. For it is the combination of epistemology and ethics, they believe, that will provide the building blocks of guidelines for developing explainable and responsible AI.

Dr. Federica Russo is Assistant Professor at the Faculty of Humanities, University of Amsterdam. Her research focuses on causality and probability in the social, biomedical, and policy sciences, and on the relations between science and technology.

Prof. Eric Schliesser is Professor of Political Science, with a focus on Political Theory, at the University of Amsterdam's (UvA) Faculty of Social and Behavioural Sciences. His research encompasses a variety of themes, ranging from economic statistics in classical Babylon, the history of the natural sciences, and forgotten 18th-century feminists (both male and female) to political theory, the history of political theory, and the assumptions used in mathematical economics.

Dr. Jean Wagemans is a philosopher specializing in rhetoric, argumentation, and debate, currently working as a senior researcher at the Amsterdam Centre for Language and Communication.
