The RPA will focus initially on case studies in the area of automated decision-making. In the evolving data society, key decisions are no longer made by humans alone. Datafication, sophisticated algorithms and Artificial Intelligence (AI) mean that Automated Decision-Making (ADM) is becoming increasingly central to public life, in part replacing human decision-making in areas as diverse as justice, the media, health, mobility, work, policing and economic marketplaces.
Data, Artificial Intelligence and algorithm-driven automated decision-making (ADM) will transform the basic structures of society, determining the distribution of benefits and responsibilities, from access to information and healthcare to how the justice system operates. A key challenge for science and society will be to ensure that (semi-)automated decision-making procedures reflect and adhere to the core public values and principles that we wish to see realized and maintained in the digital society.
The RPA will study automated decision-making along three major thematic foci:
1. Re-organising Processes and Players
The migration to more data-driven and machine-learning-enabled forms of decision-making will directly affect decision-making processes and the players involved. Existing players (e.g. the media, judges, governments or doctors) will have to adjust their routines and internal processes, and the implementation of new technologies raises a host of new legal, ethical and societal questions about the adequate division of tasks, legal and ethical responsibilities, fundamental rights and societal impact. At the same time, new players are emerging and, thanks to their technological and data supremacy, may have disruptive effects on established procedures, value chains and safeguards for public values.
The increasing reliance on commercial platforms and the rapidly developing platform economy raise important ethical, legal and economic issues: players such as Facebook and Google still lack adequate systems of governance and accountability, yet control significant shares of technology development and (training) data. Epistemic issues are triggered by concerns over polarization, homophily and truth. Societal issues and effects on individuals and groups need investigation, including the influence on trust and on individual and collective participation in the political system, economy and culture. Critical design issues arise when we consider how automated news systems can be engineered and regulated so as to represent diverse perspectives and opinions. In doing so, research in this theme will cooperate closely with research in the next theme.
2. Fundamental Rights, Public Values and Ethical Norms
Beyond the concrete societal, legal and ethical issues that the transition to automated decision-making triggers, there is a clear need for more conceptual, normative research into the fundamental rights, societal values and ethics of automated decision-making: what these values are and should be, and how they should guide automated decision-making. To kick-start collaborative research on this second, more normative strand, the RPA will initiate a project on the procedural, substantive and skills dimensions of automated judicial decision-making.
AI is increasingly used to moderate disputes and enforce policies, both in the public judiciary and in private commercial initiatives, raising fundamental questions of procedural and substantive fairness, bias, non-discrimination and privacy. Research on the changing relationship between the justice sector, citizens and society will help determine what skills and ethical and professional guidelines legal professionals (such as judges, public prosecutors, lawyers, police and legislators) as well as citizens need to work effectively and sensibly with AI. The research in this theme will also feed into the next theme.
3. Methodological, Legal and Ethical Challenges and Opportunities for AI Research
AI technology invites continuous critical assessment of the origin and composition of data and of how the technology processes data and shapes its output; it also requires the participation of societal actors. Research includes the development of strategies to translate normative frameworks for evaluating bias, to detect presupposed notions of meaning in reasoning processes, to explain the effect of epistemic and rationality assumptions, to analyze methods that improve comprehensibility, explainability and interpretability, and to formulate recommendations for best practices and methodological criteria.
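To make the notion of evaluating bias in ADM more concrete, the following minimal sketch computes one widely used quantitative fairness measure, the demographic parity difference: the gap in positive-decision rates between two groups. The function names, toy data and group labels are invented for illustration only; real evaluations require richer, context-specific fairness criteria.

```python
# Hypothetical illustration of one simple bias metric for automated
# decisions: the "demographic parity difference", i.e. the absolute gap
# in positive-decision rates between two groups. All data are invented.

def positive_rate(decisions, groups, target_group):
    """Share of positive (1) decisions among members of target_group."""
    selected = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(selected) / len(selected)

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Absolute gap in positive-decision rates between two groups."""
    return abs(positive_rate(decisions, groups, group_a)
               - positive_rate(decisions, groups, group_b))

# Toy example: 1 = favourable decision, 0 = unfavourable.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(decisions, groups, "a", "b")
print(f"demographic parity difference: {gap:.2f}")
# prints: demographic parity difference: 0.50
```

A single number like this is of course only a starting point; which fairness criterion is appropriate, and what gap is acceptable, is exactly the kind of normative question this theme investigates.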
Research in this stream uses both symbolic and sub-symbolic AI (including, among others, machine learning) to develop new research methods for asking and answering new research questions. This research thus contributes both to AI research in general and to the responsible use of AI by, for example, policy makers, businesses and professionals dealing with automated decision-making.
In combination, these lines of research will generate strong methodological engagement, debate and innovation in research on fundamental issues triggered by the proliferation of AI and ADM in society. Among other things, the program can tackle questions of bias, discrimination, accountability, transparency, interpretability and bounded rationality in AI systems and human-machine interactions (critical awareness of methodological challenges) that arise from research done elsewhere at the University.