Workshop – (Un)fairness of Artificial Intelligence: Call for Proposals

Socio-technical systems imbued with artificial intelligence (AI) increasingly shape democracy by making important decisions, for example in public administration (AlgorithmWatch, 2019), the media (Thurman et al., 2019), and the legal system (Chouldechova, 2017). AI can support democracy through faster and better decision-making because algorithms are impartial, do not grow tired, and are not distorted by emotions (Lee, 2018). However, unfair AI systems can also undermine democracy: they have been shown to systematically reinforce racial or gender stereotypes, marginalize minorities, and denigrate certain members of society (Veale & Binns, 2017). For instance, AI systems have wrongly excluded citizens from food support programs, mistakenly reduced their disability benefits, or falsely accused them of fraud (Richardson et al., 2019).

To mitigate AI-based social inequalities and discrimination, fairness is endorsed as one of the four main principles for trustworthy AI by the OECD (2019) and the European Commission (2019), and it is mentioned in more than 80 percent of AI ethics guidelines (Jobin et al., 2019). AI fairness has also emerged as a growing research field in computer science, the social sciences, law, and philosophy.

This two-day workshop, hosted by Human(e) AI at the University of Amsterdam, will explore ethical, social, and regulatory aspects of AI (un)fairness. Submitted proposals should focus on AI systems designed to support democracy, for example in government decision-making, public administration, or the media. We aim to bring together researchers from different academic disciplines to zoom in on the intricate democratic issues that arise when decision-making authority is transferred to an AI-based system. The workshop is dedicated to presenting cutting-edge research on AI (un)fairness and creating space for discussing future collaborations. Contributions can include (but are not limited to):

  • Conceptual work on AI (un)fairness
  • Measurement models of (perceived) AI (un)fairness
  • Drivers and consequences of perceived AI (un)fairness
  • AI fairness and social power structures 
  • AI decision-making vs. human decision-making 
  • Governance and regulation of AI (un)fairness 

The workshop will take place on 27 and 28 October 2022 at the VolksHotel in Amsterdam. Please submit an extended abstract (max. 500 words, excluding references) that clearly states the relevance, research question, and method. Submissions are due by 15 May 2022 via email to humane-ai@uva.nl.

The workshop will not publish proceedings, but we aim to submit a proposal for a special issue of a high-impact academic journal. Submissions will be peer-reviewed (single-blind), and acceptance to the workshop does not guarantee acceptance to the journal.

To register, please use the form below.

We look forward to welcoming you this fall in Amsterdam, 

Tobias Blanke, Claes de Vreese, Natali Helberger, Irina Ioniţă, Ljubiša Metikoš, Sonja Smets, Chris Starke