
Call for papers: special issue on interdisciplinary AI

Following a successful two-day workshop in Amsterdam in October 2022 on the ethical, social, and regulatory aspects of AI (un)fairness, we are pleased to announce a call for papers and welcome new submissions on AI (un)fairness to Minds and Machines for the Special Issue "Interdisciplinary perspectives on the (un)fairness of artificial intelligence". The deadline for full paper submissions is 31 May 2023. The Special Issue explores interdisciplinary perspectives on AI (un)fairness and is guest edited by members of the interdisciplinary project Human(e) AI, funded by the University of Amsterdam as a Research Priority Area.
Learn more!
A Call for (Global) South to North Methodological Approaches in Critical AI Studies

Join us on Thursday, May 19th, from 17:00 to 18:00 CET for a new conversation!
In this talk, Dr Andrea Medrado and Mathias Felipe de Lima Santos interrogate ideas of artificial intelligence (AI) for social good, drawing inspiration from Participatory Action Research (PAR) and the work of Latin American thinkers such as Freire and Fals Borda. Medrado presents data from the project and paper "AI for Social Good?", co-authored with Dr Pieter Verdegem. For the project, participatory workshops were conducted in London with a group of students, activists, and tech workers. In the paper, Medrado and Verdegem analyse two transversal themes – "AI for social good?" and decolonial perspectives on AI – and delve into three concepts: autonomy, empathy, and dialogue. They propose a South-North flow and employ PAR approaches that stem from Latin America, in an attempt to challenge the ways in which the North's centrality is taken for granted when it comes to epistemologies, experiences, and knowledge related to AI. They argue that PAR can not only empower marginalised communities in the Global South; much can also be learned from its application in the Global North, in contexts where people face different struggles.

Yet there are constraints to applying PAR. These relate to how scholarship is organised in the Global North, often with the aim of generating top-down impact, which stands in opposition to the open agenda and bottom-up approach of PAR. Still, much can be learned from this research journey. Inspired by Fals Borda's (2003) notion of "sentipensante", they embrace an approach in which making, thinking, and feeling are all combined. Rather than following the AI hype, they argue that more is to be learned from everyday AI stories, which we must tell, listen to, and share in pluriversal ways.
Sign up!
Call for Papers: Politics of Machine Learning Evaluation 
Is the data good enough for training purposes? Does the model perform accurately enough? Is the error rate low enough? Such questions of ‘good enough’ are at the very core of the process of Machine Learning (ML) evaluation and can also be considered a highly political process in the development of ML systems. There is already a growing interest in the political implications of ML in relation to, for example, dataset construction and the political capacities of specific ML models or foundational algorithmic techniques. However, there has been less focus on the politics of evaluation practices and techniques in ML.

To further explore this issue, we invite contributions to a workshop on "The Politics of Machine Learning Evaluation" at the University of Amsterdam in November 2023. The aim of the workshop is to collectively engage with conceptualisations of, and methodologies for, the study of ML evaluation techniques. We invite papers that engage with conceptual, methodological, and political questions in relation to topics such as, but not limited to:
  • Dataset construction
  • Data labelling practices
  • Ground truths and benchmarks
  • Biases in evaluation
  • Metrics
  • Errors and error analysis
  • Evaluation techniques
Concretely, we invite papers that conceptualise or historicise ML evaluation as a politically contested practice, provide methodological approaches to the study of evaluation techniques, or offer empirical examples of ML evaluation in practice. The workshop is interdisciplinary, and we welcome scholars from a range of disciplines.

The workshop:
The workshop is organised by Dieuwertje Luitse, Anna Schjøtt Hansen, and Tobias Blanke, and will take place on Thursday 16 (afternoon) and Friday 17 November 2023 at the Institute for Advanced Study (IAS) in the city centre of Amsterdam.

The workshop will feature three keynotes by Florian Jaton (University of Lausanne), Nanna Bonde Thylstrup (Copenhagen Business School), and Claudia Aradau (King's College London).

Accepted papers will receive feedback from one of the three keynote speakers as well as from peer discussants. Furthermore, we aim to develop a special issue in a peer-reviewed journal on the basis of the submitted papers, to which all workshop participants will have the opportunity to contribute.

Submission details:
Abstracts of 300-500 words are due by June 30th and should be sent to the organisers with the subject line "Workshop: Politics of Machine Learning Evaluation". Before the workshop in November, participants will be expected to send draft discussion papers for the other participants and keynote speakers to read.
Learn more!
Missed one of our last Humane Conversations? Listen to this talk, moderated by our postdoctoral researchers Randon Taylor and Mathias Felipe de Lima Santos, with Claudia Aradau, Professor of International Politics at King's College London, and our RPA's Principal Investigator, Professor of Digital Humanities Tobias Blanke, on their latest book, "Algorithmic Reason: The Government of Self and Other".

Want to know more? Check out our YouTube channel!
YouTube - Human(e) AI
Copyright © 2022 RPA Human(e) AI, All rights reserved.
