Call for papers: special issue on interdisciplinary AI

Following a successful two-day workshop in Amsterdam in October 2022 on the ethical, social, and regulatory aspects of AI (un)fairness, we are pleased to announce a call for papers and welcome new submissions on AI (un)fairness to Minds and Machines | Call for papers: interdisciplinary perspectives on the (un)fairness of artificial intelligence. The deadline for full paper submissions is 31 May 2023. In this Special Issue, we will explore interdisciplinary perspectives on AI (un)fairness. The Special Issue is guest edited by members of the interdisciplinary project Human(e) AI, funded by the University of Amsterdam as a Research Priority Area.
Learn more!
The proliferation and use of artificial intelligence (AI) has been met with celebratory acceptance in scholarly discussions and the public sphere. However, in these intellectual debates, scepticism about AI adoption in mainstream newsrooms has rarely been interrogated.
On the 9th of February at 4 P.M. CET, Dr Allen Munoriyarwa, Research Fellow in the Department of Media and Communication at the University of Johannesburg in South Africa, will explore widespread scepticism among journalists about AI deployment in South Africa's newsrooms. He will discuss the genesis of AI scepticism among South Africa's mainstream journalists. Two broad arguments will be advanced in this talk. Firstly, he will argue that scepticism about AI among journalists in South Africa should be linked to the broader debates about the future and purpose of news in post-apartheid South Africa. Secondly, he will argue that AI corporations view AI technology through a profit lens. This feeds into scepticism and mistrust, as journalists are accustomed to serving democracy – a discourse rarely aligned with capitalist profit-making intentions. This talk will contribute to the ongoing debates on AI and news production practices in less-explored contexts of the global South.
Find out more about this topic in a conversation with Mathias Felipe de Lima Santos and Chris Starke – our postdoctoral researchers in Responsible AI for the RPA Human(e) AI.
Sign up!
Fair, Resilient, & Inclusive Societies (FRIS) Grant

Do you aim to create an inclusive educational environment where emancipatory and developmental learning flourishes, contributing to a more socially just, inclusive, and regenerative society? This grant could give you the time and space to develop a teaching innovation connected to this theme. The grant consists of a maximum of 80 development hours and a support system aimed at exchanging ideas with other grant recipients.
The next FRIS grant application round opens on 8 February 2023! You can submit your application online from then until 24 March 2023. See also the practical information below on applying for a FRIS grant, or register for the information session on 8 February 2023.
Learn more!
‘Theme-based collaboration’ for Seed grants & Midsize projects

The Theme-based collaboration programme invites academics to formulate new, sometimes unexpected research questions on socially relevant themes at the interface of disciplines and faculties – and to incorporate these in teaching as well. This programme is an important step in realising the ambition of the UvA's Strategic Plan to innovate research and teaching through collaboration, both between disciplines within the UvA and with external partners. In a dialogue between the Executive Board, the deans, and the research community, four societal themes were selected on which there is much relevant expertise within the UvA: 
  • Responsible digital transformations 
  • Fair and resilient societies 
  • Sustainable prosperity 
  • Healthy future 
A budget has been made available from the 'Theme-based collaboration' programme for Seed grants & Midsize projects. This call for proposals is directed at interfaculty research collaborations, i.e. collaborations that go beyond the boundaries of individual faculties. The aim is to stimulate researchers to formulate research questions based on societal themes, bringing in the outside world, and to translate these into education. The call is open until 24 February 2023.
Learn more!
On the Ecological Complexity of Artificial Intelligence by Adam Nocek

The premise of this talk is that we need to think about artificial intelligence as a complex ecosystem. Doing so requires navigating thorny disputes in the theoretical humanities and social sciences concerning the autonomy and environmental dependency of machine learning algorithms. Further, the talk contends that steering this course requires entering into a series of debates concerning AI and its metaphysical, political, and ecological existence – debates underway in fields strongly influenced by the history of critical theory and continental philosophy, and also operating under the umbrella of posthumanism and speculative and materialist philosophy. 

Adam Nocek is an Associate Professor of Philosophy of Technology and Science and Technology Studies at the School of Arts, Media + Engineering, Arizona State University. He is the Founding Director of the Center for Philosophical Technologies (CPT) and Editor of Techniques Journal. He is the author of Molecular Capture: The Animation of Biology (2021) and is working on his next monograph, Governmental Design: On Algorithmic Autonomy.

Join this talk and discussion on the 8th of March 2023, from 16:00 to 19:00, at the Humanities Labs (Bushuis), F0.01, Kloveniersburgwal 48, 1012 CX Amsterdam!
Learn more!
Missed one of our last Humane Conversations? Learn all about synthetic media and news creation, and what that means for a future with AI like ChatGPT. Watch Nick Diakopoulos, Director of the Computational Journalism Lab at Northwestern University, in conversation with Mathias Felipe de Lima Santos and Chris Starke – our postdoctoral researchers in Responsible AI for the RPA Human(e) AI.

Want to know more? Check out our YouTube channel!
YouTube - Human(e) AI
Copyright © 2022 RPA Human(e) AI, All rights reserved.
