*|MC_PREVIEW_TEXT|* View this email in your browser (*|ARCHIVE|*)

** Call for papers: special issue on interdisciplinary AI
------------------------------------------------------------

After successfully completing a two-day workshop in Amsterdam in October 2022 on the ethical, social, and regulatory aspects of AI (un)fairness, we are pleased to announce a call for papers and welcome new submissions on AI (un)fairness to the Minds and Machines special issue "Interdisciplinary perspectives on the (un)fairness of artificial intelligence". The deadline for full paper submissions is 31 May 2023. In this Special Issue, we will explore interdisciplinary perspectives on AI (un)fairness. The Special Issue is guest edited by members of the interdisciplinary project Human(e) AI, funded by the University of Amsterdam as a Research Priority Area.

Learn more! (https://www.springer.com/journal/11023/updates/23935770)

Upcoming Humane Conversation with Professor Rachel E. Stern

Join us on Monday, April 17th, from 16:00 to 17:30 CET, when Professor Rachel E. Stern will talk about the Chinese legal system and its increasingly widespread adoption of AI in the judiciary. Chinese courts have come to lead the world in their efforts to deploy automated pattern analysis to monitor judges, standardize decision-making, and observe trends in society. Professor Stern examines these developments and asks how they affect judicial power. Although technology is certainly being used to strengthen social control and boost the legitimacy of the Chinese Communist Party, examining recent developments in the Chinese courts complicates common portrayals of China as a rising exemplar of digital authoritarianism. Here you can find a link (https://scholarship.law.columbia.edu/faculty_scholarship/2940/) to the article that will be discussed during the talk.

Rachel E. Stern (https://twitter.com/racstern) is a Professor of Law and Political Science at the University of California, Berkeley, and currently holds the Pamela P. Fong and Family Distinguished Chair in China Studies. Her research looks at law in Mainland China and Hong Kong, especially the relationship between legal institution building, political space, and professionalization. The talk will be moderated by Ljubiša Metikoš, our RPA's PhD candidate in Digital Justice. Please note that this Humane Conversation will not be recorded.

Sign up! (https://humane-ai.nl/events_report/automating-fairness-artificial-intelligence-in-the-chinese-court/)

Special webinar "Digital Transformations in Latin America News Media Landscape" by Humane Conversations, IAMCR & DMSO

On 5 May 2023, at 4:30 PM CET, we will be joined by four esteemed speakers: Professor Pablo J. Boczkowski from Northwestern University in the USA, Professor Mireya Márquez Ramírez from the Ibero-American University in Mexico, Dr Silvia Ximena Montaña-Niño from the Queensland University of Technology in Australia, and Dr Mathias-Felipe de-Lima-Santos from the University of Amsterdam in the Netherlands and the Federal University of São Paulo in Brazil.
The Latin American news media landscape has undergone significant changes in recent years due to the widespread adoption of digital technologies. Our panel of experts will discuss the impact of these digital transformations on journalism and news media in the region, including changes in news production, distribution, consumption, and business models. This webinar will be of interest to academics, journalists, media practitioners, and anyone interested in the role of digital technologies in shaping the Latin American news media landscape. Join us for an engaging and insightful discussion on digital transformations in Latin American news media.

Sign up! (https://uva-live.zoom.us/meeting/register/tZ0kf-mtpjwrH9WqsiwpC8_M8cwcHrzgdGzT#/registration)

Call for abstracts: Workshop on Algorithmic Injustice
University of Amsterdam, 26 & 27 June 2023

Artificial intelligence applications play an increasingly important role in our daily lives, but these technological advances come with serious societal risks. For instance, in recent years we have seen many cases of machine learning applications that show unfairly biased behaviour towards particular groups or individuals. This has led to growing concerns about harmful discrimination and the reproduction of structural inequalities once these technologies become institutionalized in society. As a result, a lot of energy is currently invested in identifying and resolving algorithmic discrimination. However, in AI research and policy, the remedies for algorithmic discrimination are often narrowly framed as design problems rather than as complex, structural, social-political problems. This carries the risk of a highly technocentric, individualist approach to algorithmic discrimination.

In this workshop, we aim to bring together researchers from various disciplines who work on the societal impact of AI applications. The goal is to share ideas and best practices and to discuss how the harmful behaviour of AI applications should be approached. We invite abstracts that address this question from a broad range of perspectives. Topics of interest include, but are not limited to:

• (Critical evaluations of) conceptualizations of algorithmic fairness
• The (societal and political) risks of algorithmic discrimination
• The advantages and/or challenges of technocratic decision-making regarding fair AI
• Systemic and structural injustice in AI
• Interventions to achieve fairness in AI beyond debiasing
• Law, ethics, and regulation of algorithmic fairness

Confirmed Keynotes

Dr Su Lin Blodgett is a senior researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal. Dr Erin Beeghly is an Associate Professor of Philosophy at the University of Utah. Her research interests lie at the intersection of ethics, social epistemology, feminist philosophy, and moral psychology.

Submission

Authors can submit an anonymous abstract of at most 800 words (excluding references), with an optional additional page for tables and figures. Accepted submissions will be given 30 minutes for presentation plus 15 minutes for discussion. The deadline for the submission of abstracts is 19 April 2023, and authors will be notified of acceptance by 24 April. Abstracts can be submitted using EasyChair.

Sign up!
(https://events.illc.uva.nl/Workshops/AlgorithmicInjustice/Conference/)

Tech celebrities argue for a temporary brake on 'risky' development of AI

NOS: "Developments in the field of AI (artificial intelligence) are currently moving so fast that a pause has to be taken. Many prominent figures from the science and tech sector write this in an open letter (https://futureoflife.org/open-letter/pause-giant-ai-experiments/). Among the more than 1,100 signatories are tech celebrities such as Elon Musk and Steve Wozniak and numerous academics." Read more on what Claes de Vreese, our RPA's Principal Investigator and Professor in AI and Society at the University of Amsterdam, has to say on this topic here (https://nos.nl/artikel/2469350-techprominenten-pleiten-voor-tijdelijke-rem-op-risicovolle-ontwikkeling-ai).

https://www.youtube.com/watch?v=D87ENIutdhs&list=PL_s7CHfNGrybHdQZ_5nM_ElcnzTsGDD6-&index=14

Missed one of our last Humane Conversations? Listen to this talk, moderated by our postdoctoral researchers Randon Taylor (https://ai4good.org/blog/60-seconds-with-fellow-dr-randon-taylor/) and Mathias Felipe de Lima Santos (https://www.uva.nl/en/profile/d/e/m.f.de-lima-santos/m.f.de-lima-santos.html), with Claudia Aradau (https://www.kcl.ac.uk/people/professor-claudia-aradau-1), Professor of International Politics at King's College London, and our RPA's Principal Investigator and Professor in Digital Humanities Tobias Blanke (https://tobias-blanke.net/), on their latest book, "Algorithmic Reason: The New Government of Self and Other."

Want to know more? Check out our YouTube channel! https://www.youtube.com/channel/UCMJBkuVIB3fzUVlqVy71xSw

============================================================
** Twitter (https://twitter.com/HumaneAI2)
** Website (https://humane-ai.nl/)
** Email (mailto:humane-ai@uva.nl)
** YouTube (https://www.youtube.com/channel/UCMJBkuVIB3fzUVlqVy71xSw)

Copyright © 2022 RPA Human(e) AI, All rights reserved.

Our mailing address is:
** humane-ai@uva.nl (mailto:humane-ai@uva.nl)

Want to change how you receive these emails? You can
** update your preferences (*|UPDATE_PROFILE|*) or
** unsubscribe from this list (*|UNSUB|*).