The protection of fundamental rights and the human-centric, ethical and responsible use of Artificial Intelligence (AI) technologies is a central ambition of the European AI strategy, with no lesser goal than "spearhead[ing] the development of new ambitious global norms" (European Commission 2021). Like the General Data Protection Regulation before it, the draft AI Act can be expected to set a new tone for the debate around 'responsible AI' both within and beyond Europe. It is one of the first attempts worldwide to cut through an increasingly opaque jungle of private and public ethical guidelines and to formulate binding regulatory standards for what responsible and human-centric AI must concretely mean.
For us as citizens, the AI Act can significantly shape the kind of AI-driven services and human-AI interactions we will be seeing. For us as researchers, an entire research agenda is unfolding, with many open questions that still need answers if the European vision of human-centric and responsible AI is to materialise. At the same time, with the growing complexity of technical, ethical, legal, societal and economic aspects, the role of research and researchers in Brussels's emerging regulatory framework is changing. Natali Helberger, Distinguished University Professor of Law & Digital Technology with a special focus on AI and PI of the RPA Human(e) AI, will discuss some critical questions that policymakers and academics are still grappling with, and explore what the AI Act can mean for AI research at the UvA.