RPA Human(e) AI researchers win two NWO VENI grants

João and Jef celebrating in style, with due respect for social distancing

We are happy to announce that RPA researchers Jef Ausloos and João Pedro Quintais have each been awarded a prestigious grant under the NWO Talent Scheme (VENI) for Social Sciences and Humanities. With these grants, Jef and João will be able to carry out their innovative research projects on transparency and content moderation for a period of three years. You can read more about their exciting projects below:

Empowering Academia through Law: Transparency Rights as Enablers for Research in a Black Box Society

Jef Ausloos (bio)

While modern technological developments have enabled the exponential growth of the ‘data society’, academia struggles to obtain research data from the companies managing our data infrastructures. Jef’s project will critically assess the scientific, legal and normative merits and challenges of using transparency rights as an innovative method for obtaining valuable research data.

With a number of upcoming EU legislative initiatives on transparency, this project could not come at a better time. Indeed, academia faces a great paradox in the digital society. The so-called ‘datafication of everything’ creates unprecedented potential and urgency for independent academic inquiry. Yet academics are increasingly confronted with unwarranted obstacles in accessing the data required to pursue their role as a critical knowledge institution and public watchdog. An important reason behind this is the increasing centralisation and privatisation of the data infrastructures underlying society. The corporations managing our data infrastructures have strong (legal, technical and economic) disincentives to share the vast amounts of data they control with academia, effectively reinforcing the ‘black box society’ (Pasquale, 2014). As a result, there is an important power asymmetry over who has access to data and who effectively defines research agendas. This trend has only worsened in the wake of recent ‘data scandals’, especially Cambridge Analytica. Calls for more transparency from academia and civil society have been largely unsuccessful, and efforts to scrutinise data practices in other ways often run into a range of hurdles. At the same time, new and proposed policy documents on algorithmic transparency and open science/data remain abstract.

Against this backdrop, the project will explore how the law – i.e. disclosure/transparency obligations and access rights – can be used by the academic research community as a tool for breaking through these asymmetries. This may appear counter-intuitive, as academics are often confronted with legal arguments to prevent access to data, often for valid reasons such as privacy protection or intellectual property. Yet these arguments are frequently abused as blanket restrictions, preventing more balanced solutions. The project will unpack the many issues underlying this tension and evaluate how the resulting information asymmetry between academia and big tech can be resolved in light of the European fundamental rights framework.


Responsible Algorithms: How to Safeguard Freedom of Expression Online

João Pedro Quintais (bio)

Due to the unprecedented spread of illegal and harmful content online, EU law is changing. New rules enhance hosting platforms’ obligations to police content and censor speech, for which they increasingly rely on algorithms. João’s project examines the responsibility of platforms in this context from a fundamental rights perspective.

Hosting platforms—like Facebook, Twitter or YouTube—are the gateways to information in the digital age. They regulate access to content through a range of ‘moderation’ activities, including recommendation, removal, and filtering. These activities are governed by a mix of public and private rules stemming from the law and platforms’ internal norms, such as Terms of Use (ToU) and Community Guidelines.

In light of the unprecedented spread of illegal and harmful content online, the EU and its Member States have in recent years enacted legislation enhancing the responsibility of platforms and pushing them towards content moderation. These rules are problematic because they enlist private platforms to police content and censor speech without providing adequate fundamental rights safeguards. The problem is amplified because, to cope with the massive amount of content they host, platforms increasingly rely on Artificial Intelligence (AI) systems for moderation.

In parallel, the EU is ramping up efforts to regulate the development and use of AI systems. However, at the EU level there is little policy or academic discussion on how the regulation of AI affects content moderation and vice versa. This project focuses on this underexplored intersection, asking the question: how should we understand, theorize, and evaluate the responsibility of hosting platforms in EU law for algorithmic content moderation, while safeguarding freedom of expression and due process?

João’s project answers this question by combining doctrinal legal research, empirical methods, and normative evaluation. First, the research maps and assesses EU law and policies on platforms’ responsibility for algorithmic moderation of illegal content, including three sectoral case studies: terrorist content, hate speech, and copyright infringement. Second, the empirical research consists of a qualitative content analysis of platforms’ ToU and Community Guidelines. Finally, the project evaluates the responsibility implications of algorithmic moderation from a fundamental rights perspective and offers recommendations for adequate safeguards.