#10 Humane Conversation – Stereotypes in Language & Computational Language Models with Katrin Schulz, Robert van Rooij, Anthi Solaki

In these two interdisciplinary projects, Katrin Schulz – professor of logic at the department of philosophy of the Universiteit van Amsterdam – and Robert van Rooij – director of the Institute for Logic, Language and Computation of the Universiteit van Amsterdam – combine knowledge from linguistics, psychology, and natural language processing. In one project they study whether and how generic sentences – which typically express (perhaps stereotypical) generalisations – should be interpreted, and whether corresponding implicit generalisations can be found in computational language models trained on (huge) linguistic corpora. The main hypothesis is that the generalisations we accept are a product of how such generalisations are learned, and that similar mechanisms underlie the trained language models. In the other project, the main object of study is the impact of (online) media on the stereotypical beliefs people hold. To this end, the researchers measure which stereotypes are expressed by computational language models and how such biases relate to the stereotypical beliefs that individuals may hold. A further goal is to develop a computational model of how media sources affect stereotypical beliefs. The discussion was moderated by Anthi Solaki, postdoctoral researcher in Responsible AI at our Research Priority Area Human(e) AI.