Public pressure on platform companies to monitor the content on their sites more rigorously is constantly increasing. To address this, platforms are turning to algorithmic content moderation systems. These systems prioritize content that promises to increase engagement and block content that is deemed illegal or that infringes the platforms' own policies and guidelines. But content moderation is a ‘wicked problem’ that raises many questions, all of which resist simple answers. Where is the line between hate speech and freedom of expression – and how can it be automated and enforced on a global scale? Are platforms overblocking legitimate content, or are they rather failing to limit illegal speech on their sites?
Within the framework of a ten-week virtual research sprint hosted by the HIIG, thirteen international researchers from various disciplines came together to tackle the challenges posed by automation in content moderation. Their work resulted in policy briefings focused on algorithmic audits and on increasing the transparency and accountability of automated content moderation systems. The Alexander von Humboldt Institute for Internet and Society warmly invites you to learn more about the researchers’ findings and attend their output presentations, followed by a panel discussion.