#4 Humane Conversations – Sennay Ghebreab and Hinda Haned: Understanding and Mitigating Bias in Automated AI Systems

Guest speakers: Sennay Ghebreab and Hinda Haned (Civic AI Lab)

Moderator: Sarah Eskens (VU Amsterdam)


“The AI community has been focusing on developing fixes for harmful bias and discrimination through so-called ‘debiasing algorithms’, which either try to correct data for known or expected biases, or constrain the outcomes of a given predictive model to produce ‘fair’ outcomes. We argue that creating more AI solutions to fix harmful biases in data is not the only path we should be pursuing. A fundamental question we face as researchers and practitioners is not how to fix harmful bias in AI with new algorithms, but rather whether we should be designing and deploying such potentially biased systems in the first place.”