Humane Conversations – Marieke van Erp

“Biases in data can be both explicit and implicit. A simple two-word phrase can carry strong contestations, and entire research fields, such as post-colonial studies, are devoted to them. However, these sometimes subtle (and sometimes not so subtle) differences in voice are as yet not often found in the results of automatic analyses or datasets created using automated methods. Current AI technologies and data representations often reflect the popular or majority vote. This is an inherent artefact of the frequentist bias of many statistical analysis methods, resulting in simplified representations of the world in which diverse perspectives are underrepresented. In this lecture, I will discuss how the Cultural AI Lab is working towards mitigating this.”
Registration Humane Conversations #7

  • The name and email address entered above will be processed by the RPA Human(e) AI exclusively for the purpose of organising the Humane Conversation and will be retained until the date of the event. The RPA Human(e) AI will not share this data with third parties. Alternatively, if you do not wish to share your name and email address via the RPA Human(e) AI website, you can send an email to stating your intention to participate in the event.