“Biases in data can be both explicit and implicit. A simple two-word phrase can carry strongly contested meanings, and entire research fields, such as post-colonial studies, are devoted to studying them. Yet these sometimes subtle (and sometimes not so subtle) differences in voice are still rarely reflected in the results of automatic analyses or in datasets created using automated methods. Current AI technologies and data representations often reflect the popular or majority vote. This is an inherent artefact of the frequentist bias of many statistical analysis methods, which produces simplified representations of the world in which diverse perspectives are underrepresented. In this lecture, I will discuss how the Cultural AI Lab is working towards mitigating this problem.”
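As a minimal sketch of the majority-vote effect the abstract describes, the example below aggregates hypothetical annotator labels for a contested two-word phrase. The phrase, the labels, and the counts are all invented for illustration; the point is only that majority-vote aggregation keeps a single label while a distribution preserves minority perspectives.

```python
from collections import Counter

# Hypothetical annotations for a single contested two-word phrase.
# The phrase and label names are invented for illustration only.
annotations = {
    "golden age": ["neutral", "neutral", "neutral", "contested", "colonial"],
}

def majority_label(labels):
    """Return only the most frequent label (the 'majority vote')."""
    return Counter(labels).most_common(1)[0][0]

def label_distribution(labels):
    """Return the full label distribution, preserving minority views."""
    total = len(labels)
    return {label: count / total for label, count in Counter(labels).items()}

for phrase, labels in annotations.items():
    # Majority-vote aggregation discards the dissenting annotations...
    print(phrase, "->", majority_label(labels))
    # ...whereas keeping the distribution retains the minority voices.
    print(phrase, "->", label_distribution(labels))
```

Run as written, this prints `golden age -> neutral` for the majority vote, while the distribution still records that 40% of the hypothetical annotators disagreed.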