Tobias Blanke on AI and the Humanities: a new intelligence with an eye for small details

 

Algorithms are playing a growing role in our lives. How should we deal with that? According to Tobias Blanke (University Professor in Humanities and AI), it is essential for researchers in the humanities to explore the ethical and societal aspects of AI. ‘In order to correct errors in algorithms, you need knowledge of cultural concepts.’

We are living in interesting times: for the first time in history, humans are confronted with another form of intelligence. It does not take the shape you would imagine from science fiction films – aliens, the Terminator, robots seizing control – but is instead situated everywhere among us. We deal with algorithms on a daily basis, and they influence us and everything around us. They order our social media timelines and make suggestions on Netflix, but they are also used by authorities to tackle crime.

‘The interesting, and perhaps also worrying, thing is that we do not really know how many of these algorithms work. Through machine learning, algorithms can determine their own rules: you feed them large amounts of data, and they train themselves on that data to derive those rules. But how they reason and why they do what they do is not clear. How should we deal with this new form of intelligence among us? That is one of the great challenges of our time, and one in which researchers in the humanities have an important role to play.’
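To make that ‘own rules’ idea concrete: the short sketch below is my own illustration, not an example from the interview, and it assumes the scikit-learn library and its bundled iris dataset. It trains a small decision tree and prints the if/then rules the model has induced from the data – rules no programmer ever wrote. A decision tree is deliberately the transparent case; the deep neural networks behind most modern AI do not let us read their rules off in this way, which is exactly the opacity described above.

```python
# A minimal sketch of an algorithm "determining its own rules",
# using scikit-learn and its bundled iris dataset (an illustration,
# not an example from the interview).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# No rules are written by hand: the tree derives them from the examples.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Print the if/then rules the model induced for itself.
print(export_text(tree, feature_names=list(data.feature_names)))
```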

 

Algorithms can often be wrong

‘Algorithms are not infallible, and we humans need to be able to correct their mistakes. Do you know the story of Stanislav Petrov? He was a Soviet lieutenant colonel who in 1983 possibly prevented a nuclear war by overruling a decision made by a computer system. One day, the Soviet Union’s alert system reported that the United States had fired five missiles at its territory. Petrov judged it to be a false alarm – rightly, as it later turned out – and decided to ignore the warning. Had the algorithm been able to decide autonomously, the Soviet Union would have launched a counterattack, with all the consequences that entails.’

‘This is an extreme example, but there are many other instances of algorithms that make mistakes or act in ways we would not consider ethical. Think, for example, of the algorithm the tax authorities used to detect fraud around childcare allowance, which turned out to discriminate against people with dual nationality. Or another example that was recently in the news: a tool on Twitter for cropping portrait photos appeared unable to recognise faces with darker skin tones. These kinds of mistakes do not really lie with the algorithms themselves; they stem from the data the algorithms were fed. If you only provide a dataset of mainly white faces, the algorithm will not learn to recognise faces of other skin tones properly.’
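The point that the fault lies in the data rather than in the learning algorithm can be made concrete in a few lines of code. The sketch below is a toy illustration of my own – the synthetic ‘groups’, the two features and the 95/5 split are all assumptions, not details from the interview. One and the same learning algorithm, fed an imbalanced dataset, ends up accurate for the overrepresented group and close to chance for the underrepresented one.

```python
# A toy sketch of how a skewed training set produces skewed behaviour.
# Everything here is invented for illustration; it is not data from
# the cases mentioned in the interview.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    # Two numeric features; the label depends on a *different* feature
    # for each group, so a model tuned to one group carries no signal
    # for the other.
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Training data: 950 samples from group A, only 50 from group B.
X_a, y_a = make_group(950, informative_feature=0)
X_b, y_b = make_group(50, informative_feature=1)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Fresh, equally sized test sets reveal the imbalance.
X_a_test, y_a_test = make_group(1000, 0)
X_b_test, y_b_test = make_group(1000, 1)
print("accuracy, majority group A:", model.score(X_a_test, y_a_test))  # ~0.99
print("accuracy, minority group B:", model.score(X_b_test, y_b_test))  # ~0.50
```

The remedy, as the interview suggests, is not a cleverer algorithm but more representative data.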

An understanding of culture is fundamental

‘Algorithms are trained on datasets drawn from society, from human culture, and that is one of the reasons why humanities researchers are needed. To spot decision-making errors in an algorithm, you really need to understand the data we feed it and what the problems in that data are. Knowledge of cultural concepts is therefore indispensable. Imagine, for example, that you want to remove the colonial baggage from the algorithms with which the police track criminal activity. You would then need to know exactly what colonialism encompasses and how it manifests itself in the datasets on which the algorithms are trained. That is why algorithms cannot be corrected by a programmer alone; historians and other humanities researchers are needed as well.’

‘Research into the ethical side of AI is fundamental. A computer scientist is generally interested mainly in how algorithms can be made to perform better. Computer science has a whole array of criteria for that, but why an algorithm does what it does in a specific context is often considered less relevant. That is precisely what the humanities try to find out. We want to learn to understand how algorithms function and to compare that with how we ourselves think. We want to understand the context of the data with which we feed our algorithms.’

 

Critical analysis of sources

‘Another reason why the humanities are important in these times has to do with another great issue: fake news. We humanities researchers are able to assess sources critically like no one else. We have a long tradition in this and have developed very advanced mechanisms to guarantee the authenticity of sources. Why, then, are journalists, historians and archival scientists hardly involved in the discussion about fake news?’

‘I think that is because the internet industry is ruled by the idea that everything has to be done with algorithms. If you ask Mark Zuckerberg how to solve the problem of fake news, he will say: let us develop a nice algorithm for it. In my opinion, people are overly confident that algorithms can solve these problems for us. Instead, everyone should be better trained in assessing sources and in consuming information.’

AI advances the humanities

‘Conversely, the humanities can also reap the benefits of AI in several ways. Just think of databases and search engines. Anyone who has been in the humanities longer than I have – about fifteen years – will still remember the days when you had to go hunting for sources and could not find articles in digital form. Nowadays we have incredible search engines that can do a lot of that work for us.’

‘But there are also challenges in applying AI to humanities research. First of all, the data that we humanities researchers are interested in is generally not available in easily accessible digital formats. For some time now I have therefore been working on a project in which we collect material about the Holocaust, such as letters and government documents, on a worldwide scale. These documents are kept in traditional archives and were of course never made for digital consumption. It takes a lot of time to turn this kind of data into something computers can work with. This challenge is often overlooked.’

‘The second great challenge is that AI searches for large patterns, whereas humanities researchers are often interested precisely in the small details. Developing a kind of AI that is suited to that task is, in my eyes, the holy grail. I think that we humanities researchers should not have to adapt to AI, as is sometimes suggested in politics, but that AI should adapt to us.’

Everyone needs to learn programming

‘At the same time, I also think that humanities researchers have a lot to learn in the area of AI. It would be a good idea for everyone to learn a little programming, and it is time this was introduced into humanities curricula as well. Because in order to correct algorithms, you still need to know how they are built.’

‘Throughout our lives, we humans learn the subtle art of dealing with other people and of knowing whom to trust and whom not. We need to learn the same in relation to the other intelligence, AI. I have forgotten who said it, but I find it a fine expression: “Programme or be programmed”. That is exactly it. If you do not understand how algorithms work, how can you relate to them?’

Original Article on the FGw page