Human(e) AI Course Spring 2020

In this section we have selected some of the best scientific blog posts and pitches produced by the first generation of Human(e) AI students as part of their final assessment.

 

Predictive policing is not the answer to the racist police

Casper Metselaar, Eelke Vlieger, Linde Reus, and Savina van Rielova

Image by Patrick Behn from Pixabay

Predictive policing, a relatively new technique to simplify the work of policemen, should not be put into practice yet. Current protests against racially biased police brutality, following the death of George Floyd, might lead people to consider and advocate methods of removing ‘the human flaw’ from police work. One of these methods is predictive policing. This technique, however, would work counterproductively and should therefore not be implemented yet. Predictive policing can in fact increase racism within the police force, as it leads policemen to base their decisions on opaque, biased software that cannot be revised.

Predictive policing is a method designed to predict crimes, their place and time, possible offenders, and the identity of their victims based on large datasets. Types of data that are used to predict the crimes are, for example, the race of the offender, location of the crime scene, number of arrests and the weather forecast. With the assembled data, models are developed through predictive policing software.
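
To make this pipeline concrete, the sketch below is an illustration only: it uses Python with pandas and scikit-learn, entirely synthetic numbers, and hypothetical feature names of the kind the paragraph mentions (location, prior arrests, weather); it is not based on any real predictive policing product, which would be proprietary and vastly larger.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical incident records; every value here is invented.
records = pd.DataFrame({
    "district":      [3, 3, 7, 7, 3, 1, 7, 1],
    "prior_arrests": [4, 1, 0, 6, 3, 0, 2, 1],
    "temperature_c": [21, 15, 30, 25, 18, 10, 28, 12],
    "violent_crime": [1, 0, 0, 1, 1, 0, 0, 0],  # label the model learns to predict
})

X = records[["district", "prior_arrests", "temperature_c"]]
y = records["violent_crime"]

model = LogisticRegression().fit(X, y)

# A "prediction" for a new situation: the model can only echo whatever
# patterns are present in the historical data it was trained on.
new_case = pd.DataFrame([[3, 2, 20]], columns=X.columns)
print(model.predict_proba(new_case))
```

The point of the sketch is that the model has no notion of fairness or context; it reproduces the statistical regularities of whatever records it is given.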

 

Deep learning, an innovative technique within the field of artificial intelligence (AI), is used to train software programmes to find correlations in large datasets. It is important to note that it is currently nearly impossible to retrace exactly which data these programs ultimately use to draw their conclusions, as they process enormous amounts of data. Some might argue that predictive policing helps overcome the personal biases of policemen, but this can only be the case if both the dataset and the software are neutral and the software’s decisions are transparent. If, however, the dataset is not neutral due to, for example, racially biased police work, the software could end up reinforcing inequality.

 

Research into predictive policing systems has shown that parts of the data used to train the software are inherently flawed. This causes some predictive policing systems to be biased against African Americans. This bias finds its origin in the fact that a disproportionate number of incarcerated people in the US are black. Using existing data to train predictive policing AI could therefore reinforce disproportionate police action against African Americans. One predictive policing system was found to correctly predict violent crimes in only 20 percent of cases, which illustrates the disastrous effects inherent biases can have on these systems. This is only one example out of many where predictive policing programs display some form of bias.
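
As a quick illustration of what a figure like “20 percent” means in practice, the toy calculation below (Python with scikit-learn, invented numbers chosen only to reproduce that rate) computes the precision of a system whose positive predictions turn out to be correct in 20 out of 100 cases.

```python
import numpy as np
from sklearn.metrics import precision_score

# Suppose a system flags 100 people or places as "violent crime expected"
# and only 20 of those predictions turn out to be correct ...
y_pred = np.ones(100, dtype=int)
y_true = np.concatenate([np.ones(20, dtype=int), np.zeros(80, dtype=int)])

# ... then its precision is 0.20 -- the 20 percent figure cited above.
print(precision_score(y_true, y_pred))
```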

 

Then there is another, perhaps even more dangerous problem with predictive policing software. We do not know exactly how the software comes to its conclusions regarding, for example, who will commit a crime and where. This decision-making process is a black box; it cannot be opened. Predictive policing makes use of deep learning: a method that can quickly find associations in large amounts of data, which makes it very suitable for predictive policing software. However, the technique behind deep learning is so complex that it is often not possible to follow the reasoning of the system. This could lead to the system making decisions that are based on irrelevant factors in the data, which in turn could lead to wrongful arrests. Also, if the police were to find out that their software was not working properly, the complexity of the software would make it incredibly difficult to correct the mistake.
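
Explainability research tries to open such black boxes at least partially. The sketch below is a minimal example of one simple post-hoc technique, permutation importance in scikit-learn, applied to a small model on synthetic data; for the deep-learning systems discussed above, producing faithful explanations is far harder and remains an open research problem.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a small model on synthetic data ...
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# ... then ask which inputs the model actually relies on by shuffling each
# feature in turn and measuring how much the accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```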

 

Using predictive policing systems in our current society is not desirable. The George Floyd incident shows that we live in a racially biased society. We do not want opaque, uncontrollable predictive policing software to perpetuate this bias.

 

 

Why the Development of Artificial Intelligence Should Be Democratised

Xuanyue Guo, Marieke Knorz, Chaitanya Pandey, Aleksandra Warkocka

 

Is AI representative of the whole population?

The development of artificial intelligence has changed the world fundamentally. It is deeply integrated into our daily lives in various forms, such as Siri, the Apple voice assistant, and the automated movie recommendation systems on Netflix. However, the conveniences made possible by artificial intelligence and the concerns about it go hand in hand. A monumental issue in the development of AI is that its course is determined by a small number of corporations rather than by society at large. That is why we suggest the democratisation of artificial intelligence: its development should be decentralised and representative of society as a whole.

 

AI is everywhere

Currently, artificial intelligence is used in a variety of areas, such as forecasting, recruiting, search engines and self-driving cars, amongst others. It is so intertwined with, and seemingly hidden within, these products that it may not be obvious to consumers that they are using AI-based devices. Big companies such as Google, Microsoft and Amazon invest heavily in the development of artificial intelligence, and with their financial resources they can achieve significant progress. This results in the development of artificial intelligence being guided by a handful of big corporations, a phenomenon known as ‘technocracy’.(1)

 

Why technocracy is problematic

The fact that artificial intelligence is developed almost exclusively by big corporations can contribute to unequal access to new technologies for people living in regions where the development of artificial intelligence is slow. This can make the gap between wealthy and poor more pronounced.(2) Other problems resulting from AI being developed by a sample of people not representative of the whole population include the bias and discrimination already present in artificial intelligence algorithms. This became apparent to the public eye after a scandal in 2015, when Google Photos software labelled images of two African American people as gorillas.(3) Another, similar case occurred when Amazon software automatically downgraded the CVs of female applicants.(4) These examples illustrate the burning issues of ethical violations and systematic bias in artificial intelligence. It is therefore important to focus on democratic development, so that all voices are heard, and to ensure a diverse representation of society and equal access to artificial intelligence.

 

Ethical AI

The course of the development of artificial intelligence can still be altered, and it is important that it is manoeuvred in a direction where the outcome is beneficial to society as a whole. Ethical artificial intelligence can only be achieved when the way it is developed is already ethical. It is crucial in the current scenario that public interventions and legislation are included in the development of artificial intelligence. Initiatives from public bodies such as the United Nations and the European Commission focus on providing legislative frameworks for developing artificial intelligence in a safe and ethical manner. It is also important that there are initiatives to increase public understanding of artificial intelligence. The lack of knowledge about artificial intelligence among the general public prevents a vast number of people from participating in the discourse on the development of AI. Alongside these interventions, governments should increase public funding to support smaller developers and initiatives in the field of artificial intelligence, as a way to decentralise the distribution of power.

The greatest invention – but only if regulated and democratised

The potential of artificial intelligence is immense, and so far it has afforded society many conveniences. A portion of society has experienced these advantages, but at a cost: issues like corporate-centred development, algorithmic bias and human rights violations are unfortunately part of how AI is used today. The future of artificial intelligence depends on the actions we take today. While artificial intelligence could be the greatest innovation in the history of humankind, it also needs to be regulated and developed in a manner that positively affects societies at large. One promising approach is to democratise the process of AI development. By doing so, people from a variety of backgrounds will be able to influence the course of the development of artificial intelligence. To achieve this, society, governments and tech companies need to come together.

 

Artificial Intelligence in Healthcare

WHY AI APPLICATIONS ARE NOT TRUSTWORTHY… YET

Lilli Mannsdörfer, Amber van Eekeren, Bruno Sotic, and Alessandro Tellone

Doctors in our health system increasingly rely on decisions guided by AI systems. In 2018, the first-ever autonomous Artificial Intelligence (AI) diagnostic system was introduced to the American health system. For many reasons we are amazed to see the promising advancements of AI: it relieves the overloaded health system by supporting surgery, management and the monitoring of patients, as well as prescribing treatments. Imagining AI increasing healthcare’s efficiency seems too good to be true, and it raises concerns as to whether our trust in AI exceeds its capabilities. AI seems to be a powerful tool to boost our health system, but it also has controversial drawbacks. Are we actually ready to place trust in AI when it comes to our highest good: health?

 

Trust, by definition, is the belief that something is reliable or good. This belief may include a fair amount of uncertainty: if you are convinced that someone or something is reliable, there is no need for trust. Trust establishes stable expectations that make risk considerations less relevant. Surely, AI can outperform humans in many domains. But can we really allow ourselves to neglect a rational assessment of its risks?

 

A fundamental problem with AI is that its reasoning is based on information that might be biased. In the relatively short history of AI, we have repeatedly heard of discriminatory cases: AI has classified black people as gorillas and labelled Asian people as blinking. These are biases we need to be aware of. One fatal example in medicine is skin cancer detection using AI. Decades of clinical research focused primarily on middle-aged white men, and this specific data is fed into the AI when training it to detect dangerous moles on the skin. The underrepresentation of patients of colour leads to lower sensitivity in detecting cancer in dark skin, which increases the risk of misdiagnosing certain groups of people.
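
The “lower sensitivity” described here can be checked with a per-group audit. The sketch below (Python with scikit-learn, entirely invented labels and predictions) computes detection sensitivity, i.e. recall, separately for two skin-tone groups, showing how underrepresentation in the training data can surface as a measurable gap in the evaluation.

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical outcomes (1 = cancer present) and model predictions,
# invented only to illustrate the sensitivity gap described above.
y_true_light = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred_light = np.array([1, 1, 1, 0, 0, 0, 0, 0])  # 3 of 4 cancers found

y_true_dark = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred_dark = np.array([1, 0, 0, 0, 0, 0, 0, 0])   # only 1 of 4 found

print("sensitivity, lighter skin:", recall_score(y_true_light, y_pred_light))
print("sensitivity, darker skin: ", recall_score(y_true_dark, y_pred_dark))
```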

This is further complicated by the fact that most AI tools deployed in healthcare are “black boxes”. While we know the data that the AI is fed with (medical information) and the results it produces (diagnostic decisions), we lack understanding of the processes in between. This makes it difficult for patients and doctors to place trust in AI. If we cannot understand the reasoning of AI, we cannot detect and reduce its systematic biases. This is important to keep in mind, since mistakes in the field of healthcare can have catastrophic consequences. Luckily, we can look to the future with hope: explainability is an active field of research that tries to design AI whose reasoning is more transparent to us.

 

Another important question is the following: if a person of colour dies from cancer because the software is not trained to detect cancer in dark skin, who is responsible? Is it the provider of the AI selling a faulty product, the practitioner using the wrong tool, the government for allowing its use, or even the AI itself? When doctors make decisions, they have to carry the consequences themselves. Embracing trust in AI, however, easily fosters a diffusion of responsibility. We have to be aware that AI is merely a tool and cannot hold moral responsibility. Rather than placing our trust in AI, we should trust all of the entities mentioned above to evaluate the risks of AI.

 

We are convinced that AI needs to be implemented with care, not trust. When it comes to our health, we want the best help we can get. We have to be wary of the biases that training data can introduce, make sure we are able to detect them, and establish relations of trust with all parties that are added to the client-practitioner relationship. Thankfully, all of these issues are receiving central attention from researchers around the world. There are still a number of issues to be tackled before we can benefit from AI’s strengths in medicine without concerns. For now, it is crucial to step back from the hype around AI and ask: is it trustworthy enough?

 

The misuse of Artificial Intelligence can be a threat to democracy

Lisanne Fridsma, Bob Lijnse, Mira Bandsom and Domiziana Scarfagna, June 2020

Picture by Mantra AI

The 2020 United States presidential election will have one of the most digitized campaigns ever. Due to the coronavirus, political parties will not try to gain votes by canvassing, handing out pamphlets and giving speeches in public. This election, political parties will embrace every online channel possible. Powerful new ways to advertise online use artificial intelligence: political micro-targeting and bots. Micro-targeting is the practice of creating personalized messages aimed at individual voters, based on an intelligent prediction of their personal preferences, usually derived from social networks. Bots, another new online tool, are used to leave comments on social media that promote a political party or candidate. The misuse of political micro-targeting and bots could pose a large threat to democracy. We should therefore ask: what effects will these new applications of artificial intelligence have on the elections?

 

Micro-targeting

Artificial intelligence has seen much progress over the last few decades. Through social networks such as Facebook and Instagram, large datasets are being collected. This data includes your network of friends, your online behaviour and your personal preferences. Based on all this data, an algorithm can predict who you are most likely to vote for, or even what kind of advertising you might be susceptible to. Political parties and other groups with political interests can then customize their messages to your personal profile.
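
A minimal sketch of this idea is shown below (Python with pandas and scikit-learn; every feature name and number is invented, and real micro-targeting systems are vastly larger and richer): a simple classifier trained on past campaign responses is used to pick which ad variant a new user should see.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical user profiles and past campaign responses, for illustration only.
profiles = pd.DataFrame({
    "likes_news_pages":    [12, 2, 30, 5, 18, 1],
    "friends_party_a_pct": [70, 20, 80, 35, 60, 10],
    "shares_per_week":     [4, 1, 9, 2, 6, 0],
    "clicked_ad_variant":  ["A", "B", "A", "B", "A", "B"],
})

X = profiles[["likes_news_pages", "friends_party_a_pct", "shares_per_week"]]
y = profiles["clicked_ad_variant"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# For a new user, the campaign serves whichever ad variant the model
# predicts they are most likely to respond to.
new_user = pd.DataFrame([[8, 55, 3]], columns=X.columns)
print(model.predict(new_user))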

Around the Brexit referendum and the election of Trump in 2016, we saw more discussion than ever about micro-targeting, fake news and whether or not Russia meddled in the elections. The question is how much this manipulative opinion-forming has affected the election results. It is hard to form an objective opinion when the information you receive does not come from the entire political spectrum but is skewed towards a specific party based on your psychological profile.

 

Bots

Another form of artificial intelligence used in political campaigning is the bot. Bots can generate and post messages on online platforms. Even in a discussion on a neutral platform, it can seem as if one side of the argument has more support, simply because of comments made by bots. Because of this opinion-forming traffic, the line between human and artificial intelligence has become blurred. The problem is that it is hard to distinguish a bot post from a human post. It is therefore important to establish rules about the extent to which bots will be allowed to carry human traits. Otherwise, there is a danger that social media bots will continue to be used as a deceptive tool.
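
To give a sense of why distinguishing bots is hard, the sketch below shows one naive detection heuristic (Python, invented example data): flag accounts that repeat the same message at high volume. Bots that vary their wording even slightly evade such rules, which is part of why the mandatory labelling discussed later in this post matters.

```python
from collections import Counter

# Hypothetical accounts and their recent posts, invented for illustration.
posts_by_account = {
    "account_a": ["Vote Party X!", "Vote Party X!", "Vote Party X!"],
    "account_b": ["Nice weather today", "Anyone watch the debate?", "Vote Party X!"],
}

def looks_like_bot(posts, repeat_threshold=3):
    # Flag an account if its most common message is repeated too often.
    most_common_count = Counter(posts).most_common(1)[0][1]
    return most_common_count >= repeat_threshold

for account, posts in posts_by_account.items():
    verdict = "possible bot" if looks_like_bot(posts) else "probably human"
    print(account, "->", verdict)
```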

 

Implications for future elections

One implication of the use of micro-targeting and bots in political campaigns is a possible shift in the balance of power. The parties that invest the most in technology and work with the most advanced systems will gain an advantage in the political landscape. Whether this would result in different voting behaviour in the polling booth is unclear. However, it is evident that there are clear risks here. For a properly functioning democracy in which people can form their own opinions freely, it is essential that people have access to truthful information and that this access is equal.

Therefore, rules and regulations should be made to keep the democratic process as fair as possible. These regulations should restrict the manipulative use of information, and transparency should be demanded of all parties applying micro-targeting and bots. For example, by making a ‘why am I seeing this ad?’ button and disclaimers stating ‘this message was generated by a bot’ mandatory, voters can better judge the source of their information and its validity. Then, the US presidential elections of 2020 and beyond can be protected from unwanted opinion-forming traffic.

 

 

Pitch: Microtargeting

Marta Diale and Florian Burger

 

Pitch: AI and Medicine

Amber van Eekeren

Want to be part of the next Human(e) AI class? Find out more