In this section we have selected some of the best scientific blog posts and pitches produced by the first generation of Human(e) AI students as part of their final assessment.
Predictive policing is not the answer to racist policing
Casper Metselaar, Eelke Vlieger, Linde Reus, and Savina van Rielova
Predictive policing, a relatively new technique to simplify the work of policemen, should not be put into practice yet. Current protests against racially biased police brutality, following the death of George Floyd, might lead people to consider and advocate methods of removing ‘the human flaw’ from police work. One of these methods is predictive policing. This technique, however, would work counterproductively and should therefore not be implemented yet. Predictive policing can in fact increase racism within the police force, as it leads policemen to base their decisions on opaque, biased software that cannot be revised.
Predictive policing is a method designed to predict crimes, their place and time, possible offenders, and the identity of their victims based on large datasets. Types of data used to predict crimes are, for example, the race of the offender, the location of the crime scene, the number of arrests, and the weather forecast. With the assembled data, models are developed using predictive policing software.
Deep learning, an innovative technique within the field of artificial intelligence (AI), is used to train software programmes to find correlations in large datasets. It is important to note that it is currently nearly impossible to retrace exactly which data these programmes ultimately base their conclusions on, as they process enormous amounts of data. Some might argue that predictive policing helps overcome the personal biases of policemen, but this can only be the case if both the dataset and the software are neutral and the software’s decisions are transparent. If, however, the dataset is not neutral due to, for example, racially biased police work, the software could end up reinforcing inequality.
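To illustrate this mechanism, here is a minimal sketch in Python using entirely synthetic data and the scikit-learn library; none of the variables or numbers come from any real policing system, and the ‘risk’ model is purely hypothetical. A classifier trained on arrest records that reflect heavier policing of one group learns to assign that group a higher risk score, even when the underlying behaviour of both groups is identical.

```python
# Minimal, hypothetical sketch (synthetic data, not a real policing system):
# a classifier trained on historically skewed arrest records simply learns
# to flag the over-policed group more often, regardless of actual behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (over-policed)
behaviour = rng.normal(0, 1, n)      # identical distribution in both groups

# Historical "arrest" labels depend on behaviour AND on biased policing intensity.
arrested = (behaviour + 1.5 * group + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, behaviour]), arrested)

# For a person with average behaviour, the predicted "risk" differs only by group label.
print(model.predict_proba([[0, 0.0]])[0, 1])  # group A
print(model.predict_proba([[1, 0.0]])[0, 1])  # group B -- noticeably higher
```

The point of the sketch is not the specific numbers but the structural problem: the model faithfully reproduces whatever skew already exists in the historical labels it is trained on.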
Research into predictive policing systems has shown that parts of the data used to train the software are inherently flawed. This causes some predictive policing systems to be biased against African Americans. The bias finds its origin in the fact that a disproportionate number of incarcerated people in the US are black. Using existing data to train predictive policing AI could therefore reinforce disproportionate police action against African Americans. One predictive policing system was found to correctly predict violent crimes in only 20 percent of cases, which illustrates the disastrous effects inherent biases can have on these systems. This is only one example out of many in which predictive policing programmes display some form of bias.
Then there is another, perhaps even more dangerous problem with predictive policing software. We do not know exactly how the software comes to its conclusions regarding, for example, who will commit a crime and where. This decision-making process is a black box; it cannot be opened. Predictive policing makes use of deep learning: a method that can quickly find associations in large amounts of data, which makes it very suitable for predictive policing software. However, the technique behind deep learning is so complex that it is often not possible to follow the reasoning of the system. This could lead to the system making decisions based on irrelevant factors in the data, which could in turn lead to wrongful arrests. Moreover, if the police were to find out that their software was not working properly, the complexity of the software would make it incredibly difficult to correct the mistake.
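The opacity problem can also be made concrete with a small, hypothetical sketch (again synthetic data and scikit-learn, not any real policing software): even a tiny neural network ends up with hundreds of numeric parameters that carry no readable explanation, so the only way to guess which inputs it relies on is to probe the trained model from the outside.

```python
# Hypothetical sketch of the opacity problem: a small neural network fitted to
# synthetic data has hundreds of weights with no readable meaning, so the only
# way to guess which inputs matter is to probe the trained model indirectly.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))        # 5 input features; only feature 0 matters
y = (X[:, 0] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                    random_state=1).fit(X, y)

print(sum(w.size for w in net.coefs_))          # many opaque parameters, no explanation
probe = permutation_importance(net, X, y, n_repeats=10, random_state=1)
print(probe.importances_mean.round(3))          # indirect evidence of what the model uses
```

Probing techniques like this are exactly the kind of after-the-fact auditing that becomes necessary when the model itself cannot explain its reasoning.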
Using predictive policing systems in our current society is not desirable. The George Floyd incident shows that we live in a racially biased society. We do not want opaque, uncontrollable predictive policing software to perpetuate this bias.
Why the Development of Artificial Intelligence Should Be Democratised
Xuanyue Guo, Marieke Knorz, Chaitanya Pandey, Aleksandra Warkocka
The development of artificial intelligence has changed the world fundamentally. It is deeply integrated into our daily lives in various forms, such as Siri, the Apple voice assistant, and automated movie recommendation systems on Netflix. However, the conveniences made possible by artificial intelligence and the concerns about it go hand in hand. A monumental issue in the development of AI is that its course will be determined by a small number of corporations and not by society at large. That is why we suggest the democratisation of artificial intelligence: its development should be decentralised and representative of society as a whole.
AI is everywhere
Currently, artificial intelligence is used in a variety of areas, such as forecasting and recruiting, as well as search engines and self-driving cars, amongst others. It is so intertwined with, and seemingly hidden within, these products that it may not be obvious to consumers that they are using AI-based devices. Big companies such as Google, Microsoft and Amazon focus heavily on the development of artificial intelligence, and with their financial resources they can achieve significant developments and progress. This results in the development of artificial intelligence being guided by a handful of big corporations, a phenomenon known as ‘technocracy’.(1)
Why technocracy is problematic
The fact that artificial intelligence is further developed exclusively by big corporations can contribute to unequal access to new technologies for people living in regions where the development of artificial intelligence is slow. This can make the gap between wealthy and poor more pronounced.(2) Other problems resulting from AI being developed by a sample of people not representative of the whole population include the bias and discrimination already present in artificial intelligence algorithms. This became apparent to the public in 2015, when Google Photos software labelled images of two people of African American origin as gorillas.(3) A similar case occurred when Amazon software automatically downgraded the CVs of female applicants.(4) These examples illustrate the burning issues of ethical violations and systematic bias in artificial intelligence. It is therefore important to focus on democratic development, so that all voices are heard, and to ensure a diverse representation of society and equal access to artificial intelligence.
Ethical AI
The course of the development of artificial intelligence can still be altered, and it is important that it is manoeuvred in a direction where the outcome is beneficial to society as a whole. Ethical artificial intelligence can only be achieved when the way it is developed is itself ethical. It is crucial in the current scenario that public intervention and legislation are included in the development of artificial intelligence. Initiatives from public bodies such as the United Nations and the European Commission focus on providing legislative frameworks for developing artificial intelligence in a safe and ethical manner. It is also important that there are initiatives to increase public understanding of artificial intelligence. The lack of knowledge about artificial intelligence among the general public means that a vast number of people are unable to participate in the discourse on the development of AI. Alongside these interventions, governments should increase public funding to support smaller developers and initiatives in the field of artificial intelligence, as a way to decentralise the distribution of power.
The greatest invention – but only if regulated and democratised
The potential of artificial intelligence is immense, and it has so far afforded society many conveniences. A portion of society has experienced the advantages of this, but at a cost: issues like corporate-centred development, algorithmic bias and human rights violations are unfortunately part of the use of AI today. The future of artificial intelligence depends on the actions we take today. While artificial intelligence could be the greatest innovation in the history of humankind, it also needs to be regulated and developed in a manner that benefits societies at large. One promising approach might be democratising the process of AI development. By doing so, people from a variety of backgrounds would have the ability to influence the course of the development of artificial intelligence. To achieve this, society, governments and tech companies need to come together.
Artificial Intelligence in Healthcare
WHY AI APPLICATIONS ARE NOT TRUSTWORTHY… YET
Amber van Eekeren, Bruno Sotic, and Alessandro Tellone
Doctors in our health system are increasingly relying on decisions guided by AI systems. In 2018, the first-ever autonomous Artificial Intelligence (AI) diagnostic system was introduced to the American health system. For many reasons we are amazed to see the promising advancements of AI: it relieves the overloaded health system by supporting surgery, management and the monitoring of patients, as well as the prescription of treatments. The prospect of AI increasing healthcare’s efficiency seems too good to be true, and it raises concerns as to whether our trust in AI exceeds its capabilities. AI seems to be a powerful tool to boost our health system, but it also has controversial drawbacks. Are we actually ready to place trust in AI when it comes to our highest good: health?
Trust, by definition, is the belief that something is reliable or good. This belief may include a fair amount of uncertainty: if you are convinced that someone or something is reliable, there is no need for trust. Trust establishes stable expectations that make considerations of risk feel less relevant. Surely, AI can outperform humans in many domains. But can we really allow ourselves to neglect a rational assessment of its risks?
A fundamental problem with AI is that its reasoning is based on information that might be biased. In the relatively short history of AI, we have repeatedly heard of discriminatory cases: AI has classified black people as gorillas and flagged Asian people as blinking. These are biases we need to be aware of. One fatal example in medicine is skin cancer detection using AI. Decades of clinical research focused primarily on middle-aged white men, and this specific data is fed into the AI when it is trained to detect dangerous moles on the skin. The underrepresentation of patients of colour leads to lower sensitivity in detecting cancer in dark skin, which increases the risk of misdiagnosing certain groups of people.
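How underrepresentation in training data translates into lower sensitivity can be sketched with a small synthetic example in Python with scikit-learn; the features, group sizes and shift are hypothetical stand-ins, not real clinical data. When one group contributes only a small fraction of the training cases and its cases present slightly differently, the model misses far more of that group’s cancers.

```python
# Minimal synthetic sketch (not real medical data): when one group makes up only
# a small fraction of the training set and its cases look slightly different,
# the classifier's sensitivity (recall) for that group drops.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)

def make_cases(n, shift):
    """Synthetic 'lesion' features; `shift` stands in for group-specific appearance."""
    malignant = rng.integers(0, 2, n)
    features = rng.normal(0, 1, (n, 3)) + malignant[:, None] * (1.0 + shift)
    return features, malignant

# Training set: 95% group A, only 5% group B (whose cases present differently).
Xa, ya = make_cases(9500, shift=0.0)
Xb, yb = make_cases(500, shift=-0.7)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate sensitivity separately on fresh cases from each group.
for name, shift in [("group A", 0.0), ("group B", -0.7)]:
    Xt, yt = make_cases(2000, shift)
    print(name, "recall:", round(recall_score(yt, model.predict(Xt)), 2))
```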
The bias problem is further complicated by the fact that most AI tools deployed in healthcare are “black boxes”. While we know the data that the AI is fed (medical information) and the results it produces (diagnostic decisions), we lack an understanding of the processes in between. This makes it difficult for patients and doctors to place trust in AI. If we cannot understand the reasoning of an AI, we cannot detect and reduce its systematic biases. This is important to keep in mind, since mistakes in the field of healthcare can have catastrophic consequences. Luckily, we can look to the future with hope: explainability is an active field of research that tries to design AI that is more transparent to our understanding.
Another important question is the following: if a person of colour dies from cancer because the software is not trained to detect cancer in dark skin, who is responsible? Is it the provider of the AI selling a faulty product, the practitioner using the wrong tool, the government for allowing its use, or even the AI itself? When doctors make decisions themselves, they have to carry the consequences themselves. Embracing trust in AI, however, easily fosters a diffusion of responsibility. We have to be aware that AI is merely a tool and cannot hold moral responsibility. Rather than placing our trust in AI, we should trust all of the entities mentioned above to evaluate the risks of AI.
We are convinced that AI needs to be implemented with care, not trust. When it comes to our health, we want the best help we can get. We have to be wary of the biases that training data can introduce, make sure we are able to detect them, and establish relations of trust with all the parties added to the client-practitioner relationship. Thankfully, all of these issues receive central attention from researchers around the world. There are still a number of issues to be tackled before we can benefit from AI’s strengths in medicine without concern. For now, it is crucial to step back from the hype around AI and ask: is it trustworthy enough?
Pitch: Microtargeting
Marta Diale and Florian Burger
Pitch: AI and Medicine
Amber van Eekeren