RPA Human(e)AI Statement on the Russian invasion of Ukraine

The members of the RPA Human(e)AI of the University of Amsterdam stand firmly with Ukraine. We condemn the invasion undertaken by President Putin. Our sincerest thoughts go to all Ukrainians, as well as to the Russians who oppose this war.

As AI researchers, we are especially concerned about the development and use of autonomous weapons in this conflict. Currently, both sides possess (semi-)autonomous weapons. Ukraine has reportedly been using the Turkish Bayraktar TB2 drone, as well as Clearview’s AI facial recognition software. Russia, meanwhile, possesses the ‘Lancet’, which it has previously used in the Syrian civil war. Such weapons have also been deployed widely outside the Russo-Ukrainian conflict, possibly also in Libya, Israel, and Azerbaijan. An AI arms race has truly begun, as countries like the U.S. have started to invest more in these kinds of weapons in response to the conflict in Ukraine.

Ethical and regulatory issues have long accompanied the military use of computer-based technologies. Rogue or faulty computer systems can cause serious misunderstandings and escalate military conflicts. Modern AI systems are highly sophisticated, yet still not foolproof. We need meaningful human control over such systems before critical errors are made.

AI can currently make many important but complex decisions within a minuscule timeframe, based on highly complex and hard-to-trace self-taught calculations. However, when fast autonomous weapon systems are used by an enemy combatant, or are deployed in the form of ‘swarms’, there is no time for a human to check every single automated decision. This can in turn lead to less human control over the quality and ethicality of automated military decision-making.

Although politicians such as the German foreign minister and various European MPs have stressed the importance of regulating autonomous weapons, little concrete progress has been made. In December 2021, a U.N. disarmament meeting failed to propose legally binding restrictions on autonomous weapons after eight years of debate. The national and international regulatory focus on military AI has likewise been lacklustre: the AI strategies of Germany, Japan, and the European Commission do not mention military applications at all. Moreover, when it comes to actual legislation, only a few nations have regulations that directly address the deployment of autonomous weapons.

We therefore strongly call for more national policies and international cooperation to regulate the use of autonomous weapons and military AI. We have entered a new age of AI weaponry, one we can no longer ignore. Now more than ever, these ethical and legal issues must be addressed by academics, policy-makers, and civil society groups.