The role of artificial intelligence is gaining prominence across many sectors of society. In fields such as healthcare, security, agriculture and media, AI applications have the potential both to offer beneficial solutions and to cause unprecedented disruptions to the well-being of humanity. In recent years, a growing amount of resources and attention has been devoted to AI in both research and higher education. The University of Amsterdam has recently launched a series of ambitious initiatives, also in cooperation with public institutions and leading industry players, to position the skills and talent of its academic researchers at the centre of AI advancement and innovation. Similarly, the removal of the numerus fixus requirement for admission to the artificial intelligence bachelor's programme confirms the university's commitment to making AI a pivotal element of its agenda. At the same time, the risks and threats emerging from the expanding use of AI technologies raise a series of ethical considerations and dilemmas that our academic community needs to address. Among the important questions we, as academics, should ask ourselves: what do we need to do to ensure that AI research and education evolve along ethical standards? How do we strike the right balance between technological innovation and "getting it right", and where do these tensions lie? Discussing these questions is a propaedeutic step towards preparing the next generation of scholars and sensitizing them to the need for AI that is responsible and humane.
In this roundtable conversation, part of the Humane AI conversation series, Beate Rössler, Raquel Fernández and Max Welling will discuss these topics, moderated by Natali Helberger.