The social and moral psychology of human interactions with algorithmic agents
Machines have long been entrusted with repetitive, mostly industrial, tasks. Today and tomorrow, AI technology is set to become so powerful that algorithmic agents will become colleagues and supervisors in the workplace in the full sense of the word, entrusted with making decisions that carry clear social and moral implications. This means that cooperation between humans and these non-human agents in the social domain is set to increase manifold. However, we know little about the drivers of effective collaboration with non-human agents. This project aims to make a head start on tackling this question. We start from the intuition that people have evolved tendencies to engage in collaboration with other human beings. As a result, people are generally averse to collaborating with non-human agents, because these agents are perceived to lack humanity. This perspective also suggests potential avenues for solving this thorny issue: the more people can be induced to attribute mindfulness to AI agents, the greater their willingness to collaborate is likely to be.
Dr. Gijs van Houwelingen is assistant professor at Amsterdam Business School. His research focuses on the socio-moral foundations of relations with(in) organizations.
Dr. Bastiaan Rutjens is assistant professor at the psychology department of the UvA. His research focuses on the psychology of belief systems and worldviews.
Prof. Dr. Ir. Jan Willem Stoelhorst is Professor of Strategy and Organization at Amsterdam Business School. His research program aims to ground management theory in evolutionary social science research.