The antecedents and consequences of ambivalence about agentic technologies with Gijs van Houwelingen and Chris Starke

Both in the popular press and in the scientific literature, it is often assumed that people are quite wary of agentic (i.e., smart) forms of technology. Yet at the same time there is widespread enthusiasm about the prospects of such technologies, and certain forms (e.g., search algorithms) are already in widespread use, seemingly contradicting this assumed wariness. In this project, Dr. Gijs van Houwelingen – professor in the Strategy & International Business section of the Faculty of Economics and Business at the University of Amsterdam – sets out to investigate whether both of these suggestions are correct: that people are suspicious of agentic technologies and, at the same time, enthusiastic about them. More specifically, he suggests that at a psychological level people are ambivalent about such technologies: they may hold positive and negative attitudes toward the same technology at the same time.

This psychological ambivalence about smart technologies is both scientifically and practically relevant. Psychologically, ambivalence may cause “oscillation”, that is, frequent switches in behavioural and attitudinal responses from positive to negative. Additionally, people are particularly prone to resolving their ambivalence by polarizing towards or against a specific attitude object, in this case a type of technology. As such, both social aversion to and appreciation of agentic technologies may reflect the same underlying psychological process: the resolution of ambivalence. Resolving ambivalence in this way may lead to the unreflective rejection or embrace of certain types of technology, and both responses can be socially detrimental. It is therefore important to understand when and why people feel ambivalent about agentic technologies, as well as how they may resolve that ambivalence. Find out more about this topic in a conversation with Chris Starke – postdoctoral researcher in Responsible AI for the RPA Human(e) AI.