“Both the computer science and law disciplines try to define ideal qualities of automated decision systems. In these contexts, concepts such as autonomy, authenticity, agency, or privacy – both individual and social – are often used as if their translation into legal rules or machine architectures were a straightforward task. However, what are treated as rock-solid foundations for legal thinking and AI design are in fact complex, often contested ideas, rooted in and shaped by social, political, and historical contexts. As these concepts are translated from one domain into another, there is a risk of collapsing their meanings into a single fixed technical design or a set of legal expectations.
The questions that ethics and philosophy should raise therefore begin with privacy and the effects its loss would have. Why would the consequences for our individual autonomy, our social practices, and our democratic societies – and thus also for our ability to lead a well-lived life – be rather fatal? Another essential question is the more general one of how digital technologies change us and affect the possibilities of being human. How do we understand these transformations philosophically? What kinds of conditions are required to enable humans to direct their own lives, enjoy flourishing relationships, and be “worthy of respect”? Privacy, autonomy, and (digital) humanity are among the most fundamental concepts we have to make sense of in the digitally transforming society.”