The ICA 2021 conference theme, Engaging the Essential Work of Care: Communication, Connectedness, and Social Justice, calls for an examination of how care forms the fabric of our social and interconnected lives. From the moment we enter this world, we are completely dependent on the care of others, and as we move through our lives, the care of our teachers, doctors, leaders, and artists shapes us into the adults we are today. Even as we leave this earth, in our last days, we are comforted by the care of loved ones.
“Care” can be understood from a variety of perspectives relevant to communication. Namely, care can refer to:
•Providing assistance for others (She takes care of my aunt.)
•Being interested in a topic, issue, or idea (They care about the notion of compassion.)
•Concern about others’ well-being (He cares what will happen to his children.)
•The provision of needed attention or resources (Do they provide care at the hospital?)
The concept of care can also be understood from at least two vantage points that intersect with those meanings: self-directed and community-centered. The relative priority of self and community care within a given community reflects deeply embedded cultural values, experiences of oppression, access to resources, and histories of trust.
The concept of “care” requires our thoughtful examination and reflection. Against the backdrop of the COVID-19 pandemic, the crisis of climate change, and militarized police brutality that continues to target, harass, and kill people of color, the urgency of care to address entrenched inequalities, an overarching climate of neglect, and a global political economy of individualized self-help has been rendered visible. Communication emerges in this backdrop as a transformative site for re-working care, anchoring it in relationships, communities, organizing processes, media systems, and social formations. Care is both constituted by and constitutive of communication, as a register for creating spaces of compassion and connectedness.
We are inviting applications for a full-time position as a junior or senior researcher at the soon-to-be-launched Digital Services Act Observatory (DSA Observatory) at the Institute for Information Law (IViR). The DSA will set the terms for the relationship between European democracies and dominant digital platforms for the coming decades. The researcher (DSA Research Fellow) will follow and analyse legislative proposals and policy developments related to the DSA; identify opportunities for independent academic experts at IViR, the Amsterdam Law School, and elsewhere to contribute their knowledge to the DSA process; draft and solicit reports, blogs, and academic publications with other researchers involved in the initiative; and contribute to the activities of other stakeholders, including civil society, through workshops and expert meetings.
The DSA Observatory is a new project of IViR. The project is led by prof. Joris van Hoboken, dr. Ronan Fahy, prof. Natali Helberger, and prof. Martin Senftleben, in collaboration with other researchers at IViR, the broader research initiative on the Digital Transformation of Decision-Making, and the Information, Communication & the Data Society (ICDS) research priority area. The project is made possible with funding from the Information Program of the Open Society Foundations.
What are you going to do?
The DSA legislative package is highly complex and engages a wide variety of economic, social, and political issues, affecting the conditions for the effective exercise of fundamental rights and the operationalisation of democratic values and economic welfare. Given the breadth of perspectives feeding into the legislative process, the DSA covers a number of different approaches to the regulation of platforms, with significant potential trade-offs between them.
The researcher will be the key driver of the DSA Observatory. Our goal with the Observatory is to identify opportunities for the IViR research community and our broader networks to engage in the DSA process and provide independent, robust input for policy makers and other relevant stakeholders. The DSA Observatory will analyse relevant policy documents and coordinate engagement with DSA proposals by other researchers at IViR as well as our broader network of platform regulation experts in academia. On the basis of our expertise and analysis, it will interact with and provide value to the DSA-related activities of other relevant stakeholders, including civil society organisations, regulators, and industry. The Observatory will initially run for one year, with the aim of continuing afterwards for the remainder of the DSA legislative process. The project output will include a primer on the DSA and platform regulation, regular analysis of relevant aspects of the DSA proposals by individual researchers or collectively, monthly high-level expert meetings, and a final report.
In this position you will develop AI and machine learning algorithms that enable conversational social robots to partake in and contribute to collaborative design settings. The position has a strong focus on working with data gathered from sensors such as audio and video, to explore how the verbal and non-verbal communication seen in humans can also be used for human-robot interaction. The ideal candidate should therefore have a background in computer science/engineering, be enthusiastic about developing interactive conversational systems and carrying out human-robot experiments in real-life settings, and be interested in collaborating with and learning from colleagues in industrial design. Target venues for publication include, next to top-tier journals, the International Conference on Human-Robot Interaction (HRI) as well as the International Conference on Multimodal Interaction (ICMI).
Together with three other PhD students, the candidate will be part of the new Designing Intelligence Lab, which investigates how humans and artificial intelligence can work together creatively over extended periods of time. The goal of the Lab is to move the idea of design thinking closer to that of artificial intelligence, developing new types of design methods to help design professionals in their design processes.
The Designing Intelligence Lab is a collaboration between computer science and industrial design and is led by Catharine Oertel and Senthil Chandrasegaran. The successful candidate will be affiliated with the Faculty of Electrical Engineering, Mathematics, and Computer Science (Section: Interactive Intelligence) but will collaborate closely with the Faculty of Industrial Design and Engineering.
The Interactive Intelligence (II) section focuses on socially interactive, intelligent agents. The section researches the intelligence that underlies, and co-evolves during, the repeated interactions of human and technology “agents” who cooperate to achieve a joint goal. Its research programme aims for synergy and social interaction between humans and technology, to empower humans in their social context. The new technological challenges it faces arise from the need to integrate Artificial Intelligence, Cognitive Engineering, and the behavioural sciences.
Are you a computer scientist with a passion for research and teaching? The Department of Data Science and Knowledge Engineering (DKE) at Maastricht University is excited to welcome a new colleague who is keen to continue growing our research and to keep up our excellent standard of education.
We invite applications for a tenure-track position in computer science, focused on explainable artificial intelligence and the ability to collaborate with the social sciences. DKE research lines include human-centered aspects of recommender systems, as well as a strong applied mathematics component such as dynamic game theory (differential, evolutionary, spatial, and stochastic game theory).
The position is supported by the large and growing Explainable and Reliable Artificial Intelligence (ERAI) group of DKE. The group consists of Associate & Assistant Professors, postdoctoral researchers, PhD candidates and master/bachelor students. The ERAI group works together closely on a day-to-day basis, to exchange knowledge, ideas, and research advancements. We conduct both fundamental and applied research, with a focus on explainable Artificial Intelligence.
This special issue aims to bring together researchers, engineers, and practitioners from both academia and industry to report, review, and exchange up-to-date progress on the proper use of artificial intelligence-related techniques for advancing well-being on the Web as well as in human society, to explore future research directions, and to promote better service provision in specific domains for a wider target audience from diverse fields. Original research articles are solicited in all aspects, including theoretical studies, practical applications, new social technologies, and experimental prototypes.
The European Laboratory for Learning and Intelligent Systems (ELLIS) offers an interdisciplinary PhD program. The ELLIS PhD program is a key element of the ELLIS initiative; its goal is to foster and educate the best talent in machine learning-related research areas by pairing outstanding students with leading academic and industrial researchers in Europe.
The program supports excellent PhD students across Europe by giving them access to leading research through boot camps, summer schools, and workshops of the ELLIS programs. Every PhD student is supervised by one ELLIS fellow/scholar and one ELLIS member from a different country, and conducts a one-year exchange at the other location.
Research areas include (but are not limited to) the following ML-driven research areas: Machine Learning Algorithms; Machine Learning Theory; Optimization; Deep Learning; Interactive and Online Learning; Reinforcement Learning and Control; Computer Vision; Computer Graphics; Robotics; Human-Computer Interaction; Natural Language Processing; Causality; Interpretability and Fairness; Robust and Trustworthy ML; Quantum and Physics-Based ML; Symbolic Machine Learning; Computational Neuroscience; Earth and Climate Sciences; Bioinformatics; Health.
Interested candidates should apply online through our application portal (the portal will open on October 12, 2020).
Application website: apply.ellis.eu (register first at: apply.ellis.eu/registration/ellis_2020)
Application deadline: December 1st, 2020, 23:59 CEST
Please note that only students who are not yet working as a PhD student with an ELLIS fellow/scholar/member can apply through the central recruiting process (but see PhD Program, option 2, for PhD students already working with an ELLIS fellow/scholar/member).
ELLIS seeks to increase diversity and the number of women in areas where they are underrepresented and therefore explicitly encourages women to apply. We are also committed to employing more people living with disabilities and strongly encourage them to apply.
The University of Amsterdam is hiring an Assistant Professor in Computer Vision by Machine Learning for its QUVA lab, a research collaboration with Qualcomm AI Research at the Faculty of Science. You are expected to work on fundamental aspects of computer vision by machine learning, including deep learning models and algorithms.
You will also be the manager of the QUVA lab and will be responsible for executing the daily activities in the lab and co-supervising the lab’s PhD students, most of whom are expected to start in early 2021. The PhD projects cover: video action recognition, multi-task multi-modal learning, video representation and efficiency, hardware-aware learning, federated learning, combinatorial optimization, unsupervised learning for source compression, temporal causality learning, and continuous learning. Your emphasis will be on the vision projects.
What are you going to do?
You are going to carry out AI research in association with the projects mentioned above, as part of the QUVA lab at the University of Amsterdam. You will be the QUVA lab manager, responsible for daily activities in close cooperation with the directors, prof. Snoek, prof. Welling, and dr. Gavves. There will be regular visits to and interactions with the researchers at Qualcomm AI Research, who have an office on campus.
In terms of teaching, you are expected to contribute to strengthening the curriculum in computer vision and deep learning of the Bachelor and Master AI and related programs, such as the Bachelor and Master Information Systems. The total teaching load will be around 30%. You should have a broad interest in computer vision by machine learning, which means that you must be able to teach a wide variety of AI, vision, and learning courses at both BSc and MSc level.
Your tasks will be to:
•develop new computer vision by machine learning methods within the context of the lab’s research projects and develop your own independent research line;
•acquire independent funding from sources such as the national funding agency NWO (e.g. VIDI), EU funding via H2020 (e.g. ERC starting grant), and industry;
•collaborate with the PhD students within the QUVA lab and researchers of Qualcomm AI Research, which includes helping Qualcomm write patent applications to protect inventions from the lab when requested;
•regularly present intermediate research results at top-tier international conferences and workshops, and publish them in proceedings and journals;
•organize and execute relevant teaching activities;
•contribute to the organization of the QUVA lab, the institute, and the faculty.
A 2-year (with possible extension of one year) full-time PostDoc position is available at the Centre for Information and Innovation Law (CIIR) of the Faculty of Law at the University of Copenhagen. The position is part of the GrandSolutions project “LEGALESE – Danish language Processing for Legal Texts” and financed by the Innovation Fund Denmark.
The position is available from 1 December 2020 for a duration of 2 years (with possible extension of one year).
The faculty is seeking an enthusiastic and outstanding post-doctoral candidate with a strong interest in the intersection of law and computer science.
LEGALESE is a joint venture by the University of Copenhagen’s Faculty of Law and Department of Computer Science, the private company Schultz, and Ankestyrelsen. The aim of LEGALESE is to research and develop a product that facilitates efficient legal information retrieval and automated recommendation using cutting-edge natural language processing (NLP).
Bias detection and removal (debiasing) are essential for the trustworthiness and legality of such NLP systems. The PostDoc will work on the related cutting-edge legal issues with a view to establishing a standard for NLP implementations in terms of both privacy and bias, and will focus their research on how debiasing may be implemented in operational NLP systems. The research will be conducted in ongoing collaboration with the Department of Computer Science as well as the involved practice partners, Schultz and Ankestyrelsen.
Applicants are encouraged to familiarize themselves with the faculty's research areas and education programmes by visiting the Faculty’s website: www.jura.ku.dk.
The Faculty actively supports the effort to learn Danish.
Rethinking (Human) Communication in the Era of Artificial Intelligence
Eun-Ju Lee (Seoul National University)
S. Shyam Sundar (Pennsylvania State University)
This special issue aims to bring together communication/media/journalism scholars who directly tackle the questions of how artificial intelligence (AI) might change communication and research on communication phenomena. From message production to message dissemination to message consumption, and across contexts ranging from one-to-one conversation to mass-mediated communication, AI is now replacing, assisting, and/or augmenting human communicators in diverse roles, thereby potentially modifying the processes and outcomes of communication – for better or for worse. The tight integration of AI in mediated communication has created or aggravated issues such as filter bubbles and mis/disinformation. At the same time, AI is also considered a solution to social ills, like hate speech. While praising the capability of AI tools like smart speakers and chatbots to reduce loneliness and depression among otherwise socially isolated individuals, people also express concerns about their lack of human touch. With rapid and seemingly fundamental changes in how communication is performed, it is imperative for communication scholars to critically evaluate the relevance and utility of existing theories and research findings, and to propose new ones. How is human communication changing, and what should communication researchers study in the emergent AI landscape?
Topics include (but are not limited to): How does AI affect the way we communicate and what comes out of it (e.g., obtaining information, making sense of the world, connecting with each other, entertaining ourselves, etc.)? What are the psychological, social, political, and cultural implications of such changes? What are the emerging research agendas in each subfield of the communication discipline – e.g., interpersonal communication, organizational communication, political communication, health communication, mass communication, journalism studies, communication law and ethics, etc. – that are attributable to the advancement of AI? What are the key constructs in communication research that need to be revisited in light of the current integration of AI in communication processes? How can various models and theories about communication help us better understand, predict, and explain potential impact of AI on individuals and society? How can they inform and guide the development of AI applications and services to promote the public interest?
As recruiters respond to the social-distancing challenges of the COVID-19 pandemic, such tools may appear increasingly attractive. Not only can AI analyse job applications more efficiently; it is often touted as fairer in its selection of applicants. Designers of these systems claim that they are free of individual prejudice and systematic bias, and are even better at discerning the virtues and vices of applicants. Some proponents have even claimed that AI could select candidates using synthetic categories capable of better predicting future job performance than even the most experienced HR professionals. Despite such claims, ethicists are increasingly finding reason to be sceptical of this technology. On the one hand, the ethical challenges of AI in the HR domain mirror those that have received significant attention in other AI application domains, such as policing and criminal justice, especially surrounding the problem of discriminatory profiling (Angwin et al., 2016; Barocas and Selbst, 2016). One common issue in these cases is the use of historically biased data sets in the training of AI algorithms, which reinforces historical and existing discrimination. Similarly, in the HR domain, data sets based on existing hiring practices can be expected to replicate existing prejudice (Tambe et al., 2019). On the other hand, an emerging and thus far less investigated ethical problem is the manner in which the use of AI in HR infringes candidates’/employees’ autonomy over self-representation (Van den Hoven and Manders-Huits, 2008): their ability to choose and control how they communicate their skills, motivation, personality, and experiences, while being subjected to a reductionist and opaque quantification of these highly nuanced, contextual, and dynamic qualities (Delandshere and Petrosky, 1998; Govaerts and Van der Vleuten, 2013; Lantolf and Frawley, 1988). This special issue focuses on both kinds of problems.
We invite contributions on the ethics of applying AI in the HR domain for the purpose of recruiting, hiring, employee performance assessment, etc., with special focus on (but not limited to):
•Autonomy and control over self-representation/presentation
•Fairness, transparency, and justice in socio-technical HR organizational practices involving AI
•Erosion of the idea of a labour market
•Privacy and the right to a non-work private life
•Appropriate distribution of roles and responsibilities among candidates/employees/employers/AI
•Deselection by individual idiosyncrasy and other factors that are not relevant to employment