This position invites applicants interested in developing research in critical computing, exploring new methods in computer science or software production, and/or addressing the challenges of ensuring that computational infrastructures serve the public interest.
Possible topics for this position include, but are not limited to:
- privacy enhancing technologies and privacy engineering
- protective optimization technologies or other critical approaches in machine learning
- measurement studies in service ecosystems and cloud infrastructures
- political economy of software industries, including cloud infrastructures, service architectures, production environments, etc.
- empirical studies of software production and integration in programmable infrastructures
- requirements engineering or software engineering methodologies
- alternative design and infrastructures for public interest technologies
Successful candidates will benefit from working in a dedicated setting that appreciates and promotes interdisciplinary research at a leading technical university. They will work with Dr. Seda Gürses in building an international community of scholars advancing socio-technical and critical studies of computer science and current computational infrastructures. They will also benefit from close collaborations with Dr. Roel Dobbe and his group in the Engineering Systems and Services (ESS) department, which works on AI from a socio-technical perspective. The selected candidates will have access to a broad network of academics, civil society actors, activists and artists working on public interest computing at TU Delft and elsewhere.
The scholarships are funded by the Novo Nordisk Foundation. The project goal is to develop robust handwriting recognition models to digitize historical tables of names and birth dates, and the prospective PhD students can engage in all related research topics, including, but not limited to, handwriting (digit/word) recognition, constrained inference for natural language generation, transfer learning and modeling historical drift, cross-lingual transfer of handwriting recognition and language models, ensemble/committee-based learning, and recognition/language modeling with document-level consistency constraints.
The principal supervisor is Full Professor Anders Søgaard, Department of Computer Science, firstname.lastname@example.org, direct phone: +45 35329065.
The position is available for a 3-year period, and your key tasks as a PhD student at SCIENCE are to:
- Manage and carry through your research project
- Attend PhD courses
- Write scientific articles and your PhD thesis
- Teach and disseminate your research
- Stay at an external research institution for a few months, preferably abroad
- Work for the department
This project focuses on the development and implementation of AI technology that respects the values central to public institutions in the area of media and culture. Core research themes revolve around public values, e.g. diversity and inclusivity, investigating how technology can deal with biases in data, account for multiple perspectives and subjective interpretations, and bridge cultural differences. The impact of AI on media and culture is studied from a socio-technical perspective. The project may focus on one or more use cases, for example in public broadcasting, media archiving, or other relevant public institutions, as well as on related research questions. The PhD candidate will contribute to research on the impact of AI on media in the AI4Media network of research excellence centres funded by the European Commission.
The successful candidate will be appointed in the department of Media Studies and based in the creative environment of the CREATE Lab for digital humanities research. They will work within the broader Humane AI community at the University of Amsterdam via the RPA Human(e) AI. Depending on the profile of the chosen candidate, their research time will be allocated at the Amsterdam School for Heritage, Memory and Material Culture or the Institute for Language, Logic and Computation.
The candidate is expected to:
- complete and defend a PhD thesis within the official employment duration of four years;
- regularly present research results at international workshops and conferences, and to publish them in conference proceedings and journals;
- participate in and to contribute to the organisation of research activities and events at the RPA Humane AI, such as workshops and colloquia;
- participate in the Faculty of Humanities PhD training programme;
- teach courses at bachelor’s and/or assist at master’s level in the 2nd and 3rd year (0.2 FTE per year).
Rethinking (Human) Communication in the Era of Artificial Intelligence
Eun-Ju Lee (Seoul National University)
S. Shyam Sundar (Pennsylvania State University)
This special issue aims to bring together communication/media/journalism scholars who directly tackle the questions of how artificial intelligence (AI) might change communication and research on communication phenomena. From message production to message dissemination to message consumption, and across contexts ranging from one-to-one conversation to mass-mediated communication, AI is now replacing, assisting, and/or augmenting human communicators in diverse roles, thereby potentially modifying the processes and outcomes of communication – for better or for worse. Tight integration of AI in mediated communication has created or aggravated issues such as filter bubbles and mis/disinformation. At the same time, AI is also considered a solution to social ills, like hate speech. While praising the capability of AI tools like smart speakers and chatbots to reduce loneliness and depression among otherwise socially isolated individuals, people also express concerns about them lacking a human touch. With rapid and seemingly fundamental changes in how communication is performed, it is imperative for communication scholars to critically evaluate the relevance and utility of existing theories and research findings, and to propose new ones. How is human communication changing, and what should communication researchers study in the emergent AI landscape?
Topics include (but are not limited to):
- How does AI affect the way we communicate and what comes out of it (e.g., obtaining information, making sense of the world, connecting with each other, entertaining ourselves)?
- What are the psychological, social, political, and cultural implications of such changes?
- What are the emerging research agendas in each subfield of the communication discipline – e.g., interpersonal communication, organizational communication, political communication, health communication, mass communication, journalism studies, communication law and ethics – that are attributable to the advancement of AI?
- What are the key constructs in communication research that need to be revisited in light of the current integration of AI in communication processes?
- How can various models and theories about communication help us better understand, predict, and explain the potential impact of AI on individuals and society?
- How can they inform and guide the development of AI applications and services to promote the public interest?
As recruiters respond to the social-distancing challenges of the COVID-19 pandemic, such tools may appear increasingly attractive. Not only can AI analyse job applications more efficiently; it is often touted as fairer in its selection of applicants. Designers of these systems claim that they are free of individual prejudice and systematic bias, and are even better at discerning the virtues and vices of applicants. Some proponents have even claimed that AI could select candidates using synthetic categories capable of predicting future job performance better than even the most experienced HR professionals. Despite such claims, ethicists are increasingly finding reason to be sceptical of this technology. On the one hand, the ethical challenges of AI in the HR domain mirror those that have received significant attention in other AI application domains, such as policing and criminal justice, especially surrounding the problem of discriminatory profiling (Angwin et al., 2016; Barocas and Selbst, 2016). One common issue in these cases is the use of historically biased data sets in the training of AI algorithms, which results in reinforcement of historical and existing discrimination. Similarly, in the HR domain, data sets based on existing hiring practices can be expected to replicate existing prejudice (Tambe et al., 2019). On the other hand, an emerging ethical problem that has been less investigated thus far is the manner in which the use of AI in HR infringes candidates’/employees’ autonomy over self-representation (Van den Hoven and Manders-Huits, 2008) — their ability to choose and control how they communicate their skills, motivation, personality, and experiences, while being subjected to reductionist and opaque quantification of these highly nuanced, contextual, and dynamic qualities (Delandshere and Petrosky, 1998; Govaerts and Van der Vleuten, 2013; Lantolf and Frawley, 1988). This special issue focuses on both kinds of problems.
We invite contributions on the ethics of applying AI in the HR domain for the purpose of recruiting, hiring, employee performance assessment, etc., with special focus on (but not limited to):
• Autonomy and control over self-representation/presentation
• Fairness, transparency, and justice in socio-technical HR organizational practices involving AI
• Erosion of the idea of a labour market
• Privacy and the right to a non-work private life
• Appropriate distribution of roles and responsibilities among candidates/employees/employers/AI
• Deselection by individual idiosyncrasy and other factors that are not relevant to employment
Over the past decades, racial and other bias-driven inequities have persisted or increased, diversity remains low in many educational and vocational contexts, and educational gaps have widened. Despite efforts to address these issues, biases based on factors such as race and gender persist. These issues have come to the forefront with recent crises around the world. In this conference, we invite the AIED community to reflect on issues of equity, diversity, and inclusion with regard to the educational tools and algorithms that we build, how we assess the efficacy and impact of our applications, the theories that we build on and contribute to, and within the AIED society. The use of intelligent educational applications has increased, particularly within the past few years. As a community, we need development and assessment practices that are mindful of potential (and likely) inequities. Likewise, deliberate diversity, equity, and inclusion practices are necessary within the AIED society and in our home institutions and companies.
Potential topics related to the conference theme include:
- Promoting equity in research
- Biases in algorithms, AI, or applications
- Multicultural aspects of AI in Ed
- Supporting underachieving students
- Cultural and population differences
- AI in Ed for underserved communities and marginalized populations
- Gender and sex-based biases
- Equity, diversity, and inclusion in the community
- Data mining techniques to measure equity
Current debates on artificial intelligence often conflate the realities of AI technologies with fictional renditions of what they might one day become. These technologies are said to learn, make autonomous decisions, and process information much faster than humans, which raises hopes and fears alike. What if these useful technologies one day develop intentions of their own that run contrary to those of humans?
The line between science and fiction is becoming increasingly blurry: what is already fact, and what is still only imagination? Is it even possible to make this clear-cut distinction? Innovation and development goals in the field of AI are inspired by popular culture, such as portrayals in literature, comics, film or television. At the same time, images of these technologies drive discussions and set particular priorities in politics, business, journalism, religion, civil society, ethics and research. Fictions, potentials and scenarios inform a society about the hopes, risks, solutions and expectations associated with new technologies. What is more, the discourses on AI, robots and intelligent, even sentient machines are nothing short of a mirror of the human condition: they renew fundamental questions about concepts such as consciousness, free will and autonomy, and about the ways we humans think, act and feel.
Imaginations about the human and technologies are far from universal; they are culturally specific. This is why cross-cultural comparison is crucial: by uncovering those aspects that are regarded as natural, normal or given, it helps us better understand the relationship between AI and the human and how they are mutually constructed. Focusing on concepts, representations and narratives from different cultures, the conference addresses two axes of comparison that help us make sense of the diverse realities of artificial intelligence and the ideas of what is human: science and fiction, and East Asia and the West.
Papers are invited on the following topics (among others):
- Which meanings and functions are ascribed to AI technologies and robots?
- How is science informed by popular discursive images of AI?
- Which cultural differences exist concerning the relationship between the natural and the artificial?
- What are the particular traditions of representing the human and its technological surrogates?
- What can the different cultural and conceptual histories tell us about our present and future with artificial intelligence?
Besides papers on these more general topics, HIIG also invites case studies on innovative technologies and their fictional precursors as well as on the social, ethical or political contexts in which they are applied. All contributions are expected to address the comparative perspective on East Asian and Euro-American discourses.
Relevant issues and perspectives for these comparisons include but are not limited to cyberpunk and science-fiction in literature and film, public debates and imaginations of AI, the relation between simulation and reality, materiality, historical and legal accounts, sociotechnical imaginaries and politics.
HIIG welcomes contributions from scholars of diverse disciplines, such as cognitive science, computer science, cultural studies, literature and film studies, media and communication studies, psychology, political science, science and technology studies or sociology. Interdisciplinary approaches (e.g., those combining social, cultural and technical perspectives) as well as perspectives from practitioners and developers are particularly encouraged.
The 18th International Conference on Artificial Intelligence and Law (ICAIL 2021) will be held ONLINE, organized by the University of São Paulo from Monday, June 21 to Friday, June 25.
Since 1987, the International Conference on Artificial Intelligence and Law (ICAIL) has been the foremost international conference addressing research in Artificial Intelligence and Law. It is organized biennially under the auspices of the International Association for Artificial Intelligence and Law (IAAIL), and in cooperation with the Association for the Advancement of Artificial Intelligence (AAAI). The conference proceedings are published by ACM.
The 18th International Conference on Artificial Intelligence and Law will be held entirely online due to the COVID-19 pandemic.
We invite submissions of papers, technology demonstrations, as well as proposals for workshops and tutorials.
The Department of Humanities, Social and Political Sciences (D-GESS, www.gess.ethz.ch) at ETH Zurich invites applications for the above-mentioned position.
The successful candidate will have competences in individual ethics, social theory/social philosophy, and the sociology and history of technology in contemporary societies. He or she is expected to build and sustain an excellent research and teaching programme in an interdisciplinary environment and should have an internationally recognised, outstanding record of publication. He or she should be willing to cooperate with colleagues in D-GESS and other departments (e.g. computer science, robotics, architecture, law, history of technology, science studies and health sciences). In her or his teaching activities, the new professor will contribute to the “Science in Perspective” course and specialised master programmes at D-GESS, attracting a wide range of excellent students from across the university. Generally, at ETH Zurich undergraduate level courses are taught in German or English and graduate level courses are taught in English.
We invite cutting-edge theoretical and empirical research from across the globe on the normative implications of AI for journalism and journalism research, and on the democratic, ethical, and fundamental rights-related implications of the use of AI and data analytics in the media. We specifically invite young scholars to submit their work. We welcome contributors interested in the ethical and fundamental rights questions raised by the use of AI and algorithms in the media industry, from a broad range of disciplines including journalism, history, communication and media studies, law, philosophy, STS, and computer science. We invite papers on, for example, but not exclusively:
- theoretical and empirical contributions on the role of AI and algorithms in journalism and the democratic role of journalism;
- investigations into how the integration of AI and algorithms changes the political economy in media markets, creates new or removes old institutional dependencies, and alters the role of external parties such as tech companies;
- studies into how the use of AI and algorithms in the media affects the ability of citizens to benefit from their right to freedom of expression, to form and hold opinions, and to make informed political choices;
- how journalistic and public values such as diversity, objectivity, and relevance can be translated and preserved in algorithmic design and routines;
- governance and regulation of the use of data, AI and algorithms in the media;
- the role of fundamental rights, law and ethics in digital media and potential areas of regulation;
- comparative normative/legal work across different European and non-European countries.