The ICAIL 2021 Doctoral Consortium aims to promote the exchange of ideas among PhD researchers in the area of Artificial Intelligence and Law, and to provide them with an opportunity to interact with and receive feedback from leading scholars and experts in the field. Specifically, the Consortium seeks to provide opportunities for PhD students to:
- obtain fruitful feedback and advice on their research projects;
- meet experts from different backgrounds working on topics related to the AI & Law and Legal Information Systems fields;
- have a face-to-face mentoring discussion on the topic and methodology of the PhD with an international senior scholar;
- discuss concerns about research, supervision, the job market, and other career-related issues.
To be eligible for the Consortium, a candidate must be a current doctoral student at a recognised university. Ideally, the candidate should have at least 8-12 months of work remaining before expected completion. Participants in the Doctoral Consortium must register for and attend the main conference. The PhD student should be the sole author of the submission. Note that submissions to the Doctoral Consortium are entirely separate from any papers that students may have submitted to the main conference.
The accepted thesis and research descriptions will be presented to an interested audience and discussed during the ICAIL 2021 conference.
To bridge the gap between the two disciplines, the Amsterdam UMC (AUMC) and the Vrije Universiteit Amsterdam (VU) are joining forces to create a joint position between the Cancer Center Amsterdam of the Amsterdam UMC and the Department of Computer Science at the Vrije Universiteit Amsterdam.
The position will focus on the use and re-use of structured and semi-structured clinical data (as opposed to medical imaging or bioinformatics) to deploy and improve machine learning techniques for developing clinically relevant models, mainly (but not exclusively) targeting predictive modeling and the personalization of healthcare. Besides purely data-driven approaches, combining these with the vast amount of domain knowledge available in the medical domain is also in scope.
The Human-Centered Computing group currently offers a 5-year PhD position. You will work under the daily supervision of Professor Judith Masthoff. The position includes research as well as teaching, where the teaching commitments account for 30% of employment time.
Emotional support plays an important role in keeping people motivated. Previously, we have investigated automated adaptive emotional support for individuals. In this PhD project, this will be extended to emotional support for collaborative groups.
You will investigate how a computer using Artificial Intelligence (AI) can automatically provide emotional support to individual group members as well as the group as a whole, adapted to individual and group characteristics (for example, personalities, affective states, effort, performance). You will build on previous research on emotional support by computers / affective computing, e-coaching, computer-supported collaborative work/learning, personalization, and user/group/context modelling.
You will use a mixture of qualitative (e.g. interviews, focus groups), quantitative (e.g. empirical studies), design, AI (e.g. user modelling, personalization), and prototyping methods. You will have some freedom in selecting the domain of collaboration and how the support will be provided. Your research should result in high-quality scientific publications as well as real-world impact.
You will be involved in supporting the preparation of Bachelor's and Master's courses, offered by the Department of Information and Computing Sciences. Furthermore, you will teach such courses and supervise student theses.
We invite cutting-edge theoretical and empirical research from across the globe on the normative implications of AI for journalism and journalism research, and on the democratic, ethical, and fundamental-rights implications of the use of AI and data analytics in the media. We specifically invite young scholars to submit their work. We welcome contributors from a broad range of disciplines, including journalism, history, communication and media studies, law, philosophy, STS, and computer science, who are interested in the ethical and fundamental-rights questions raised by the use of AI and algorithms in the media industry. We invite papers on, for example but not exclusively:
- theoretical and empirical contributions on the role of AI and algorithms in journalism and the democratic role of journalism;
- investigations into how the integration of AI and algorithms changes the political economy of media markets, creates new or removes old institutional dependencies, and alters the role of external parties such as tech companies;
- studies into how the use of AI and algorithms in the media affects the ability of citizens to benefit from their right to freedom of expression, to form and hold opinions, and to make informed political choices;
- how journalistic and public values such as diversity, objectivity, and relevance can be translated and preserved in algorithmic design and routines;
- governance and regulation of the use of data, AI, and algorithms in the media;
- the role of fundamental rights, law, and ethics in digital media, and potential areas of regulation;
- comparative normative/legal work across different European and non-European countries.
The position is embedded in the current Explainable and Reliable Artificial Intelligence (ERAI) group of DKE. The group consists of associate and assistant professors, postdoctoral researchers, PhD candidates, and master's/bachelor's students. The ERAI group works closely together on a day-to-day basis to exchange knowledge, ideas, and research advancements. We conduct both fundamental and applied research, with a focus on explainable Artificial Intelligence.
The PhD candidate will also benefit from a strong industry and research network (both national and international), such as the PI's involvement with a Marie Curie European International Training Network on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI, https://nl4xai.eu/), where 11 other early-stage researchers are working on explanations.
The full-time position is offered for a duration of four years, with yearly evaluations.