ELIAS: An Agentic Interview Service

With ELIAS, we aim to simplify the interview process and reduce bias by using a friendly, human-like chatbot to engage with participants, complemented by an intelligent analysis tool that helps sort and make sense of the resulting conversations.
Interviews are a standard method for gathering information in the social and health sciences, but they require a significant amount of time and effort. Interviews can also be influenced by the personal views or behavior of the person conducting the interview, which can lead to variations in how questions are asked or how answers are interpreted. This can make it hard to get consistent and honest answers, especially when people are asked about sensitive or personal topics. Some people might feel uncomfortable talking face-to-face or worry about being judged or identified.
To address these problems, this project aims to develop an agentic interview service called ELIAS. ELIAS is a computer-based interview tool that uses a friendly, human-like digital assistant to ask questions in a caring and unbiased way. The system also includes analysis software that helps sort and interpret participants' answers, taking over the time-consuming task of organizing the information. Researchers still oversee the interviews and how the answers are processed, but the system makes it much faster and easier to manage a large number of interviews.
The answers are anonymized, so no personal details are shared. They nevertheless remain valuable: researchers can use this information to write scientific papers or to help train advanced computer programs built on large language models, such as chatbots or voice assistants. For example, these digital assistants could draw on the lessons learned from the interviews to help people prevent, manage, or treat certain conditions.
References
Budig, T., Nißen, M. K., & Kowatsch, T. (2025). Towards the Embodied Conversational Interview Agentic Service ELIAS: Development and Evaluation of a First Prototype. In LLM4Good: The 1st Workshop on Sustainable and Trustworthy Large Language Models for Personalization, co-located with the 33rd ACM International Conference on User Modeling, Adaptation and Personalization (UMAP 2025), New York, USA, June 16–19, 2025. https://doi.org/10.1145/3708319.3733810
Research Team
Tobias Budig, Dr. Marcia Nißen, Prof. Dr. Janna Hastings, Prof. Dr. Tobias Kowatsch
Advisors
Prof. Noah Bubenhofer, Faculty of Humanities & Prof. Beth Singler, Faculty of Theology and Religious Studies, University of Zurich
Runtime
September 2025 – December 2026
Funding
This project is funded by the UZH Digital Society Initiative in the form of a DIZH Infrastructure/Lab-Project.