Webinar: How to deliver safe and effective agentic health interventions?
Agentic systems are quickly moving beyond “chat” toward interventions that can plan, personalize, monitor, and take actions across time. In Precision Digital Therapeutics (PDTx), that creates a unique bar: it is not enough to be generally safe. These systems must be clinically grounded, privacy-preserving by design, and reliably effective in real-world, longitudinal use.
What we will focus on
This session is moderated by Dr. Samantha Weber and designed to help DTx builders, clinical teams, and regulators align on what “safe and effective” means when an agent is delivering health interventions. We will discuss DTx-relevant dimensions such as:
- Working alliance and engagement: how an agent builds trust appropriately without fostering unhealthy dependence, manipulation, or boundary violations
- Risk detection and escalation: handling suicidal ideation and self-harm signals, minimizing friction, and routing to the right local resources when needed (including known failure modes in today’s chatbots)
- Clinical boundaries and scope control: preventing overreach into diagnosis, medication changes, or crisis management beyond intended use
- Sensitive health data handling: consent, retention, deletion, secondary use, and security posture as first-order clinical safety issues
- Longitudinal safety: consistency across time, robustness to prompt manipulation, and monitoring drift as models and tools evolve
What we want to achieve together
This webinar also kicks off a focused workstream for Precision DTx-quality agentic interventions:
- Identify the relevant and critical dimensions for safe and effective agentic health interventions
- Prioritize the dimensions
- Assign work groups to each relevant dimension
- Technical implementation of an agentic test suite that reflects all prioritized dimensions
Who should attend?
- Precision DTx product and clinical leads
- Safety teams building agentic systems
- Quality, regulatory, and post-market surveillance stakeholders
- Data protection and security owners responsible for health-grade deployments
- Researchers with interest in agentic health interventions and backgrounds in computer, health, and management science.
When and where?
Friday, 16 January 2026, 10:30 – 11:30 am CET, via Zoom
Suggested pre-reading and keynotes
- Keynote by Dr. Weber on From Understanding to Dialogue: AI-based Models for Depression Evaluation, 18 November 2025, Webinar
- Weber et al (2025) Using a fine-tuned large language model for symptom-based depression evaluation, npj Digital Medicine 8, 598, 10.1038/s41746-025-01982-8
- Ostermann et al (2025) If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck, npj Digital Medicine 8, 741, 10.1038/s41746-025-02175-z
- Flathers et al (2025) Contextualizing Clinical Benchmarks: A Tripartite Approach to Evaluating LLM-Based Tools in Mental Health Settings, Accepted version
- Huang et al (2024) TRUSTLLM: trustworthiness in large language models, ICML’24: Proceedings of the 41st International Conference on Machine Learning Article No.: 813, Pages 20166 – 202 10.5555/3692070.3692883, https://github.com/HowieHwong/TrustLLM, https://trustgen.github.io/
- Hart (2025) Chatbots are struggling with suicide hotline numbers, The Verge, https://www.theverge.com/report/841610/ai-chatbot-suicide-safety-failure
