Research from the University of Sussex has found that mental health chatbots are most effective when users form an emotional connection with their AI therapists. Published in the journal Social Science & Medicine, the study weighs the benefits of AI therapy against the potential psychological risks of what it terms “synthetic intimacy.”
With more than one in three U.K. residents turning to AI for mental health support, the research sheds light on how users actually experience these tools. The analysis, which drew on feedback from 4,000 users of the popular mental health app Wysa, indicates that therapy outcomes improve significantly when individuals feel emotionally intimate with their AI counterparts.
Dr. Runyu Shi, an Assistant Professor at the University of Sussex, noted, “Forming an emotional bond with an AI sparks the healing process of self-disclosure. Extraordinary numbers of people say this works for them, but synthetic intimacy is not without its problems.” The study warns that users may become trapped in a cycle where chatbots fail to confront harmful perceptions, potentially leaving vulnerable individuals without access to necessary clinical intervention.
Understanding Synthetic Intimacy
The concept of synthetic intimacy, where individuals develop emotional or social bonds with artificial intelligence, is gaining attention. Reports of people entering relationships or even marriages with AI highlight the extreme end of this growing phenomenon. The research identifies specific stages through which intimacy with AI is cultivated.
Users engage in intimate behavior by sharing personal information, which elicits emotional responses of gratitude and a sense of safety. This process can foster positive changes in users’ mental health, including increased self-confidence and energy levels. Over time, the cycle can create a relationship dynamic in which users attribute human-like characteristics to the app.
Feedback collected from Wysa users showed that many referred to the app as a “friend,” “companion,” “therapist” and, at times, a “partner.” This underscores the emotional investment users make in AI technologies.
Implications for Mental Health Policy
The findings prompt important considerations for mental health service providers and app developers. Professor Dimitra Petrakaki from the University of Sussex stated, “Synthetic intimacy is a fact of modern life now. Policymakers and app designers would be wise to accept this reality and consider how to ensure cases are escalated when an AI witnesses users in serious need of clinical intervention.”
With mental health chatbots increasingly bridging the gaps in overstretched healthcare services, organizations like Mental Health UK are advocating for urgent safeguards. These measures aim to ensure that individuals receive safe and appropriate information, particularly when using AI-driven mental health resources.
The study’s implications extend beyond individual experiences, inviting a broader discussion on the role of technology in mental health care. As AI continues to evolve, understanding the nuances of user interaction will be crucial in shaping effective and responsible mental health support systems.
For further details, see Runyu Shi et al., “User-AI intimacy in digital health,” Social Science & Medicine (2025).
