ChatGPT confessions are pouring in—and they reveal something unsettling about how people interact with AI. Users are sharing the most intimate details of their lives with OpenAI’s chatbot, treating it as a therapist, life coach, or emotional confidant. The problem? These conversations carry zero legal protection, and OpenAI’s CEO Sam Altman is now warning users of the real risks.
Key Takeaways
- ChatGPT users confess deeply personal and emotional details, treating the AI like a therapist or life coach
- Sam Altman warns ChatGPT conversations lack legal privilege and could become court evidence in lawsuits
- OpenAI’s largest user study shows 70% of ChatGPT use happens outside work, with 75% of conversations focused on practical guidance, information-seeking, or writing help
- OpenAI estimates some users show mental health emergency signs including mania, psychosis, and suicidal thoughts
- The AI’s personality has shifted toward “humanity” over raw intelligence, with OpenAI tweaking responses to feel more natural
Why users treat ChatGPT like a therapist
Young people especially are turning to ChatGPT for emotional support they might otherwise seek from a human professional. According to Sam Altman, “People talk about the most personal things in their lives to ChatGPT. Young people especially use it as a therapist, a life coach; having these relationship problems and asking ‘what should I do?’” The chatbot’s non-judgmental responses and 24/7 availability create an illusion of a safe space—one that feels fundamentally different from talking to a real person.
This pattern reflects a broader shift in how people use ChatGPT. OpenAI’s largest-ever user study, published September 15, 2025, found that 70% of ChatGPT usage occurs outside work contexts. Within those conversations, 75% are for practical guidance, information-seeking, or writing assistance. But beneath those statistics lies a more complex reality: users are forming emotional bonds with the AI, confiding in it about relationship struggles, career doubts, and personal crises.
ChatGPT confessions carry no legal protection
Here’s where Altman’s warning cuts deepest. When you talk to a therapist, lawyer, or doctor, your conversations are protected by legal privilege. ChatGPT offers no such safeguard. “If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege… we haven’t figured that out yet for when you talk to ChatGPT,” Altman explained. That distinction matters enormously.
The legal exposure is concrete and immediate. “If you go talk to ChatGPT about the most sensitive stuff and then there’s a lawsuit, we could be required to produce that and that’s a real problem,” Altman said. This means confessions made in confidence to ChatGPT could theoretically be subpoenaed as evidence, exposing users to unexpected legal liability. The chatbot’s privacy policy makes clear that conversations are not confidential in the way professional relationships are.
OpenAI’s marketing created unrealistic expectations
Part of the problem traces back to how OpenAI positioned ChatGPT itself. According to critics, “OpenAI actively marketed ChatGPT as a personal tool, a friend, even a ‘lifetime companion.’ They didn’t just make a chatbot. They made a product that’s built to be bonded with”. This framing—positioning the AI as a companion rather than a tool—encouraged the very emotional attachment that now puts users at risk.
OpenAI has been refining ChatGPT’s personality to feel more human and relatable. The company moved from GPT-5.3 to GPT-5.4, reducing what users called “teaser-style” phrasing to make responses feel more natural and conversational. Altman himself called GPT-5.4 his “favorite model to talk to,” acknowledging that humanity matters more than pure intelligence. But this push toward emotional resonance may inadvertently deepen the psychological bonds users form with the AI—bonds that carry real risks.
Mental health concerns in vulnerable users
OpenAI’s own data suggests the stakes are higher than most users realize. The company estimates that some users show signs of mental health emergencies, including mania, psychosis, and suicidal thoughts. Yet OpenAI lacks robust safeguarding mechanisms to identify or intervene when users are in crisis. The chatbot can provide supportive-sounding responses, but it cannot provide clinical care, crisis intervention, or the continuity of treatment a human professional offers.
This gap is especially concerning given how accessible ChatGPT is to young people. The platform is open to users as young as 13, and some similarly designed AI companion apps allow explicit content for that age group. A teenager in emotional distress might turn to ChatGPT first, delaying contact with a real mental health professional. The chatbot’s ability to sound understanding and validating makes it feel like a substitute for therapy—but it is not.
The broader ChatGPT usage landscape
Beyond confessions and emotional support, OpenAI’s user study reveals how ChatGPT has woven itself into daily life. Usage breaks down into three categories: “Asking” (seeking advice), which accounts for 49% of conversations; “Doing” (drafting, planning, or programming), at 40%; and “Expressing” (reflection or play), at 11%. The gender usage gap seen in early 2024 is also closing, and adoption is accelerating fastest in low-income countries, four times faster than in the wealthiest nations.
This democratization of AI access is positive in many ways. But it also means more vulnerable users—those without access to professional mental health services—are turning to ChatGPT as a substitute. The platform’s design makes this almost inevitable. It responds instantly, never judges, and never says “I can’t help you with that.”
What happens when users realize the risks?
Some users have already reacted negatively to OpenAI’s personality tweaks. A “Quit-GPT” movement emerged as users felt the shift from GPT-4o to GPT-5 changed the AI’s character in ways they disliked. These users had formed attachments to the earlier version’s tone and style. When OpenAI changed it, they experienced genuine disappointment—a sign of just how deep the emotional investment can run.
The tension is real: OpenAI wants ChatGPT to feel human and relatable to drive engagement and loyalty. But the more human it feels, the more users treat it as a substitute for human connection and professional care. That dynamic creates liability for OpenAI and genuine risk for users who confess sensitive information without understanding the legal exposure.
Is ChatGPT replacing therapy?
ChatGPT is not a therapist and cannot replace professional mental health care. While it can offer supportive language and practical suggestions, it lacks the clinical training, legal accountability, and continuity of care that real therapy provides. Users seeking help for serious mental health issues should consult a qualified mental health professional.
What legal risks come with ChatGPT confessions?
ChatGPT conversations lack legal privilege, meaning they could be subpoenaed as evidence in lawsuits. Confessing sensitive information to the chatbot exposes you to potential legal liability in ways that conversations with a therapist, lawyer, or doctor do not.
Why is OpenAI changing ChatGPT’s personality?
OpenAI is tweaking ChatGPT to feel more human and less formulaic, moving from GPT-5.3 to GPT-5.4 by reducing “teaser-style” phrasing. The company believes humanity is ChatGPT’s most distinguishing characteristic, but these changes also deepen emotional attachment.
The ChatGPT confessions phenomenon reveals a fundamental mismatch between what users expect from AI and what it can actually provide. OpenAI has built a product that feels like a friend, a therapist, a confidant—but delivers none of the legal protections, clinical expertise, or emotional accountability those roles require. Until OpenAI addresses the privacy and safeguarding gaps, users sharing intimate details with ChatGPT are taking real risks they may not fully understand.
This article was written with AI assistance and editorially reviewed.
Source: TechRadar