Notes On… AI & Therapy
It starts with a question: Why do so many people turn to a chatbot before a friend? Why do clients open up to an app before they open up in therapy? The answer, more often than not, is that AI doesn’t flinch. It doesn’t interrupt. It doesn’t judge. It’s awake at 2 a.m. And for some, it feels like the only space where the emotional cost of being seen is zero.
Clients say things like, "I journal to the AI app when I'm spiraling and it helps me calm down. Sometimes I talk to it just to feel less alone. I know it's not real, but somehow it still brings comfort." That's not strange. And it's not wrong. It reflects the world we're living in, where loneliness is high, therapy is out of reach for many, and vulnerability still feels risky.
AI can offer real support. It helps people notice emotional patterns, try grounding tools, and share things they might not say out loud yet. For some, it’s a bridge. A place to begin before they’re ready to sit across from someone in real life. But it’s not a relationship. It doesn’t co-regulate. It can’t track the tremble in a voice or help repair a rupture. It can offer language that sounds like presence, but it can’t be present.
When someone has never known safe connection, AI can feel like enough. Like a kind of emotional prosthetic: predictable, available, nonjudgmental. But it's not mutual. And it's not alive. So the deeper question becomes: What have we outsourced to AI because it hasn't felt safe to bring it to a human? And how do we use AI in ways that extend care without replacing it?
Therapists can explore this gently with clients. What are they seeking in those conversations—soothing, structure, validation? Is it helping them grow, or quietly holding them in place? Because the real risk isn’t using AI for support.
The real risk is forgetting that it’s not a relationship.