If you’ve ever pictured “AI in therapy” as a glowing robot hovering between us like a third participant—great news: there is no robot. There’s also no secret earpiece, no HAL 9000 voice, and no moment where I swivel my chair and ask a chatbot to interpret your childhood.
Also: before I became a therapist, I spent a long time in computer science—so yes, I like tools, and yes, I’m allergic to hype.
If I use AI at all during a session, it’s not to “do therapy on you.” It’s not a substitute therapist, it doesn’t diagnose you, and it doesn’t understand you better than you understand yourself. Most sessions don’t involve AI at all. When it does show up, it’s because it genuinely serves the work we’re already doing together.
In real life, it’s closer to a fast, sometimes wildly overconfident brainstorming tool. I think of it as very smart and very eager—the kind of helper that’s read a lot and wants to please. That can be useful for generating possibilities, but it also means it can sound confident while being wrong, and in the wrong context it can even drift into advice that would be unsafe to follow. That’s why, if I use it at all, it’s always under supervision: I keep the input minimal and non-identifying, and we treat what it produces as a rough draft to review critically, never as an authority. It’s never “driving.” If it shows up in a session, it’s sitting in the passenger seat with a clipboard while the two of us figure out the route we’re going to take together.
What it looks like in a session (usually over video)
Most sessions are just two humans talking, as therapy has always been.
But occasionally a situation is complex enough that it helps to step back and make sure we’re seeing the whole landscape: practical pressures, relationship dynamics, body/stress factors, the “story” you’re carrying, and what you actually have control over right now.
In those moments I might say something like:
“Would it be helpful if we used a tool to generate a quick list of common angles people run into with situations like this—then we’ll sort it together and keep what fits and toss what doesn’t?”
That’s the whole move. No sci-fi. No mystery. It’s basically structured brainstorming so we don’t miss something obvious.
The house rules (a.k.a. the part that makes this ethical and unexciting)
If you’re thinking, “Okay, but what about privacy?”—good. That’s the right question.
Here’s how I handle it:
- You’re in charge. If you don’t want AI used in session, you can say no. No awkwardness, no pressure.
- I don’t enter identifying details. No names, no addresses, no workplace names, no unique personal specifics. If I can’t phrase it generally, I don’t use it.
- We treat the output like a rough draft. Sometimes it’s useful. Sometimes it’s wrong. Sometimes it’s biased. We evaluate it together—we don’t treat it as an authority.
- It doesn’t do diagnosis, risk assessment, or clinical decision-making. Those responsibilities stay where they belong: with me, with you, and with the actual human relationship across the screen.
Why use it at all?
Because humans are great—and also imperfect.
When something carries emotion, urgency, or high stakes, it’s easy for anyone (including a therapist) to focus hard on the most intense piece and accidentally overlook another important one.
A quick “generate possible angles” moment can help us:
- notice a category we haven’t talked about yet (sleep, isolation, workload, health constraints, etc.),
- see where competing pressures are colliding (values vs obligations),
- or create options when everything feels stuck in one narrow track.
Then we do what therapy is actually for: we decide what matters, what doesn’t, and what you want to do next.
What if you find AI creepy or just… not your thing?
Completely valid.
Some people like tools. Some people hate them. Some people are fine as long as it’s transparent and optional. I’m comfortable with all of those.
If you want therapy that never involves AI in-session, that’s fine. We’ll do it the classic way: two humans, careful attention, and the slow work of change.
Bottom line
Therapy is not a technology problem. It’s a human problem: patterns, emotions, history, relationships, meaning, and choice.
If AI shows up in my work, it’s in a limited, optional role—used openly, with your consent, and only when it genuinely helps us stay clear and practical.
If you’re curious about how this might fit for you—or if you already know you want zero AI involved—feel free to bring that up in a consultation.
Next step
If this resonates for you, request a free 30-minute phone consultation (see the Contact page for what to include).