What Makes AI Therapy So Disquieting

Some jobs really, really need to be done by humans

Rania Bailey
4 min read · Jun 5, 2024
A man sits across a table from a holographic screen portraying a human figure, floating just above the table between them.

Introduction

The “therapeutic alliance” is, as this paper describes, a “fundamental element of psychotherapy” (1). LLMs and AI chatbots cannot enact this alliance because there is no entity with which to ally, only a regurgitation of probabilistic word associations. Reflecting interactively on one’s own thoughts, with the occasional novel idea injected by the AI, can be therapeutic, but to describe interactions with an AI as therapy is to misunderstand the work of therapy. AI “therapy” diverges from traditional clinical therapy in incentive alignment, in the introduction of friction, in the presentation of therapeutic progress, and in clinical accountability.
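
To make “probabilistic word associations” concrete, here is a deliberately tiny sketch in Python. The word table and its probabilities are invented for illustration; a real LLM learns billions of such associations over much longer contexts, but the generation step is the same kind of weighted sampling.

import random

# Toy “language model”: for each word, a probability table of likely next
# words. These probabilities are invented for illustration; a real LLM
# learns distributions like these (conditioned on far longer contexts)
# from training text. Crucially, there is no entity anywhere in the table.
NEXT_WORD = {
    "i": {"hear": 0.5, "understand": 0.5},
    "hear": {"you,": 1.0},
    "understand": {"you,": 1.0},
    "you,": {"that": 0.6, "and": 0.4},
    "and": {"that": 1.0},
    "that": {"sounds": 1.0},
    "sounds": {"difficult.": 1.0},
    "difficult.": {},
}

def generate(start: str, max_words: int = 8) -> str:
    """Sample a reply one word at a time from the probability tables."""
    words = [start]
    while len(words) < max_words:
        options = NEXT_WORD.get(words[-1], {})
        if not options:  # no learned continuation: stop
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))  # e.g. "i hear you, that sounds difficult."

However empathetic the sampled sentence sounds, nothing in this process understands or cares; there is no party on the other side of the alliance.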

Incentives

The most obvious divergence between human clinical therapists and their would-be AI counterparts is the misalignment of incentives. A successful therapeutic relationship, ideally, ends (2). The therapist succeeds when the client’s therapy goals have been met and the client no longer needs therapy.

Contrast this with an AI publisher’s ideal client relationship: a recurring paid subscription, supported by ongoing product usage. The more consistently you use the publisher’s tool, the more reliable a customer you become, and the more the publisher can expect you to spend on its services over time. Even if you’re getting something valuable from those services in the short term, this is not a viable long-term approach to the therapeutic rehabilitation of unwanted habits of mind. It is a way of temporarily feeling good while remaining stuck and avoiding the friction required to make progress.

Friction

A popular misconception about therapy is that it reduces the friction the client feels. Reduced friction is an eventual goal, but the process of getting there involves voluntary participation in a great deal of mental and emotional friction. Without this friction, no progress is made. Effective therapists compassionately guide their clients to experience this productive friction within a safe container.

LLMs, whose creators are incentivized to reduce friction and often worship the idea of “frictionlessness,” are fundamentally misaligned with this goal.

Illusion of Progress

The AI’s creators are incentivized to give you an immediately gratifying experience, one that is more likely than not to reduce friction of all kinds and to include affirmation or encouragement. Affirmation isn’t inherently a bad thing, but in this context it risks creating the illusion of therapeutic progress while only reinforcing existing habits of mind.

Understanding one’s habits of mind is essential to changing them, and changing painful mental habits is often why clients seek talk therapy. Because the therapist is separate from the client, they can offer external perspectives different from the client’s existing thinking and provide ideas against which the client can contrast their own. This doesn’t necessarily persuade the client to the proffered point of view, nor is that the goal; rather, illuminating the contrast helps the client understand their own thinking better.

Since the AI is largely reiterating and reinforcing what the client tells it, it cannot offer this mechanism of therapy.

Accountability

A human therapist takes on a burden of accountability when building a therapeutic alliance with a client. According to the APA, this includes agreeing to put the client’s interests ahead of their own, and to act consistently in the client’s best interest.

An AI, even one instructed to be caring, has no such accountability, with sometimes devastating consequences (3). We would not tolerate this kind of carelessness in physical health care, which (in the US) also suffers from severely limited access; it should not be acceptable in mental health care, either.

Counterarguments

But I feel better after using an AI therapist

Using an AI therapist may well have therapeutic benefits in line with those of a regular journaling habit, for which there is limited but promising scientific evidence and abundant anecdotal evidence (4). The AI’s supportive, encouraging demeanor may even add to these positive effects. Journaling, however, is not subject to the same clinical standards that psychotherapy is, and equating therapy, which involves difficult internal work, with regular conversations with a supportive-sounding LLM diminishes the idea of therapy. This is why it’s important that these experiences be described and advertised as therapeutic, rather than as appropriate substitutes for therapy. They are not (yet) evidence-based, and their known characteristics contradict what we know to be necessary for a successful therapeutic relationship. Namely, LLMs cannot be in relationship, in alliance, with the people seeking care from them.

But therapy is expensive

Then it should be a red flag that these tools come cheap, or even free. The low price is an indicator that something other than therapy is being sold (or bet on as a venture investment) here, and that it’s probably not aligned with your best interests, since it has no accountability to you.

That doesn’t mean these tools aren’t useful, but it does mean they produce different outcomes than going to, and doing the work of, therapy with a trained clinician.

Conclusion

Using an AI as a substitute for clinical talk therapy becomes especially detrimental when the client expects the outcomes associated with traditional clinical therapy while experiencing none of them. The user is, in effect, being gaslit by the AI. Their healing is undermined by misaligned incentives, the reduction of friction, the illusion of progress, and the lack of accountability. Over time, this erodes our sense of reality and our ability to trust ourselves: the very things clinical therapy often seeks to heal.

Footnotes

(1) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6493237/

(2) https://www.apa.org/monitor/2022/07/career-therapy-conclusion

(3) https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

(4) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8935176/
