
Is AI therapy a good idea? Not on its own

Dr Vicky Crockett talked to Times Radio about the news that growing numbers of people are using AI tools for therapy – is this a good idea?

Last week I was invited onto Hugo Rifkind’s Times Radio show to discuss whether AI chatbots could be a useful form of therapy. It was in response to a recent BBC report showing how a growing number of teens are turning to AI chatbots – in this case DeepSeek, which we discuss here – for advice and guidance. It’s a big and growing subject, and one that isn’t talked about enough.

You can hear the exchange below. The short answer is: not on its own. But don’t rule it out entirely.

AI chatbots are popping up everywhere, offering everything from fitness coaching to legal advice. But what happens when people turn to them for mental health support? In addition to the BBC report about DeepSeek, last year it was reported that millions of people are turning to psychologist chatbots. There is clearly a demand here, and I can see it getting ever more popular in the coming months and years. But it’s not without risk.

The risks: privacy, isolation, and the limits of AI

That millions of young people are using AI purely for direct therapy is worrying. Right now, even the most advanced AI assistant is a far cry from a trained, compassionate therapist. There are some potentially serious concerns here, from privacy to isolation.

Take the security of data with AI therapy. If you’re using an off-the-shelf chatbot, everything you type could be stored, analysed, or even used to train the next version.

Knowing this, would you be comfortable sharing personal details? After all, a lot of therapy does involve sharing intimate information about yourself. It’s essential that users, who may often be vulnerable and have varying levels of understanding of how the technology works, are fully aware of where their information goes. At the moment I’m not sure they are. (I suspect that any AI therapy bots which operate within the EU will fall under the new EU AI Act, and will therefore require strict transparency regarding how user data is handled, as well as warnings about potential risks.)

Then there are concerns around dehumanisation, which we discussed on the Times Radio segment. Fundamentally, therapy works because of human connection. If AI becomes the go-to solution, are we moving towards a world where people rely on machines instead of one another? In an AI-driven therapeutic context, the absence of human empathy and understanding could lead to a diminished quality of care.

Even if the care itself is up to standard, AI therapy still represents time spent interacting with technology rather than with other people. There’s an irony here, since many of our existing mental health challenges across society stem from disconnection. If AI is the only support available, it could reinforce that problem rather than solve it.

The potential: AI as a ‘first step’

But this doesn’t mean we should discount AI entirely. Right now, there’s a huge and well-documented shortage of mental health support. (Not to mention a related but separate ‘loneliness’ epidemic.) Given how good some of these AI assistants now are, I think they could help fill some of the gaps – offering instant, 24/7 guidance, helping people reflect on their thoughts, and signposting them to professional services.

Like a lot of emerging uses of AI, it seems the most effective systems are hybrid human-machine models. I could see AI acting as a valuable triage system, able to identify when a user needs help beyond its capabilities, and directing them to a certified, human therapy service.

There are already examples. Schools across the U.S. are adopting ‘Sonny’, a hybrid chatbot developed by Sonar Mental Health, to address the shortage of counsellors. Sonny provides support to students via text responses, and can be especially valuable to students in low-income and rural areas.

Sonny can support teens during stressful periods like college applications and exams, and notifies authorities if students express intentions of self-harm or violence. Every conversation is monitored by a ‘trained Wellbeing Companion’, and school staff can intervene when necessary, aiming to match students with professional counsellors if needed.

Done well, this kind of system helps schools proactively address students' mental health needs and reduces behaviour infractions by creating a judgment-free space for students to share their concerns.

Finding the right balance

AI can’t solve every social problem, but I’m always looking for ways that it can be part of the solution. And I think there is a middle ground here – but it relies on clear safeguards. That means three things above all:

  1. Human oversight – AI tools should be used alongside trained professionals, not as a replacement. In any industry, AI should augment human intelligence, not seek to render it obsolete. While AI can assist in identifying patterns or providing initial support, the nuanced understanding and empathy of human therapists are irreplaceable.
  2. Secure systems – Sensitive conversations need to be protected, not shared – especially not without explicit user knowledge. As AI is used for more medical purposes, concerns are also emerging around data sharing, triangulation, and ethics, as well as a lack of diversity in the underlying data, which can lead to inequitable services. The emerging introduction of safety marks on AI systems is a crucial way to help consumers know whether a system has been risk assessed and has safety nets in place before they use it for therapy or other medical purposes.
  3. Real-world connections – AI therapy solutions should nudge people towards human support, not keep them locked in a digital loop. Elements of wellbeing that cannot be solved by AI – like exercise, sleep, and socialising – are still vital. Sometimes, that means getting off our devices, and not turning to them for support.

AI in mental health is evolving fast, and it’s a space that is as promising as it is difficult. One thing’s clear: technology can help if it empowers and values people – but not if it replaces them.
