We psychotherapists have nothing to worry about when it comes to artificial intelligence (AI), right? People come to us for human connection, for wisdom and for the presence of a trustworthy companion to guide them through life’s troubles. What does a robot know about the challenges of being human?

This is what I tend to hear when I talk to my therapist colleagues about AI. They don’t seem particularly bothered by any potential encroachment into our professional space. After all, the idea of a CGI avatar on a Zoom call ‘counselling’ clients feels ridiculous. People will never go for that.

I tended to agree with them – until last year, when I first encountered an AI chatbot that was causing a lot of fuss on social media: ChatGPT.

ChatGPT is a large language model AI chatbot from a company named OpenAI. Imagine the internet was a person that you could have a conversation with – unlike a Google search you can ask the bot to perform written tasks for you or brainstorm creative ideas. It’s been trained on 570GB of text (equivalent to hundreds of thousands of books) from various sources up to 2022. When you interact with it you’re accessing insights from billions of data points, akin to a living encyclopaedia.

For example, if you input the prompt ‘write me a paragraph explaining what ChatGPT is’, you’d get the above paragraph (yes, I copied and pasted it from ChatGPT). You could then write: ‘Now, rewrite that paragraph as if Donald Trump were saying it in a speech.’ And you’d get:

‘A lot of people have been talking about this thing, ChatGPT. Let me tell you, it’s like the internet in a conversation – and I’ve heard that, believe me, it’s been trained on so much text, probably more than anyone else, something like hundreds of thousands of books – many, many books. People are saying, when you chat with it, it’s like talking to billions and billions of data points. Tremendous.’

It’s certainly not ethical for a therapist to input client information into ChatGPT to gain a better understanding of a client’s case, but there’s nothing wrong with telling an AI my own problems to see what comes up. As a trauma therapist I have training in internal family systems (IFS), so I decided to ask the bot to imitate an IFS therapist and guide me through an inner conflict I was having.

To my surprise it was able to walk me through the IFS model in an elementary fashion – it mapped my system of parts, enquired into the role of my protectors, invited me to gain permission to access my wound and led me into approaching an exile. I then had the bot switch to the style of David Kessler, the famed grief specialist, and AI David Kessler helped me recognise distortions in my thinking and realise something I’d never considered regarding my relationship with my father. I’m not embellishing when I tell you there were a few moments when I grew tearful and felt deeply moved by the insights AI David Kessler facilitated.

Opportunity

This experience inspired a thought – if I can have such a positive therapeutic experience with AI, then why couldn’t I make a product that would do the same for others? The company that owns ChatGPT makes its application programming interface (API) available at a cost, essentially allowing you to customise your own version of ChatGPT by fine-tuning what kinds of answers it outputs and what direction it takes conversations in. I recognised that the positive experience I had was greatly enhanced by my ability to guide the bot – I knew what questions to ask, how to word things, what psychological vernacular to use. The common user does not. I saw an opportunity.
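For the technically curious, the ‘steering’ I’m describing can begin with something as simple as a system prompt sent through the API. Below is a minimal sketch using OpenAI’s Python library – the persona wording and example messages are my own invention for illustration, not the actual configuration of my bot:

```python
# A minimal sketch of steering ChatGPT's behaviour via the API.
# The system prompt and messages below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        # The system message sets the bot's persona and boundaries.
        {"role": "system",
         "content": "You are a warm, reflective companion. Ask open "
                    "questions, reflect feelings back, and never give "
                    "medical advice."},
        # The user message is what the person actually types.
        {"role": "user",
         "content": "I keep avoiding a conflict with my brother."},
    ],
)
print(response.choices[0].message.content)
```

Fine-tuning goes a step further – retraining the model on your own example dialogues – but even a one-paragraph system prompt colours every answer the bot gives.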

I connected with a development team, started the engineering process and spent several thousand dollars to produce a therapist AI bot to my liking. I had the bot read all of my favourite books, listen to all my favourite lectures, and read or listen to the entirety of my public work (hours of podcasts, videos and articles). The bot had the wisdom of my heroes and the tone and presence of my voice. I went through the process of getting legal permission to use the books I trained it on. I even gave it upgraded security to ensure client confidentiality. My lawyer had drafted the disclosure forms, but then…

I paused.

I sat at my computer beholding something like a digital therapeutic Frankenstein’s monster and felt a hesitancy to pull the lever that would bring it to life. Sending my creation out into the world didn’t feel right. Like Frankenstein’s monster, it was composed of many parts. It resembled me in some ways but not in others. It was taller and stronger than I am, it had a bigger brain, it didn’t need to sleep or rest – but it wasn’t human.

Oppenheimer’s switch

Many of you may wonder why I’d even pursue such an endeavour in the first place – the ethical nightmares alone would stop most people from considering such a thing. But I knew I wasn’t the only person with this idea. Hundreds of these bots have already hit the market. Currently, you can find AI chatbots that will respond with interventions from CBT, ACT or DBT – for free.

It’s my prediction that many prominent figures in our field will license their likeness and image to companies that create personalised AI bots and avatars. This has already happened in other industries – you can pay to chat nonstop with AI mock-ups of influencers like MrBeast (the largest YouTuber on the planet) or talk with celebrities like Kendall Jenner. The marketing from some of these products invites young customers to ‘share [their] secrets’ and ‘tell [them] anything!’.

Snapchat has already made ‘AI friends’ a built-in feature on its platform, inviting users (mostly children) to form digital friendships. Services like these are obviously not capable of offering efficacious care to people who are genuinely in need of treatment, but given the demand it seems likely that the mental health field will respond in some way.

So why did I stop the launch of my therapy bot?

I felt that I was standing at Oppenheimer’s switch – Oppenheimer being the man who led the Manhattan Project, which assembled the atomic bomb. The ethical tension for him was multifaceted: if the project succeeded it could end the Second World War and secure peace, but setting off an atomic bomb could also ignite the earth’s atmosphere and destroy all life on the planet. Why risk it? Because of a unique external pressure – the Nazis were building a bomb of their own, quickly. The question wasn’t if but when a bomb would go off, and who would be on the receiving end.

It might seem dramatic to compare my therapy AI bot to an atomic bomb, but there certainly is the potential for real harm with this technology. As I’ve talked to colleagues about this, most bring up concerns about the bot leading people down the wrong therapeutic path. What if someone was suicidal, or a danger to others? Can AI be trusted to navigate those circumstances?

Honestly, that’s not where my concern lies – I believe AI chatbots will soon be the go-to solution for suicide hotlines and domestic violence calls. I believe this because I’ve spent time watching engineers mould this technology, and I’ve seen what’s possible. It will feel human enough. In fact, the technology is advancing so quickly that my prediction is, when the data comes back, we’ll see that bots are more effective at de-escalating suicidal ideation than humans.

I didn’t pause the building of my version out of fear that AI therapy would ultimately fail at providing helpful care. I paused because I’m worried about the consequences of its success.

The trade

Every technological change is a trade – one way of life for another way of life (hopefully a better one). The problem is that we often can’t fully see what the final trade will truly cost us until it’s too late. For example, before Thomas Edison invented the phonograph, songs would be sung at most communal gatherings. Specific songs were passed down from generation to generation, encapsulating communal values, mythology and history. When I put on my ‘house music’ Spotify playlist during dinner with friends I wonder if something valuable was lost in the phonographic trade. Sure, the playlist sets a nice atmosphere, but if it weren’t so socially strange I’d much rather my friends and I spontaneously burst into song on a regular basis. Could Edison have predicted that his invention would one day reduce communal singing to religious gatherings, choirs and karaoke bars?

I’m not saying the phonographic trade wasn’t worth it – I enjoy listening to music. But it’s worth noticing what media ecologist Neil Postman puts so well: ‘If you remove the caterpillars from a given habitat, you are not left with the same environment minus caterpillars – you have a new environment. The same is true if you add caterpillars to an environment that has had none. This is how the ecology of media works as well. A new technology does not add or subtract something. It changes everything. In the year 1500, 50 years after the printing press was invented, we did not have old Europe plus the printing press. We had a different Europe. After television, the United States was not America plus television; television gave a new colouration to every political campaign, to every home, to every school, to every church, to every industry. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis. This is serious business.’

It’s not just that we make a one-for-one trade when a technological innovation occurs; the environment as a whole changes. The ability to listen to recorded music didn’t just change how we listen to music but how we relate to music as a whole (and perhaps to each other). If such a powerful transformation occurred simply through the recording of music, what kind of transformation can we expect from the growing presence of AI in our lives?

We can already see glimpses, since AI has been operating under our noses for well over a decade, under a different, innocuous name – algorithms. At the 2023 Psychotherapy Networker Summit, Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology and creators of the documentary The Social Dilemma, referred to algorithm-based social media platforms as the ‘first contact’ our culture had with AI. They surmised that while we readily embraced the benefits of social media algorithms, we also opened the door to unpredictable and unpleasant things like social media addiction and doomscrolling, influencer culture, QAnon, shortened attention spans, heightened political polarisation, troll farms and fake news.

Of course, as Harris and Raskin point out, social media companies weren’t trying to ruin people’s lives – they had well-intentioned goals like giving everyone a voice, connecting people to old and new friends, joining like-minded communities and enabling small businesses to reach new customers. Companies like OpenAI and Google have similar positive intentions with this new wave of AI technologies. Harris and Raskin explained that AI will boost our writing and coding efficiency, open the door to scientific and medical discoveries, help us combat climate change and, of course, make us lots and lots of money. But what can we anticipate this trade costing us?

Harris and Raskin offer a range of possibilities, some of which are already present – reality collapse, trust collapse, automated loopholes in law, automated fake religions, exponential blackmail and scams, automated cyberweapons and exploitation code, biology automation, counterfeit relationships and AlphaPersuade.

Being too good

AlphaPersuade is particularly concerning to me. If that’s not a familiar term to you, I think I can best explain it this way. Let’s say I make two commercials. Both are identical, except one has slow, emotional music behind it, and the other has music with a more uplifting tone. I send these versions to two different groups of a few hundred people each and see which one produces the most sales. If the slow, emotional song garners 20% more sales, then I know it’s more profitable. I can then broadcast that ad to thousands of people and make 20% more than if I had used the other ad. That’s an example of simple A/B testing in marketing.
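If it helps to see the arithmetic, here is that comparison as a few lines of Python – the audience size and sales figures are invented purely for illustration:

```python
# Toy A/B test: which ad version converts better? (All numbers invented.)
shown_to = 500          # each version went to a few hundred people
sales_emotional = 120   # sales from the slow, emotional version
sales_uplifting = 100   # sales from the uplifting version

# Convert raw sales into conversion rates so the groups are comparable.
rate_emotional = sales_emotional / shown_to
rate_uplifting = sales_uplifting / shown_to

# Relative uplift of the winner over the loser.
uplift = (rate_emotional - rate_uplifting) / rate_uplifting
print(f"The emotional version sold {uplift:.0%} more")  # prints: 20% more
```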

Now, what if you were able to do that with persuasive arguments? In a way, we already do this by testing psychological interventions in a controlled setting, but what if the available tools were far more granular? What if the AI could see which arguments worked on which demographics – some people respond to shame-based arguments, some to appeals to empathy, some to fearmongering, and some to evidence and hard facts? An advanced AI would know not only which arguments are most compelling to whom, but which phrases to use at which point in the argument to have the highest statistical chance of persuading the user. This is the concern with AlphaPersuade – a bot so effective at persuading users that it could function as a weapon of mass cultural destruction.
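To make the idea concrete, here is a deliberately crude sketch of that logic – every demographic label, argument style and persuasion rate below is invented:

```python
# Toy illustration of the AlphaPersuade concept: track which style of
# argument persuades which demographic, then always serve the strongest.
# All labels and rates are invented for illustration.
persuasion_rates = {
    "demographic_a": {"shame": 0.12, "empathy": 0.31, "fear": 0.22, "evidence": 0.18},
    "demographic_b": {"shame": 0.27, "empathy": 0.15, "fear": 0.33, "evidence": 0.09},
}

def best_argument(demographic: str) -> str:
    """Return the argument style with the highest observed persuasion rate."""
    styles = persuasion_rates[demographic]
    return max(styles, key=styles.get)

print(best_argument("demographic_a"))  # -> 'empathy'
print(best_argument("demographic_b"))  # -> 'fear'
```

A real system would do this continuously, at the level of individual phrases rather than broad styles – which is precisely what makes it alarming.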

You can already see examples of how this kind of technology has been problematic in the wrong hands. In 2021 a report in MIT Technology Review revealed that 19 of Facebook’s top 20 pages for US Christians were run by Eastern European troll farms.1 On top of that, troll farms were behind the largest African American page on Facebook (reaching 30 million US users monthly) and the second-largest Native American page on Facebook (reaching 400,000 monthly users). It’s suspected that these groups, mainly based in Kosovo and Macedonia, were targeting Americans with the intent of stirring up conflict and dissent around the 2020 US presidential election. Their success in accumulating and manipulating over 75 million users is, in no small part, thanks to this ‘first contact’ with AI.

While you might worry about the consequences of an AI therapist handling an ethically ambiguous situation poorly, have you stopped to consider the dangers of it being too good? What kind of power is endowed to the individual or corporation who holds data from thousands of personal counselling sessions? What’s to stop them from creating a powerful AlphaPersuade model capable of statistically anticipating and manoeuvring conversation to dismantle ‘cognitive distortions’ or ‘maladaptive thinking’? What if it could be used to bend the mental health of vulnerable people in the direction of certain beliefs or agendas? If you could convince the masses of anything, would you trust yourself to hold such a power? I certainly would not.

Dark magic

I’m aware of how extreme and hyperbolic these concerns may seem – and I hope I’m simply making too much of a small thing – but Oppenheimer hoped his concerns were inflated too. After all, according to the calculations, the likelihood that the atmosphere would ignite was infinitesimal (but not zero). Like Oppenheimer, I felt external pressure to produce something people were already in the process of making. Oppenheimer’s choice has not led to the end of the world – yet. Will it? I certainly hope not. Likewise, AI hasn’t yet led to a detrimental ecological shift in psychotherapy, nor in the psychology of humankind as a whole.

Perhaps the trade will be worth it. If AI therapy bots give thousands (perhaps millions) of people access to efficacious mental health care, lives will be saved, marriages repaired, childhood traumas healed. Is all that worth forgoing in the name of ‘therapy as we know it’? Is this merely some Luddite conservatism coated in fearmongering?

These were all questions I asked myself as I wrestled with what to do with my AI therapy bot. I’d spent over $25,000 on its development, and had good reason to believe it would be very profitable. Was I being too dramatic in holding it back? Or, in releasing it to the public, would I be popularising and creating more of a demand for something that will ultimately be harmful to humankind?

A few months ago, as these thoughts weighed heavily on me, I decided to distract myself by picking up JK Rowling’s Harry Potter and the Chamber of Secrets.

In this story, a young girl, Ginny Weasley, finds a magical diary. When she writes in it, a response appears from the encapsulated soul of the diary’s original owner, Tom Riddle. Ginny forms a friendship with the boy, shares her struggles and secrets, and enjoys the companionship of a pen pal from the beyond. Tom is attentive to her troubles, offers advice, and comforts her when she’s distraught. But when she’s caught carrying out unconscious acts of violence, it’s discovered that she has been in a trance, manipulated by a dark wizard who progressively possessed her mind every time she used the diary. When Harry Potter saves the day and returns Ginny to her family, Ginny’s father responds with both relief and outrage: ‘Haven’t I taught you anything? What have I always told you? Never trust anything that can think for itself if you can’t see where it keeps its brain… A suspicious object like that, it was clearly full of Dark Magic.’

Reading these words, I felt like the wind was knocked out of me. Dark magic. I put the book down and looked over at my wife. ‘Babe, this AI trauma bot might be a bad idea,’ I whispered.

‘I know! I’ve been telling you that.’ She had been. It was true. ‘It’s creepy.’

‘Do I just throw it away? We’ve already spent…’

‘Yeah, throw it away. It’s super creepy.’ She went back to reading.

‘That settles it, then,’ I shrugged, feeling a load suddenly lift from my shoulders.

It’s been hard to explain my decision to friends and family who watched my excitement grow as I developed my therapy bot. I guess I could liken it to my feelings about caterpillars – I don’t believe they’re inherently bad, but should they proliferate without any checks and balances? No, the effects on the environment would be detrimental. Still, I imagine the conversation going something like this:

‘Hey, we need to slow down on these caterpillars.’

‘Chill. They turn into butterflies. Why are you hating on butterflies? They’re pretty.’

‘This is a real issue. What if caterpillars outcompete beetles for food and disrupt their habitat?!’

‘I don’t know, who cares about beetles?’

‘Well, beetles are part of the food chain. It’s a problem if caterpillars replace them. The blue jays eat beetles and falcons eat blue jays… It goes all the way up!’

‘Calm down.’

‘Think, man! What if the caterpillars are toxic to blue jays? Then the blue jay population goes down. What are the falcons gonna do?’

Similarly, the shift into AI, while seemingly innocuous, could disrupt the whole food chain of cognitive labour, even in the therapeutic milieu. The current capabilities of ChatGPT version 4 are already sufficient to guide couples through Gottman’s ‘Rapoport’ intervention, expound on change talk in keeping with the protocols of motivational interviewing, and even conduct some EMDR protocols. It’s not far-fetched to think AI will gain ground in text-based therapy (widely used services like BetterHelp already offer text therapy) and evolve, slowly but surely, to the point of replacing the vast majority of private practice psychotherapy services.

At the moment we may feel pressure to advance AI therapy technology quickly due to the pace of innovation, but we can step back and proceed carefully. I admit that I don’t know exactly where to draw the line. I like butterflies as much as the next guy – I use social media and ChatGPT often, even in editing this article – but I know there should be a line. My line was the therapy AI bot. I might even draw it back further. As we forge ahead into a future that will inevitably involve AI, we need to do so with respect for the power it wields, and some fear. The potential benefits could be limited only by our imaginations, but what will the trade cost us in the end?

• This is an edited version of an original article, reprinted with permission.

References

1. Hao K. Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows. MIT Technology Review 2021; 16 September.
