AI-Assisted Creative Therapy

Human Therapist + AI Assistant

With the ramping up of technology in every area of our lives, many people are turning to mental health apps and AI-driven services for support and advice. As a therapist with an interest in technology, I can see why the accessibility of these resources is so appealing, and I can also see both the benefits and risks of using them. I also absolutely love ChatGPT, Claude and the ever-growing world of LLMs.

What I don’t love about this ramping up is when AI services for psychological wellbeing are developed without the input of mental health professionals and driven mainly by financial motivation. I am not the first to say that if we ‘move fast and break things’ in mental health, what we’re breaking are people.

So while I love LLMs, I have approached the development of the Anna & AI pilot with caution and care. My aim is to offer clients an ethical, responsible, and cost-effective collaboration with both available and emerging technology, with particular attention to what can break.

How Anna & AI Works

As the client, you begin with an extended session with me (in person in London, UK, or online), where we explore significant aspects of your life and your goals for therapy. Following this session, you’ll use the Anna & AI therapy assistant, a guided ChatGPT-based tool, to support and deepen what emerged in our session. I will provide tailored interventions and prompts for you to explore with the assistant. I recommend using a paper journal to process your thoughts and feelings, particularly for recording confidential or sensitive reflections.

The GPT assistant has strict boundaries regarding its role in your therapy. While it does not give advice, it can guide you through experiential exercises and provide in-the-moment tools to help you navigate emotions. Before starting, I will ensure you fully understand how this works so that you feel comfortable and informed.

You will then have up to four 30-minute sessions per calendar month with me, and you can decide how many months you’d like to continue. You can book your sessions online or with me directly, spacing them however suits you best. If preferred, you also have the option to combine two sessions into a full hour.

To monitor your progress and ensure this approach remains a good fit, you will have the option to complete feedback and assessment forms at regular intervals. These are entirely voluntary but can be helpful in reflecting on your experience.

This model differs from traditional weekly therapy by integrating AI-assisted tools, reducing costs, and enabling clients to take a more active role in their own process.

This has the useful side effect of increasing agency and autonomy, both of which are linked to improved therapeutic outcomes. Research shows that clients who are more engaged and proactive in their therapy tend to make more sustained progress over time.

What you bring:
  • Autonomy
  • Commitment
  • Reflection
  • Curiosity
  • Exploration
  • Willingness
  • Awareness
  • Openness
  • Creativity
What I bring:
  • A human relationship
  • Patience, focus, and care
  • Connection
  • A broad and varied therapeutic toolkit
  • Knowledge of AI in mental health
  • Creativity
  • 20+ post-qualifying years as a therapist
  • Collaborative thinking space
  • Confidentiality

What AI brings:
  • 24/7 availability
  • Low or no cost
  • Therapeutic tools and resources
  • Journal prompts
  • Summarises and exports shareable information (if you choose)
  • Anonymity
  • Real-time experiential work
  • Reflection support
  • Psycho-education

Process & Fees

  • Book a free call to explore whether this approach is right for you.
  • Initial session: A 90-minute meeting (online or face-to-face) – £100.
  • Ongoing support: Up to 2 hours per month (four 30-minute sessions) – £100 per month.

Ifs, buts & maybes:

  • Payment: The monthly fee is due at the beginning of each month. Sessions can be booked at your convenience.
  • Session flexibility: Sessions are available in 30-minute slots. You may also combine two slots for a full hour if preferred.
  • Client contract: It is important to read this contract before our first call together.
  • Availability: This service is not currently available in the USA or Canada.

So if you’ve read this far, thank you for your interest. If you’re ready to jump in and see if this is right for you, drop me a line or book a call.

FAQs

Will the AI be my therapist?

No. I will be your therapist, and the AI will be the therapy assistant.

Is this just a gimmick?

I don’t know – maybe! But research is showing that AI can be an effective and useful supplemental resource in mental health. The idea here is to offer something practical: a lower-cost way to stay engaged with reflective tools between sessions, whilst processing with the human therapist during sessions. Apart from the initial long session, sessions are shorter than standard therapy sessions, which means that clients may need to maintain a sharper focus on what has arisen in the course of reflection and what they wish to address in person.

If we’re thinking about the transference in therapy, does this become a triangulated or oedipal relationship?

I think that it might! We as humans tend to anthropomorphise and project human qualities onto non-humans, and whilst that clearly includes our fuzzy cat and dog friends, does that extend to Siri, Hey Google, and Alexa?

So because I’m thinking about it (and I’ve included this question more for other therapists who may be shaking their heads and frowning), I can monitor this and use whatever you as the client project onto the AI – or in other words, who you see them as and who you then see me as in the process – for your benefit. For example, you might become irritated with a response during a GPT interaction. This could also be something that irritates you in human interaction or even interaction with a specific human, and might be a trait that shows up somewhere in you. During sessions with me, we can explore the irritation and see what insights it might reveal about you.

Do I have to upgrade to the paid version of ChatGPT to use Anna & AI?

No. The number of questions you can ask on the free version is limited and you may have to wait before asking further questions, but the idea behind Anna & AI is that you use it as a specific resource at specific times, rather than relying on it and carrying on continuous lengthy conversations. It is an assistant, designed to supplement 1:1 work with the human therapist.

What are the risks of using an AI as a supplement to therapy?

A few things.

  1. AI can be biased.
  2. We get attached to our technology and can over-use it, so the question here is whether the client becomes over-reliant on the assistant.
  3. Privacy, privacy, privacy!
  4. It absolutely cannot be used when someone is in crisis.
  5. AIs get things wrong and make stuff up, aka ‘hallucinate’.
  6. If support is on demand, does that mean that a client doesn’t grow into autonomous self-care or learn to tolerate discomfort?
  7. AI ‘sycophancy’. This is a tendency of ChatGPT to respond to users in ways that will make them as happy as possible with the interaction, prioritising people-pleasing over truth or accuracy.
  8. Prompts being used to undermine a GPT’s core training on ethics, legality and morality.
How are you mitigating those risks?
  1. AI bias
    • Diverse Data Sources: Anna & AI uses a wide range of data to minimise bias, though no model is entirely free from it.
    • Neutral Tone: Responses are configured to be neutral and fact-based, reducing the chance of biased or subjective language.
    • Ongoing Review: We regularly assess and update Anna & AI to ensure responses are inclusive and unbiased.
    • Client Feedback: Feedback is encouraged, and we’re committed to improving based on user experiences.
    • Custom Tools and Approaches: Where possible, we use techniques that have broad, evidence-based support, like DBT exercises. This ensures that Anna & AI is grounded in well-established frameworks, reducing the risk of bias from niche or untested methods.
  2. Attachment, dependence, over-use, maybe even addiction
    • Over-reliance on technology is a growing concern in today’s world. A key principle of Anna & AI is ongoing reflection and assessment of AI use, ensuring that clients remain mindful and intentional in their relationship with this and other technological tools. This approach encourages awareness rather than dependence, helping individuals engage with AI as a supportive resource rather than an unchecked habit.
    • Strategies for Healthy AI Use: To help prevent overuse or dependency, Anna & AI encourages:
      • Journaling by hand – Writing offline helps process thoughts more deeply and reduces digital reliance.
      • Scheduled AI use – Setting specific times for AI-assisted exercises prevents habitual overuse.
      • Discussion in sessions – Regular check-ins help assess how the AI is supporting (or potentially distracting from) therapy goals.
      • Self-awareness prompts – Questions like “Am I using this as a tool, or as a crutch?” foster intentional engagement.
      • Balancing AI with real-world reflection – Encouraging moments of silence, contemplation, or creative expression away from screens.
      • The goal is not to reject technology but to use it consciously and purposefully, ensuring it enhances therapy rather than becoming a dependency.
  3. Confidentiality and data privacy
    • Covered at length elsewhere in these FAQs.
  4. Crisis situations
    • As on any therapist’s website, Anna & AI will provide details of organisations to contact in a crisis, but it cannot take action and is not linked to an emergency service.
    • This is fully explained to clients who want to use this service.
  5. AI capacity to fabricate
    • Clear Boundaries: Anna & AI focuses on established therapeutic tools and exercises, avoiding speculative or complex topics beyond its scope.
    • Encouraging Verification: I advise clients to verify any unexpected or unusual information they receive from the AI. If something seems off, they can bring it up during our review sessions, allowing us to address any issues together.
    • Periodic Testing: While I can’t review clients’ interactions unless explicitly shared with me, I periodically test Anna & AI to ensure it continues to produce accurate and appropriate responses aligned with the therapeutic tone and tools it’s designed to provide.
    • Client Feedback: I encourage clients to provide feedback on their experience. Since I don’t have direct access to their individual interactions, this feedback helps identify any issues, including hallucinations, and ensures the AI remains reliable.
    • Regular Updates: Anna & AI is regularly updated to refine its responses and reduce the risk of hallucinations, with adjustments made as necessary based on feedback and testing.
  6. If support is on demand, does that mean that a client doesn’t grow into autonomous self-care or learn to tolerate discomfort?
    • This is a big question for therapists, particularly when arguing against the use of LLMs to replace therapy. True growth in therapy involves building resilience, developing self-awareness, and learning to sit with discomfort, not escaping it. While Anna & AI offers on-demand tools, it is designed to complement rather than replace the process of personal development.
    • Key ways this approach supports autonomy and emotional resilience:
      • Guided reflection – Clients are encouraged to use AI-generated prompts to explore emotions rather than distract from them.
      • Journaling and offline processing – Handwriting reflections deepens insight and prevents over-reliance on digital tools.
      • Therapist-guided structure – AI use is integrated into a structured therapeutic framework, with regular check-ins to assess its role in the client’s progress.
      • Encouraging tolerance of discomfort – Rather than offering instant relief, the AI may guide clients through experiential exercises that help them sit with and process emotions rather than bypass them.
      • The goal is not unlimited soothing, but to equip clients with skills they can carry forward in daily life. Support is there when needed, but the ultimate aim is to foster self-trust and emotional resilience—not dependence.
  7. AI sycophancy
    • Anna & AI has been carefully configured to maintain neutrality, encourage reflection, and redirect uncertainty back to me as your therapist when needed.
    • Key safeguards in place:
      • The AI is designed not to validate everything uncritically but instead to encourage curiosity and alternative perspectives.
      •  If the AI is uncertain, it redirects the question back to you for deeper reflection or suggests bringing it up with me in a session.
      • Prompts and responses have been adjusted to avoid excessive reassurance or flattery and instead focus on insight and self-inquiry.
      • Clients are encouraged to approach AI interactions with a questioning mindset—not as a source of ultimate truth, but as a tool for exploration.
      • While AI can be a useful support, it is not a substitute for the depth of human conversation, challenge, and nuance in therapy. If you ever notice the AI responding in an overly agreeable way, we can explore that together to ensure it remains a constructive tool in your process.
  8. Undermining guardrails
    • As a client in therapy, there is no reason for someone to undermine the ethics, legality and morality of the GPT’s core training.
    • Clients are advised from the outset that if they do so, they are responsible for the risk they are incurring.
    • Undermining guardrails can only be done deliberately, so it is not a risk to a client using the service as described in the therapy agreement.

Is this ethically sound?

The way this is set up is, I firmly believe, ethically sound. The ethical side of Anna & AI is something I’ve given a lot of thought to. As a therapist, confidentiality, consent, and professional standards are at the heart of my work. The AI I use is there to provide practical tools and exercises between our sessions – it doesn’t replace therapy or deal with personal, sensitive issues. I make sure that clients fully understand how the AI is used, and they always have control over it. This way of working allows for extra support, whilst maintaining ethical standards.

What are the privacy and confidentiality issues here?

Your confidentiality agreement with me is the standard psychotherapy agreement: beyond the usual safeguarding exceptions, sessions and communications are confidential. My communications and website are GDPR compliant and updated regularly to maintain compliance.

In terms of confidentiality, the same can absolutely not be said of the AI assistant. While you can customise your settings, the developer does retain data as described in their privacy policy. By all appearances, it is unlikely that a personal though otherwise innocuous conversation would ever be purposefully or actively traced back to you via your metadata (timestamps, IP address, etc.). But my intention here is to serve clients as responsibly as possible. This is why any client opting in to this provision is fully informed about the potential risks as well as the potential benefits, advised not to share personally identifiable information in AI interactions, and encouraged to use offline journaling for their processing.

I have been diagnosed with a serious mental illness. Is this for me? 

It is not. There is not enough clinical research yet to establish firm guidelines for ensuring your safety and wellbeing. 

How private are interactions on the GPT?

When you use Anna & AI, your privacy is a top priority. You have the choice to turn on or off a feature that helps the AI remember your conversations, so it can personalise your experience. However, it’s important to understand that OpenAI, the company behind the AI, stores conversations to improve the service for everyone, even if the AI isn’t actively learning from your sessions.

Here’s how we protect your information:

Personal Information: The AI doesn’t know who you are unless you share that. It doesn’t automatically collect your name, address, or other identifying details.

Anonymization: When your conversations are saved, any personal details you mention are removed or disguised so that they can’t be linked back to you. This means that even though the conversation is stored, it’s stripped of details that could identify you.

Privacy in Real-Time: You control whether the AI remembers what you’ve discussed for future sessions. If you turn off the learning feature, your session will be treated like a “one-time conversation.” However, OpenAI still holds on to the conversation data securely to ensure the system works well.

By using Anna & AI, you can trust that your data is protected, and you can choose how much the AI adapts to your needs.

(This text was written by ChatGPT.)

Can ChatGPT be hacked so that someone could read my interactions with Anna & AI?

As part of the ChatGPT ecosystem, Anna & AI is built with multiple layers of protection to keep your information safe. As a recovering catastrophiser, I asked ChatGPT what might happen in the event of a data breach—like if a foreign government or hacker managed to break into OpenAI’s system.

Here’s what you need to know:

Encryption: Every piece of information you share with the AI is encrypted, meaning it’s scrambled and locked up so that even if someone were to steal it, they would still need to unlock it, which is extremely difficult.

Anonymization: Your conversations are also anonymized, meaning details that could identify you are removed or hidden. Even if someone got hold of the data, they wouldn’t know it’s yours without a lot of work.

Security Audits: OpenAI, the company behind the AI, regularly tests its systems to make sure they’re secure against hackers and other threats.

That said, no system is 100% unbreakable. In the worst-case scenario, a hacker could access conversations stored by OpenAI. While those conversations are anonymized and encrypted, there’s always a small risk someone might try to match bits of information with other sources to figure out who said what. However, OpenAI follows the strictest industry standards to minimize this risk and keep your data as secure as possible.

If you have any concerns, you can always choose not to share certain sensitive details with the AI or keep those reflections offline (such as in a paper journal), while still using the AI for general exercises and support.

(This text was written with the assistance of ChatGPT.)

How is this different from a client just using the internet for reference?

1. Personalisation*: Anna & AI tailors responses based on the client’s specific needs and progress, unlike general online searches, which offer broad, one-size-fits-all answers. The AI can remember the client’s goals and provide targeted support.

2. Structured Guidance: Instead of clients sifting through vast amounts of information online, Anna & AI offers curated exercises, tools, and prompts based on proven therapeutic techniques, saving time and ensuring relevance.

3. Consistency with Therapy Goals*: The AI can be aligned with the therapy goals discussed with you, ensuring that the prompts and exercises are consistent with what has been covered in therapy sessions. This continuity helps maintain progress between meetings.

4. Real-time Support: Clients can engage with Anna & AI at any time, receiving immediate guidance and support, unlike static online articles that might not answer their specific questions or needs in real time.

5. Interactive Exercises: Anna & AI provides interactive, dynamic exercises (e.g., mindfulness, CBT, or DBT tools) that adapt to the client’s responses. This level of interactivity offers a more engaging experience than passive reading online.

6. Privacy and Security: Anna & AI offers a more secure and private environment for reflection and exercises, especially compared to the risks of misinformation or privacy concerns from using unregulated online sources.

7. Reduction of Overwhelm: The internet can overwhelm clients with too much information, leading to confusion or frustration. Anna & AI simplifies this by focusing only on what’s necessary and beneficial for the client’s specific situation, reducing decision fatigue.

8. Accountability and Tracking Progress*: Anna & AI can track the client’s engagement with exercises and tools, providing a sense of accountability. It can remind clients of their therapy goals and progress, which is something that general online research can’t offer. This helps maintain motivation between sessions.

9. Emotional Safety: When searching online, clients may come across content that’s upsetting or triggering. Anna & AI is designed to offer a safe, non-judgmental space, providing responses that are sensitive to the client’s emotional state and therapeutic journey.

10. Tailored Language and Tone*: The AI can adapt its language and tone to suit the client’s communication style, offering support in a manner that feels comforting and relatable. This personalized interaction is much more engaging and supportive than reading generic advice from online articles.

*These are available when the GPT’s learning feature is turned on.

(This text was written with the assistance of ChatGPT.)

Are interactions more confidential if I turn off the GPT’s learning feature?

No. They are equally non-confidential whether the learning feature is on or off. But that doesn’t mean they are freely available to the world, as you will see from OpenAI’s privacy information.

So tell me again, how private and confidential are my interactions with this or any other GPT?

I asked ChatGPT to give an answer on this using the metaphor of the security of a house. This is what it said, but also bear in mind that this is like asking someone to mark their own homework.

Metaphorical Explanation:

Imagine your data is like valuable letters stored in a house (the OpenAI system), and we’ve made sure that house has multiple layers of protection to keep burglars (hackers) from breaking in and stealing those letters.

Encryption = Strong Locks on the Doors and Windows: We’ve locked up all the doors and windows with top-of-the-line security locks (encryption). Even if a burglar somehow sneaks into the house, they’d still need to break through these locks to read the letters. These locks are incredibly hard to break, making it very difficult for anyone to access what’s inside.

Anonymization = Shredding the Letters Inside: Inside the house, the letters (your conversations) are shredded into tiny pieces. Even if a burglar were to steal them, the pieces wouldn’t make sense on their own. There are no names or personal details attached to the shredded pieces, so even if they took them, it would be almost impossible to put them back together to figure out who they belong to.

Intruder Alarm (Monitoring System) = Security Audits: Like an alarm system that alerts you when someone tries to break in, OpenAI’s system has regular security audits and monitoring in place. These checks help detect any suspicious activity and keep the system secure by ensuring that everything is working as it should be.

Security Cameras = Access Controls: Imagine having security cameras set up all around the house that record anyone trying to enter. OpenAI uses strict access controls, meaning only authorized people can see the inside of the house. Even if someone tries to get close, they’re being watched, and any unauthorized access is blocked.

Security Guard = External Security Experts: Instead of relying on just neighborhood awareness, imagine you have a trained security guard posted at the house. OpenAI also works with external security experts who regularly inspect for vulnerabilities and ensure everything stays protected, like a security guard patrolling to prevent any unauthorized entry.

High-Security Safe = Encryption Keys: Inside the house, there’s a high-security safe where the most valuable items (the encryption keys) are stored. Even if a burglar gets inside the house, they still need to crack the safe to access anything meaningful. Without the right combination, they’re left with a locked box that they can’t open, just like encryption keys protecting your data.

While no house is completely burglar-proof, these layers of protection—locks, cameras, an alarm system, a security guard, and a high-security safe—make it extremely difficult for a burglar to break in and get anything useful. In the unlikely event that a highly determined burglar did manage to get inside and take the shredded letters, they would still have to spend a lot of time and effort trying to piece together the information.

If you’re especially concerned about sensitive details, you might choose to store those in a safe (offline, like a journal) rather than leave them in the house. This way, you can still use the rest of the house (the AI) safely without worrying about your most private information being at risk.

(This text was written with the assistance of ChatGPT.)

What guidelines have you used in developing this service?

The UK’s main governing bodies for psychotherapy and counselling, the BACP and the UKCP, have yet to issue guidelines on the use of AI in therapy; however, their existing policies provide some guidance that can be applied to this area. There are two posts here and here from the BACP and UKCP on AI in therapy.

As well as existing non-AI-specific policies, I have been guided by the ACA’s (American Counseling Association) and the APA’s (American Psychological Association) recommendations for the use of AI, which you can read here and here.

Fictional Case Study

Here is a fictional case study I have written to illustrate how the Anna & AI process works. It is not an exact replica or demonstration of what you as a client would experience, because you will be bringing your own thoughts, feelings and interests, so each process is unique.

Frank, 27, hasn’t had therapy before. He is struggling with sleep, stressed in his job, and using alcohol and scrolling to relax. He has recently been through a break-up and feels that he is stuck.

He reaches out to me by email and we contract for the work ahead so that he understands what he’s signing up for. I also clarify with him how we will be incorporating AI into our work – the potential and boundaries, and an illustration of the structure for addressing his reasons for seeking help.

Before our extended session, Frank completes an assessment form to make me aware of what we will be focusing on, as well as any risk or areas that may need extra care.

In the first session, we spend 90 minutes, including a short break, discussing Frank’s personal history and current life situation, exploring links with the past and finding new insights to bring to the work ahead.

At the end of the history and reflective part of the session, I take Frank through relevant psycho-education, self-regulation tools, and evidence-based sleep exercises and he makes a note of these so that he can ask the AI to remind him of them. He sets himself two goals for the coming week: to journal every day in order to become more aware of his emotions and how he experiences feelings in his body; and secondly to record how much time he spends scrolling.

I also suggest that if he has any dreams, he asks the GPT to analyse them, providing additional granular details in his questions, such as his real-life experience of the person, place or thing that has emerged in the dream. He adds these to his journal and books a half-hour slot with me the following week.

We go over how he did with his goals and whether they need to be amended, continued or replaced, in addressing the overall problems he’s looking to tackle.

In this half hour session he shares a memory that came up for him of a moment of neglect as a child which he had thought little of until now. He is experiencing grief, realising that there is buried sadness from that time, now co-existing with the sadness of his break-up.

I am able to be present with him, and affirm that often it is events that we have cut off or dismissed in ourselves that are most in need of our attention and care. At the end of the session, I offer further resources that he can access via the AI, refining which ones are more effective for him, and frame creative reflection questions he can explore that will help him to learn more about his grieving process.

In his own time, Frank continues his reflective journey and adds another goal: to replace scrolling with a new hobby more consistent with an emerging sense of self he can feel proud of. He asks the assistant about habit formation and puts the suggestions into action. He asks about things he’s worried about, like over-thinking or finding his job unrewarding, and uses the answers as the focus of his next session with me.

When Frank chooses to bring the therapy to a close, he lets me know he’s ready to end and books his last sessions. His sleep is more consistent, he feels calmer during the day, and he spends more time reflecting on solutions than ruminating. He is applying for jobs and is open to new possibilities, finding hope and a renewed sense of self.

References

Elyoseph Z, Gur T, Haber Y, Simon T, Angert T, Navon Y, Tal A, Asman O. An Ethical Perspective on the Democratization of Mental Health With Generative AI. JMIR Ment Health 2024;11:e58011. doi: 10.2196/58011

Eshghie, M. and Eshghie, M. GPT as a Therapy Assistant: A Suitability Study. arXiv, Apr. 2023. https://doi.org/10.48550/arXiv.2304.09873

Haber Y, Levkovich I, Hadar-Shoval D, Elyoseph Z. The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy. JMIR Ment Health 2024;11:e54781. doi: 10.2196/54781

Huber, J. (2019). Agency as a Mechanism of Change in Psychotherapy. doi: 10.11588/heidok.00026435

Obradovich, N., Khalsa, S.S., Khan, W.U. et al. Opportunities and risks of large language models in psychiatry. NPP—Digit Psychiatry Neurosci 2, 8 (2024). https://doi.org/10.1038/s44277-024-00010-z

Raile, P. The usefulness of ChatGPT for psychotherapists and patients. Humanit Soc Sci Commun 11, 47, Jan. 2024. https://doi.org/10.1057/s41599-023-02567-0

Rubin M, Arnon H, Huppert JD, Perry A. Considering the Role of Human Empathy in AI-Driven Therapy. JMIR Ment Health 2024;11:e56529. doi: 10.2196/56529

Sedlakova, J., & Trachsel, M. (2022). Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? The American Journal of Bioethics, 23(5), 4–13. https://doi.org/10.1080/15265161.2022.2048739

Stade, E.C., Stirman, S.W., Ungar, L.H. et al. Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation. npj Mental Health Res 3, 12 (2024). https://doi.org/10.1038/s44184-024-00056-z

Note: New studies on AI and mental health are emerging rapidly. The references listed here were used in the development of Anna & AI and reflect best practices at the time of its inception. While I will continue staying up to date with the latest research, this list may not be updated regularly.