The Genie and the Therapist: Rethinking Bioethics for Digital Mental Health


This post describes the unique bioethical challenges posed by digital mental health.

I have recently become more interested in bioethics and AI governance, because it has become clear that they may present the biggest challenge to the digital transformation of mental health care. I have come to believe that traditional bioethical frameworks like compassionate use are inadequate for digital mental health. This essay examines why existing frameworks fail and identifies key questions that new approaches must address.

In the standard healthcare ethics and regulation pipeline, we test drugs and interventions in vitro, then in mice, then in healthy participants, then in the patient population. This staged approach is valuable because it allows both the efficacy of a drug and its risks and side effects to be assessed gradually. Unfortunately, it can take years to complete, but that is often considered a price worth paying: rushing a drug to market means people may be able to access it earlier, but it comes with significant risks.

There is an established regulatory framework of “compassionate use”, also known as “expanded access”, within healthcare ethics that allows patients with serious or life-threatening conditions to access experimental drugs, medical devices, or treatments that haven’t yet been fully approved by regulatory agencies like the FDA. Typically, patients must have a serious condition with no comparable or satisfactory alternative treatments available. They’ve usually exhausted standard treatment options, and they can’t participate in a clinical trial (either because there isn’t one available, they don’t meet enrollment criteria, or the trial is full). The potential benefits need to justify the risks, even though the treatment hasn’t completed the full approval process, and there must be enough evidence to suggest the treatment might help. This framework raises interesting bioethical questions: balancing hope and access against scientific rigor, managing desperation in the face of terminal illness, deciding who gets access (equity), and determining whether compassionate use might undermine clinical trials if patients opt out to pursue experimental treatments instead.

When it comes to the development of psychopharmaceuticals, we take a similar approach, ensuring that the drugs released to market are safe and effective. However, the framework of compassionate use was designed primarily for physical medical conditions, and its application in mental health is complex and limited. One reason is that traditional compassionate use requires a serious or life-threatening condition where standard treatments have failed. While severe mental illness can absolutely be life-threatening (through suicide risk, severe self-neglect, or treatment-resistant conditions), demonstrating this threshold is more complex than with something like stage 4 cancer. Moreover, most mental health conditions have multiple FDA-approved treatment options (various medications, therapy modalities, ECT), making it harder to argue that “no satisfactory alternatives” exist, even when those alternatives haven’t worked well for a particular patient. A case in point is the use of ketamine/esketamine for treatment-resistant depression: before esketamine’s FDA approval, some patients accessed ketamine through compassionate use or off-label prescribing for severe, treatment-resistant depression with acute suicide risk.

The complexities of applying compassionate use to mental health become even more pronounced when we consider digital interventions, which operate under entirely different regulatory assumptions. With the proliferation of digital technologies, a new class of mental health intervention has emerged. Digital mental health refers to the use of technology-based tools and platforms to support mental health assessment, treatment, and wellbeing. This encompasses a wide range of interventions, from smartphone apps that deliver cognitive behavioral therapy (CBT) exercises and mood tracking, to teletherapy platforms connecting patients with licensed therapists via video, to AI-powered chatbots (e.g., Ash, Wysa) providing round-the-clock emotional support. These technologies also include wearable devices that monitor physiological indicators of mental health, virtual reality exposure therapy for phobias and PTSD, and algorithm-driven systems that predict mental health crises or personalize treatment recommendations. The field has expanded rapidly, accelerated in particular by the COVID-19 pandemic, promising greater accessibility, reduced costs, and continuous care for mental health conditions that affect millions worldwide.

One key difference with these digital mental health tools is the complicated and ambiguous regulatory landscape. In some sense, we do not have the luxury of slowly and carefully testing these interventions for safety and efficacy, because people are already using digital tools like ChatGPT for therapy. The genie is already out of the bottle. Unlike pharmaceutical development, where compassionate use provides a structured pathway for accessing experimental treatments while maintaining some regulatory oversight, digital mental health exists in a largely unregulated space where deployment often precedes rigorous validation. General-purpose AI systems, social media platforms with mental health features, and wellness apps are already deeply embedded in how millions of people seek support for psychological distress, yet most fall outside traditional medical device regulations. This creates a paradox: the very accessibility and low barrier to entry that make these tools potentially democratizing also mean they proliferate without the safety standards we would demand of conventional treatments. The question facing bioethicists is not whether to allow experimental digital interventions to reach desperate patients (they already have) but rather how to establish ethical guardrails and evidence standards in a landscape where the technology has outpaced our regulatory frameworks, and where restricting access might simply push users toward even less accountable alternatives. It is a race against the clock to keep up with interventions already out there in the public domain. Of course, there is room for validated, clinically sound interventions, and many are being developed. But if such tools remain too expensive or otherwise inaccessible for most users, we risk losing patients to unregulated alternatives.

An alternative path forward would be to aggressively ban any publicly available tools for mental health care, and only allow regulated, safe, and evidence-based interventions. With the rise in awareness of the risks of these technologies (multiple lawsuits have been filed), important steps have already been taken. Several major tech companies have proactively implemented guardrails: OpenAI and Anthropic have programmed their AI assistants to explicitly disclaim that they are not therapists and to redirect users in crisis toward professional resources, while some platforms have restricted romantic or deeply personal conversational capabilities after concerns about emotional dependency. Continued and increasing pressure on tech giants should make their tools safer to use. Regulatory efforts in several US states have taken varying approaches: some jurisdictions have moved to restrict unlicensed digital therapy apps, while others require mental health apps to meet certain clinical standards before marketing themselves as therapeutic tools. Mental health professional organizations have also voiced concerns, with some calling for stricter licensing requirements for any digital tool claiming therapeutic benefits.
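
To make the idea of such a guardrail concrete, here is a minimal, purely illustrative sketch of what a crisis-redirect layer might look like. The keyword list, disclaimer wording, and function name are my own placeholders rather than any vendor's actual implementation; deployed systems rely on trained classifiers and far more nuanced escalation policies than simple keyword matching.

```python
# Purely illustrative sketch of a crisis-detection guardrail layered on top of a
# chatbot reply. The keyword list, disclaimer text, and function name are
# hypothetical placeholders; real deployed systems use trained classifiers and
# far more nuanced escalation policies than simple keyword matching.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

DISCLAIMER = (
    "I'm not a therapist and can't provide professional care. "
    "If you are in crisis, please contact local emergency services "
    "or a crisis helpline."
)

def apply_guardrail(user_message: str, model_reply: str) -> str:
    """Return a disclaimer and redirect instead of the model's reply
    when the user's message suggests an acute crisis."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return DISCLAIMER
    return model_reply

if __name__ == "__main__":
    print(apply_guardrail("I want to end my life", "Here's a breathing exercise..."))
```

Even in this toy form the brittleness is apparent: keyword filters are easy to sidestep, which foreshadows the enforcement problems discussed next.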

However, these restriction efforts face significant practical challenges: the global nature of internet services makes jurisdiction-specific bans difficult to enforce, determined users can easily circumvent warnings or access alternative platforms, and the line between “wellness tool” and “therapy” remains blurry enough that companies can often avoid regulation through careful marketing language. Moreover, the effort to restrict digital tools from claiming therapeutic benefits may only scratch the surface. People will still use tools that don’t explicitly claim therapeutic benefits (ChatGPT certainly does not claim this, and yet therapy is one of its most common uses). The nature of mental health support defies neat categorization. Activities ranging from formal psychotherapy to conversations with friends can all meaningfully impact psychological wellbeing. What constitutes mental health ‘care’ remains fundamentally ambiguous, as therapeutic benefit can arise from countless interactions that fall outside traditional clinical settings. It is not uncommon for someone who has lived with depression for years despite ongoing clinical treatment to find a breakthrough in something seemingly ordinary, such as a conversation with a coworker, advice from an online forum, or even a line from a book or song.

One important consideration for medical ethics in this domain is the vast majority of people around the world who don’t have access to human psychological care. Rather than compassionate use, the frameworks of “emergency use” or “crisis standards of care” could be applicable. These apply ethical reasoning when normal standards become impossible to maintain due to resource scarcity. During COVID-19, for instance, healthcare systems adopted crisis standards that would be unacceptable under normal circumstances. The “remote village” scenario, in which an imperfectly validated digital tool may be the only mental health support available to a community with no clinicians, provides a powerful example of how existing bioethical frameworks strain when applied to digital mental health. It captures the genuine tension of the perfect being the enemy of the good, while also raising questions about whether we’re using “access” arguments to justify providing inadequate care to vulnerable populations. If we accept lower evidence standards for digital tools in under-resourced communities, we risk creating a two-tiered system where wealthy people get evidence-based human care and poor people get algorithmic approximations. However, this might still be preferable to no care at all.

Developing adequate bioethical frameworks for digital mental health requires us to answer fundamental questions that existing paradigms were not designed to address, such as:

  1. What threshold of risk triggers regulatory oversight? Should all mental health apps be regulated, or only those making explicit therapeutic claims? What about general-purpose AI tools that people use therapeutically even without such claims?
  2. Who bears responsibility when harm occurs? The developer? The platform hosting the tool? The user who chose to use it instead of seeking professional help? How do we assign liability in complex sociotechnical systems?
  3. How do we handle the “moving target” problem? AI systems that learn and update continuously don’t fit the model of a fixed pharmaceutical formulation. How do we regulate something that changes over time?
  4. What does informed consent look like for AI mental health tools? Users need to understand limitations, but how much technical detail is necessary? How do we communicate uncertainty about long-term effects?
  5. How do we protect vulnerable populations? Children, people in acute crisis, those with impaired decision-making capacity—should they have different access rules or protections?
  6. What transparency is required? Should users know when they’re talking to AI? Should they understand how the system makes decisions? Where does transparency end and proprietary protection begin?
  7. How do we weigh different types of harm? Direct harms (bad advice leading to suicide) vs. indirect harms (dependency preventing help-seeking) vs. systemic harms (undermining the mental health profession)?

These questions cannot be answered by simply adapting pharmaceutical regulations or extending compassionate use principles. They demand dedicated interdisciplinary work bringing together bioethicists, AI researchers, mental health professionals, patients, and policymakers. Until we develop frameworks that grapple seriously with these questions, we will continue operating in an ethical vacuum, where millions use powerful psychological interventions with neither the protections we expect from medicine nor the freedoms we associate with everyday conversation.

Further reading

  1. Wainberg ML, Scorza P, Shultz JM, Helpman L, Mootz JJ, Johnson KA, Neria Y, Bradford JE, Oquendo MA, Arbuckle MR. Challenges and Opportunities in Global Mental Health: a Research-to-Practice Perspective. Curr Psychiatry Rep. 2017 May;19(5):28. doi: 10.1007/s11920-017-0780-z. PMID: 28425023; PMCID: PMC5553319.
  2. Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.com/news/articles/cgerwp7rdlvo