Mind launches AI and Mental Health Commission
Friday, 20 February 2026
Mind
Mind has announced that it will be launching an AI and Mental Health Commission. The commission will run for a year and will address concerns about AI arising from what the charity is seeing on the frontline. It also aims to support the nation in navigating one of the most significant technological shifts of our time.
The commission will explore the potential of AI to drive improvements in care and access to information. It will also examine how to manage the risks and prevent harm where these tools are being used as a substitute for therapy, crisis support, or clinical guidance - often by people who are already vulnerable, distressed, or struggling to access timely help elsewhere.
Mind is seeing a growing number of people seeking help after receiving inappropriate, misleading or even dangerous advice from AI platforms. Some are forming emotionally dependent or quasi-therapeutic relationships with AI tools that are not designed, regulated or clinically aligned to provide mental health support. Others are acting on advice that directly contradicts established best practice, sometimes with serious consequences.
To address this, the commission will bring together people with lived experience of mental health problems, clinicians, technologists, ethicists and policymakers to develop practical recommendations rooted in evidence, compassion and realism. It will release regular reports sharing findings, insights and recommendations for further work.
A huge amount of work is already being done by regulators, researchers and governments to inform their approach to health AI. What is missing, however, and what Mind is uniquely positioned to provide, is the lived experience of mental health problems placed at the heart of our understanding of AI.
Dr Sarah Hughes, Chief Executive of Mind, said:
“We believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support, and strengthen public services. But that potential will only be realised if it is developed and deployed responsibly, with safeguards proportionate to the risks.
“We are already seeing examples of AI tools offering dangerously incorrect guidance on mental health, including advice that could prevent people from seeking treatment, reinforce stigma or discrimination and, in the worst cases, put lives at risk. People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence.
“Mind is a trusted source of information for those with mental illness. At this moment that landscape is shifting radically, and it’s vital we use our insight and platform to shape how AI impacts mental health provision. Our commission will examine the risks, opportunities and safeguards needed as AI becomes more deeply embedded in everyday life. We want to ensure that innovation does not come at the expense of people’s wellbeing, and that those of us with lived experience of mental health problems are at the heart of shaping the future of digital support.”