Study Confirms Mental Health Access a Problem, AI Not to Blame

One of the ways the mainstream media wrings its hands over AI is people using it for mental health support. It turns out there’s a deeper problem.

If there is one thing the media loves doing on a somewhat regular basis, it’s pushing moral panics over AI (Artificial Intelligence) taking over everything. Among the wild and obviously exaggerated claims is that AI is going to cause humanity to go extinct, among other things. Of course, the mainstream media, being the infinitely gullible organizations that they are, just eat up every tall tale from tech bros claiming that AI is going to be the next industrial revolution.

The more likely scenario is that AI, specifically generative AI Large Language Models (LLMs), is exhibiting all the signs of being in a bubble, so the developers creating these LLMs are doing everything they can to keep the AI hype train going for as long as possible. This despite the technology being little more than glorified auto-complete bots. Part of that hype is trying to shove AI into anything and everything, whether or not it even remotely makes sense. One of the most notorious examples of this, if not the most notorious, is Logitech’s AI mouse. Why should a mouse have an AI button? Heck if anyone can figure that one out.

Now, this isn’t to say that AI doesn’t have its purpose. Can it be used to tweak the overall sound of a vocal sample? Sure. Can it be used to create bot opponents to make games fun? That has been happening for decades now. Can AI be used to speed up the animation process, such as making cloth flap more convincingly in the wind? Absolutely. Can AI be used to translate from one language to another? Yeah, I can see that. Can AI be used to help tweak grammar? I can see that too. AI does have its uses, but can AI replace journalists and lawyers? Probably not. Can AI replace CEOs and other decision makers? Doubtful.

Still, that’s not going to stop the mainstream media from treating AI as this technological miracle that can do anything. One angle the mainstream media loves to wring its hands over is the use of AI to replace psychologists in helping people with their mental health problems. That claim should be immediately met with skepticism. Anyone who claims that their LLM is going to replace actual humans when it comes to the diagnosis and treatment of mental health issues should be treated like a snake oil salesman right out of the gate.

In one instance, the CBC did a story earlier this year about AI replacing psychologists and how people are using it in droves:

University of Waterloo student Rastin Rassoli’s own struggles with his mental health inspired him to combine two of his passions — computer science and psychology — to help more young people access care and support. At an early co-op placement, for instance, he helped develop an app called Joyi, which delivers bite-sized psychology lessons to help teens manage stress and anxiety.

A new idea emerged, however, as he heard first-hand accounts about difficulties fellow students had accessing care on campus and began learning about the subclinical stage of mental health disorders in his classes. At the same time, he saw stunning leaps being made with large language model artificial intelligence systems like ChatGPT.

Those elements coalesced in Rassoli’s latest creation: Doro, an app that aims to coach students in tackling their mental health concerns early on, before symptoms escalate.

As young people struggle to find care, a new wave of AI mental health and wellness apps have emerged of late, offering support at one’s fingertips, 24/7. Yet experts warn that an app can’t replace conventional treatment, especially in serious or emergency situations.

There’s definite potential in the technology, says Dr. Michael Cheng from Ottawa children’s hospital CHEO. A chatbot could be used to easily point a young person toward specific mental health resources or information, for instance.

Still, he’s concerned about issues of safety, privacy and confidentiality, use of patient data and an overall lack of regulation at the moment.

“It’s a bit of a Wild West right now,” said the child and adolescent psychiatrist. “Anyone can just [release] an app and say it does whatever.”

I was honestly thinking of doing a write-up about this at the time, pointing out that if people are using these apps, it likely points to a problem with access to mental health services. However, there were other stories to cover, and of all the ways mainstream media outlets like the CBC screw up stories related to technology, this one wound up being low on the list. It wasn’t as egregious as many of the other mistakes, so I let it slide.

So, why am I bringing this up now? Well, funnily enough, the same organization ended up indirectly correcting itself. Months later, someone apparently finally decided to look into whether or not mental health access really is a problem. As it turns out, it is. From, ironically, the CBC:

About 2.5 million people — nearly the populations of Manitoba and Saskatchewan combined — aren’t getting adequate care for their mental health, according to a new report.

The Canadian Mental Health Association (CMHA), which released the report on Tuesday, called it a map of the landscape of mental health, addictions and substance use in the country.

“We are not doing well,” said Sarah Kennell, the group’s national director of public policy, in an interview. “For many Canadians, mental health is in fact grim.”

The report looked at 24 measures, from how much is being spent on care, to suicide rates and levels of discrimination against people with mental health concerns, with breakdowns by province and territory, where available.

On average, provinces and territories spend about 6.3 per cent of their overall health-care budgets on mental health, the report says, roughly half the 12 per cent that CMHA recommends. That’s a fraction compared to a country like France, which also has a universal system and spends 15 per cent on mental health care.

Honestly, the earlier story about people creating apps for mental health diagnosis should have been a clue that this might be a mental health access issue. Yet, that point seemingly went over the heads of the authors of the first story at the time. While this feels like a case of a stopped clock being right twice a day, it is good to see mainstream media outlets arrive at the right answer eventually.

Of course, it’s worth pointing out that people creating or using AI to diagnose and treat mental health issues isn’t a uniquely Canadian situation. Back in June of this year, a similar story cropped up in the US. From Techdirt:

It’s just like adults to be constantly diagnosing the wrong thing in trying to “save the children.” Over the last couple of years there’s been a mostly nonsense moral panic claiming that the teen mental health crisis must be due to social media. Of course, as we’ve detailed repeatedly, the actual research on this does not support that claim at all.

Instead, the evidence suggests that there is a ton of complexity happening here and no one factor. That said, two potentially big factors contributing to the teen mental health crisis are (1) the mental health challenges that their parents are facing, and (2) the lack of available help and resources for both kids and parents to deal with mental health issues.

When you combine that, it should be of little surprise that desperate teens are turning to AI for mental health support. That’s discussed in an excellent new article in The Mercury News’ Mosaic Journalism Program, which helps high school students learn how to do professional-level journalism.

For many teenagers, digital tools such as programs that use artificial intelligence, or AI, have become a go-to option for emotional support. As they learn to navigate and cope in a world where mental health care demands are high, AI is an easy and inexpensive choice.

Now, I know that some people’s immediate response is to be horrified by this, and it’s right to be concerned. But, given the situation teens find themselves in, this is not all that surprising.

Teens don’t have access to real mental health help. On our most recent podcast, we spoke to an expert in raising kids in a digital age, Devorah Heitner, who mentioned that making real, professional mental health support available in every high school would be so much more helpful than something silly like a “Surgeon General’s Warning” on social media.

The bottom line is this: there will always be people struggling with mental health issues. A big question that should be asked is this: if people are suffering from mental health issues, is help actually available? If the answer is “no”, then that is a problem that needs addressing. If people are desperately turning to AI for help dealing with mental health issues, that’s not necessarily an AI problem, but rather a systemic/accessibility problem.

If organizations are finding that funding for mental health services is lacking, then maybe it’s time to have a serious discussion about what an appropriate level of spending for mental health services is, which levels of government are responsible for it, and how that money should be distributed so that access is no longer a problem.

So, returning to the question of AI, I think the ideal reaction to someone developing an AI LLM to diagnose mental health issues should be more like, “Well, I don’t see the use because I can just see someone if I’m running into problems. So, I think it’s kind of redundant.” That, I think, is the world we should be striving for.

Drew Wilson on Mastodon, Twitter and Facebook.
