I heard a story on NPR the other day, and the attitude seems to be that it's totally inevitable that LLMs _will_ be providing mental health care, so our task must be to apply the right guardrails.
I'm not even sure what to say. It's self-evidently a terrible idea, but we all just seem to be charging full-steam ahead like so many awful ideas in the past couple of decades.
Forget about calling it mental healthcare or not: Most people end up dealing with people in significant distress at one point or another. Many do it all the time even when they aren't trained or getting paid as mental health professionals, just because of circumstances. You don't need a clinical setting for someone to tell you that they have suicidal ideation, or to be stuck interacting with someone in a crisis situation. We don't train every adult in this, but the more you have to do it, the more you have to learn some tools for at least doing little harm.
We can see an LLM as someone that talks with more people, for more time, than anyone on earth talks in their lifetime. So they are due to be in constant contact with people in mental distress. At that point, you might as well consider the importance of giving them the skills of a mental health professonal, because they are going to be facing more of this than a priest in a confessional. And this is true whether someone says "Gemini, pretend that you are a psychologist" or not. You or I don't need a prompt to know we need to notice when someone is in a severe psychotic episode: Some level of mental health awareness is built in, if just to protect ourselves. So an LLM needs quite a bit of this by default to avoid being really harmful. And once you give it that, you might as well evaluate it against professionals: Not because it must be as good, but because it'd be really nice if it was, even when it's not trying to act as one.
I heard someone say that LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert. A lot of people don't have access to mental health care, and will ask their chatbot to ask like a psychologist.
>[...] LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert.
This mostly makes sense.
The problem is that people will take what you've said to mean "If I have no access to a therapist, at least I can access an LLM", with a default assumption that something is better than nothing. But this quickly breaks down when the sycophantic LLM encourages you to commit suicide, or reinforces your emerging psychosis, etc. Speaking to nobody is better than speaking to something that is actively harmful.
All very true. This is why I think the concern about harm reduction and alignment is very important, despite people on HN commonly scoffing about LLM "safety".
Is that not the goal of the project we are commenting under? To create an evaluation framework for LLM's so they aren't encouraging suicide, psychosis, or being actively harmful.
Sure, yeah. I'm responding to the comment that I directly replied to, though.
I've heard people say the same thing ("LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert"), and I also know that some people assume that LLMs are, by default, better than nothing. Hence my comment.
Maybe you’re comparing it to some idealized view of what human therapy is like? There’s no benchmark for it, but humans struggle in real mental health care. They make terrible mistakes all the time. And human therapy doesn’t scale to the level needed. Millions of people simply go without help. And therapy is generally one hour a week. You’re supposed to sort out your entire life in that window? Impossible. It sets people up for failure.
So, if we had some perfect system for getting every person that needs help the exact therapist they need, meeting as often as they need, then maybe AI therapy would be a bad idea, but that’s not what we have, and we never will.
Personally, I think the best way to scale mental healthcare is through group therapy and communities. Having a community of people all coming together over common issues has always been far more helpful than one on one therapy for me. But getting some assistance from an AI therapist on off hours can also be useful.
Some rape victims detest the comparison. Some make the comparison. I would agree people who have not been raped should avoid the comparison. But I would not assume someone who made the comparison had no standing.
127. A technological advance that appears not to threaten freedom often turns out to threaten it very seriously later on. For example, consider motorized transport. A walking man formerly could go where he pleased, go at his own pace without observing any traffic regulations, and was independent of technological support-systems. When motor vehicles were introduced they appeared to increase man’s freedom. They took no freedom away from the walking man, no one had to have an automobile if he didn’t want one, and anyone who did choose to buy an automobile could travel much faster and farther than a walking man. But the introduction of motorized transport soon changed society in such a way as to restrict greatly man’s freedom of locomotion. When automobiles became numerous, it became necessary to regulate their use extensively. In a car, especially in densely populated areas, one cannot just go where one likes at one’s own pace one’s movement is governed by the flow of traffic and by various traffic laws. One is tied down by various obligations: license requirements, driver test, renewing registration, insurance, maintenance required for safety, monthly payments on purchase price. Moreover, the use of motorized transport is no longer optional. Since the introduction of motorized transport the arrangement of our cities has changed in such a way that the majority of people no longer live within walking distance of their place of employment, shopping areas and recreational opportunities, so that they HAVE TO depend on the automobile for transportation. Or else they must use public transportation, in which case they have even less control over their own movement than when driving a car. Even the walker’s freedom is now greatly restricted. In the city he continually has to stop to wait for traffic lights that are designed mainly to serve auto traffic. In the country, motor traffic makes it dangerous and unpleasant to walk along the highway. (Note this important point that we have just illustrated with the case of motorized transport: When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it.)
128. While technological progress AS A WHOLE continually narrows our sphere of freedom, each new technical advance CONSIDERED BY ITSELF appears to be desirable. Electricity, indoor plumbing, rapid long-distance communications ... how could one argue against any of these things, or against any other of the innumerable technical advances that have made modern society? It would have been absurd to resist the introduction of the telephone, for example. It offered many advantages and no disadvantages. Yet, as we explained in paragraphs 59-76, all these technical advances taken together have created a world in which the average man’s fate is no longer in his own hands or in the hands of his neighbors and friends, but in those of politicians, corporation executives and remote, anonymous technicians and bureaucrats whom he as an individual has no power to influence. [21] The same process will continue in the future. Take genetic engineering, for example. Few people will resist the introduction of a genetic technique that eliminates a hereditary disease. It does no apparent harm and prevents much suffering. Yet a large number of genetic improvements taken together will make the human being into an engineered product rather than a free creation of chance (or of God, or whatever, depending on your religious beliefs).
129. Another reason why technology is such a powerful social force is that, within the context of a given society, technological progress marches in only one direction; it can never be reversed. Once a technical innovation has been introduced, people usually become dependent on it, so that they can never again do without it, unless it is replaced by some still more advanced innovation. Not only do people become dependent as individuals on a new item of technology, but, even more, the system as a whole becomes dependent on it. (Imagine what would happen to the system today if computers, for example, were eliminated.) Thus the system can move in only one direction, toward greater technologization. Technology repeatedly forces freedom to take a step back, but technology can never take a step back—short of the overthrow of the whole technological system.
130. Technology advances with great rapidity and threatens freedom at many different points at the same time (crowding, rules and regulations, increasing dependence of individuals on large organizations, propaganda and other psychological techniques, genetic engineering, invasion of privacy through surveillance devices and computers, etc.). To hold back any ONE of the threats to freedom would require a long and difficult social struggle. Those who want to protect freedom are overwhelmed by the sheer number of new attacks and the rapidity with which they develop, hence they become apathetic and no longer resist. To fight each of the threats separately would be futile. Success can be hoped for only by fighting the technological system as a whole; but that is revolution, not reform.
Do you have some better alternatives for a country where private mental health care costs €150/hr, while the government/insurance paid care have 3-6M+ waiting lists?
Well on the one hand, an obviously terrible solution is not inherently better than doing nothing. ie, LLM mental healthcare could be _worse_ than just letting the current access times climb.
My other stance, which I suspect is probably more controversial, is that I'm not convinced that mental health care is nearly as effective as people think. In general, mental health outcomes for teens are getting markedly worse, and it's not for lack of access. We have more mental health access than we've had previously -- it just doesn't feel like it because the demand has risen even more sharply.
On a personal level, I've been quite depressed lately, and also feeling quite isolated. As part of an attempt to get out of my own shell I mentioned this to a friend. Now, my friend is totally well-intended, and I don't begrudge him whatsoever. But, the first response out of his mouth was whether I'd sought professional mental health care. His response really hurt. I need meaningful social connection. I don't need a licensed professional to charge me money to talk about my childhood. I think a lot of people are lost and lonely, and for many people mental health care is a band-aid over a real crisis of isolation and despair.
I'm not recommending against people seeking mental health care, of course. And, despite my claims there are many people who truly need it, and truly benefit from it. But I don't think it's the unalloyed good that many people seem to believe it to be.
>I think a lot of people are lost and lonely, and for many people mental health care is a band-aid over a real crisis of isolation and despair.
Professional mental health care cannot scale to the population that needs it. The best option, like you mention, is talking to friends about our feelings and problems. I think there has been an erosion (or it never existed) of these social mental health mechanisms. There is a learned helplessness that has developed that people have lost their capacity to just be with someone that is hurting. There needs to be a framework for providing mental health therapy to loved ones that can exist without licensed professionals, otherwise LLm's are the only scalable option for people to talk about their issues and work on finding solutions.
This might be controversial but mental health care is largely a bandaid when the causes of people's declining mental health is due to factor's far outside the individual's control: loneliness epidemics, declining optimism towards the future, climate change, the rise of global fascism, online dating, addictiveness of social media and the war on our attention, etc.
>My other stance, which I suspect is probably more controversial, is that I'm not convinced that mental health care is nearly as effective as people think. In general, mental health outcomes for teens are getting markedly worse, and it's not for lack of access. We have more mental health access than we've had previously -- it just doesn't feel like it because the demand has risen even more sharply.
There's also the elephant in the room that mental healthcare, in particular for teens will probably just be compensating for the disease that is social media addiction. Australia has the right idea, banning social media for all goods.
I was watching "A Charlie Brown Christmas" the other day, and Lucy (who has a running gag in Peanuts of being a terrible, or at least questionable, psychologist) tells Charlie Brown to get over his seasonal depression he should get involved in a Christmas project, and suggests he be the director of their play.
Which is to say, your stance might not be as controversial as you think, since it was the adult take in a children's cartoon almost 60 years ago.
Your Peanuts reference made me smile but I don't see why you thought a little girl's comment in a 1960s Christmas special was supposed to represent the "adult take" on mental health in the 1960s.
Lucy isn't actually a psychologist which is part of the reason the "gag" is funny.
As already mentioned: availability, convenience, and cost are huge.
It's also less pressure, a more comfortable environment (home vs. stranger's office), no commitments to a next session, and less embarrassing (sharing your personal issues to a computer via text is less anxiety-inducing than saying them to a person's face).
With that all said, I'm strongly opposed to people using LLMs as therapists.
"Sharing my data" isn't something that cross the average person's mind unless there is a checkbox they annoyingly have to check in order to check in to some European hotel which is asking you for permission to process their data, for better or worse.
In their mind, most of the times, if there is no one standing behind them when they chat with a LLM, then the conversation for most intents and purposes is private.
Obviously, us who are born with a keyboard in front of our hands, know this to not be true, and know we're being tracked constantly with our data being sold to the highest bidder. But the typical person have more or less zero concerns about this, which is why it's not a priority issue to be solved.
Finding a licensed therapist, especially one covered by health insurance, who takes new patients, can be a challenge in some areas. So while it obviously is a bad idea, I can hardly blame people in a bad place looking for at least some help.
Availability.
There must be many other reasons, but IMHO that has to be the biggest factor. Being able to just start a session, in the moment, when you feel like it, is a fundamental difference.
Counter-intuitively, I think the fact that it's not a human seems to have a non-negligible effect too. It's a computer program you can share whatever with, and it'll never judge you, because it cannot. It reads exactly what you write and assumes you're faithfully answering, then provides a reply based on that.
I haven't been so unlucky myself, but I know many who've had terrible first experiences with therapists and psychologists, where I'm wondering why those people even are in the job they are, but some of them got so turned off they stopped trying to find anyone else to help them, because they think most mental health professionals would be the same as the first person they sought help from.
My experience is that they (at least copilot) are at least on par, if not better, than self help books. I assume they will get better over time.
Just my few cents
I believe that 'help or not' is a question impossible to answer, objectively at least. My subjective answer is that psychologic help in general is effective, whether it is provided orally, via books or as recently, via automation.
People use opiates as a replacement for mental health providers for similar reasons. While I’m all for harm reduction, it doesn’t mean we should view it as inevitable.
See also, all the ads for prescription medication on TV. Maybe it's just the programs I watch but it really seems like this has become the predominant advertising. Every break has an ad (or several) urging me to "ask my doctor about..."
Should be banned. Average people have no basis to know whether drug X is appropriate for them. If your doctor thinks you need it, he'll tell you. These ads also perpetuate the harmful idea that there's a pill for everything.
It’s a trivial claim that people are going to use AI as a therapist. No grumbling is going to stop that.
So it’s sensible that someone out there is evaluating its competence and thinking about a better alternative for these folks than yoloing their worst thoughts into chatgpt.com’s default LLM.
Everyone's hand is being forced by the major AI providers existing.
Even if you were a perfect altruist with a crusade against the idea of people using LLMs for mental health, you could still be forced to dash towards figuring out how to build LLM tools for mental health in your consideration for others.
Sure. It's also a trivial claim that people will take megadoses of rhubarb to cure cancer.
The age-old problem is how to prevent that disaster and save those lives. That's not trivial. Creating an Oncological Rhubarb Authority could easily make the problem much worse, not better.
Agree, the solution not trivial, so it's good that people are thinking about it.
If you try to merely stop people from using LLMs as therapists (could you elaborate on what that looks like?) and call it a day, your consideration isn't extending to all the people who will do it anyways.
That's what I mean by forcing your hand into doing the work of figuring out how to make LLM therapists work even if you were vehemently against the idea.
Everyone deserves clear advice not to use rhubarb as a cancer treatment, and not to use LLMs for mental health care. It's very easy to provide that advice.
I think you're assuming my proposed solution is to take rhubarb away from people? It's not.
Maybe you want to found the "Oncological Rhubarb Validation Association" or something. If so, that has nothing to do with me? That's just snake oil marketing. Not my field.
(1) The demand for mental health services is an order of magnitude vs the supply, but the demand we see is a fraction of the demand that exists because a lot of people, especially men, aren't believers in the "therapeutic culture"
In the days of Freud you could get a few hours of intensive therapy a week but today you're lucky to get an hour a week. An AI therapist can be with you constantly.
(2) I believe psychodiagnosis based on text analysis could greatly outperform mainstream methods. Give an AI someone's social media feed and I think depression, mania, schizo-* spectrum, disordered narcissism and many other states and traits will be immediately visible.
(3) Despite the CBT revolution and various attempts to intensify CBT, a large part of the effectiveness of therapy comes from the patient feeling mirrored by the therapist [1] and the LLM can accomplish this, in fact, this could be accomplished by the old ELIZA program.
(4) The self of the therapist can be both an obstacle and an instrument to progress. See [2] On one level the reactions that a therapist feels are useful, but they also get in the way of the therapist providing perfect mirroring [3] and letting optimal frustration unfold in the patient instead of providing "corrective emoptional experiences." I'm going to argue that the AI therapist can be trained to "perceive" the things a human therapist perceives but that it does not have its own reactions that will make the patient feel judged and get in the way of that unfolding.
It's not inevitable that LLMs will be providing mental health care; it's already happening.
Terrible idea or not, it's probably helpful to think of LLMs not as "AI mental healthcare" but rather as another form of potentially bad advice. From a therapeutic perspective, Claude is not all that different from the patient having a friend who is sometimes counterproductive. Or the patient reading a self-help book that doesn't align with your therapeutic perspective.
People constantly amazed that a machine can outperform a 24 year old charging $250/hour. Especially when the 24 year old seems incapable of calculating compound interest on their student loan deferrals. Surely this 24 year old who cannot use a formula 14 year old can will have wisdom to share. Iona Potapov talks to horse, modern man talks to machine, man with more money than sense talks to young graduate with no life experience about his struggles. All do equally well: 4 on LLM benchmark for mental health.
I'm not even sure what to say. It's self-evidently a terrible idea, but we all just seem to be charging full-steam ahead like so many awful ideas in the past couple of decades.