Is Mental Therapy a Life-Critical Task?

It would seem reasonable to ban LLMs from life-critical tasks.

LLMs are being used for Mental Therapy – for making mentally troubled people feel better about themselves. How is it working out?

The New York Times has just published an article which tells us.

You can read the NYT essay here.

OpenAI is caught in a trap: if it tries to prevent mental health problems in its users by making the system less sycophantic, usage of the system goes down. If it is alerted that particular users are at risk and steps in, usage also goes down.

Several problems –

1. The system does not understand what words mean, so its safeguards will be looking for trigger words. You can easily write a downer piece without using a single trigger word (see the short sketch after this list).

2. If the Chatbot goes too upbeat when the user is downbeat, the bond between user and Chatbot is broken:

“But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend.”

Only a skilled human therapist, or a machine that understands what the words mean, can successfully navigate these waters.

3. The essay reports that OpenAI “analyzed a statistical sample of conversations and found that 0.07 percent of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania”. A tiny fraction, but a very big number: the system is attracting people with psychosis, and it is not, and never will be, equipped to deal with them.

It is estimated that 1.5% to 3.5% of the US population will exhibit symptoms of psychosis, and 4% will exhibit mania; across roughly 330 million people that is in the order of 20 million at risk, so 0.07% may be a considerable under-estimate.
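To make the first of these problems concrete, here is a minimal sketch of the kind of trigger-word screening that a system which does not understand meaning is reduced to. The word list and the function are illustrative assumptions only, not anything OpenAI actually runs; the point is simply that an explicit message gets flagged while a downer written without trigger words sails straight through.

```python
# A minimal sketch of keyword-based risk screening.
# The word list is an illustrative assumption, not any vendor's actual list.

TRIGGER_WORDS = {"suicide", "kill", "self-harm", "hopeless", "overdose"}

def flags_risk(message: str) -> bool:
    """Return True if the message contains an obvious trigger word."""
    words = {w.strip(".,!?\"'").lower() for w in message.split()}
    return bool(words & TRIGGER_WORDS)

# An explicit message is caught...
print(flags_risk("I feel hopeless and want to disappear"))        # True

# ...but a downer written without trigger words is waved through.
print(flags_risk("Nothing matters any more. I have said my goodbyes "
                 "and given away the things I cared about."))      # False
```

A system that understood what the words meant would treat the second message as the more alarming of the two; keyword matching cannot.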

Only Semantic AI can do that (something that understands what tens of thousands of words mean, ten thousand uses of figurative speech such as “he's not all there”, and elisions; in other words, all of English). Is it necessary? What hospitalization and suicide load will society tolerate so that OpenAI and its competitors can make a buck out of amateur Mental Therapy?

Another Area of Psychology That Needs Society's Attention

Minor court officials were allowed to grant bail to potential Domestic Violence offenders. That did not end well, so the decision was handed to trained psychologists. The psychologist would give the All Clear, and the intimate partner would be dead within two days of release.

Humans are not very good at assessing the mental state of other humans; never have been, never will be. Is this an argument for using chatbot conversations as a predictor of Domestic Violence, or could a cry for help come from this source?

 

 

