A formula for new medication
Plus: Psychosis and suicide risk; a pandemic puzzle; and ChatGPT vs human therapists
A formula for new medication
Controversial he may be, but David Nutt’s work is always worth a read. In this Lancet Psychiatry piece, he gives a personal opinion on the technical, regulatory, and cultural challenges that have impeded the development of new mental health medications over the past half-century. There’s plenty to provoke discussion here, as well as much to quibble with.
For example, Nutt quite rightly points out that dementia research has focused on amyloid elimination to the exclusion of the highly important behavioural and psychological symptoms of the condition; but I think that research into non-pharmacological patient care, rather than medication development, might have been the better strategy here. Another example is his proposal that “Charitable and not-for-profit companies should be incentivised to fund innovation.” While it sounds good to expand medication development beyond big pharma, I worry about a flood of studies funded and conducted by true believers in whatever new drug is being tested, with all the problems of bias that this would bring.
Still, I like some of his ideas a lot: the notion of a “yellow card” system to capture unexpected benefits of medication (a counterpart to the current UK system for recording adverse events) is particularly appealing.
Psychosis and suicide risk
Schizophrenia Bulletin has published this important new paper on suicide risk and psychosis from the ever-reliable Manchester team. Analysing data from England and Wales collected between 2008 and 2021, they focused on patients with fewer than 12 months of psychotic illness who died by suicide (288 of the total group of 2828 people with psychosis who died by suicide, ie, 10%).
They found that this recent-onset group were more likely than their longer-duration counterparts to have been in recent contact with crisis teams or recently discharged from inpatient care at the time of death. They also had fewer of the social and behavioural factors commonly associated with death by suicide, including unemployment, living alone, substance use, and previous self-harm. This, the authors propose, is suggestive of “lives recently disrupted by illness.”
The authors conclude by drawing the attention of clinicians to “the disruption recent onset of schizophrenia has to social circumstances, such as relationships and work”, and recommending intensive, robust, and regular support, particularly around early discharge from inpatient care, and during periods of crisis home treatment.
A pandemic puzzle
An intriguing finding in eClinicalMedicine: according to the South London and Maudsley NHS Foundation Trust’s clinical records system, the COVID pandemic was accompanied by an increase in the incidence of first-episode psychosis, a pattern particularly marked in Black and Asian individuals. Alas, the data aren’t granular enough to figure out what, if anything, is going on here: is it just a shift of patients from primary to secondary care, the impact of social stressors, or some direct effect of the virus itself?
AI, therapist
A somewhat florid title for this PLOS Mental Health study (“A Turing test for the heart and mind”), and a rather tangled write-up of the method (authors: please remember that flowcharts are your friend).
The gist is that 830 individuals recruited via the CloudResearch platform, and apparently “representative of the population of the United States”, were presented with couple-therapy vignettes generated by either real therapists or ChatGPT. They were then asked to guess whether a human or a machine was behind each vignette, and to rate it on the “common factors of therapy” (therapeutic alliance, empathy, expectations, cultural competence, and therapist effects).
It turned out that, first, participants couldn’t tell the difference and, second, that ChatGPT did better on the “common factors” ratings. Which is worth knowing, but doesn’t mean it’s time for therapists to hand over to the robots quite yet. To my mind, this study just shows what we already know: AI is great at patterns (spotting them and generating material to fit them). It’s not really a test of whether ChatGPT would be the better therapist in real-time clinical situations.
“After conducting this study, we find ourselves with more questions than answers,” write the authors. They’re not the only ones.