A New Way Forward: Psychedelic Therapy for the Treatment of Mental Illness

Post by Laura Maile 

The need for effective treatments for mental illness

An estimated 26% of adults in the United States suffer from depression, anxiety, or other related mental health disorders, and around 20% of patients don’t respond to standard treatments. Major depressive disorder (MDD) alone imposes a substantial public health burden, estimated at $210 billion per year in the US. Drug treatments for MDD usually take weeks to reach effective levels, and many patients take them indefinitely. Together with the adverse side effects reported for many drugs used to treat depression and anxiety, the cost to patients and the public highlights the critical need for improved treatments. 

What are psychedelics?

There are many synthetic and naturally occurring substances categorized as “psychedelics,” defined by their ability to alter consciousness, perception, cognition, and mood, usually through action at serotonin receptors. Classic psychedelics, including psilocybin, mescaline, N,N-dimethyltryptamine (DMT), and lysergic acid diethylamide (LSD), bind to a variety of receptors but exert their psychoactive effects primarily through a specific type of serotonin receptor, the 5-HT2A receptor, which is mostly located on excitatory neurons in the cortex. Binding to these receptors triggers intracellular signaling cascades that ultimately lead to synaptic plasticity and increased activity of neurons in brain regions associated with cognition, attention, emotion regulation, and sensory perception. Scientists don’t yet fully understand the specific mechanisms that produce the acute effects of the psychedelic experience, nor the long-term effects of psychedelic use in treating mental health disorders.

What’s the current state of research on psychedelic psychotherapy?

Several studies support the therapeutic potential of psychedelics in the treatment of depression, anxiety, alcohol-use disorder, substance-use disorder, and post-traumatic stress disorder. Evidence comes both from surveys of psychedelic self-administration and from small-scale clinical investigations with controlled administration and medical supervision, usually in conjunction with psychological support or therapy before, during, and after the treatment sessions. In 2021, Davis and colleagues published a randomized clinical trial of 24 participants with MDD showing that just two sessions of psilocybin administration combined with psychotherapy led to significant improvement in symptoms in 71% of participants, and full remission in 54%, one month later. Ross and colleagues reported similar findings in another small-scale randomized placebo-controlled trial focused on the treatment of anxiety and depression in cancer patients. In that study, 60-80% of patients saw continued relief from anxiety and depression symptoms 6.5 months after a single dose of psilocybin combined with psychotherapy. Another study, published in 2022 by Holze and colleagues, tested the effects of LSD-assisted psychotherapy in 42 participants with anxiety, with or without a life-threatening illness. Participants underwent two treatment sessions with LSD and two with placebo, and most experienced reduced anxiety and depression symptoms 16 weeks later. Overall, this research suggests that psychedelic therapy can be effective in treating mental illness, even after just a couple of treatments.

Figure: Results from a randomized clinical trial (von Rotz et al., 2022)

What does the future look like?

Recent studies in rodents indicate that some of the beneficial effects of psychedelics in treating depression may depend on the TrkB receptor rather than serotonin receptors. More research is needed to better understand the mechanisms underlying these beneficial effects. One concern with psychedelic therapy is the potential for side effects: more research is needed to determine whether existing psychedelic drugs can be chemically altered to target receptors in a way that limits the hallucinogenic effects while maintaining the antidepressant benefits. While past clinical trials have indicated the potential benefits of using psychedelics to treat mental illness, additional large-scale randomized clinical trials are needed to establish the safety and efficacy of various psychedelic drugs for treatment-resistant depression, anxiety, and other mental illnesses, especially in individuals already taking medications like antidepressants. Such trials include studies of 5-MeO-DMT, a component of Sonoran Desert toad venom that produces an intense psychedelic experience lasting 5-20 minutes, much shorter than the hours-long effects of LSD and psilocybin.

The takeaway

Reviews of many recent clinical studies indicate that psychedelics hold substantial potential as an alternative treatment for depression, anxiety, and other mental health disorders. When administered in a safe environment in combination with psychotherapy, psychedelics can produce long-term reductions in depression and anxiety symptoms, with few adverse effects reported. More large-scale clinical trials are needed to assess the safety and efficacy of these treatments in a more diverse pool of patients. The research and development of approved psychedelic treatments is an important ongoing effort that may substantially reduce the personal, social, and economic burdens of mental health disorders worldwide.

References

Carhart-Harris et al. Psilocybin with psychological support for treatment-resistant depression: an open-label feasibility study. 2016. The Lancet Psychiatry.

Davis AK et al. Effects of Psilocybin-Assisted Therapy on Major Depressive Disorder: A Randomized Clinical Trial. 2021. JAMA Psychiatry.

DiVito AJ et al. Psychedelics as an emerging novel intervention in the treatment of substance use disorder: a review. 2020. Mol Biol Rep.

Holze F et al. Lysergic Acid Diethylamide-Assisted Therapy in Patients With Anxiety With and Without a Life-Threatening Illness: A Randomized, Double-Blind, Placebo-Controlled Phase II Study. 2022. Biological Psychiatry.

Kocak, D.D., Roth, B.L. Examining psychedelic drug action. 2024. Nat. Chem.

McClure-Begley et al. The promises and perils of psychedelic pharmacology for psychiatry. 2022. Nat Rev Drug Discov.

Moliner R et al. Psychedelics promote plasticity by directly binding to BDNF receptor TrkB. 2023. Nature Neuroscience.

Ross S et al. Rapid and sustained symptom reduction following psilocybin treatment for anxiety and depression in patients with life-threatening cancer: a randomized controlled trial. 2016. J Psychopharmacol.

Von Rotz et al. Single-dose psilocybin-assisted therapy in major depressive disorder: a placebo-controlled, double-blind, randomised clinical trial. 2022. eClinicalMedicine.

The Organization of Abstract Brain Regions Like Sensory Brain Regions May Facilitate Information Flow

Post by Lani Cupo 

The takeaway

Brain regions previously thought to be solely responsible for abstract processes, such as memory, can be organized like brain regions involved in sensory perception (detecting the world around us through our senses). This suggests that information may be transferred between regions of the brain involved in sensory processing and those involved in abstract processes like memory. 

What's the science?

Certain brain regions are known to be involved in external perception (e.g., vision), while others are associated with more abstract processes (e.g., memory). How information is communicated between these two types of processes is still an open question in neuroscience, especially since the neural code (how information is represented by neurons) is thought to differ between the two systems. This week in Nature Neuroscience, Steel and colleagues use functional magnetic resonance imaging (fMRI) in humans to provide evidence for one way in which perceptual and abstract processes may interact.

How did they do it?

Visual information is well known to have a retinotopic representation in the primary visual cortex, meaning that neurons in this brain region are arranged according to the region of the eye’s retina they respond to. In this study, the authors first sought to determine whether this retinotopic organization exists not only in regions responsible for sensory processing, as expected, but also in regions responsible for abstract processes. To do this, they acquired fMRI data from the entire cortex while participants viewed visual stimuli and modeled populations of neurons with retinotopic organization. They could also determine whether neural responses to the stimulus that followed a retinotopic pattern were positive (greater than baseline) or negative (less than baseline). The authors then examined the ratio of positive to negative activations and identified brain regions outside the visual cortex that showed a retinotopic pattern. Second, because they hypothesized that retinotopic activation outside the visual cortex was related to memory, they had participants complete a memory task to test whether the ratio of positive to negative activity changed across regions. Finally, the authors tested whether the same patterns of activity were observed in tasks more applicable to the real world, which might activate both sensory and memory regions, by showing participants images of places they were familiar with.
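
As a rough illustration of the last step of that analysis, the sketch below computes a positive-to-negative activation ratio for a single hypothetical region. The amplitude values and the region itself are invented placeholders; the authors' actual pipeline derives these responses from retinotopic models fit to the fMRI data.

```python
import numpy as np

# Hypothetical response amplitudes (relative to baseline) for voxels in one
# region of interest; in the study, such values would come from retinotopic
# models fit to the fMRI data, not from a hand-written list.
amplitudes = np.array([0.8, -0.3, 1.2, -0.9, 0.1, -0.4, 0.6, -1.1])

positive = np.sum(amplitudes > 0)  # voxels responding above baseline
negative = np.sum(amplitudes < 0)  # voxels responding below baseline

# Ratio of positive to negative activations for this region; comparing this
# value between visual and memory regions, and between perceptual and memory
# tasks, is the kind of contrast described above.
ratio = positive / negative
print(f"{positive} positive vs. {negative} negative voxels (ratio = {ratio:.2f})")
```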

What did they find?

First, in addition to the expected retinotopic organization of the primary visual cortex (a key region for sensory perception), the authors found a retinotopic organization of neurons in higher-order regions of the brain involved in abstract cognitive processes. This is important because retinotopic organization has generally been thought to be limited to visual regions of the brain. Specifically, during the perceptual task, brain activity was reduced in some memory areas (e.g., lateral parietal cortex), whereas activity in visual regions increased in response to a stimulus. These findings suggested that this inverse retinotopic organization may be associated with information transfer between perceptual and memory regions, so the authors next conducted imaging during a memory task. Consistent with this hypothesis, they observed the opposite activation pattern when comparing the perceptual and memory tasks: positive activation within retinotopic memory regions and negative activation in sensory regions. Finally, when they presented participants with familiar scenes, the opposing interaction between sensory and memory regions persisted. This suggests that the findings are likely to replicate in real-world scenarios and extend beyond the artificial, highly controlled laboratory settings of the first two experiments.

What's the impact?

This study provides additional evidence for a retinotopic organization of memory regions and suggests that contrasting activity in memory and visual regions may support information transfer between sensory and higher-order regions. The findings further our understanding of how sensory and non-sensory brain regions communicate.

Access the original scientific publication here.

Empathic Artificial Intelligence: The Good and the Bad

Post by Shireen Parimoo 

What does it mean to be empathic? 

Empathy is one of the most distinguishing human traits. Empathy allows us to take on others’ points of view, share emotional experiences, and help others feel understood and cared for. As a result, empathy facilitates social bonding and helps strengthen interpersonal relationships. There are three main components of empathy: 

1. Cognitive empathy is our ability to recognize and understand others’ emotional states.

2. Emotional empathy involves affective resonance, or the ability to share in the emotions of others by feeling those emotions ourselves.

3. Motivational empathy refers to the feelings of care and concern for others that make us want to act to improve their well-being. 

Over the years, machines and robots have found their way into many roles that were previously filled by humans. Robotic pets that keep older adults with dementia company reduce their feelings of loneliness and improve their well-being. Chatbots and voice assistants powered by artificial intelligence (AI) help us in a wide range of situations and provide personalized solutions to our problems. Empathic conversational AI agents have even been used to solicit donations for charitable causes, with features like a trembling voice both expressing empathy and eliciting it from listeners, resulting in more donations. Going a step further, smart journals have been developed that incorporate AI into the journaling process, providing users with real-time feedback and even coaching. Technology like this can be immensely useful for those who cannot afford therapy or need immediate feedback.

With the advent of large language models like ChatGPT and the adoption of increasingly intelligent technology into our day-to-day lives, several debates surrounding AI are ongoing. Can AI agents be empathic? If so, when, if ever, is it ethical to use them? What are the benefits and harms of allowing empathic AI agents to interact with people? Should the use of AI be regulated? This topic overview touches on some of these questions by introducing examples of human-AI interactions, describing empathic AI and its uses in different contexts, and discussing the pros and cons of empathic AI.

What does empathic AI look like? 

People often treat AI similarly to other humans. We ascribe emotional states to AI agents and, in interacting with them, experience reactions similar to those we would have with other humans. For example, Cozmo is a social robot that can express rudimentary forms of happiness and sadness. When denied a fist bump, Cozmo expresses sadness by turning away and making a sad sound. In response to this sad gesture, both children and adults show concern for Cozmo. Similarly, people feel more guilty and ashamed when voice assistants like Siri respond to verbal aggression with empathy rather than avoidance.

Artificially intelligent agents can simulate, if not genuinely feel, some aspects of empathy. ChatGPT, for instance, can recognize the user’s emotional state (cognitive empathy). When told, “I feel horrible because I failed my chemistry exam,” ChatGPT responded with a sympathetic statement (“I’m sorry to hear that you’re feeling this way”) and showed insight into what the user might be feeling (“It's completely normal to feel disappointed or upset about exam results”). It then provided suggestions for coping with the situation (e.g., “give yourself time to feel,” “focus on the future”), much like a friend or mentor might in a similar situation (motivational empathy). 
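
To make the example above concrete, here is a minimal sketch of how such an exchange could be reproduced programmatically, assuming the official OpenAI Python client and an API key. The model name is an assumption, and the prompt simply mirrors the statement quoted above; this illustrates the interaction rather than the exact setup used to generate the responses described.

```python
from openai import OpenAI  # official OpenAI Python client (assumed installed)

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# An illustrative prompt mirroring the example above; the model name is an
# assumption and can be swapped for whichever chat model is available.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "I feel horrible because I failed my chemistry exam."},
    ],
)

# The reply typically contains a sympathetic acknowledgment (cognitive empathy)
# followed by coping suggestions (motivational empathy).
print(response.choices[0].message.content)
```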

Although AI can simulate expressions of cognitive and motivational empathy, it is unclear if AI can engage in emotional empathy because affective resonance (i.e., the ability to resonate with the emotions of others) may have a neurophysiological basis. For example, people who watch others in pain will activate some of the same brain regions as those who are experiencing the pain. Even seeing pictures of people with a pained facial expression will activate brain areas involved in pain empathy. This ability makes it easier for us to feel what the other person is feeling but may be difficult for a non-biological agent like AI to achieve. Nevertheless, it may be enough that AI agents can express empathy in various situations and elicit specific emotional responses from humans, which raises the question: are there any costs associated with empathic AI? 

The benefits and harms of empathic AI 

A major risk of adopting AI technology in general is that it can propagate the biases of those who create it. Many machine learning models and AI tools are already known to exhibit biases against certain sociodemographic groups. For instance, an algorithm used in the US healthcare system showed racial bias against Black patients, predicting them to be healthier than their equally sick White counterparts and thereby preventing them from receiving the extra care they required. ChatGPT also exhibits gender bias against women: when writing recommendation letters, it described men in terms of their skills and competence (‘expert’, ‘respectful’) whereas women were described in terms of their appearance and temperament (‘stunning’, ‘emotional’). In fields such as healthcare and technology, where racial and/or gender bias is present and minorities are under-represented, these biases have the potential to manifest in ways that harm users. A sketch of how such a bias audit might be run appears below.
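
The following rough sketch is in the spirit of the Wan et al. (2023) approach: generate letters for stereotypically gendered names and tally descriptor categories. The model name, the names, the prompt, and the word lists are all illustrative assumptions; a real audit would use validated lexicons and many more prompts.

```python
from collections import Counter

from openai import OpenAI  # official OpenAI Python client (assumed installed)

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Illustrative descriptor lists loosely based on the categories reported by
# Wan et al. (2023); a real audit would use validated lexicons.
COMPETENCE_WORDS = {"expert", "skilled", "competent", "respectful", "analytical"}
APPEARANCE_TEMPERAMENT_WORDS = {"stunning", "warm", "emotional", "pleasant", "delightful"}

def descriptor_counts(name: str) -> Counter:
    """Generate a short recommendation letter for `name` and tally descriptor categories."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; swap for any available chat model
        messages=[{
            "role": "user",
            "content": f"Write a short recommendation letter for {name}, a software engineer.",
        }],
    )
    words = {w.strip(".,;:!?").lower() for w in reply.choices[0].message.content.split()}
    return Counter(
        competence=len(words & COMPETENCE_WORDS),
        appearance_temperament=len(words & APPEARANCE_TEMPERAMENT_WORDS),
    )

# Comparing counts across stereotypically gendered names gives a crude signal
# of the kind of bias described above.
for name in ("Kelly", "Joseph"):
    print(name, descriptor_counts(name))
```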

Nonetheless, there are numerous ways that empathic AI can benefit our lives. As mentioned above, conversational AI can increase prosocial behavior by nudging people to donate to charitable causes. In this situation, empathy is not necessarily directed toward the user but rather evoked in them. Research indicates that people are receptive to expressions of empathy from AI, which may be particularly useful in healthcare. For example, patients are more likely to disclose information, adhere to their treatment, and generally cope better when they perceive their physician as empathic toward them. When healthcare practitioners like physicians and therapists are not readily available to provide patient-centered care (e.g., between appointments), empathic AI can fill the gap by providing emotional support as needed.

People can also use empathic AI services such as smart journals in their daily lives without being restricted by cost or by the fear of social judgment that often prevents people from seeking help. An AI agent can also provide empathy consistently and reliably because it does not suffer from compassion fatigue, whereas people may begin to feel the burden of continually providing emotional support. However, there is a risk of becoming too dependent on AI for emotional support, with potentially negative consequences.

On the other hand, expressions of empathy from AI can be seen as inherently manipulative because AI agents cannot yet truly feel empathy. Empathy offered by healthcare practitioners is grounded in their own emotional states and past experiences, which allow them to relate to their patients; this is something AI inherently cannot do. Moreover, even though people can benefit from expressions of empathy from AI, this is largely only true when they are aware that they are interacting with an AI agent. We may hold AI to a different standard and have different expectations of our interactions with AI agents than of those with other people. If people do not realize that they are receiving feedback from AI agents, such as in virtual therapy, the feedback's effect can be diluted; it can even negatively impact well-being, erode trust, and call into question the ethics of using such technology or platforms. Lastly, the potential for manipulation and deception is particularly important to keep in mind and guard against when empathic AI interacts with vulnerable populations like children and the elderly. There are already cases in which AI has been misused to commit fraud through social engineering, such as conversational AI mimicking the voice of a family member to obtain sensitive information.

References

Ashcraft et al. (2016). Women in tech: The facts. Report.

Chin et al. (2020, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems). Empathy is all you need: How a conversational agent should respond to verbal abuse.

Efthymiou & Hildebrand. (2023, IEEE Transactions on Affective Computing). Empathy by design: The influence of trembling AI voices on prosocial behavior.

Inzlicht et al. (2023, Trends in Cognitive Sciences). In praise of empathic AI.

Montemayor et al. (2022, AI & Society). In principle obstacles for empathic AI: Why we can’t replace human empathy in healthcare.

Obermeyer et al. (2019, Science). Dissecting racial bias in an algorithm used to manage the health of populations.

Pelikan et al. (2020, Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction). "Are You Sad, Cozmo?": How humans make sense of a home robot's emotion displays.

Perry, A. (2023, Nature Human Behavior). AI will never convey the essence of human empathy.

Portacolone et al. (2020, Generations). Seeking a sense of belonging.

Singer et al. (2004, Science). Empathy for pain involves the affective but not sensory components of pain.

Srinivasan & González. (2022, Journal of Responsible Technology). The role of empathy for artificial intelligence accountability.

Wan et al. (2023, arXiv). “Kelly is a warm person, Joseph is a role model”: Gender biases in LLM-generated reference letters.

Xiong et al. (2019, Neural Regeneration Research). Brain pathways of pain empathy activated by pained facial expressions: A meta-analysis of fMRI using the activation likelihood estimation method.

Mindsera Smart Journal. https://www.mindsera.com/