Neuroimaging Features Help Predict Treatment Outcomes for Major Depressive Disorder

Post by Meagan Marks

The takeaway

Neuroimaging data shows great potential in predicting treatment outcomes for patients with major depressive disorder, which can help clinicians choose the most effective treatment option.  

What's the science?

Major depressive disorder (MDD) is a highly prevalent mental health condition that is challenging to treat. While a handful of treatment options are available for MDD, their effectiveness varies from person to person. Clinicians currently use various clinical features to choose a treatment for a given patient, yet 30-50% of patients don’t respond well to initial treatments, leading to a trial-and-error approach in which different options are tested over several weeks or months to find the most effective one.

Recent research suggests that neuroimaging assessments – where clinicians scan the brain and analyze the data with machine learning models – may better predict which MDD treatments will work best for a particular patient. This week in Molecular Psychiatry, Long and colleagues review multiple studies to evaluate how well neuroimaging can predict treatment outcomes, which imaging techniques are most accurate, and which brain areas are most useful for prediction.

How did they do it?

To gain a more comprehensive understanding of how neuroimaging data can predict treatment success for patients with MDD, the authors conducted a meta-analysis examining combined data from over 50 treatment-prediction studies. They first selected studies based on predefined criteria, ultimately including 13 studies of pretreatment clinical features (4,301 total patients) and 44 studies of pretreatment neuroimaging features (2,623 total patients).

The authors then extracted and combined key data from each study, running a series of statistical tests to evaluate whether pretreatment clinical features (such as mood-assessment scores and patient demographics) or neuroimaging features (such as brain region structure and activity) were better predictors of successful treatment outcomes. They also assessed which imaging modalities (resting-state fMRI, task-based fMRI, and structural MRI) most accurately predicted patient responses to electroconvulsive therapy (ECT) or antidepressant medication, and which brain regions were most strongly associated with the success of these treatments.
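To make the pooling step concrete, below is a minimal sketch (not the authors' code) of a standard random-effects meta-analytic aggregation, using invented study-level effect sizes; the paper's actual statistical tests may differ.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of study-level
# effect sizes, the kind of aggregation a meta-analysis relies on.
# The study values below are made up purely for illustration.
import numpy as np

def random_effects_pool(effects, variances):
    """Return the pooled effect and its standard error (DerSimonian-Laird)."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)

    # Between-study heterogeneity (tau^2) estimated from Cochran's Q
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights add tau^2 to each study's variance
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = (1.0 / np.sum(w_re)) ** 0.5
    return pooled, se

# Hypothetical effect sizes (e.g., log-odds of correct response prediction)
pooled, se = random_effects_pool(effects=[0.8, 1.1, 0.6],
                                 variances=[0.04, 0.09, 0.05])
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f}")
```

A random-effects model is sketched here because the included studies differ in scanners, samples, and treatments; it weights each study by both its own precision and the between-study variability.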

What did they find?

Following their analysis, the authors found that pretreatment brain-imaging features were more effective than clinical features at predicting patient responsiveness and treatment success. Specifically, resting-state fMRI demonstrated greater sensitivity to predictive variables and most accurately identified which patients were likely to benefit from particular treatments. The neuroimaging results revealed that key predictive brain regions were predominantly in the limbic system and default mode network, brain networks that are known to be involved in depression. Notably, alterations in various brain regions within the limbic network were associated with either antidepressant or ECT success, whereas brain regions within the default mode network were primarily linked to antidepressant efficacy.

What's the impact?

This study found that neuroimaging data can reliably predict which treatment options are most effective for patients with MDD, highlighting which imaging modalities and brain regions are best at estimating treatment success. This research could help clinicians accurately identify which patients are most likely to respond to specific treatments, allowing them to consider alternative options when necessary. Additionally, these findings could inspire further research into how neuroimaging might be used to predict treatment outcomes for other psychiatric conditions or diseases. 

Access the original scientific publication here.

Speaking With Your Mind: Restoring Speech in ALS

Post by Anastasia Sares

The takeaway

In this case study, scientists demonstrate a system that can take signals from electrodes implanted in the brain and turn them into speech that can be played through a speaker. In this way, they were able to restore speech capacity to a man who had lost the ability to speak due to amyotrophic lateral sclerosis (ALS).

What's the science?

Amyotrophic lateral sclerosis (ALS) is a debilitating disease in which motor neurons gradually atrophy and die, leaving sufferers unable to move their bodies even though their brains continue to function normally. You may remember the “ice bucket challenge,” an ALS fundraiser that went viral on social media in 2014. Ten years later, the money raised from that challenge has done an enormous amount of good, advancing research and care, and new treatments have come to market that can slow the progression of the disease. However, ALS is still without a cure. In late-stage ALS, motor function deteriorates enough that people’s speech becomes extremely slow and distorted, which dramatically affects their quality of life.

This week in the New England Journal of Medicine, Card and colleagues published a case study of a man with advanced ALS who received brain implants that allow him to speak with the aid of a brain-computer interface.

How did they do it?

Electrode arrays have been implanted in human brains before, often in patients with severe epilepsy who have to undergo brain surgery anyway in order to monitor and treat their condition. In these experiments, electrode arrays (chips with many tiny electrodes arranged in a grid) have been placed in various spots in the brain, and scientists have been able to figure out which regions have activity that can be “decoded” to correctly predict speech. The best areas are around the ventral premotor cortex (see image).

In this study, the authors used what had been learned from previous research and chose four spots along this premotor strip to implant the electrodes in this patient. The signals from the electrodes were sent via a cable to a computer, where a neural network was used to match the brain activity with the most likely phoneme (a phoneme is a speech sound like “sh” or “a” or “ee”) that the man was trying to say. The string of phonemes was then sent to two separate language models: the first predicted possible words from the phonemes, and the second predicted possible phrases from the individual words. These models function in a similar way to the predictive text on your phone or in speech-to-text software. Finally, the predicted word sequence was turned into speech at the end of each sentence, using a synthesized voice created from the man’s own pre-ALS speech samples.
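To give a feel for how the stages of such a pipeline fit together, here is a toy sketch; it is not the authors' model, and the phoneme probabilities, lexicon, and pause handling below are invented purely for illustration.

```python
# Toy sketch of a phoneme-to-text decoding pipeline (not the authors' system).
# A real decoder runs a neural network over brain signals and uses large
# language models; here the "network output" and vocabulary are invented.
import numpy as np

PHONEMES = ["h", "eh", "l", "ow", "w", "er", "d", "_"]  # "_" marks a pause

# Pretend neural-network output: one row of phoneme probabilities per time step
frame_probs = np.array([
    [0.70, 0.10, 0.05, 0.05, 0.02, 0.03, 0.02, 0.03],  # most likely "h"
    [0.05, 0.80, 0.05, 0.03, 0.02, 0.02, 0.01, 0.02],  # most likely "eh"
    [0.02, 0.03, 0.85, 0.04, 0.02, 0.02, 0.01, 0.01],  # most likely "l"
    [0.02, 0.02, 0.04, 0.85, 0.03, 0.02, 0.01, 0.01],  # most likely "ow"
    [0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.86],  # most likely pause
])

# Step 1: pick the most likely phoneme at each time step
phoneme_seq = [PHONEMES[i] for i in frame_probs.argmax(axis=1)]

# Step 2: a (toy) word-level model maps phoneme strings to candidate words
WORD_LEXICON = {("h", "eh", "l", "ow"): "hello", ("w", "er", "d"): "word"}

def phonemes_to_words(seq):
    words, chunk = [], []
    for p in seq:
        if p == "_":  # a pause closes the current word
            if tuple(chunk) in WORD_LEXICON:
                words.append(WORD_LEXICON[tuple(chunk)])
            chunk = []
        else:
            chunk.append(p)
    return words

print(phonemes_to_words(phoneme_seq))  # -> ['hello']
# Step 3 (not shown): a phrase-level model rescores word sequences, and the
# final sentence is synthesized in a voice built from the speaker's recordings.
```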

What did they find?

The authors evaluated the accuracy of the system in two ways. First, they prompted the man to think of certain words and phrases to see if the system could reliably reproduce the prompt. Second, they allowed the man to “speak” freely and then had him evaluate whether the system had faithfully produced what he wanted to say. Since the patient could not move, he gave these ratings using a screen with different bubbles (“100% correct,” “mostly correct,” and “incorrect”) and an eye-tracking system that detected which bubble he looked at. The system started with an error rate of about 10%, which gradually fell to 2.5% as the system was trained, with a vocabulary of 125,000 words, a substantial increase in performance compared to the few other studies of this kind. The patient’s speaking rate also increased from the 6 words per minute he could produce naturally to around 30 words per minute (the normal English speaking rate is close to 160 words per minute).
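For readers curious how an error rate like the 2.5% figure is typically computed, the sketch below calculates a word error rate as the edit distance between a prompted sentence and a decoded one, divided by the number of prompted words; the example sentences are invented, and the study's exact scoring procedure may differ.

```python
# Sketch of a standard word-error-rate calculation: substitutions, insertions,
# and deletions between the reference (prompted) and hypothesis (decoded)
# word sequences, divided by the number of reference words.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

# Invented example: one wrong word out of six -> roughly 17% error
print(word_error_rate("i would like some water please",
                      "i would like some water peas"))
```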

What's the impact?

This study demonstrates how brain-computer interfaces are not only possible but can dramatically improve the quality of life for those who have lost normal functioning due to disease. As stated in the article, the first block of trials was excluded from the experiment because “the experience of using the system elicited tears of joy from the participant and his family as the words he was trying to say appeared correctly on screen.” Videos of the system can be accessed on the page of the original publication.

Increasing Glucose Metabolism Improves Alzheimer’s Disease Symptoms

Post by Shireen Parimoo 

The takeaway

The enzyme IDO1 in astrocytes is important for glucose metabolism in the brain. Inhibiting IDO1 in models of Alzheimer’s disease pathology restores glucose metabolism and rescues cognitive deficits, including impaired spatial memory.

What's the science?

Alzheimer’s disease is characterized by deficits in learning and memory. In the brain, there is an accumulation of the protein amyloid beta as well as misfolding of the tau protein with increasing severity of the disease, eventually leading to neuron death. Astrocytes regulate glucose metabolism and provide lactate as energetic fuel to neurons in the brain. Glucose metabolism is known to decline in Alzheimer’s disease, but the mechanism through which this process is disrupted is unclear. Past research also shows that inflammatory stimuli – such as amyloid beta – tend to increase IDO1 activity. Thus, one possibility is that the enzyme IDO1, which helps convert the amino acid tryptophan to kynurenine in astrocytes, is involved. This week in Science, Minhas and colleagues investigated the genetic and physiological effects of IDO1 on the glucose metabolism process and on learning and memory in Alzheimer’s disease. 

How did they do it?

The authors systematically examined the effects of Alzheimer’s pathology and the role of IDO1 in different stages of the glucose metabolism pathway in hippocampal astrocytes and neurons. First, they derived astrocytes from mouse hippocampi and from human induced pluripotent stem cells. These included astrocytes derived from post-mortem brains of individuals with varying stages of late-onset dementia, with later stages characterized by higher levels of amyloid and tau accumulation in the brain. The astrocytes were treated with oligomers of amyloid beta and tau in vitro to simulate Alzheimer’s pathology, and with PF068, which inhibits IDO1 activity. They then recorded changes in IDO1, tryptophan, and kynurenine levels in response to Alzheimer’s pathology and IDO1 inhibition. Additionally, they studied the downstream effects of IDO1 inhibition on glucose metabolism by measuring changes in the concentrations of intermediate metabolites and lactate. Next, the authors replicated the above experiments in astrocytes derived from mouse models of Alzheimer’s disease that overexpressed either the amyloid or tau protein. Finally, they examined the effects of IDO1 inhibition on object memory (novel object recognition) and spatial memory (Morris water maze, Barnes maze) tasks in the mouse models of Alzheimer’s disease.

What did they find?

Glucose metabolism was disrupted in Alzheimer’s pathology: the concentrations of intermediate metabolites and lactate were reduced following amyloid and tau treatment. However, inhibiting IDO1 (or knocking it out) restored glucose metabolism. Similarly, kynurenine levels, which were elevated in Alzheimer’s pathology, returned to normal when IDO1 activity was inhibited. On the other hand, administering additional kynurenine disrupted the metabolic pathway in astrocytes treated with amyloid and tau oligomers in a dose-dependent manner. Similar effects were observed in mouse models of Alzheimer’s disease, with elevated kynurenine and lower lactate levels in hippocampal astrocytes. As before, inhibiting IDO1 lowered kynurenine levels and increased lactate production. Thus, these results show that kynurenine production via IDO1 activity is a key driver of the glucose metabolism deficits seen in Alzheimer’s pathology.

Inhibiting IDO1 also rescued memory deficits and reduced the accumulation of amyloid beta in the subiculum, a region adjacent to the hippocampus. Similarly, deleting IDO1 reduced kynurenine levels and increased lactate levels. In post-mortem human brains, kynurenine levels were higher in cases with more severe Alzheimer’s disease pathology. In astrocytes derived from individuals with late-onset dementia, glucose metabolism was reduced while kynurenine levels were elevated. Inhibiting IDO1 in these astrocytes restored glucose metabolism and returned kynurenine to levels comparable to those of individuals without dementia. Altogether, these findings highlight IDO1 as a key enzyme regulating glucose metabolism and, consequently, cognition in both humans and mouse models of Alzheimer’s disease.

What's the impact?

This study is the first to demonstrate the mechanism through which glucose metabolism is disrupted in Alzheimer’s disease and the important role that the IDO1 enzyme plays in this process. These findings have important applications for the development of treatments for pathologies that are characterized by protein aggregation in the brain. 

Access the original scientific publication here.