The Nuanced Relationship Between Neuronal Activity and Blood Flow

Post by Shahin Khodaei

The takeaway

Increased neuronal activity in a brain region increases blood flow to that area. When neurons in a region are active, a signal travels up along the vessels that supply blood to that region, causing them to dilate upstream.

What's the science?

When there is increased neuronal activity in a region of the brain, more blood flows to that area – a process called neurovascular coupling (NVC). This coupling is the basis for functional magnetic resonance imaging (fMRI), which measures blood flow to a brain region as a surrogate for neuronal activity. The regulation of NVC at the spatial level is not well understood – does increased neuronal activity in a small brain area lead to dilation of blood vessels in the same region? Or is the relationship more nuanced? This week in Nature Neuroscience, Martineau and colleagues addressed these questions by studying neuronal activity and blood vessel dilation in small regions of the mouse brain using microscopy.

How did they do it?

The authors used a mouse model and focused on a brain region called the sensory cortex, which is active in response to physical stimulation. Within the rodent sensory cortex, there are cortical “barrels” which become active in response to stimulation of each of the mouse’s whiskers – a cortical barrel for whisker W1, a barrel for the next whisker W2, then W3, and so on. To study the relationship between neuronal activity and blood flow in the brain of mice, the authors removed a portion of the skull directly over the sensory cortex and surgically replaced it with glass. This gave them a window through which they could study the sensory cortex, using microscopes.

The authors performed their experiments on mice whose neurons expressed a fluorescent calcium indicator, meaning that active neurons emitted red light. They then stimulated the whiskers of the mice and used wide-field imaging to locate the corresponding barrel for each whisker. Simultaneously, they exploited the fact that oxygenated and de-oxygenated hemoglobin absorb and scatter the microscope's light differently, which allowed them to characterize blood flow to each barrel. They also used a very high-resolution technique called two-photon microscopy to study the dilation of, and blood flow through, individual vessels in each barrel, and how these changed with whisker stimulation and neuronal activity.

What did they find?

As expected, when each whisker was stimulated, the corresponding barrel in the sensory cortex showed increased neuronal activity and increased blood flow. The authors then used higher-resolution imaging techniques to study blood vessel dilation in response to whisker stimulation in each barrel. They found that the response of blood vessels was very heterogeneous: some vessels in barrel W1 dilated when whisker W1 was stimulated, some did not, and some actually dilated when whisker W2 was stimulated. Further experiments showed that blood vessels were not dilating due to increased neuronal activity in their immediate surroundings. Instead, neuronal activity downstream sent a signal up the vessel, causing it to dilate. So, in the example above, a blood vessel imaged in barrel W1 that was in fact carrying blood toward W2 would dilate in response to neuronal activity in W2, not W1.

What's the impact?

This study shed light on the spatial regulation of neurovascular coupling. As the spatial resolution of imaging techniques such as fMRI increases, these findings become highly relevant: they suggest that at high resolutions, changes in blood vessels do not report the neuronal activity of their immediate surroundings, but instead reflect an integration of neuronal activity downstream.

Access the original scientific publication here.

Neuroimaging Features Help Predict Treatment Outcomes for Major Depressive Disorder

Post by Meagan Marks

The takeaway

Neuroimaging data shows great potential in predicting treatment outcomes for patients with major depressive disorder, which can help clinicians choose the most effective treatment option.  

What's the science?

Major depressive disorder (MDD) is a mental health condition that is very prevalent and challenging to treat. While a handful of treatment options are available for MDD, their effectiveness varies from person to person. Clinicians currently use various clinical features to choose a treatment for a given patient, yet 30-50% of patients don’t respond well to initial treatments, leading to a trial-and-error approach where different options are tested over several weeks or months to find the most effective one. 

Recent research suggests that neuroimaging assessments – where clinicians scan the brain and analyze the data with machine learning models – may better predict which MDD treatments will work best for a particular patient. This week in Molecular Psychiatry, Long and colleagues review multiple studies to evaluate how well neuroimaging can predict treatment outcomes, which imaging techniques are most accurate, and which brain areas are most useful for prediction.

How did they do it?

To gain a more comprehensive understanding of how neuroimaging data can predict treatment success for patients with MDD, the authors conducted a meta-analysis examining combined data from over 50 treatment-prediction studies. They first selected which studies to analyze based on predefined criteria, ultimately including 13 studies on pretreatment clinical features (4,301 total patients) and 44 pretreatment neuroimaging studies (2,623 total patients).

The authors then extracted and combined key data from each study, running a series of statistical tests to evaluate whether pretreatment clinical features, such as mood-assessment scores and patient demographics, or neuroimaging features, such as brain region structure and activity, were better predictors of successful treatment outcomes. They also assessed which imaging modalities (resting-state fMRI, task-based fMRI, and structural MRI) most accurately predicted patient responses to electroconvulsive therapy (ECT) or antidepressant medication, and which brain regions correlated with the success of these treatments.
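The review's exact statistical procedures aren't detailed here, but a standard building block of meta-analyses like this is inverse-variance (fixed-effect) pooling, where each study's effect size is weighted by the inverse of its variance so that larger, more precise studies count more. The sketch below is purely illustrative: the effect sizes, variances, and the `pooled_effect` helper are assumptions, not values or code from the study.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic pooling: weight each study by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, standard_error

# Three hypothetical studies: the third is the most precise, so it
# pulls the pooled estimate toward its effect size of 0.7.
effect, se = pooled_effect([0.6, 0.8, 0.7], [0.04, 0.09, 0.01])
print(round(effect, 3), round(se, 3))
```

The pooled estimate lands close to 0.7 because the low-variance study dominates the weighting, which is exactly the behavior that lets a meta-analysis privilege its most reliable evidence.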

What did they find?

Following their analysis, the authors found that pretreatment brain-imaging features were more effective than clinical features at predicting patient responsiveness and treatment success. Specifically, resting-state fMRI demonstrated greater sensitivity to predictive variables and most accurately identified which patients were likely to benefit from particular treatments. The neuroimaging results revealed that key predictive brain regions were predominantly in the limbic system and default mode network, brain networks that are known to be involved in depression. Notably, alterations in various brain regions within the limbic network were associated with either antidepressant or ECT success, whereas brain regions within the default mode network were primarily linked to antidepressant efficacy.

What's the impact?

This study found that neuroimaging data can reliably predict which treatment options are most effective for patients with MDD, highlighting which imaging modalities and brain regions are best at estimating treatment success. This research could help clinicians accurately identify which patients are most likely to respond to specific treatments, allowing them to consider alternative options when necessary. Additionally, these findings could inspire further research into how neuroimaging might be used to predict treatment outcomes for other psychiatric conditions or diseases. 

Access the original scientific publication here.

Speaking With Your Mind: Restoring Speech in ALS

Post by Anastasia Sares

The takeaway

In this case study, scientists demonstrate a system that can take signals from electrodes implanted in the brain and turn them into speech that can be played through a speaker. In this way, they were able to restore speech capacity to a man who had lost the ability to speak due to amyotrophic lateral sclerosis (ALS).

What's the science?

Amyotrophic Lateral Sclerosis (ALS) is a debilitating disease where motor neurons gradually atrophy and die, leaving sufferers unable to move their bodies, though their brains continue to function normally. You may remember the “ice bucket challenge,” an ALS fundraiser that went viral on social media in 2014. Ten years later, the money raised from that challenge has done an enormous amount of good, advancing research and care, and new treatments have come to market that can slow the progression of the disease. However, ALS is still without a cure. In late-stage ALS, motor function deteriorates enough that people’s speech becomes extremely slow and distorted, which dramatically affects their quality of life.

This week in the New England Journal of Medicine, Card and colleagues published a case study of a man with advanced ALS who received brain implants that allow him to speak with the aid of a brain-computer interface.

How did they do it?

Electrode arrays have been implanted in brains before, often in patients with severe epilepsy who have to undergo brain surgery anyway in order to monitor and treat their condition. In these experiments, electrode arrays (chips with a bunch of tiny electrodes in a grid-like pattern) have been placed in various spots in the brain and scientists have been able to figure out which regions have activity that can be “decoded” to correctly predict speech. The best areas are around the ventral premotor cortex (see image).

In this study, the authors used what had been learned from previous research and chose four spots along this premotor strip to implant the electrodes in this patient. The signals from the electrodes were sent via a cable to a computer, where a neural network was used to match the brain activity with the most likely phoneme (a phoneme is a speech sound like “sh” or “a” or “ee”) that the man was trying to say. The string of phonemes was then sent to two separate language models: the first predicted possible words from the phonemes, and the second predicted possible phrases from the individual words. These models function in a similar way to the predictive text on your phone or in speech-to-text software. Finally, the predicted word sequence was turned into speech at the end of each sentence, using a synthesized voice created from the man’s own pre-ALS speech samples.
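The two-stage language-model idea can be illustrated with a toy sketch: a pronunciation lookup maps decoded phonemes to candidate words, and a phrase scorer picks the word sequence most likely in context. Everything below is invented for illustration (the tiny dictionary, the bigram scores, and both helpers); the study's actual neural network and language models are vastly larger.

```python
from itertools import product

# Hypothetical stage 1 lookup: phoneme sequences -> candidate words.
# Homophone-like entries create ambiguity for stage 2 to resolve.
PRONUNCIATIONS = {
    ("h", "ah", "l", "ow"): ["hello"],
    ("w", "er", "l", "d"): ["world", "whirled"],
}

# Hypothetical stage 2 scores: how plausible each word pair is in sequence.
BIGRAM_SCORES = {
    ("hello", "world"): 0.9,
    ("hello", "whirled"): 0.1,
}

def phonemes_to_words(phoneme_seq):
    """Stage 1: map one decoded phoneme sequence to its candidate words."""
    return PRONUNCIATIONS.get(tuple(phoneme_seq), [])

def best_phrase(word_candidates):
    """Stage 2: choose the word sequence with the highest phrase score."""
    best, best_score = None, float("-inf")
    for combo in product(*word_candidates):
        score = sum(BIGRAM_SCORES.get(pair, 0.0)
                    for pair in zip(combo, combo[1:]))
        if score > best_score:
            best, best_score = combo, score
    return list(best) if best else []

candidates = [phonemes_to_words(p) for p in
              [["h", "ah", "l", "ow"], ["w", "er", "l", "d"]]]
print(best_phrase(candidates))  # -> ['hello', 'world']
```

Even in this toy version, the phrase-level model resolves an ambiguity the phoneme level cannot: "world" and "whirled" sound identical, but only one is plausible after "hello", which mirrors how predictive text settles on likely words from sound-alike options.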

What did they find?

The authors evaluated the accuracy of the system in two ways. First, they prompted the man to think of certain words and phrases to see if the system could reliably reproduce the prompt. Second, they allowed the man to “speak” freely and then had him evaluate whether the system had faithfully produced what he wanted to say. Since the patient could not move, they had him do the evaluation using a rating screen with different bubbles (“100% correct,” “mostly correct,” and “incorrect”) and an eye-tracking system that could track which of the rating bubbles he looked at. The system started with an error rate of about 10%, which gradually fell to only 2.5% as the system was trained, with a vocabulary of 125,000 words: a substantial increase in performance compared to the few other studies of this kind. The patient’s speaking rate also increased from the 6 words per minute he could produce naturally to around 30 words per minute (the normal English speaking rate is close to 160 words per minute).
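Error rates in speech-decoding studies of this kind are typically computed as a word error rate: the minimum number of word insertions, deletions, and substitutions needed to turn the decoded output into the intended sentence, divided by the length of the intended sentence. The function below is a generic edit-distance illustration of that metric, not code from the paper.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance, normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown fox"))  # 0.0
print(word_error_rate("the quick brown fox", "the quack brown fox"))  # 0.25
```

On this scale, the system's improvement from roughly 10% to 2.5% means only about one word in forty ended up wrong, despite the 125,000-word vocabulary.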

What's the impact?

This study demonstrates how brain-computer interfaces are not only possible but can dramatically improve the quality of life for those who have lost normal functioning due to disease. As stated in the article, the first block of trials was excluded from the experiment because “the experience of using the system elicited tears of joy from the participant and his family as the words he was trying to say appeared correctly on screen.” Videos of the system can be accessed on the page of the original publication.