Long-Term Exercise Boosts Brain Waste Clearance

Post by Meagan Marks

The takeaway

Long-term physical exercise makes it easier for the brain to clear toxic waste, promoting healthy cognitive function and potentially slowing the progression of neurological diseases. 

What's the science?

As you sleep, your brain clears toxic waste products through a specialized network of channels called the glymphatic system. This system transports harmful substances, such as damaged proteins and metabolic byproducts, from the fluid between your brain cells into your cerebrospinal fluid, which surrounds and cushions the brain. From there, the waste flows into the meningeal lymphatic vessels and nodes, where it is eventually eliminated from the body through the lymphatic system.

When the glymphatic system gets disrupted or clogged, waste products accumulate, which can hinder brain function and potentially contribute to the progression of neurological diseases. Finding ways to boost this waste clearance process has been of keen interest, as enhancing it may help prevent neurological disease and promote healthy brain aging. 

This week in Nature Communications, Yoo and colleagues explore a potential technique to enhance brain waste clearance, showing that long-term exercise may boost the efficiency of the glymphatic and meningeal lymphatic drainage systems. 

How did they do it?

To investigate the impact of exercise on the brain’s waste clearance system, the authors enlisted 37 adult participants. Sixteen of these participants were instructed to engage in 30-minute cycling sessions three times a week for three months, with exercise intensity progressively increasing each week. The remaining participants completed a single 30-minute cycling session.

Before and after the exercise intervention, the authors collected blood samples and conducted MRI scans on all participants. The blood samples were analyzed for changes in protein expression, while the imaging protocol included advanced techniques such as intravenous contrast-enhanced dynamic T1 imaging and interslice flow imaging (to trace blood flow throughout the brain), and black blood imaging (to visualize blood vessel structure). The primary focus of the imaging was on the glymphatic channels within the putamen, a brain region essential for motor control and learning, as well as the meningeal lymphatic vessels.

What did they find?

The authors found that long-term exercise significantly boosted the flow in both glymphatic and meningeal lymphatic vessels, whereas short-term exercise did not result in changes. Additionally, the size of the meningeal lymphatic vessels increased with long-term exercise, indicating more efficient fluid circulation. These results suggest that consistent exercise may enhance glymphatic drainage, supporting more effective waste removal in the brain.

Furthermore, the study identified 15 differentially expressed proteins in the long-term exercise group. These proteins were primarily involved in inflammation and immunity, with proinflammatory proteins being downregulated and immune-boosting proteins being upregulated. This suggests that long-term exercise had both anti-inflammatory and immune-boosting effects, which may have played a role in the improvement in brain drainage.

What's the impact?

This study found that long-term exercise enhances the function of the brain's waste clearance system, which is essential for maintaining healthy brain function. These findings suggest that consistent exercise can be a valuable tool for preventing neurological diseases. However, since most participants in this study were on the younger and healthier side, it’s important to explore how exercise might influence the progression of disease in older individuals or those already affected by neurological conditions. 

Access the original scientific publication here.

How Do Brain Dynamics Affect Cognitive Performance?

Post by Meredith McCarty

The takeaway

The brain criticality hypothesis offers a unifying theory of brain function and dysfunction but has lacked thorough empirical support. This study provides evidence linking critical dynamics to cognitive function in humans with epilepsy.

What's the science?

Cognitive impairments, including learning, memory, and attention difficulties, are a feature of numerous disorders and can stem from many factors that are difficult to model in a unified framework. 

The brain criticality hypothesis is a theoretical framework that links brain structure to its dynamics, with potential for use in understanding brain function and dysfunction. Within this “critical dynamics” framework, optimal network dynamics occur at an equilibrium between order and disorder, where brain activity shows long-range temporal correlation (TC), meaning that activity remains correlated with itself over long time delays. Without direct access to measures of TC within brain networks, prior attempts to apply the critical dynamics framework to understand cognitive function and dysfunction have fallen short.

This week in PNAS, Müller and colleagues investigated the relationship between cognitive impairment and critical dynamics using extensive neural recordings in humans. 

How did they do it?

To effectively capture the neural dynamics central to this theoretical framework, the authors analyzed previously collected datasets from 104 people (47 female) with epilepsy who had electrodes implanted in their brains for presurgical evaluation. Unlike prior non-invasive studies in a similar vein, this dataset allowed the authors to analyze data directly recorded from neural tissue and quantify the temporal correlation (TC) of brain dynamics, a measure of how slowly correlations in the signal decay over time.
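
To make the idea of TC concrete, here is a minimal illustrative sketch in Python. It is not the authors' actual estimation pipeline; the synthetic signals, sampling rate, and 1/e cutoff are all assumptions made purely for illustration of how one might measure how slowly a recorded signal's autocorrelation decays:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a 1-D signal, for lags 0..max_lag."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    return acf[: max_lag + 1]

def tc_seconds(x, fs, max_lag_s=2.0):
    """Crude TC estimate: first lag (in seconds) where the autocorrelation drops below 1/e."""
    acf = autocorrelation(x, int(max_lag_s * fs))
    below = np.where(acf < 1.0 / np.e)[0]
    return below[0] / fs if below.size else max_lag_s

# Toy signals: an AR(1) process with a coefficient closer to 1 stays correlated
# with itself for longer, i.e. it has a longer TC.
rng = np.random.default_rng(0)
fs = 500  # hypothetical sampling rate (Hz)

def ar1(phi, n=50_000):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

print("TC, weakly persistent signal  :", tc_seconds(ar1(0.90), fs))
print("TC, strongly persistent signal:", tc_seconds(ar1(0.99), fs))
```

In this toy setting, the more persistent signal takes roughly ten times longer to decorrelate, which is the kind of difference a TC measure is designed to capture.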

They related TC in individual participants to measures of cognitive impairment, captured via cognitive testing of language, attention, working memory, and verbal learning. Additional factors analyzed included the dose of anti-seizure medication (ASM) given, the occurrence of interictal epileptiform discharges (IEDs), and the occurrence of slow-wave sleep (SWS) during each day of the participant’s recording.

To understand how these factors relate to overall network dynamics, the authors studied a neural network model consisting of 1024 neurons and simulated how ASM, IEDs, and SWS each significantly changed TC and other measures of network dynamics.
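
The details of the authors' 1024-neuron model are not reproduced here, but a toy stochastic branching network (an illustrative assumption, not their model) captures the general logic: pushing a network away from its critical point shortens the temporal correlations of its activity.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_branching(n_units=1024, sigma=1.0, drive=0.001, n_steps=20_000):
    """Toy stochastic branching network: each active unit triggers, on average,
    `sigma` units on the next step; `drive` is a weak external input."""
    active = rng.random(n_units) < 0.05
    activity = np.zeros(n_steps)
    for t in range(n_steps):
        activity[t] = active.sum()
        # probability a unit is activated on the next step (mean-field approximation)
        p_next = min(1.0, sigma * activity[t] / n_units + drive)
        active = rng.random(n_units) < p_next
    return activity

def decay_timescale(x):
    """Lag (in steps) at which the autocorrelation first drops below 1/e."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    below = np.where(acf < 1.0 / np.e)[0]
    return int(below[0]) if below.size else len(x)

# Near the critical point (sigma close to 1) population activity stays correlated
# for much longer than in a subcritical network (sigma well below 1).
print("TC near criticality:", decay_timescale(simulate_branching(sigma=0.99)))
print("TC subcritical     :", decay_timescale(simulate_branching(sigma=0.80)))
```

In this sketch, any perturbation that damps network excitability (a lower sigma) shortens TC, loosely mirroring how the simulated effects of ASM, IEDs, and SWS reduced TC in the authors' model.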

What did they find?

Analysis of the neural network model revealed optimal dynamics when TCs were maximized, and showed that simulating the network effects of SWS, IEDs, and ASM all reduced TCs, leading to impaired dynamics.

The authors found the same pattern in the neural data recorded from people with epilepsy as in the neural network model: TC decreased with increased SWS, IEDs, and ASM. Interestingly, IEDs reduced TC in a dose-dependent way, pointing to a potential mechanism by which epileptic activity may lead to cognitive impairments.

They found some evidence that tissue in the seizure onset zone (where seizures typically originate) was closer to criticality, with longer TCs, potentially linking imbalances in critical dynamics to seizure initiation. Further statistical analyses revealed that TC changes significantly predicted cognitive task performance, with decreased TC predicting impairments in attention, working memory, and language.

What's the impact?

This study found that TC predicts cognitive performance and is perturbed by slow-wave sleep, interictal epileptiform discharges, and anti-seizure medication in people with epilepsy. Further research into critical dynamics in neurotypical individuals and those with neuropsychiatric disorders will be key in developing a unifying framework by which to understand cognition and potential therapeutic targets in the human brain. 

Access the original scientific publication here.

Cracking the Serotonin Code: How Your Brain Predicts Future Rewards

Post by Rachel Sharp

The takeaway

Serotonin neurons signal a prospective code for value – a prediction of near-future rewards that explains why these neurons respond to both rewards and punishments. This unifying theory reconciles seemingly contradictory previous explanations and provides a framework for understanding serotonin's role in learning and behavior.

What's the science?

Serotonin has puzzled scientists for decades. This chemical messenger in the brain has been linked to everything from mood and sleep to learning and decision-making. Despite this, as well as its crucial role in mental health, scientists have struggled to figure out just how serotonin-producing neurons function and respond to stimuli.

The brain receives most of its serotonin from a cluster of cells deep in the brainstem called the dorsal raphe nucleus. Previous theories suggested that serotonin neurons might signal reward, surprise, salience (how significant something is), or even uncertainty, but none of these theories alone could explain all the observed patterns.

A new study published in Nature by Harkin and colleagues might have finally cracked the code. The study introduces a model of serotonin signaling that unifies these perspectives: a "prospective code for value" that predicts near-future rewards while efficiently compressing information. The model answers open questions, such as why serotonin neurons activate in response to both rewards and punishments, and why they respond more strongly to surprising rewards but show no such preference for surprising punishments.

How did they do it?

The researchers built a mathematical model of serotonin neuron activity based on reinforcement learning principles, where "value" represents an estimate of total future reward. Their key insight was that serotonin neurons don't just encode raw value; rather, they filter this signal through a process called spike-frequency adaptation, in which neurons gradually decrease their firing rate in response to sustained stimulation.

This filtering creates what's called a "prospective code" that emphasizes surprising value changes while also compressing slow, expected fluctuations in value. Overall, the proposed model predicts that serotonin neurons will:

  • Show temporary activation when a reward-predicting cue appears

  • Activate at the end of a punishment (when value increases as punishment ends)

  • Respond more strongly to surprising rewards than expected ones

  • Show similar responses to both expected and unexpected punishments
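
As a rough illustration of how adaptation can turn a sustained value signal into the transient responses listed above, here is a minimal Python sketch. The value traces, timing, and adaptation time constant are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Illustrative value traces (arbitrary units, one step per "millisecond").
T = 3000
value_cue = np.zeros(T)
value_cue[1000:] = 1.0            # at t=1000 a cue predicts reward, so estimated value steps up
value_punish = np.zeros(T)
value_punish[1000:2000] = -1.0    # punishment from t=1000 to t=2000; value rises back at its offset

def adaptation_filter(value, tau=300.0):
    """Crude spike-frequency adaptation: output tracks value minus a slowly
    adapting baseline, turning sustained value into transient responses."""
    baseline = 0.0
    out = np.zeros_like(value)
    for t, v in enumerate(value):
        out[t] = v - baseline
        baseline += (v - baseline) / tau  # the baseline slowly catches up with value
    return out

resp_cue = adaptation_filter(value_cue)
resp_punish = adaptation_filter(value_punish)

print("peak response to reward-predicting cue:", round(resp_cue.max(), 2))            # burst at cue onset
print("peak response at punishment offset    :", round(resp_punish[2000:].max(), 2))  # burst when punishment ends
```

In this toy, the step up in value at the cue and the rise in value at punishment offset both produce brief bursts, while sustained, expected value is compressed away by the adapting baseline, mirroring the first two predictions in the list above.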

To test their model, the researchers analyzed multiple datasets of serotonin neuron recordings from mice during experiments in which animals learned to associate cues with delayed rewards. Finally, they compared their model to competing theories that base the coding of serotonergic activity solely on reward, surprise, or salience.

What did they find?

The prospective value coding model successfully accounted for previously unexplained aspects of serotonin neuron activity:

Responses to rewards and punishments: The model explains why serotonin neurons increase their firing when either a reward begins or a punishment ends: both represent increases in value.

Context modulation: The model accounts for why baseline serotonin activity is higher in reward-rich environments and lower in punishment-rich environments: the activity reflects the average value of each context.

Surprise preference for rewards but not punishments: The model clarifies why serotonin neurons show stronger activation to unexpected rewards but similar responses to both expected and unexpected punishments, an asymmetry that previous theories have been unable to account for.

Finally, when tested against previous theories using quantitative analysis of real neural data, the prospective code for value outperformed competing models in predicting actual serotonin neuron activity.

What's the impact?

By providing a unified computational framework for serotonin function, this research reconciles competing theories and establishes a clear connection to critical aspects of reinforcement learning.

Understanding serotonin's role as a prospective code for value has implications for both basic neuroscience and clinical applications. Many psychiatric disorders involve disruptions in the serotonin system, including depression, anxiety, and obsessive-compulsive disorder. A clearer understanding of serotonin's computational role could lead to more targeted treatments and better explanations of how existing treatments work.

Beyond clinical relevance, this research illuminates a fundamental principle of neural processing: adaptation mechanisms. Rather than simply representing information directly, neurons transform signals in ways that make them more useful to downstream brain regions. This principle of efficient coding likely extends beyond the serotonin system to other modulatory systems throughout the brain, suggesting a common computational strategy across neural circuits.

Access the original scientific publication here.