Neurons Driving Sugar Consumption

Post by Lila Metko 

The takeaway

A population of neurons in the hypothalamus with a well-established function in satiety, the sensation of being full, may have another important role. This research suggests that pro-opiomelanocortin (POMC) neurons in the hypothalamus signal to another brain region to drive sugar consumption in states of fullness. 

What's the science?

There is a drive present in both humans and many animal species to consume high amounts of sugar even after a substantial meal. Understanding the neurobiological mechanism behind this drive could assist with the development of effective obesity therapeutics. It is well understood that the activation of POMC-expressing neurons in the hypothalamus promotes satiety in a fed state. However, POMC is also a precursor for the neuropeptide β-endorphin, which acts on a specific receptor, the mu opioid receptor, to stimulate appetite. This winter in Science, Minère and colleagues measure and manipulate activity in hypothalamic POMC neurons during both standard and high-sugar food consumption after a meal to investigate their role in the drive to consume sugar. 

How did they do it?

The authors first investigated which brain regions had high levels of both mu opioid receptors and POMC. They used fluorescence in-situ hybridization, a technique that labels the mRNA transcripts encoding a protein of interest, for the receptor, and immunohistochemistry, a detection technique for visualizing cellular components, for POMC. One region with both was the paraventricular nucleus of the thalamus (PVT), a brain region important for feeding and motivated behavior. They then optogenetically activated hypothalamic POMC neurons and recorded activity in the PVT under control and different receptor-blocker conditions to determine how POMC neurons affect PVT activity and which receptors may be involved. Next, they recorded activity in this circuit (hypothalamic POMC neurons to thalamic PVT neurons) during post-meal high-sugar food consumption or post-meal standard chow consumption to determine whether sweet foods specifically were associated with changes in circuit activity. Additionally, the researchers tested whether activation of the circuit under control and/or opioid-receptor-blocker conditions affected general flavor preference, to control for potential confounds of sweet taste and post-ingestive sugar sensing. Next, they tested whether circuit activation affected conditioned place preference, a preference test that is not associated with food consumption. They then investigated how chemogenetic inhibition of the circuit affected flavor preference (high-sugar food vs. standard chow). Next, they used fiber photometry to record circuit activity in response to high-sugar diet and high-fat diet cues, to determine the circuit's role in fed-state preferences for different macronutrients. Finally, they used functional magnetic resonance imaging (fMRI) to examine PVT activity in humans during sugar consumption, to see if a similar circuit may exist in humans.
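
The study's exact analysis pipeline is not described here, but fiber photometry recordings like these are typically summarized as a normalized fluorescence change (ΔF/F) aligned to behavioral events. Below is a minimal, illustrative sketch in Python; the function names, sampling rate, and window lengths are our assumptions, not the authors' code.

```python
import numpy as np

def delta_f_over_f(raw, fs, baseline_s=10.0):
    """Normalize raw fluorescence to a baseline taken from the first seconds."""
    f0 = np.percentile(raw[: int(baseline_s * fs)], 10)
    return (raw - f0) / f0

def event_triggered_average(dff, event_idx, fs, pre_s=5.0, post_s=10.0):
    """Average the dF/F signal aligned to event onsets (e.g., feeding bouts)."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    trials = [dff[i - pre : i + post] for i in event_idx
              if i - pre >= 0 and i + post <= len(dff)]
    return np.mean(trials, axis=0)

# Hypothetical usage: compare POMC-to-PVT terminal signals aligned to the
# onset of high-sugar vs. standard-chow consumption after a meal.
fs = 20.0                                    # sampling rate in Hz (assumed)
raw = np.random.default_rng(0).normal(1.0, 0.05, int(600 * fs))  # fake trace
bout_onsets = [2000, 4000, 6000]             # fake feeding-bout sample indices
eta = event_triggered_average(delta_f_over_f(raw, fs), bout_onsets, fs)
```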

What did they find?

Activation of POMC neurons decreased the firing rate of PVT neurons in the presence of blockers of other neuromodulator receptors, but not when the mu opioid receptor was blocked. This suggests that POMC neurons signal to the PVT via the mu opioid receptor and that this signaling is inhibitory. Post-meal consumption of high-sugar food increased the activity of POMC neuron terminals in the PVT, while post-meal consumption of standard chow did not, suggesting that a high-sugar diet specifically engages POMC neurons that project to the PVT. Activation of the circuit did affect general flavor preference, but not when mu opioid receptor blockers were present. However, the circuit's activation did not affect conditioned place preference, suggesting that the circuit is specific to dietary preference. Inhibition of the circuit altered how quickly mice began to show a preference for a high-sugar diet. Fiber photometry data showed that, while both increased circuit activity, high-sugar diet cues increased POMC-to-PVT activity more than high-fat diet cues did. Additionally, fMRI data showed that sugar consumption decreases activity in the human PVT, suggesting that a similar circuit may exist in humans. 

What's the impact?

This study found that hypothalamic POMC neurons projecting via opioid signaling to the PVT are involved in sugar consumption in fed states. Importantly, it sheds light on a brain circuit that may be involved in compulsive or binge eating. According to the World Health Organization, obesity is a global epidemic and a risk factor for many health conditions, such as diabetes mellitus, cardiovascular disease, and stroke. These findings could help researchers develop potential therapeutics for obesity. 

Access the original publication here 

Seemingly Benign Mini-Strokes May Have a Long-Term Impact on Memory

Post by Soumilee Chaudhuri

The takeaway

A transient ischemic attack (TIA), often called a "mini-stroke," is generally considered harmless because its symptoms, like slurred speech or weakness, resolve quickly. However, this recent study shows that even a single TIA can lead to long-term memory and thinking problems, similar to what happens after a full ischemic stroke.

What's the science?

A stroke happens when blood flow to the brain is blocked, causing brain damage. This can lead to lasting physical and cognitive problems. A TIA, on the other hand, often called a "mini-stroke," is characterized by temporary stroke-like symptoms caused by a brief interruption of blood flow to the brain. While its symptoms resolve quickly, prior research has hinted at potential long-term cognitive consequences. However, it's unclear whether these cognitive changes were directly caused by the TIA event, preexisting risk factors, or prior cognitive decline. Recently in JAMA Neurology, Del Bene et al. aimed to determine whether a single, diffusion-weighted image–negative TIA (a TIA without visible brain damage on imaging) was directly associated with cognitive decline over time, after accounting for vascular and demographic factors.

How did they do it?

This study analyzed data from the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study, which included over 30,000 participants across the United States. Researchers compared cognitive trajectories in three groups: 1) 356 people with a first-time TIA, 2) 965 people with a first-time stroke, and 3) 14,882 people with no history of stroke or TIA. Cognitive function was assessed using memory and verbal fluency tests every two years. The researchers used statistical models to compare cognitive changes before and after a TIA or stroke, with key adjustments for vascular and demographic risk factors such as age, sex, race, and preexisting conditions like hypertension and diabetes. Neuroimaging (magnetic resonance imaging, MRI) was used to confirm the absence of brain damage in TIA cases (diffusion-weighted image–negative).
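
The paper's models are more involved than this, but the core idea of comparing within-person cognitive slopes before and after an event, with covariate adjustment, can be sketched as a segmented linear mixed-effects model. The snippet below is an illustration on synthetic data; all variable names and the exact specification are assumptions, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic long-format data (hypothetical): one row per participant-visit.
n, visits = 200, 6
pid = np.repeat(np.arange(n), visits)                # participant ids
years = np.tile(np.arange(visits) * 2.0, n)          # assessments every 2 years
event_yr = rng.uniform(2, 8, n)[pid]                 # per-person event time
post = (years >= event_yr).astype(float)             # 1 after the TIA/stroke
years_post = np.clip(years - event_yr, 0, None)      # years since the event
age = rng.normal(64, 8, n)[pid]                      # baseline age covariate
score = (-0.02 * years - 0.10 * post - 0.03 * years_post
         - 0.01 * (age - 64) + rng.normal(0, 0.3, n * visits))
df = pd.DataFrame(dict(pid=pid, years=years, post=post,
                       years_post=years_post, age=age, score=score))

# Segmented mixed-effects model: 'post' captures the acute drop at the event,
# 'years_post' captures the change in annual slope after it.
fit = smf.mixedlm("score ~ years + post + years_post + age",
                  df, groups="pid", re_formula="~years").fit()
print(fit.summary())
```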

What did they find?

Before a stroke or TIA, people who later had a stroke already had slightly worse memory and thinking skills (cognitive composite score of -0.25) than those who had a TIA (-0.05) or no stroke at all (0). This suggests that some cognitive decline may already be underway before a stroke occurs. At the event itself, the stroke group's cognitive composite score dropped by 0.14 points, while the TIA group's score changed only slightly (0.01) and the control group, with no stroke or TIA, showed a small decline of 0.03. Importantly, the annual rate of cognitive decline afterwards was faster in the TIA group (-0.05 per year) than in the control group (-0.02 per year), and was similar to that of the stroke group (-0.04 per year). 
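
To put those slopes in context, here is a quick back-of-the-envelope extrapolation (our arithmetic for illustration, not an analysis from the paper):

```python
# Extrapolating the reported annual slopes over a decade (illustrative only).
slopes = {"TIA": -0.05, "stroke": -0.04, "control": -0.02}
for group, slope in slopes.items():
    print(f"{group:>7}: {slope * 10:+.2f} composite-score change over 10 years")
# The TIA group accrues roughly 0.3 more points of decline per decade than controls.
```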

Overall, stroke patients showed the largest immediate drop in cognitive function. TIA patients did not show an immediate decline but experienced a faster decline in cognitive function over time than the healthy control group. Surprisingly, the rate of cognitive decline in the TIA group was similar to that of stroke patients, despite the absence of visible brain damage on diffusion-weighted imaging.

What's the impact?

Even in the absence of immediate disability, a TIA appears to contribute to long-term cognitive impairment, suggesting that it may trigger subtle but lasting brain changes. The results of this study raise important questions about the need to add cognitive screening to the care plan for stroke and TIA patients, even when they seem to recover fully. Additionally, researchers still need to investigate how TIA events cause memory problems so that early interventions can be used to prevent subsequent decline in brain health in these patients.

Access the original scientific publication here

Detecting Brain Imaging Anomalies Using Generative AI

Post by Amanda Engstrom

The takeaway

Generative Artificial Intelligence (AI) has become a useful tool for synthesizing large brain imaging datasets and detecting pathological anomalies, but not without error. The introduction of metrics that evaluate how well models learn normative representations of healthy brain tissue can improve anomaly detection and diagnosis.

What's the science?

The advancement of medical imaging technologies has increased doctors' ability to diagnose a variety of diseases, but it has also created the challenge of integrating and analyzing large volumes of complex imaging data. To capture the complexity and rarity of human pathologies, generative AI has been harnessed for the automated detection of pathological anomalies. Normative representation learning aims to model the typical anatomy of the brain using large datasets from healthy humans. This week in Nature Communications, Bercea and colleagues introduce three novel metrics that evaluate normative representation in generative AI models, focusing on how well the models capture typical anatomy in healthy individuals, and test them against a variety of brain pathologies.

How did they do it?

The authors propose three metrics that evaluate the quality of the pseudo-healthy restorations produced by generative AI models (a simplified sketch of each follows the list). These metrics are: 

1) Restoration Quality Index (RQI), which evaluates the perceived quality of the synthesized images, 

2) Anomaly to Healthy Index (AHI), which measures how closely the distribution of restored pathological images matches a healthy reference set, and 

3) Healthy Conservation and Anomaly Correction Index (CACI), which measures how well the model can both maintain the integrity of healthy regions and correct anomalies in pathological areas.
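
The paper defines these metrics precisely; the sketch below only illustrates the underlying ideas with simplified, hypothetical proxies (PSNR for perceived quality, a Fréchet-style distance on feature statistics for distributional closeness, and masked error terms for conservation versus correction). None of these formulas are the authors'.

```python
import numpy as np
from scipy.linalg import sqrtm

def rqi_proxy(restored, reference):
    """RQI idea: perceived quality of restorations (crude PSNR stand-in),
    assuming images are scaled to [0, 1]."""
    mse = np.mean((restored - reference) ** 2)
    return 10 * np.log10(1.0 / (mse + 1e-12))

def ahi_proxy(restored_feats, healthy_feats):
    """AHI idea: distance of restored-image features (n_samples x n_dims)
    from a healthy reference distribution (Frechet-style distance)."""
    mu_r, mu_h = restored_feats.mean(0), healthy_feats.mean(0)
    cov_r = np.cov(restored_feats, rowvar=False)
    cov_h = np.cov(healthy_feats, rowvar=False)
    covmean = np.real(sqrtm(cov_r @ cov_h))
    return float(np.sum((mu_r - mu_h) ** 2)
                 + np.trace(cov_r + cov_h - 2 * covmean))

def caci_proxy(original, restored, anomaly_mask):
    """CACI idea: leave healthy tissue untouched (low change outside the
    mask) while correcting pathology (high change inside the mask)."""
    change = np.abs(restored - original)
    return change[anomaly_mask].mean() - change[~anomaly_mask].mean()
```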

The authors used these metrics to evaluate current generative AI frameworks, assessing each model's ability to learn normative representations and apply them to its restorations. Models were trained on over 500 healthy scans and evaluated on two datasets that encompassed a wide spectrum of brain pathologies. After ranking each model's performance on the normative learning metrics, the authors then examined the relationship between this ranking and standard anomaly detection metrics.
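
As a toy illustration of that last step, relating a normative-learning ranking to anomaly-detection performance could be done with a rank correlation; the numbers below are invented purely for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-model scores: normative-learning score vs. anomaly-
# detection AUROC for the same five models (invented values).
normative_score = np.array([0.91, 0.84, 0.77, 0.70, 0.62])
detection_auroc = np.array([0.88, 0.86, 0.74, 0.72, 0.65])
rho, p = spearmanr(normative_score, detection_auroc)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")  # high rho = rankings agree
```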

Finally, the authors performed a clinical evaluation of their metrics with 16 radiologists. Experts were shown 180 randomized images (30 pathology-free originals and 30 restorations from each of 5 different AI models) and asked to rate each image for ‘Realness’, ‘Image Quality’, and ‘Health Status’. These ratings helped assess the effectiveness of the new metrics as well as the clinical relevance of the learned representations.
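
One simple way to test whether radiologists rate AI-generated and real images differently is a nonparametric comparison of the two rating distributions; the sketch below uses invented ratings, not study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 'Realness' ratings (invented numbers for illustration).
real_ratings = np.array([5, 4, 4, 5, 3, 4, 5, 4])
generated_ratings = np.array([4, 4, 3, 5, 4, 4, 3, 4])
stat, p = mannwhitneyu(real_ratings, generated_ratings)
print(f"U={stat:.0f}, p={p:.3f}")  # a large p means no detectable difference
```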

What did they find?

After applying their three normative learning metrics, the authors found that each metric offers a unique perspective on model performance. Methods that simply replicate input images, like autoencoders, have a high RQI but score poorly on AHI and CACI. Conversely, models that remove anomalies, such as variational autoencoders or latent transfer models, have improved CACI but poor RQI because their output images are typically blurry. The AHI metric was the most challenging for all models. Guided restoration techniques using intelligent masking tend to achieve the highest overall scores. AI models for which all three metrics (RQI, AHI, and CACI) were collectively optimized demonstrated enhanced anomaly detection power, highlighting the importance of balancing all three metrics rather than relying on any one individually.

When the AI-generated images were clinically validated by radiologists, there was no significant difference between ratings of AI-generated and real images. Even real, non-pathological images showed variability in scoring, particularly in the health score, and real images scored only marginally higher in ‘Realness’. Models such as AutoDDPM and the RA method, which both scored within the top five for normative learning, received scores similar to the real images in ‘Realness’ and ‘Health’, respectively. Overall, the clinical validation showed that the proposed RQI, and to a lesser extent the AHI (CACI could not be evaluated in this study design), correlated well with clinical assessments.

What's the impact?

This study found that generative AI models that score highly on normative learning metrics are more proficient at detecting diverse brain pathologies. These metrics provide a framework for evaluating AI models with greater clinical relevance. Advanced AI medical imaging is a promising diagnostic tool that could help clinicians increase workflow efficiency and diagnostic accuracy, ultimately improving patient care. 

Access the original scientific publication here.