An Atlas of Microglia in Neurodegenerative Disease

Post by Laura Maile

The takeaway

Microglia, the immune cells of the brain, play important roles in both brain homeostasis and disease. Nineteen human datasets have now been integrated into a single human microglia atlas that characterizes microglia across multiple neurodegenerative diseases.

What's the science?

Microglia, the brain's resident immune cells, help maintain homeostasis and normal function of the CNS environment, including modulating synaptic connections between neurons. In cases of injury or infection, microglia convert to an activated state, in which they take on an amoeboid shape and work to return the brain to homeostasis. In neurological disease, however, they can become abnormally activated and contribute to pathology. Historically, activated microglia were divided into two categories: M1, a pro-inflammatory type, and M2, a neuroprotective type. Since this initial categorization, gene expression analysis has revealed a distinct class designated "disease-associated microglia" (DAM). DAM gene expression patterns, or signatures, have been commonly used to identify activated microglia in tissue responding to injury or other pathologies. The field has since recognized that these categories are too simplistic to capture the range of microglial states observed in disease, yet a comprehensive classification of microglia across different disease states has not been achieved. This week in Nature Communications, Martins-Ferreira and colleagues used 19 human datasets to create an atlas describing nine subpopulations of microglia in neurodegenerative disease.

How did they do it?

The authors integrated 19 single-cell RNA sequencing datasets generated from the brain tissue of patients with a variety of neurological conditions, including autism spectrum disorder (ASD), Alzheimer's disease, multiple sclerosis, epilepsy, Lewy body disease, and severe COVID-19. The integrated Human Microglia Atlas (HuMicA) comprises 90,716 cells from 241 patient samples. The authors performed cluster analysis to identify natural groupings of cells based on their gene expression, which revealed nine subpopulations. They then calculated the upregulated gene markers for each subpopulation and compared these markers with other available gene datasets that describe transcriptomic signatures of microglia populations. They identified specific patterns in each subpopulation and compared the prevalence of each subpopulation across pathologies to understand how microglial changes are associated with specific neurodegenerative diseases. Finally, the authors used the HuMicA to analyze differentially expressed genes (DEGs) between diseased and healthy populations, allowing them to detect specific patterns of gene expression associated with individual pathologies.
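The paper's exact pipeline isn't reproduced here, but a minimal sketch of this style of single-cell workflow in Python with Scanpy may help make the steps concrete. The file name, batch key, and clustering resolution below are illustrative assumptions, not the authors' settings, and Harmony is just one common choice of integration method:

```python
import scanpy as sc

# Load a combined AnnData object of microglia profiles (hypothetical file).
adata = sc.read_h5ad("microglia_combined.h5ad")

# Standard preprocessing: normalize library sizes, log-transform,
# and restrict to highly variable genes before dimensionality reduction.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)

# Integrate across the source datasets to remove batch effects
# (the "dataset" column name is an assumption).
sc.external.pp.harmony_integrate(adata, key="dataset")

# Build a neighbor graph on the integrated embedding and cluster.
sc.pp.neighbors(adata, use_rep="X_pca_harmony")
sc.tl.leiden(adata, resolution=0.5)

# Rank upregulated marker genes per cluster to characterize subpopulations.
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```

The per-cluster marker lists from the last step are what one would then compare against published homeostatic and DAM signatures to label each subpopulation.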

What did they find?

They identified three homeostatic clusters, representing relatively healthy, non-activated microglia. These clusters shared upregulated genes that typically identify homeostatic microglia, though each cluster also had its own signature of upregulated genes.

The DAM signature was broken down into four subpopulations, each with its own transcriptional pattern, involving pro-inflammatory pathways, phagocytosis, lipid metabolism, or leukocyte activation. In addition, a group of monocyte-derived microglia-like cells previously described in mice was shown here to be prevalent in the human brain as well, with increased expression of genes involved in cytokine production. Though all clusters were observed across all analyzed human samples and disease profiles, the authors discovered patterns of expansion or depletion of specific clusters associated with individual neurodegenerative disorders. For example, they found expansion of a subpopulation expressing genes involved in lipid metabolism in Alzheimer's disease and multiple sclerosis. After analyzing DEGs, the authors found some pathology-related patterns of gene expression that were shared across diseases and others that were more specific to an individual disease or group of diseases.

What's the impact?

This study was the first to create a comprehensive human microglia atlas identifying subpopulations of microglia associated with neurodegenerative disorders. With this atlas, the authors demonstrated that microglia are complex and exist in many different states in the diseased brain. These data will advance our understanding of microglia in neurodegenerative diseases and provide a useful tool for studying microglia in disease.

The Relationship Between Fluoride Exposure and Child IQ

Post by Lila Metko 

The takeaway

Researchers have yet to determine to what extent fluoride exposure could cause neurotoxic effects. The authors examined multiple studies that measured the relationship between prenatal and child fluoride exposure and child IQ scores. They reported an inverse association between fluoride exposure and child IQ, meaning that IQ went down as fluoride exposure levels went up. 

What's the science?

Fluoridated drinking water is estimated to be the largest single source of the average American's fluoride intake. In 2006, the National Research Council issued a report outlining the possible neurotoxic effects of high fluoride exposure from drinking water. Multiple meta-analyses in the past decade have suggested an inverse relationship between fluoride exposure and child IQ. This week in JAMA Pediatrics, Taylor and colleagues conducted a meta-analysis of 74 studies on this topic, including an assessment of each study's quality (risk of bias).

How did they do it?

The authors systematically searched eight large bibliographic databases, including PubMed, Scopus, and PsycINFO. The criterion for inclusion in the meta-analysis required that a study "estimated the association between exposure to fluoride…and a quantitative measure of children's intelligence." Each included study was evaluated with the OHAT risk of bias tool, an 11-question assessment developed by the National Toxicology Program, whose key questions evaluate how well individual studies address potential confounding, exposure characterization, and outcome assessment. The majority of included studies reported group averages, but 19 reported individual-level exposure, typically determined through fluoride content in drinking water or fluoride concentration in urine. The authors performed a mean effects meta-analysis and a regression slopes meta-analysis, evaluating group-level and individual-level fluoride exposures, respectively. A mean effects meta-analysis estimates standardized mean differences: a summary statistic expressing the difference in mean IQ between children living in high-fluoride areas and children living in low-fluoride areas, in standard deviation units. A regression slopes meta-analysis pools regression coefficients from individual studies to estimate the change in IQ per 1 mg/L increase in fluoride exposure. Some studies were excluded from these primary analyses because of factors such as a lack of reported mean IQ scores and overlapping populations.
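To make the pooling step concrete, here is a minimal sketch in Python of inverse-variance random-effects pooling using the DerSimonian-Laird estimator, a standard approach in meta-analysis. Whether the authors used this exact estimator is an assumption, and the slopes and standard errors below are invented for illustration:

```python
import numpy as np

def dersimonian_laird(y, se):
    """Random-effects pooling of per-study effects y with standard errors se."""
    w = 1.0 / se**2                        # inverse-variance (fixed-effect) weights
    y_fe = np.sum(w * y) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)        # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance (DL estimator)
    w_re = 1.0 / (se**2 + tau2)            # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled_se

# Hypothetical regression slopes: change in IQ points per 1 mg/L fluoride.
slopes = np.array([-1.2, -2.0, -0.8, -1.9])
ses    = np.array([0.5, 0.7, 0.4, 0.9])
est, se = dersimonian_laird(slopes, ses)
print(f"pooled slope: {est:.2f} IQ points per mg/L (SE {se:.2f})")
```

The random-effects weights inflate each study's variance by the between-study variance tau², so when studies genuinely disagree, no single large study dominates the pooled slope.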

What did they find?

The authors found an inverse relationship between fluoride exposure and IQ in both the mean effects and regression slopes meta-analyses. Findings were consistent across studies at high and low risk of bias. Associations remained inverse even when analyses were restricted to exposure groups with less than 4 mg/L and less than 2 mg/L of fluoride in drinking water. In the regression slopes meta-analysis, the authors found that for every 1 mg/L increase in urinary fluoride concentration, a child's IQ decreased by 1.63 points. While this study only assesses associations, it is notable that the inverse relationship between fluoride exposure and child IQ held across different study designs, methods of assessing fluoride exposure, and IQ assessments.

What's the impact?

This study is one of several in the past decade to find an inverse relationship between fluoride exposure and child IQ. This meta-analysis is notable because it used a rigorous and transparent process to identify all studies relevant to the specific research question, extract data from each study, and assess each of the 74 studies for risk of bias based on pre-specified criteria. Interestingly, associations remained inverse even at exposures below 4 mg/L and below 2 mg/L of fluoride in drinking water. For context, the EPA enforces a maximum of 4 mg/L of fluoride in drinking water and recommends that levels stay below 2 mg/L.

Human-AI Interactions Can Amplify Human Bias

Post by Meagan Marks

The takeaway

When AI systems are trained on biased human data, they can absorb and amplify those biases over time. When we interact with these biased systems, our own biases may be subliminally strengthened, affecting our perceptual, emotional, and social judgments.

What's the science?

Artificial intelligence (AI) is rapidly becoming more prevalent in the workplace, with its use expanding across fields like healthcare, marketing, and education. While AI offers numerous benefits, it is crucial to recognize its potential flaws in order to improve the technology and maximize its effectiveness. One such flaw is AI's ability to pick up and mimic human biases, which may in turn influence human perceptual, social, and emotional judgments over time. However, the exact ways in which human biases are introduced into AI systems, and how these biases then affect human judgment, both directly (when using AI as a tool) and indirectly (when passively encountering AI-generated content), have not been extensively studied. This week in Nature Human Behaviour, Glickman and Sharot explore how AI systems learn from human biases, how biased results can influence human judgment across different contexts, and how these human-AI interactions compare to human-human interactions.

How did they do it?

To test how AI systems influence human judgment, the authors conducted a series of experiments involving emotional, perceptual, and social tasks with a total of 1,401 participants. In the first series of tasks, participants were shown a group of 12 faces and asked whether, as a whole, they appeared more happy or sad (emotional judgment). An AI algorithm was then trained on the participants' trials to perform the same task. A new pool of participants then performed the same task; this time, however, each participant was shown an AI-generated judgment after submitting their initial judgment and was given the option to adjust their response (human-AI interaction). The same test was also conducted with human feedback for comparison (human-human interaction).

In a second series of tasks, participants were shown a group of dots on a screen and estimated the percentage of dots moving from left to right (perceptual judgment). Again, participants first performed this task on their own. The researchers then developed two algorithms to perform the task: one accurate and unbiased, the other biased. Participants then performed the task again, and after submitting their answers, some were shown the response of the accurate algorithm while others were shown the response of the biased one.

In a final series, the authors designed a set of tasks to mimic real-world encounters with AI and assess how they affect social judgments. Participants were first shown images of people of different races and genders and were asked who would be more likely to be a financial manager. Participants were then presented with real images generated by a popular, publicly available AI tool for 1.5 seconds each, a duration meant to reflect quick, everyday encounters, and were asked the same question again.

What did they find?

In the face-labeling series, participants initially showed a slight bias toward labeling faces as sad, but this bias gradually corrected itself over the course of the trials. However, when AI was trained on this slightly biased human data, it reflected and amplified the bias in its own responses. As participants evaluated their answers in collaboration with this biased AI system, they became more likely to adjust their responses to align with the AI's outputs, which, over time, increased their own bias. This amplification did not occur when participants were shown responses from other humans, indicating that participants' judgments were more influenced by the AI system than by human feedback. The AI label itself contributed to this effect: when researchers presented human responses labeled as AI-generated, participants were more likely to trust the response as correct. Conversely, when participants were told that AI responses came from humans, they still absorbed the bias, but to a lesser extent.
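This feedback loop can be caricatured in a few lines of code. The toy simulation below is an illustration, not the authors' model: it assumes humans label ambiguous face arrays "sad" slightly more than half the time, that an algorithm trained to minimize error on those noisy labels outputs the majority answer every time (and so is more biased than its training data), and that humans adopt some fraction of the algorithm's judgments on each round:

```python
import numpy as np

rng = np.random.default_rng(0)
human_bias = 0.03   # initial slight tendency to call ambiguous arrays "sad"
adopt_rate = 0.3    # assumed fraction of trials where a human defers to the AI

for round_num in range(6):
    p_sad = 0.5 + human_bias
    # One generation of noisy human labels on ambiguous stimuli.
    labels = rng.random(10_000) < p_sad
    # "Training": an error-minimizing classifier learns the majority answer
    # and then gives it deterministically, with none of the human noise.
    p_ai = 1.0 if labels.mean() > 0.5 else 0.0
    # Humans keep their own judgment on (1 - adopt_rate) of trials.
    p_next = (1 - adopt_rate) * p_sad + adopt_rate * p_ai
    human_bias = p_next - 0.5
    print(f"round {round_num}: human P(sad) = {p_next:.3f}")
```

The point is qualitative: a deterministic system trained on noisy, slightly skewed judgments can express that skew with full consistency, and repeated interaction compounds it, which is the amplification pattern the authors observed.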

In the moving-dot series, participants were initially unbiased but developed increasingly biased responses as they interacted with the biased AI algorithm. In contrast, participants' judgments became more accurate when working with the unbiased AI system. Notably, participants were reportedly unaware of the biased algorithm's influence on their judgment.

Finally, in the real-world task, the authors showed that even brief exposure to biased AI-generated images altered participants' social judgments.

What's the impact?

This study is the first to show that AI systems can reflect and amplify subtle human biases, ultimately influencing our judgments in perceptual, emotional, and social contexts. This is particularly concerning in high-stakes areas like medical diagnoses, hiring decisions, and widely seen advertisements. Greater awareness of AI’s potential to influence human judgment is needed, as is the development of measures to mitigate bias. 
