How Does the Endocannabinoid System Reduce Chronic Pain Following Injury?

Post by Lani Cupo

The takeaway

Neuropathic pain can result from injury or disease and has been related to disruptions in circadian rhythms. Evidence suggests a novel link between circadian rhythms, the endocannabinoid system, and neuropathic pain.

What's the science?

Prior evidence suggests that disruption of circadian rhythms can increase sensitivity to neuropathic pain; however, the role of the underlying genes and proteins that control circadian rhythms (clock genes) is still poorly understood. This week in PNAS Nexus, Yamakawa and colleagues use mouse models to investigate the role of clock genes in the development of neuropathic pain, finding a previously undocumented role for clock genes in neuropathic pain and a link with the endocannabinoid system.

How did they do it?

The authors performed a set of experiments in mouse models to investigate the role of a specific protein known as period2 (Per2), which is integral to regulating circadian rhythms, in the development of neuropathic pain. To induce neuropathic pain in the mice, the authors used a well-established model involving the ligation (or clamping) of part of the sciatic nerve in the hind limb of an animal, producing chronic pain that can be measured with tests for pain sensitivity. First, the authors performed the operation in control mice as well as in mice lacking the Per2 protein and examined whether mice without Per2 still developed hypersensitivity to pain. Additionally, they examined the quantity and form of glial cells following the injury.

To examine which receptors were involved in neuropathic pain, the authors injected a series of compounds that each blocked a specific receptor in turn and examined the pain response; if the pain response was absent when a certain receptor was blocked, they would know that receptor was key to hypersensitivity to pain. Next, the authors sought compounds whose production was controlled by binding of the identified receptors, as well as the cells that produced these compounds. Finally, they examined whether increasing expression of these receptors in mice with functioning Per2 protein reduced the neuropathic pain response.

What did they find?

First, the authors were surprised to find that in mice without Per2, there was no evidence of hypersensitivity to pain. They had expected that the Per2 protein was involved in fluctuations of pain sensitivity over the day; however, their results indicate that Per2 is actually involved in the development of pain sensitization in general. While pain hypersensitivity was absent in mice lacking Per2, the authors observed alterations in glial cells in mice both with and without Per2, suggesting that the lack of Per2 did not prevent these glial changes.

Next, the authors identified a specific type of adrenergic receptor (α1-AR) involved in the absence of pain hypersensitization in mice without Per2. This receptor belongs to the G-protein coupled receptor superfamily; when activated, these receptors act as messengers by triggering the production of other compounds in a cell. In this case, the authors found that in mice without Per2, levels of an endocannabinoid, 2-AG, were increased, with its production modulated by activation of α1-AR. Specifically, they found that Per2 alters the expression of these receptors and the levels of 2-AG produced by astrocytes in the spinal cord. In summary, disrupting circadian rhythms by removing the protein Per2 altered the expression of a specific receptor in spinal astrocytes, which in turn increased levels of the endocannabinoid 2-AG and reduced pain hypersensitivity.

What's the impact?

This study describes a new role for circadian clock proteins and the endocannabinoid system in the development of neuropathic pain. The results increase our understanding of how disruptions in sleep cycles may impact neuropathic pain and may, in time, lead to new forms of treatment.

Access the original scientific publication here.

The Shallow Brain Hypothesis

Post by Meredith McCarty

Is a neural network a good model of brain function? 

The brain is a complex physical system that enables the processing of sensory information, the formation of memories, and the guidance of behavior and cognition. To advance the field of machine learning, artificial neural networks were developed, inspired by our understanding of brain connectivity and function. These networks, typically trained and run on graphics processing units (GPUs, hardware originally developed for video games), are now used in scientific and technical applications including healthcare, scientific research, aerospace engineering, and artificial intelligence.

In neuroscience research, the design of neural networks that can capture aspects of how the brain processes information has incredible implications for theoretical and experimental understanding. However, whether contemporary neural network techniques adequately capture the complexity and structure of the brain is under debate.

The complex architecture of the brain

To understand the current debate over how best to design neural networks, we must first understand the basic architecture of the brain.

When sensory information (visual, auditory, taste, touch) travels from the peripheral nervous system into the central nervous system, these signals first arrive at subcortical regions and are relayed to a brain region called the thalamus. The thalamus is located deep within the brain, beneath the cortex, but exhibits rich connectivity with cortical and subcortical regions. Some thalamic regions receive and transmit information from subcortical sources (first-order), while others transmit information between cortical regions (higher-order). These higher-order thalamic-cortical dynamics are the subject of much current research, as these signals have been found to be involved not just in sensory processing but also in attention, arousal, consciousness, and many other cognitive functions.

Higher-order thalamic nuclei receive information from and transmit information to the cortex via complex connectivity patterns. Within the cortex, pyramidal neurons are unique: they are the most excitatory cells within a given cortical column, receiving information from numerous cortical and subcortical sources. There are many local recurrent connections within each cortical column, as well as long-range connections between distant cortical columns across the cortex. As such, the cortex is involved in both primary sensory processing and higher cognitive abilities, and it is strongly interconnected via pyramidal neurons that transmit information to distant cortical, thalamic, and subcortical regions.

While an overly simplistic summary, this connectivity between subcortical, thalamic, and cortical regions is an essential feature of neural dynamics. However, much remains to be understood about this complex, interconnected system.

Hierarchical deep learning neural network models

Early development of neural network models was based on observed connectivity patterns in the visual cortex. Researchers found evidence of hierarchical information processing, from lower to higher cortical areas. Feedforward neural network models are inspired by this architecture and are generally structured with information flowing from input layers, through hidden layers, to output layers. 
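To make this concrete, here is a minimal sketch of a feedforward pass (written in Python with NumPy; the layer sizes and random weights are arbitrary choices for illustration, not taken from any particular model). Information enters an input layer, passes through one hidden layer, and exits an output layer, with no feedback connections.

```python
import numpy as np

# Toy feedforward network: input -> hidden -> output, no learning yet.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) * 0.1   # input (16 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 2)) * 0.1    # hidden (8 units) -> output (2 units)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    """Information flows strictly forward: input -> hidden -> output."""
    hidden = relu(x @ W1)
    return hidden @ W2

x = rng.normal(size=(1, 16))          # one example "sensory" input vector
print(forward(x))                     # the network's output for that input
```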

Deep learning methods introduce “learning” into these networks through an algorithm known as backpropagation, which enables the model to fine-tune itself. This method requires the adjustment of weights throughout the network hierarchy, and there is some debate as to how such adjustments could be implemented at the rapid timescales present in the brain’s architecture. Contemporary neural network models often utilize recurrence, meaning that information flows bidirectionally, both forward and backward. There is a diversity of architectures in current neural network modeling, but much debate as to whether a primarily hierarchical network design is capable of capturing the computations occurring in the brain.
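The sketch below shows, in the same toy setting as above, a single backpropagation update (a standard gradient-descent step on a squared-error loss, not the brain's mechanism or any specific published model). The point to notice is that updating the first layer's weights requires an error signal to be passed backward through the second layer's weights, which is exactly the step whose biological implementation is debated.

```python
import numpy as np

# One backpropagation step on a two-layer network with squared-error loss.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 8)) * 0.1
W2 = rng.normal(size=(8, 2)) * 0.1
lr = 0.01                                  # learning rate

x = rng.normal(size=(1, 16))               # one input example
target = np.array([[1.0, 0.0]])            # desired output

# Forward pass
h = np.maximum(x @ W1, 0.0)                # ReLU hidden layer
y = h @ W2                                 # linear output layer

# Backward pass (chain rule)
dy = y - target                            # error at the output
dW2 = h.T @ dy                             # gradient for the output weights
dh = (dy @ W2.T) * (h > 0)                 # error passed back through W2
dW1 = x.T @ dh                             # gradient for the hidden weights

W2 -= lr * dW2                             # gradient-descent updates
W1 -= lr * dW1
print(float(np.sum(dy ** 2)))              # loss before the update
```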

What’s the Shallow Brain Hypothesis?

This potential discrepancy has led to the development of the Shallow Brain hypothesis. Its core claim is that including the thalamo-cortical and subcortical connectivity patterns of the brain (as opposed to relying on a primarily hierarchical network) is essential to model neural dynamics effectively. The primary tenet of this hypothesis is that “hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute.” In other words, the transmission of information from the deep regions of the brain directly to the outer cortex and vice versa, bypassing the hierarchical transmission of information through each layer, is very important to brain function.

The Shallow Brain hypothesis is built from the evidence that each cortical column is a highly complex computational unit specialized to process information through distinct recurrent architecture. Across the classical cortical hierarchy, these distributed cortical columns comprise a massive array of parallel recurrent networks. Through extensive thalamic-cortical and cortical-subcortical connections, these parallel recurrent networks are integrated with each other to enable flexible and rapid information processing in the brain. 
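As a purely hypothetical illustration of that idea (a toy sketch, not the published model), the code below runs several small recurrent “columns” in parallel on the same input and lets one shared hub, loosely standing in for a higher-order thalamic relay, integrate their outputs in a single step rather than through a long serial hierarchy.

```python
import numpy as np

rng = np.random.default_rng(2)

def column(x, W_rec, steps=3):
    """One recurrent 'cortical column': a few steps of local recurrence."""
    h = np.tanh(x)
    for _ in range(steps):
        h = np.tanh(x + W_rec @ h)
    return h

n_columns, width = 4, 8
x = rng.normal(size=width)                                   # shared input
W_recs = [rng.normal(size=(width, width)) * 0.2 for _ in range(n_columns)]
W_hub = rng.normal(size=(width, n_columns * width)) * 0.1    # "thalamic" hub

# All columns process the same input in parallel...
column_outputs = np.concatenate([column(x, W) for W in W_recs])
# ...and a single hub integrates them in one step, not via a deep serial stack.
integrated = np.tanh(W_hub @ column_outputs)
print(integrated)
```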

Proposed benefits of the Shallow Brain hypothesis include a more physiologically plausible mechanism for local learning, increased speed of information flow in a parallel rather than serial architecture, and the capture of complex representations and flexible integration of features in network models. The Shallow Brain hypothesis outlines many dimensions by which the Shallow Brain architecture can more accurately and realistically capture the dynamics of information processing in the brain. 

The Shallow Brain hypothesis raises many interesting questions, with implications for neuroscience and computational modeling research. 

  • Are neural networks with primarily cortico-centric designs and theoretical underpinnings missing essential features of information processing occurring in subcortical (i.e., deep) regions of the brain?

  • Are shallow architectures, as proposed in the Shallow Brain hypothesis, able to outperform other architectures in capturing neural dynamics?

  • Does the thalamus play an essential role in information processing, and does disruption of thalamic activity lead to deficits in learning and other cognitive faculties?

  • Finally, does the integration of parallel cortical processing occur at a cortical or a subcortical level?

The development of novel hypotheses of how neural networks should be designed has implications for both neuroscientific research and technological application alike.

References

Sherman, S.M. The thalamus is more than just a relay. Curr Opin Neurobiol. 2007.

Kumar, V.J., Beckmann, C.F., Scheffler, K., Grodd, W. Relay and higher-order thalamic nuclei show an intertwined functional association with cortical networks. Communications Biology. 2022.

LeCun, Y., Bengio, Y., Hinton, G. Deep learning. Nature. 2015.

Oldenburg, I.A., Hendricks, W.D., Handy, G., Shamardani, K., Bounds, H.A., Doiron, B., Adesnik, H. The logic of recurrent circuits in the primary visual cortex. Nature Neuroscience. 2024.

Voges, N., Lima, V., Hausmann, J., Brovelli, A., Battaglia, D. Decomposing neural circuit function into information processing primitives. Journal of Neuroscience. 2023.

Sherf, N., Shamir, M. Multiplexing rhythmic information by spike timing dependent plasticity. PLoS Computational Biology. 2020.

The Purpose of Sleep is to Restore our Brain to an Optimized State Called Criticality

Post by Trisha Vaidyanathan

The takeaway

The waking experience pushes the cerebral cortex away from “criticality”, a state of neural activity that is optimized for computation and cognition. The function of sleep is to restore the brain to criticality.

What's the science?

We spend about one third of our lives sleeping, but the purpose of sleep is debated. Broadly, we understand that sleep is “restorative”, but it is unclear how sleep contributes to brain computation and information processing. This week in Nature Neuroscience, Xu and colleagues provide new evidence to support a theory that one of the primary functions of sleep is to restore the brain to an optimized state called “criticality.” Criticality is a concept borrowed from physics describing a state in which a system of many interacting parts responds most effectively to inputs. It makes sense that our brain should operate at criticality so that it can quickly and effectively process new information – for example, if our brain receives new visual input caused by a tiger appearing, it should quickly and effectively transmit that information to brain regions that will drive us to run away.

How did they do it?

To measure how close the brain is to criticality, the authors performed continuous extracellular recordings of individual neurons in the visual cortex of rats. Criticality is characterized by neuronal avalanches, which are cascades of bursts of neuronal activity. By quantifying the statistics of these avalanches, the authors created a score, the “deviation from criticality coefficient” (DCC), that measured how close the cortex was to criticality at any given time. The higher the DCC score, the further the brain is from criticality. Because rats constantly switch between wake and sleep throughout the day, the authors could assess how the DCC score fluctuated with wake and sleep.
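The paper's exact analysis pipeline is not reproduced here, but the sketch below gives a rough sense of how a DCC-like score can be computed, assuming avalanches are defined as runs of consecutive time bins with nonzero population spiking and using simple log-log regression in place of the maximum-likelihood power-law fits typically used. In the criticality literature, the score is often taken as the gap between the avalanche size–duration scaling exponent measured directly and the exponent predicted from the fitted size and duration distributions.

```python
import numpy as np

def dcc_score(spike_counts):
    """Rough sketch of a deviation-from-criticality coefficient (DCC).

    spike_counts: 1D array of population spike counts per time bin.
    An avalanche is a run of consecutive nonzero bins; its size is the total
    spike count and its duration is the run length.
    """
    sizes, durations = [], []
    size, dur = 0, 0
    for c in list(spike_counts) + [0]:     # trailing 0 flushes the last run
        if c > 0:
            size += c
            dur += 1
        elif dur > 0:                      # an avalanche just ended
            sizes.append(size)
            durations.append(dur)
            size, dur = 0, 0
    sizes = np.array(sizes, dtype=float)
    durations = np.array(durations, dtype=float)

    def tail_exponent(x):
        """Estimate tau in P(x) ~ x**(-tau) from the log-log survival curve."""
        x_sorted = np.sort(x)
        surv = 1.0 - np.arange(len(x_sorted)) / len(x_sorted)
        slope, _ = np.polyfit(np.log(x_sorted), np.log(surv), 1)
        return 1.0 - slope                 # survival slope is -(tau - 1)

    tau = tail_exponent(sizes)             # size distribution exponent
    alpha = tail_exponent(durations)       # duration distribution exponent

    # Observed scaling of size with duration: size ~ duration**beta_obs.
    beta_obs, _ = np.polyfit(np.log(durations), np.log(sizes), 1)
    beta_pred = (alpha - 1.0) / (tau - 1.0)  # value expected at criticality
    return abs(beta_obs - beta_pred)         # larger = further from criticality

# Example with random, non-critical activity:
rng = np.random.default_rng(0)
print(dcc_score(rng.poisson(0.5, size=100_000)))
```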

Using the DCC score, the authors tested their theory that wakefulness pushes the brain away from criticality and sleep restores criticality. First, the authors asked how the DCC score changed during sleep and wake. Next, they asked whether the DCC score was predictive of future sleep/wake behavior and if the DCC score could be predicted by previous sleep/wake behavior. Lastly, they asked if the DCC score would change if the rats were forced to stay awake for longer periods of time.

What did they find?

First, the authors found that more time in a waking state correlated with higher deviation from criticality (i.e., higher DCC scores) and more time in sleep correlated with lower DCC scores, consistent with their theory. Interestingly, the effect was greater when the rats spent more time moving during wake and the effect was absent when the rats were awake but in the dark. This suggested that not all wake experiences are the same and that more stimulation during a waking state can result in a greater deviation from criticality.

The authors found that future sleep/wake behavior could be predicted using the DCC score. The DCC score was more predictive than other known regulators of sleep, like the time of day or prior amount of sleep. Further, the authors could predict the DCC score by using the sleep/wake behavior from the previous two hours, in support of the theory that sleep and wake drive changes in criticality.

When rats stayed awake for periods of 90 minutes, slightly longer than normal, the DCC score increased as predicted by their theory that wakefulness pushes the brain away from criticality. When the rats were allowed to sleep again, the DCC score went back down, demonstrating the restorative effect of sleep on criticality.

What's the impact?

This study addresses a big mystery in neuroscience: why do we sleep? The authors provide strong evidence that one of the primary functions of sleep at a systems level is to restore the brain to an optimal state described by the theory of criticality.