How Do Grid Cells Emerge from Neural Circuits?

Post by Andrew Vo

The takeaway

Artificial neural networks can be used to model and study the complex structure and function of brain circuits. Compared to traditional hand-designed models, trained networks fit neural responses more accurately and generalize better to new environments.

What's the science?

Grid cells are found in the entorhinal cortex of animals and humans, and their hexagonal firing patterns form spatial maps of the environment that are important for navigation. To better understand the biological and computational mechanisms underlying these grid-like representations, researchers have turned to artificial recurrent neural networks (RNNs). Existing models, however, are typically hand-tuned with parameters based on potentially biased assumptions. Consequently, it remains unclear whether the grid-like representations observed in hand-tuned models arise naturally and whether they generalize to other environments. This week in Neuron, Sorscher et al. demonstrate that trained RNNs can spontaneously give rise to hexagonal grid cells and account for real neural activity more accurately than hand-designed models.

How did they do it?

The authors built an RNN that modeled entorhinal neuron activity from simulated mice exploring an environment. Critically, they did not assume beforehand that grid-like representations would emerge in the output. Instead, they trained their network to path integrate (that is, to use cues from an animal's movement, such as head or body velocity, to compute and keep track of the animal's spatial location) and then examined whether grid cells would naturally appear in the trained RNN. They also tested whether their trained model could predict the firing patterns of entorhinal neurons in actual electrophysiological recordings from mice.
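To make the setup concrete, here is a minimal sketch of a path-integrating RNN of this general kind. It is an illustration under assumed choices (network sizes, a mean-squared-error loss, random stand-in data), not the authors' actual code: the network receives only velocity cues and is trained to report location through a place-cell-like readout, after which the spatial tuning of the hidden units can be inspected.

```python
import torch
import torch.nn as nn

class PathIntegratorRNN(nn.Module):
    def __init__(self, n_hidden=512, n_place_cells=256):
        super().__init__()
        # The only input is the simulated animal's 2D velocity at each time step.
        self.rnn = nn.RNN(input_size=2, hidden_size=n_hidden, batch_first=True)
        # Readout onto a population of simulated place cells that encode location.
        self.readout = nn.Linear(n_hidden, n_place_cells)

    def forward(self, velocity):
        # velocity: (batch, time, 2). The hidden units are the candidate
        # "entorhinal" population whose spatial tuning is examined after training.
        hidden, _ = self.rnn(velocity)
        return self.readout(hidden), hidden

# Schematic training step: make the readout match place-cell targets computed
# from the true trajectory (random tensors stand in for simulated trajectories).
model = PathIntegratorRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
velocity = torch.randn(32, 100, 2)        # movement cues over 100 time steps
place_targets = torch.rand(32, 100, 256)  # stand-in place-cell activations
predictions, hidden = model(velocity)
loss = nn.functional.mse_loss(predictions, place_targets)
loss.backward()
optimizer.step()
```

After training on many simulated trajectories, one would plot each hidden unit's average activity as a function of the animal's position and look for grid-like firing fields.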

What did they find?

The authors found that, like previous hand-designed models, their trained RNN achieved path integration and developed grid-like representations. Importantly, these representations emerged through learning during training rather than from hand-tuned parameters chosen in advance. They also found that small changes to the training procedure, namely constraining firing rates to be nonnegative and giving the inputs a center-surround structure (illustrated in the sketch below), led to the spontaneous emergence of hexagonal grid-like representations and improved the network's generalization to new environments beyond those seen in training. Finally, their trained model accounted for the firing patterns of actual entorhinal neurons more accurately than traditional hand-tuned models.
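As an illustration only (the parameter values and functional forms below are assumptions, not the paper's exact settings), the two ingredients can be written down in a few lines: a nonnegativity constraint on firing rates and center-surround, difference-of-Gaussians spatial tuning.

```python
import numpy as np

def nonnegative(rates):
    # Nonnegativity constraint: unit activations (firing rates) cannot go below zero.
    return np.maximum(rates, 0.0)

def center_surround(positions, centers, sigma_center=0.12, sigma_surround=0.24):
    # Difference-of-Gaussians tuning: a narrow excitatory center minus a broader
    # inhibitory surround, evaluated for every (position, tuning-center) pair.
    sq_dist = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    center = np.exp(-sq_dist / (2 * sigma_center ** 2))
    surround = np.exp(-sq_dist / (2 * sigma_surround ** 2))
    return center - 0.5 * surround

# Example: tuning of 3 simulated cells at 5 random locations in a 1 m x 1 m box.
rng = np.random.default_rng(0)
locations = rng.uniform(0.0, 1.0, size=(5, 2))
tuning_centers = rng.uniform(0.0, 1.0, size=(3, 2))
targets = center_surround(locations, tuning_centers)   # shape (5, 3)
rates = nonnegative(rng.standard_normal((5, 3)))       # rates clipped at zero
```

In the study's framing, it is constraints of this kind, rather than any hand-placed hexagonal structure, that push the trained network's units toward hexagonal firing fields.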

What's the impact?

The present study showed that a simple RNN trained to path integrate gives rise naturally to hexagonal grid cells. The model behaves in an unbiased manner, without the built-in assumptions that can confound traditional hand-tuned models. Such an approach to designing artificial neural networks gives us a way to test, and gain insight into, our conceptual understanding of complex neural circuits.

Access the original scientific publication here.