Applying Deep Learning to Extract Meaningful Information from Raw Neural Recordings
Post by Lina Teichmann
What's the science?
Neural recordings contain a vast amount of information but require substantial time and expertise to disentangle and interpret. Deep learning is a machine learning approach that can be used to interpret and decode the content of large datasets and to identify which elements of the data are informative. This week in eLife, Frey et al. show that a convolutional neural network (a type of deep learning model) requiring few assumptions can decode meaningful information from raw neural recordings. Training their network on neurophysiological data recorded from rodents and humans, the authors demonstrate that it can decode a variety of stimuli and behaviors from raw, unsorted neural recordings.
How did they do it?
The neural network takes as input neural data that has been decomposed into a three-dimensional representation of time, recording channels, and frequency. The model contains convolutional layers and fully connected layers that share their weights across channels and time, respectively, to reduce the computational load and improve the model's generalizability. The model was trained in a supervised fashion on rodent and human neural data. Electrophysiological recordings, two-photon calcium imaging, and electrocorticography (ECoG) were used to test whether the model could decode stimuli across different recording modalities, species, and brain areas. The model was trained to decode position information, auditory stimuli, and finger movements.
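To make this concrete, below is a minimal sketch of the kind of input pipeline and weight-sharing architecture described above, written in Python with PyTorch. The example data, layer sizes, and the use of a spectrogram in place of the paper's time-frequency decomposition are illustrative assumptions, not the authors' exact configuration (their own code accompanies the publication).

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

# Hypothetical example data: 8 recording channels, 2 s sampled at 1 kHz.
fs, n_channels, n_samples = 1000, 8, 2000
raw = np.random.randn(n_channels, n_samples)

# Time-frequency decomposition per channel (a spectrogram here is a
# simple stand-in for the paper's decomposition of the raw signal).
freqs, times, sxx = spectrogram(raw, fs=fs, nperseg=128, noverlap=96)
# sxx: (channels, freqs, time) -> (batch, channels, time, freqs)
x = torch.tensor(sxx, dtype=torch.float32).permute(0, 2, 1).unsqueeze(0)

class WidebandDecoder(nn.Module):
    """Convolutions share weights across recording channels by folding the
    channel axis into the batch; fully connected layers read out the target."""
    def __init__(self, n_channels, n_outputs):
        super().__init__()
        self.conv = nn.Sequential(            # identical filters per channel
            nn.Conv2d(1, 16, kernel_size=(7, 3), padding=(3, 1)),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(7, 3), padding=(3, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),     # pool over time and frequency
        )
        self.head = nn.Sequential(
            nn.Linear(32 * n_channels, 64),
            nn.ReLU(),
            nn.Linear(64, n_outputs),         # e.g. x/y position
        )

    def forward(self, x):                     # x: (batch, chan, time, freq)
        b, c, t, f = x.shape
        feats = self.conv(x.reshape(b * c, 1, t, f))
        return self.head(feats.reshape(b, -1))

model = WidebandDecoder(n_channels, n_outputs=2)
position_estimate = model(x)                  # trained with supervised labels
```

Folding the channel axis into the batch dimension is one simple way to apply identical convolutional filters to every recording channel, which keeps the convolutional parameter count independent of how many channels were recorded.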
What did they find?
The model proved successful at decoding information from raw neural recordings. First, the authors showed that the model could decode position information from electrophysiological recordings from the mouse hippocampus. The ability to decode different positional variables was driven by different features of the neural data: self-location decoding depended on pyramidal cells in CA1, movement speed decoding was driven by theta oscillations and interneurons, and head direction decoding was driven by CA1 interneurons. To show that the model works beyond hippocampal recordings and generalizes to other types of datasets, the authors demonstrated that it could decode sounds from two-photon calcium imaging of the auditory cortex. Finally, the model also succeeded at decoding finger movements from ECoG recordings from human subjects.
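Attributions like these come from asking how much decoding suffers when particular parts of the input are disrupted. The sketch below illustrates that logic with a generic permutation-importance scheme, shuffling one frequency band at a time and recording the increase in decoding error; this stands in for the influence measure used in the paper and reuses the hypothetical model and input tensor from the earlier sketch.

```python
import torch
import torch.nn.functional as F

def band_influence(model, x, target):
    """Shuffle each frequency band across time and measure the increase in
    decoding loss relative to the intact input (higher = more influential)."""
    model.eval()
    with torch.no_grad():
        baseline = F.mse_loss(model(x), target).item()
        scores = []
        for band in range(x.shape[-1]):        # last axis = frequency bands
            x_shuf = x.clone()
            perm = torch.randperm(x.shape[2])  # permute the time axis only
            x_shuf[..., band] = x_shuf[:, :, perm, band]
            scores.append(F.mse_loss(model(x_shuf), target).item() - baseline)
    return scores

# `model` and `x` are the hypothetical decoder and input tensor sketched
# above; the target here is a placeholder rather than real position labels.
true_position = torch.zeros(1, 2)
influence_per_band = band_influence(model, x, true_position)
```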
What's the impact?
Understanding how neural signals represent behavior is at the heart of neuroscience. This is often a challenging endeavor, as prior knowledge about the nature of these representations is required to process and analyze the data appropriately. The authors show here that deep learning can read out meaningful information from raw neural recordings, allowing new and unbiased insights into how stimuli are represented in the neural code.
Frey et al. Interpreting wide-band neural activity using convolutional neural networks. eLife (2021).