Testing Domain Selectivity in the Human Brain Using Artificial Neural Networks

Post by Lina Teichmann

What's the science?

Several brain areas within the human visual system have been shown to respond to some images more than others. For example, the fusiform face area (FFA) responds strongly to images of faces, while the parahippocampal place area (PPA) responds more strongly to scenes. A prominent idea is that certain parts of the brain’s cortex are domain-selective, specializing in different types of visual content. One challenge in testing category-selectivity is deciding how to define what counts as an image of a given category. Additionally, only a limited number of stimuli can be tested in each experiment, meaning that many potential images go untested. Thus, there is always a possibility that the “right” images for putting the idea of category-selectivity to the test have never been shown. This week in Nature Communications, Ratan Murty and colleagues address these challenges by showing that artificial neural networks can be used to predict the brain response in apparently category-selective areas.

How did they do it?

Four healthy participants viewed a variety of natural images while their brain activity was recorded with functional magnetic resonance imaging (fMRI). Using the neural responses to a subset of the images, the authors trained artificial neural networks to predict the neural responses to held-out images (not seen by participants) in the FFA, the PPA, and the extrastriate body area (EBA). In addition, they used data recorded from a subset of participants to predict the neural response in other participants. To put the model’s ability to predict neural responses into perspective, the authors asked experts in the field to predict the neural responses they would expect for the given images. They also screened millions of images to identify those predicted to evoke a strong response in FFA, PPA, and EBA, and used a specific type of deep learning model to synthesize new images predicted to evoke strong neural responses in these areas. Finally, the model was used to identify which image features drive the responses in each brain area.
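The core modeling step described above, predicting a brain region's fMRI response to an image and evaluating the prediction on held-out images, can be sketched as fitting a linear readout on image features. This is a minimal illustrative sketch, not the authors' actual pipeline: the paper uses features from a deep convolutional network, whereas here the features and responses are simulated with random numbers purely to show the train/held-out logic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 images, each represented by a 512-dim
# feature vector (in the paper these come from a pretrained CNN;
# random features are used here purely for illustration).
n_images, n_features = 1000, 512
X = rng.standard_normal((n_images, n_features))

# Simulated fMRI response of one region (e.g., FFA) as a noisy
# linear function of the features.
w_true = rng.standard_normal(n_features)
y = X @ w_true + 0.1 * rng.standard_normal(n_images)

# Split into training images and held-out images never used for fitting.
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

# Ridge-regression readout: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ y_train)

# Predict responses to the held-out images and score with Pearson
# correlation, a standard metric for encoding models.
y_pred = X_test @ w
r = np.corrcoef(y_test, y_pred)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```

Once such a model is fit, scoring a new image requires only a feature extraction and a dot product, which is what makes screening millions of candidate images computationally feasible.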

What did they find?

First, the authors demonstrated that the artificial neural network could predict neural responses in FFA, PPA, and EBA using only pixel-based information as input. The model even outperformed the predictions of experts in the field. Building on these findings, the authors showed that the network could screen a huge number of images and make image-based predictions about the response in each brain area. When assessing which images would evoke a strong response in FFA, PPA, and EBA, the authors found that images within the hypothesized preferred category (i.e., faces, scenes, and bodies, respectively) were predicted to elicit the strongest responses. Thus, the findings support the hypothesis of category-selectivity within areas of the cortex involved in vision.


What's the impact?

Overall, the authors have used artificial neural networks in an elegant way to enhance our understanding of human vision. The results lend further support to the domain-specificity hypothesis in the human brain, as several million images were predicted to align with category-selective responses in FFA, PPA, and EBA.


Ratan Murty et al. Computational models of category-selective brain regions enable high-throughput tests of selectivity. Nature Communications (2021). Access the original scientific publication here.

Applying Deep Learning to Extract Meaningful Information from Raw Neural Recordings

Post by Lina Teichmann

What's the science?

Neural recordings contain a vast amount of information but require a lot of time and expertise to disentangle and interpret. Deep learning is a machine learning method that can be used to interpret and decode the content of large datasets, and to identify which elements of the data are informative. This week in eLife, Frey et al. show that a convolutional neural network (a type of deep learning model) requiring few assumptions can decode meaningful information from raw neural recordings. Training their network on neurophysiological data recorded from rodents and humans, the authors demonstrate that the network can decode a variety of stimuli from the raw, unsorted neural recordings.

How did they do it?

The neural network takes neural data that has been decomposed into a three-dimensional representation of time, recording channels, and frequency as input. The model contains convolutional layers and fully connected layers that share their weights across channels and time, respectively, to reduce computational load and improve the generalizability of the model. The model was trained in a supervised fashion on rodent and human neural data. Electrophysiological recordings, two-photon calcium imaging, and electrocorticography (ECoG) were used to test whether the model can decode stimuli across different recording modalities, different species, and different brain areas. The model was trained to decode position information, auditory stimuli, and finger movements.
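The decomposition step described above, turning a raw multichannel recording into a 3D channels × time × frequency tensor, can be sketched as follows. This is a simplified stand-in: the paper uses a wavelet decomposition, whereas this sketch uses a short-time Fourier transform on simulated data, and all sizes (8 channels, 1000 Hz, 2 s) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw recording: 8 channels, 2 s sampled at 1000 Hz.
n_channels, fs, dur = 8, 1000, 2.0
raw = rng.standard_normal((n_channels, int(fs * dur)))

# Decompose each channel into a time-frequency representation with a
# windowed short-time Fourier transform (the paper uses a wavelet
# transform; an STFT serves as a simple illustration here).
win, hop = 256, 128
window = np.hanning(win)
n_frames = (raw.shape[1] - win) // hop + 1

frames = np.stack([raw[:, i * hop : i * hop + win] * window
                   for i in range(n_frames)], axis=1)   # (chan, time, win)
spec = np.abs(np.fft.rfft(frames, axis=-1))             # (chan, time, freq)

# The resulting 3D tensor (channels x time x frequency) is the kind of
# input the convolutional network consumes.
print(spec.shape)
```

Because the input is just a time-frequency tensor, the same front end works for spike-band electrophysiology, calcium imaging, or ECoG without modality-specific preprocessing such as spike sorting.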

What did they find?

The model proved successful at decoding information from the raw neural recordings. First, the authors showed that the model could decode position information from electrophysiological recordings from the mouse hippocampus. The ability to decode different positional factors was driven by different features of the neural data. For example, self-location decoding depended on pyramidal cells in CA1, while movement speed was driven by theta oscillations and interneurons, and head direction decoding was driven by CA1 interneurons. To show that the model works beyond hippocampal recordings and other types of datasets, the authors demonstrated that it could also decode sounds from two-photon calcium imaging of the auditory cortex. Finally, the model succeeded at decoding finger movements from ECoG recordings of human subjects.


What's the impact?

Understanding how neural signals represent behavior is at the heart of neuroscience. Oftentimes, this is a challenging endeavor, as prior knowledge about the nature of these representations is required to process and analyze the data accordingly. The authors show here that deep learning can be used to read out meaningful information from raw neural recordings, allowing new and unbiased insights into how stimuli are represented in the neural code.

Frey et al. Interpreting wide-band neural activity using convolutional neural networks. eLife (2021). Use these links to access the original scientific publication and the code.

Resting Brain Activity Predicts Who Responds to Cognitive Behavioral Therapy for OCD

What's the science?

Obsessive-compulsive disorder (OCD) affects 1-2% of the population and can substantially impair quality of life. Cognitive behavioral therapy (CBT) is a method of treatment that has been shown to be effective in some individuals, but not all. Currently, there is no way to predict who will benefit from CBT. Recently, functional MRI of individuals at rest has emerged as a promising tool for predicting treatment outcomes. This week in PNAS, Reggente and colleagues test whether resting brain activity patterns can predict treatment response.

How did they do it?

Adults with a diagnosis of OCD underwent resting-state functional MRI scans before and after 4 weeks of daily CBT. The authors analyzed the resting-state fMRI scans using a multivariate approach and machine learning to test whether patterns of resting-state activity before treatment could predict individual OCD symptom severity scores after treatment. Resting brain activity was extracted from 196 brain regions, and the activity patterns of all regions were correlated with one another. Multivariate analyses can capture multiple patterns of brain activity simultaneously and may therefore be better suited than univariate approaches for predicting individualized responses to treatment. OCD symptom severity was also assessed before and after the 4 weeks of treatment.
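The analysis described above, correlating 196 regions' resting-state signals with one another and using the resulting connectivity pattern to predict each individual's post-treatment score, can be sketched as follows. This is a hedged illustration: the region count matches the paper, but the data, the ridge readout, and all parameters are simulated stand-ins for the authors' actual multivariate pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 20 patients, 196 regions, 200 resting-state time
# points each (region count from the paper; everything else simulated).
n_subj, n_regions, n_time = 20, 196, 200
ts = rng.standard_normal((n_subj, n_regions, n_time))

# Feature vector per subject: the upper triangle of the 196x196
# region-by-region correlation (functional connectivity) matrix.
iu = np.triu_indices(n_regions, k=1)
feats = np.stack([np.corrcoef(ts[s])[iu] for s in range(n_subj)])

# Simulated post-treatment symptom severity scores.
severity = rng.standard_normal(n_subj)

# Leave-one-out prediction with a linear kernel-ridge readout, in the
# spirit of the paper's cross-validated multivariate analysis.
lam = 10.0
preds = np.empty(n_subj)
for s in range(n_subj):
    mask = np.arange(n_subj) != s
    X, y = feats[mask], severity[mask]
    # Kernel form: alpha = (XX' + lam*I)^-1 y, since features >> subjects.
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(n_subj - 1), y)
    preds[s] = feats[s] @ (X.T @ alpha)
print(preds.shape)
```

Holding each patient out of the model that predicts them is what allows the connectivity pattern, rather than group-level averages, to be credited with the prediction.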

What did they find?

OCD symptom severity scores improved after treatment in almost all participants. The authors found that pre-treatment resting-state patterns in two brain networks (the default mode network and the visual network) strongly predicted individual variability in post-treatment OCD symptom severity scores. The default mode network (active while an individual is at rest) accounted for 67% of the variation in post-treatment symptom severity scores, while the visual network accounted for 51%. Activity in these networks predicted post-treatment severity scores better than pre-treatment OCD severity did.

Brain by cronodon.com, Image by BrainPost


What's the impact?

Knowing who will respond to treatment is important, as CBT is time-consuming and expensive. This is the first study to report resting-state network patterns as a reliable predictor of individual response to CBT for obsessive-compulsive disorder. Individual resting-state patterns could reflect the plasticity or adaptability of brain networks to treatment. This study brings us one step closer to individualized treatment plans for complex disorders.


Reach out to study author Dr. Nicco Reggente on Twitter @mobiuscydonia

N. Reggente et al., Multivariate resting-state functional connectivity predicts response to cognitive behavioral therapy in obsessive–compulsive disorder. PNAS. (2018). Access the original scientific publication here.