Testing Domain Selectivity in the Human Brain Using Artificial Neural Networks
Post by Lina Teichmann
What's the science?
Several brain areas that are part of the human visual system have been shown to respond to some images more than others. For example, the fusiform face area (FFA) responds strongly to images of faces, while the parahippocampal place area (PPA) responds more strongly to scenes. A prominent idea is that certain parts of the brain's cortex are domain-selective, specializing in different types of visual content. One challenge in testing category-selectivity in the brain is deciding how to define what counts as an image of a given category. Additionally, only a limited number of stimuli can be tested in each experiment, meaning that many potential images go untested. Thus, there is always a possibility that we simply have not tested the "right" images. This week in Nature Communications, Ratan Murty and colleagues address these challenges by showing that artificial neural networks can be used to predict the brain response in putatively category-selective areas.
How did they do it?
Four healthy participants viewed a variety of natural images while their brain activity was recorded with functional magnetic resonance imaging (fMRI). Using the neural responses to a subset of the images, the authors built artificial neural network models that predict the neural responses to held-out images (images not used to fit the models) in the FFA, the PPA, and the extrastriate body area (EBA). In addition, they used data recorded from a subset of participants to predict the neural responses of other participants. To put the models' predictive ability into perspective, the authors asked experts in the field to predict the neural responses they would expect for the same images. The authors then screened millions of images to identify those predicted to evoke a strong response in FFA, PPA, and EBA, and also used a generative deep learning model to synthesize new images predicted to evoke strong neural responses in these areas. Finally, the models were used to identify the features in the images that drive the responses in each brain area.
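To make the encoding-model logic concrete, here is a minimal sketch, not the authors' actual pipeline: it assumes a generic pretrained CNN (ResNet-50) as the feature extractor and a cross-validated ridge regression mapping those features onto the average fMRI response of one region (e.g., FFA). Random tensors and a synthetic response stand in for the real images and fMRI data, and accuracy is assessed on held-out images.

```python
# Sketch of an ANN-based encoding model (illustrative assumptions throughout):
# pixels -> CNN features -> ridge regression -> predicted ROI response.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Fixed feature extractor: pretrained ResNet-50 with its classifier removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def cnn_features(images, batch_size=16):
    """images: (n, 3, 224, 224) tensor -> (n, 2048) array of CNN activations."""
    chunks = []
    with torch.no_grad():
        for i in range(0, len(images), batch_size):
            chunks.append(backbone(images[i:i + batch_size]).numpy())
    return np.concatenate(chunks)

# Stand-in data: 80 "training" and 20 "held-out" images, plus one response
# value per image (in the study these are measured fMRI responses in an ROI).
train_imgs = torch.rand(80, 3, 224, 224)
test_imgs = torch.rand(20, 3, 224, 224)
X_train, X_test = cnn_features(train_imgs), cnn_features(test_imgs)

w = rng.normal(size=X_train.shape[1])          # fake "ground-truth" mapping
y_train = X_train @ w + rng.normal(scale=5.0, size=len(X_train))
y_test = X_test @ w + rng.normal(scale=5.0, size=len(X_test))

# Fit the encoding model on the training images only...
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)

# ...and evaluate how well it predicts responses to held-out images.
r, _ = pearsonr(encoder.predict(X_test), y_test)
print(f"held-out prediction accuracy (Pearson r): {r:.2f}")
```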
What did they find?
First, the authors demonstrated that the artificial neural network models could predict neural responses in FFA, PPA, and EBA using only pixel-based information as input. The models even outperformed the predictions of experts in the field. Building on these findings, the authors showed that the models could be used to screen a huge number of images and make image-based predictions about the response in each brain area. When assessing which images were predicted to evoke the strongest responses in FFA, PPA, and EBA, the authors found that images from the hypothesized preferred category (i.e., faces, scenes, and bodies, respectively) came out on top. Thus, the findings support the hypothesis of category-selectivity within areas of the cortex involved in vision.
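The high-throughput screening step follows directly from such a model: every candidate image is scored by its predicted response and then ranked. The snippet below continues the sketch above (it reuses cnn_features and the fitted encoder, so it is not standalone); the candidate pool is again synthetic, whereas the study screened millions of natural images.

```python
# Score a pool of candidate images with the fitted encoding model and keep
# the ones predicted to drive the ROI (e.g., FFA) most strongly.
pool = torch.rand(500, 3, 224, 224)        # stand-in for millions of natural images
scores = encoder.predict(cnn_features(pool))
top_idx = np.argsort(scores)[::-1][:10]    # indices of the top 10 predicted images
print("images predicted to evoke the strongest response:", top_idx)
```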
What's the impact?
Overall, the authors have used artificial neural networks in an elegant way to enhance our understanding of human vision. The results lend further support to the domain-specificity hypothesis in the human brain: even after screening several million images, the images predicted to drive FFA, PPA, and EBA most strongly came from their hypothesized preferred categories.
Ratan Murty et al. Computational models of category-selective brain regions enable high-throughput tests of selectivity. Nature Communications (2021). Access the original scientific publication here.