A Neural Population Selective for Song in Human Auditory Cortex
Post by Andrew Vo
The takeaway
The human brain has regions that respond selectively to music compared with speech and other sounds. Combining brain recordings and imaging makes it possible to decode brain responses specific to different types and features of music, including song.
What's the science?
Music is an important part of society, culture, and the human experience. Research has demonstrated that our brains have areas that respond selectively to music compared with speech or other sounds. However, whether this brain response to music carries further information about different types or features of music has remained unknown. This week in Current Biology, Norman-Haignere et al. used a combination of intracranial brain recordings and functional imaging to identify neural subpopulations representing different types of music.
How did they do it?
The authors used intracranial recordings (electrocorticography, or ECoG) from 15 human patients as they listened to a set of 165 natural sounds (e.g., diverse music, speech, vocalizations, and ambient sounds). This recording method has the advantage of high temporal resolution, capturing brain responses to brief auditory stimuli. These data were then analyzed with a custom algorithm that decomposed their statistical structure into components representing different neural populations in the auditory cortex. Because ECoG electrodes cover only a limited portion of the auditory cortex, the authors related their initial findings to functional magnetic resonance imaging (fMRI) responses to the same set of sounds, collected from a separate group of 30 volunteers.
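To make the decomposition step concrete, here is a minimal sketch in Python using non-negative matrix factorization (NMF) as a stand-in for the authors' custom algorithm; the matrix shapes, the random data, and the choice of scikit-learn's NMF are all illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical data matrix: responses of 200 electrodes to 165 sounds,
# averaged over the response window (the study used time-resolved data).
rng = np.random.default_rng(0)
responses = rng.random((200, 165))  # random stand-in for real ECoG data

# Factor the matrix into a small set of components: each component has
# a weight on every electrode and a response profile across all sounds.
model = NMF(n_components=10, init="nndsvda", random_state=0)
electrode_weights = model.fit_transform(responses)  # shape (200, 10)
component_profiles = model.components_              # shape (10, 165)

# Each row of component_profiles is one candidate neural population's
# response to the 165 sounds, which can be inspected for selectivity.
print(electrode_weights.shape, component_profiles.shape)
```

The idea is that shared response structure across electrodes is summarized by a handful of components, each of which can then be interpreted as a putative neural population.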
What did they find?
The authors identified 10 reliable components (response patterns) in the ECoG recordings that were stable across participants. Two of these components responded selectively to speech, regardless of whether the speech was native or foreign to the listener. A different component responded strongly to music, both instrumental and sung, and more weakly to speech and other vocalizations. Finally, a single component responded selectively to music with singing (i.e., song). The fMRI data showed that these components were differentially distributed along the superior temporal gyrus in the auditory cortex. The responses selective for speech, music, and song could not be explained by non-specific acoustic features, as the identified components responded comparatively weakly to acoustically matched synthetic sounds.
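As a toy illustration of what "selective" means here, one could compare a component's average response across sound categories; the category labels, random data, and comparison below are hypothetical, not the authors' analysis:

```python
import numpy as np

# Hypothetical: one component's response to each of the 165 sounds,
# with an illustrative category label assigned to every sound.
rng = np.random.default_rng(1)
component_response = rng.random(165)
categories = rng.choice(["song", "instrumental", "speech", "other"], size=165)

# Mean response per category: a song-selective component would show a
# markedly higher mean for "song" than for every other category.
means = {c: component_response[categories == c].mean()
         for c in np.unique(categories)}
for category, mean_resp in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{category:>12}: {mean_resp:.3f}")
```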
What's the impact?
This study showed that the human brain not only represents music distinctly from speech and other sounds, but that this activity carries further information about different types of music, most notably song. The findings demonstrate how combining the temporal precision of ECoG with the spatial coverage of fMRI may allow finer decoding of music in the human brain.