Decoding Neural Activity of Imagined Speech

Post by Elisa Guma

The takeaway

The accurate detection of low-frequency neural activity, unique to imagined speech, may aid in the creation of brain-computer interfaces used to help individuals with speech production deficits communicate.

What's the science?

Using brain-computer interface technology to decode the neural features of overt or imagined speech may enable real-time communication for individuals with severe or complete loss of speech production. While progress has been made in decoding overt speech, decoding imagined speech has been more challenging because its neural signals are weaker and more variable. This week in Nature Communications, Proix and colleagues investigate the neural activity associated with overt and imagined speech production.

How did they do it?

Electrocorticographic (ECoG) recordings were acquired from individuals with refractory epilepsy who had been implanted with a subdural electrode array as part of the standard pre-surgical evaluation process. During the recordings, participants listened to or read words or syllables (e.g., ‘ba’, ‘da’, ‘ga’), after which they were instructed to either imagine hearing the word or syllable, imagine saying it, or repeat it out loud.

First, electrodes were localized on each patient’s pre-implant structural MRI so that each electrode could be associated with a specific brain region. The signal was then transformed so that power could be computed in each of four frequency bands, ranging from low to high: theta, low-beta, low-gamma, and broadband high-frequency activity. For each band, the authors investigated how power related to listened, overt, or imagined speech across specific brain regions. Finally, the authors aimed to decode overt and imagined speech by training a separate classifier for each binary discrimination between pairs of distinct words or syllables.
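The analysis pipeline described above, extracting band-limited power from a recording and then training a binary classifier on those features, can be sketched in simplified form. This is an illustrative reconstruction, not the authors' code: the band edges, the FFT-based power estimate, and the nearest-centroid classifier are all assumptions chosen to keep the example minimal.

```python
import numpy as np

# Hypothetical band edges (Hz); the paper's exact definitions may differ.
BANDS = {
    "theta": (4, 8),
    "low-beta": (12, 18),
    "low-gamma": (25, 35),
    "broadband-high-frequency": (70, 150),
}

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` (sampled at `fs` Hz) in [f_lo, f_hi)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

def features(signal, fs):
    """One power value per frequency band -> feature vector for a trial."""
    return np.array([band_power(signal, fs, lo, hi) for lo, hi in BANDS.values()])

class NearestCentroid:
    """Minimal binary classifier: assign each trial to the closest class mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance of every trial to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In this scheme, one such classifier would be trained per pair of words or syllables (e.g., ‘ba’ vs. ‘da’), with each ECoG trial reduced to a small vector of band powers before classification.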

What did they find?

The authors found that overt and imagined speech engaged a large part of the left-hemisphere language network, including sensory and motor regions, with more prominent involvement of the superior temporal gyrus for overt speech, potentially attributable to the auditory feedback from hearing oneself speak. The power spectrum differences between overt and imagined speech were sufficiently reliable to accurately classify which task a participant was engaged in. Broadband high-frequency activity was most informative for decoding overt speech. Neural activity at both low- and high-frequency power could be used to decode imagined speech with equivalent or even higher performance than overt speech. These data suggest that low-frequency power may be critical for decoding imagined speech, that the processes of decoding overt and imagined speech may be quite different, and that brain-computer interfaces trained on one type of speech production may not be applicable to the other.

What's the impact?

This study examined the neural activity associated with the production of overt or imagined speech and found crucial differences in their oscillatory patterns and neuroanatomical origins. Low-frequency power and cross-frequency dynamics may hold key information for decoding imagined speech. A better understanding of the neural activity underlying imagined speech may inform more accurate brain-machine interfaces, which could greatly benefit those suffering from severe speech production deficits.