News Release

Brain responses to sentence structure differ for speaking and listening

Peer-Reviewed Publication

Max Planck Institute for Psycholinguistics

Figures and captions


Figure 1: Timing of syntactic processing under different strategies. The colored circles mark the nodes of the syntactic structure that are built at the moment the word in the same color is uttered or heard. A: Colored representation of anticipatory top-down phrase-structure building, with nodes counted from the top of the syntactic tree down to the word. For example, at “He” the orange nodes S, NP and PRP are counted. B: Colored representation of integratory bottom-up phrase-structure building, with nodes counted from the bottom of the tree to the top. A node can only be counted once all of its branches have been completed. For example, at “He” the nodes NP and PRP are counted, while S still has an unaccounted-for branch. The unfolding of node counting over time is shown in the accompanying GIFs.
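To make the two counting schemes concrete, here is a minimal Python sketch, not the authors' code, that computes top-down and bottom-up node counts over a hypothetical parse of “He saw the bodies”; the tree is an illustrative assumption built around the words mentioned in the captions.

```python
# Minimal sketch of the two node-counting schemes in Figure 1 (not the
# authors' code). The parse of "He saw the bodies" is a hypothetical example.

# Trees are nested tuples: (label, child, ...); leaves are word strings.
TREE = ("S",
        ("NP", ("PRP", "He")),
        ("VP", ("VBD", "saw"),
               ("NP", ("DT", "the"), ("NNS", "bodies"))))

def spans(tree, start=0, out=None):
    """Record the (first, last) word index each node dominates.
    Returns (index of the next word, list of node spans)."""
    if out is None:
        out = []
    i = start
    for child in tree[1:]:
        if isinstance(child, str):      # reached a word
            i += 1
        else:                           # recurse into a subtree
            i, _ = spans(child, i, out)
    out.append((start, i - 1))
    return i, out

def node_counts(tree):
    n_words, node_spans = spans(tree)
    # Top-down: a node is counted at the first word it dominates.
    top_down = [sum(first == w for first, _ in node_spans) for w in range(n_words)]
    # Bottom-up: a node is counted only once all its branches are complete,
    # i.e. at the last word it dominates.
    bottom_up = [sum(last == w for _, last in node_spans) for w in range(n_words)]
    return top_down, bottom_up

td, bu = node_counts(TREE)
# words:  He  saw  the  bodies
# td  -> [3,   2,   2,   1]    (S, NP and PRP open at "He")
# bu  -> [2,   1,   1,   4]    (NNS, NP, VP and S close at "bodies")
```

The counts at “He” (3 top-down, 2 bottom-up) match the example in the caption.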

Figure 2: Graphical representation of the analysis procedure relating word-by-word predictors of syntactic processing to brain activity. A: Word-by-word predictors of syntactic complexity were extracted from the constituent structure of the sentence spoken by a participant and listened to by other participants (D). The height of the bars in A represents the number of phrase-structure building operations expected to take place at each word under the top-down and bottom-up strategies (e.g. at “so” 3 nodes are counted top-down, 2 bottom-up). The weights of the syntactic predictors were convolved with the haemodynamic response function (B) to obtain predictor timeseries of BOLD activity (the brain activity usually measured with fMRI) at 1.5-second resolution (C). These predictor timeseries were then compared to the average brain activity (F) of the speaker or the listener (D) in three regions of interest associated with syntactic processing (BA44, BA45 and the left posterior middle temporal gyrus (LpMTG); E).
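As a rough illustration of steps A to C, the sketch below turns hypothetical per-word node counts into a predicted BOLD timeseries by convolving them with a canonical double-gamma HRF and resampling at the scanner's 1.5-second interval. The word onsets, grid resolution and HRF parameters are assumptions for illustration, not the study's actual pipeline.

```python
# Rough sketch of steps A-C in Figure 2: per-word counts -> HRF-convolved
# predictor sampled every 1.5 s. Onsets, grid step and HRF parameters are
# illustrative assumptions, not the study's pipeline.
import numpy as np
from scipy.stats import gamma

TR = 1.5    # fMRI sampling interval (s), as in the caption
DT = 0.1    # fine grid step for convolution (assumption)

def double_gamma_hrf(t):
    """Canonical double-gamma haemodynamic response function."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak ~5 s, late undershoot

# Hypothetical word onsets (s) and bottom-up counts for "He saw the bodies".
onsets = np.array([0.0, 0.4, 0.7, 1.0])
counts = np.array([2.0, 1.0, 1.0, 4.0])

# A: place the counts as impulses on the fine time grid.
grid = np.zeros(int(20.0 / DT))
grid[np.round(onsets / DT).astype(int)] += counts

# B: convolve with the HRF; C: resample at the scanner's TR.
hrf = double_gamma_hrf(np.arange(0.0, 30.0, DT))
predictor = np.convolve(grid, hrf)[:len(grid)] * DT
predictor_at_tr = predictor[::round(TR / DT)]  # one value per fMRI volume
```

In the analysis the caption describes, such predictor timeseries (one per strategy) would then be related to the measured BOLD signal in each region of interest (E, F).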

Figure 3: Estimates of the effect of sentence onset, sentence offset, and the top-down and bottom-up syntactic strategies on brain activity in the three regions of interest. Error bars represent the standard error of the mean. In production, the top-down strategy is related to an increase in brain activity across all three regions, while the bottom-up strategy is associated with a decrease in brain activity (e.g. at “bodies” in the example in Fig. 1). In comprehension, by contrast, brain activity in the LpMTG decreases with higher top-down counts but increases with higher bottom-up counts (e.g. at “bodies”). BA45 (the pars triangularis of the left inferior frontal gyrus) is sensitive to sentence onsets and offsets, with increased activity at the end of sentences during listening.


Credit: Laura Giglio

How does the brain respond to sentence structure as we speak and listen? In a neuroimaging study published in PNAS, researchers from the Max Planck Institute for Psycholinguistics (MPI) and Radboud University in Nijmegen investigated sentence processing during spontaneous speech for the first time. During speaking, brain activity increased early in sentences, anticipating the building of structure. During listening, in contrast, brain activity increased at the end of phrases, reflecting the integration of sentence structure.

Both speaking and listening involve combining words into sentences according to grammatical rules. However, the precise timing of this ‘syntactic processing’ remains unclear. Sentence production is studied less often than sentence comprehension. Moreover, researchers usually study sentence production with complex tasks that are very different from speaking in natural situations.

“Syntactic processing allows us to combine words to create new meanings”, says senior researcher Peter Hagoort, director of the Donders Institute for Brain, Cognition and Behaviour. “We investigated brain responses to spontaneous speech, to better understand how the brain does this and how the process differs when we speak versus when we listen.”

Watching TV in a scanner

The researchers decided to compare brain responses to syntactic processing during spontaneous speaking and listening. Native English speakers watched an episode of the BBC series ‘Sherlock’ in an MRI scanner. Next, they were asked to recall what happened in their own words. Other participants then listened to one of the participants speaking about the episode (“So, they began with like a dream sequence of a shootout”). This enabled the team to compare brain activity during speaking and listening to the same sentences.

The team extracted the syntactic structure of each spoken sentence and modelled how many syntactic operations had to be performed at each word to build the sentence’s structure. They then asked which brain areas were sensitive to these syntactic operations during speaking and listening.

Early or late activation

During speaking, brain areas associated with syntactic processing showed increased activation early in sentences. This indicates that while we speak, we build sentence structure incrementally, word by word, in anticipation of what comes next. In contrast, during listening, brain activity increased towards the end of phrases—groups of words that function as grammatical units. To understand a sentence, listeners in this study tended to adopt a ‘wait-and-see’ approach, integrating all the available information into a structure.

“This study brings us closer to understanding the similarities and differences between speaking and listening and how these everyday functions are implemented in the brain”, says first author Laura Giglio. “It is feasible to study spontaneous speech and much can be learnt from it. Future research can take advantage of this study to better model brain responses to linguistic processing and to better describe the complex relationship between speaking and listening.”

