News Release

Audiovisual integration of speech falters under competing demands for attention

Peer-Reviewed Publication

Cell Press

In order to achieve a coherent and accurate representation of the environment, the brain binds together the inputs--for example, vision and audition--arriving from different senses. This process often occurs seamlessly and without any apparent effort or attention. However, researchers have now found that this integration of sensory signals can be disrupted when attention is diverted to a secondary task, suggesting that the integration, or "binding," does not occur as automatically as previously supposed. The new work is reported in the May 10 issue of Current Biology by Salvador Soto-Faraco and colleagues of the University of Barcelona and Ruth Campbell of University College London.

One classic example of how vision and audition come together is speech perception. Although we tend to think of speech as a purely auditory process, it is surprisingly sensitive to visual influences. This is quite evident when we try to follow a conversation in a noisy place: as listeners, we tend to look at the talker's lip movements, especially as we grow older and our hearing declines. Classic experiments revealed that if a heard "ba" syllable is dubbed onto a talker seen to be saying "ga," the observer often "hears" "da"--a sound that shares characteristics of both the heard and the seen speech but is different from either. This illusion, first reported by Harry McGurk in 1976, is so powerful that observers usually do not realize what has happened until they look away from the talker, at which point the illusion breaks down and the true auditory event ("ba") is heard.

The currently accepted view of the McGurk illusion and similar multisensory-integration phenomena is that so-called binding processes in the brain occur pre-attentively; that is, they occur automatically and unavoidably as long as the perceiver has access to both input channels. In the new study, the researchers tested this "automaticity" hypothesis directly by having observers perform a difficult, attention-demanding secondary perceptual task while viewing the McGurk illusion. The researchers found that under these conditions, the ability to integrate visual and auditory speech was severely reduced--even when the talker was clearly visible and audible, and regardless of whether the secondary task was visual or auditory.

This finding challenges previous claims that multisensory integration (and, therefore, its potential benefits in perception) occurs without attention. In practical terms, although audiovisual speech helps us follow conversations when the auditory input is degraded, we may not do this as effortlessly as was previously believed. The findings imply that some attentional resources are needed for cross-sensory binding to occur.

###

The members of the research team include Agnès Alsius, Jordi Navarra, and Salvador Soto-Faraco of Universitat de Barcelona, and Ruth Campbell of University College London. This research was supported by grants from the James McDonnell Foundation and the Ministerio de Ciencia y Tecnología, and by a Beca de Formació en la Recerca i la Docència fellowship from the Universitat de Barcelona to A.A.

Alsius, A., Navarra, J., Campbell, R., and Soto-Faraco, S. (2005). Audiovisual Integration of Speech Falters under High Attention Demands. Curr. Biol. 15, 839-843. http://www.current-biology.com
