News Release

Carnegie Mellon Cognitive Neuroscientist Investigates How The Human Brain “Sees”

Peer-Reviewed Publication

Carnegie Mellon University

PITTSBURGH--Visual scenes may contain multiple objects and people, and humans can recognize them all with ease and accuracy. But just how the brain gathers and makes sense of raw visual material remains something of a mystery.

Carnegie Mellon University's Marlene Behrmann and other cognitive neuroscientists are uncovering some clues about how our brain "sees." These advances are contributing significantly to the understanding of how information contained in complex visual scenes is relayed from the human retina to the areas of the brain that decipher visual information.

Research in Behrmann's lab focuses on the psychological and neural processes that underlie our ability to interpret visual scenes. These processes allow us to recognize objects, faces and words, and they enable us to know where these items appear so that we can reach out to pick them up, or move our eyes to inspect them further.

An associate professor in Carnegie Mellon's Psychology Department and a member of the Center for the Neural Basis of Cognition (http://www.cnbc.cmu.edu), Behrmann studies the behavior of human adults who have brain damage that has affected their visual systems. Through detailed examination of the behavior of these individuals, as well as using functional magnetic resonance imaging with non-impaired subjects, Behrmann attempts to address three major questions:

How are form and identity represented in our brain?

How is location, or spatial information, "coded" by the brain?

How are form and identity integrated with location to present the unitary visual experience that most of us enjoy?

Cognitive neuroscientists say it appears that the brain first divides visual information into what is in the scene (for example, form and identity) and where it is located. The brain then synthesizes this information in a way humans can understand.

Recent imaging studies confirm this. When subjects are asked to process an object's shape and identity, the temporal lobe is activated; when they are asked to make spatial judgments, such as where something is located in a scene, the parietal lobe is activated.

The paradox between this segregation of the visual pathways and our unified visual experience has intrigued brain researchers for many years. Behrmann says there is much more to discover about how the separate attributes are put back together.

Behrmann's research results are leading her to explore the following possibilities:

  • Certain regions of the "what" pathway, located in the temporal lobe of the brain, appear to be responsible for particular functions. For example, one subcomponent of the temporal pathway appears to be specialized for the recognition of faces. This same subcomponent may also process other complex stimuli in order to distinguish items that are visually very similar to each other, such as two different poodles or two similar-looking but different models of car.

  • Investigations of the parietal "where" pathway have yielded several important insights into how location is computed and represented in the brain. They suggest that the brain transforms location information into a general-purpose map. This common abstract map provides a means of integrating action across the different senses: it apparently allows us to locate a stimulus whether the cue to its location reaches the brain through sight, hearing or touch.

  • Although the temporal and parietal pathways do not obviously converge on a third, master area that combines their outputs, our visual experience is coherent. So it is possible that the two pathways mutually influence each other and the final output depends on their collaboration. Studies show that patients with parietal lesions have considerable information about the objects and words that are present in their visual field and that this "what" information influences their spatial "where" behavior. Some functional neuroimaging studies also show that the parietal and temporal areas can be simultaneously activated when both "what" and "where" information are needed to perform a particular task.

"The study of the visual brain sheds light on how we acquire information about the external world. And because we are talking about visual information, it is also a profoundly philosophical endeavor for cognitive neuroscientists like me who are challenged to understand how neural signals are converted into vivid, rich experiences," Behrmann said.

###


