Most of us are familiar with the idea of image compression in computers. File extensions like ".jpg" or ".png" signify that millions of pixel values have been compressed into a more efficient format, reducing file size by a factor of 10 or more with little or no apparent change in image quality. The full set of original pixel values would occupy too much space in computer memory and take too long to transmit across networks.
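For readers who want to see the scale of those savings in code, here is a minimal sketch. It assumes the Pillow imaging library and a hypothetical photograph saved as "photo.png", and simply compares the raw pixel data of the image with its size after JPEG encoding.

```python
# A rough sketch of the savings described above, assuming the Pillow imaging
# library and a hypothetical photo stored on disk as "photo.png".
from io import BytesIO
from PIL import Image

img = Image.open("photo.png").convert("RGB")   # hypothetical input file
raw_bytes = img.width * img.height * 3         # uncompressed: 3 bytes per RGB pixel

buf = BytesIO()
img.save(buf, format="JPEG", quality=85)       # a typical lossy JPEG setting
jpeg_bytes = buf.tell()

print(f"raw pixels: {raw_bytes:,} bytes")
print(f"jpeg file:  {jpeg_bytes:,} bytes")
print(f"compression factor: {raw_bytes / jpeg_bytes:.1f}x")
```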
The brain is faced with a similar problem. The images captured by light-sensitive cells in the retina are on the order of a megapixel, and the brain does not have the transmission or memory capacity to deal with a lifetime of megapixel images. Instead, it must select only the most vital information for understanding the visual world.
In today's online issue of Current Biology, a Johns Hopkins team led by neuroscientists Ed Connor and Kechen Zhang describes what appears to be the next step in understanding how the brain compresses visual information down to the essentials.
They found that cells in area "V4," a midlevel stage in the primate brain's object vision pathway, are highly selective for image regions containing acute curvature. Experiments by doctoral student Eric Carlson showed that V4 cells are very responsive to sharply curved or angled edges, and much less responsive to flat edges or shallow curves.
To understand how selectivity for acute curvature might help compress visual information, co-author Russell Rasquinha (now at the University of Toronto) created a computer model of hundreds of V4-like cells and trained them on thousands of natural object images. After training, each image evoked responses from a large proportion of the virtual V4 cells -- the opposite of a compressed format. And, somewhat surprisingly, these virtual V4 cells responded mostly to flat edges and shallow curves, just the opposite of what was observed in real V4 cells.
The results were quite different when the model was trained to limit the number of virtual V4 cells responding to each image. As this limit on responsive cells was tightened, the selectivity of the cells shifted from shallow to acute curvature. The tightest limit produced an eight-fold decrease in the number of cells responding to each image, comparable to the file-size reduction achieved by compressing photographs into the JPEG format. At this level, the computer model produced the same strong bias toward high curvature observed in the real V4 cells.
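The article does not spell out the authors' model, but the general technique it resembles, sparse coding of natural image patches, can be sketched in a few lines. The illustration below, assuming scikit-learn and one of its built-in sample photographs, learns a small dictionary of "model cells" from image patches and then encodes each patch while allowing only a limited number of cells to respond; the patch size, cell count, and limits are arbitrary illustrative choices, not the study's parameters.

```python
# A minimal sketch (not the authors' actual model) of a sparsity-limited
# visual code: learn a dictionary of "model cells" from patches of a sample
# photograph, then encode each patch while allowing only k cells to respond.
# A tight limit keeps most cells silent for any given input, yet the patches
# can still be reconstructed reasonably well.
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

img = load_sample_image("china.jpg").mean(axis=2) / 255.0     # grayscale natural image
patches = extract_patches_2d(img, (8, 8), max_patches=5000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=0)
X /= X.std(axis=0)                                            # standardize features

# Learn 64 V4-like "model cells" (dictionary atoms) from the patches.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dico.fit(X)

for k in (32, 4):   # loose vs. tight limit on how many cells may respond
    dico.set_params(transform_algorithm="omp", transform_n_nonzero_coefs=k)
    codes = dico.transform(X)                                 # at most k nonzeros per patch
    recon = codes @ dico.components_
    err = np.mean((X - recon) ** 2) / np.mean(X ** 2)
    print(f"limit={k:2d} active cells: relative reconstruction error {err:.2f}")

# The learned receptive fields live in dico.components_; in the study it was
# the tighter limits that shifted such features toward acute curvature.
```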
Why would focusing on acute curvature regions produce such savings? Because, as the group's analyses showed, high-curvature regions are relatively rare in natural objects, compared to flat and shallow curvature. Responding to rare features rather than common features is automatically economical.
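A back-of-the-envelope calculation makes the point concrete. On a rounded square (with dimensions chosen arbitrarily here for illustration), the sharply curved corner arcs account for only a small fraction of the outline, so a code that responds only to acute curvature stays silent over most of the boundary.

```python
# Illustrative arithmetic (not from the paper): how much of a rounded square's
# outline is sharply curved? Dimensions are arbitrary.
import math

side = 10.0          # length of each side (arbitrary units)
corner_radius = 0.5  # radius of the rounded corners

straight = 4 * (side - 2 * corner_radius)   # total flat-edge length
corners = 2 * math.pi * corner_radius       # four quarter-circle arcs combined
perimeter = straight + corners

print(f"fraction of outline with high curvature: {corners / perimeter:.1%}")
# -> about 8% for these dimensions; most of the boundary is flat
```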
Despite the fact that they are relatively rare, high-curvature regions are very useful for distinguishing and recognizing objects, said Connor, a professor in the Solomon H. Snyder Department of Neuroscience in the School of Medicine, and director of the Zanvyl Krieger Mind/Brain Institute.
"Psychological experiments have shown that subjects can still recognize line drawings of objects when flat edges are erased. But erasing angles and other regions of high curvature makes recognition difficult," he explained
Brain mechanisms such as the V4 coding scheme described by Connor and colleagues help explain why we are all visual geniuses.
"Computers can beat us at math and chess," said Connor, "but they can't match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world." This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around.
To learn more about the Mind/Brain Institute, go here: http://krieger.jhu.edu/mbi/.
Journal: Current Biology