image: A branching channel in the state space reconciles stability and sensitivity, providing a building block for tree-structured representations in neural systems.
Credit: Muyuan Xu, UTokyo
Tree structures have been widely used to model intelligent behavior, such as reasoning, problem-solving, and language processing. However, whether our brain uses tree-structured representations is still controversial. In particular, building such representations in conventional neural networks remains a key challenge.
In a recent paper in the Proceedings of the National Academy of Sciences (https://doi.org/10.1073/pnas.2409487121), researchers from IRCN, The University of Tokyo, and Tohoku University provided new insights into this problem by examining how the brain actively maintains and updates information. The researchers trained monkeys to perform a group reversal task, which required them to hold an associative rule in mind and make a binary choice by combining that rule with information from a sensory cue. Neural activity in the prefrontal cortex (PFC) was recorded while the monkeys performed the task.
Consistent with previous studies, the researchers found that the rule was likely maintained by a stable state in the neural dynamics, where stability means that, when perturbed, the state automatically restores itself, much like a ball at the bottom of a bowl. When the sensory cue was presented to the monkeys, it triggered transient neural activity during which information from the cue was integrated into (“added to”) the state of the PFC. Notably, there was a 70 ms delay between the onset of this transient activity and the onset of integration. This delay cannot be explained by slow synaptic transmission.
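What “stable” means here can be illustrated with a toy one-dimensional dynamical system (purely illustrative, not the authors' model): a state with a stable fixed point relaxes back after a perturbation, just as a displaced ball rolls back to the bottom of a bowl.

```python
import numpy as np

# Toy illustration (not the authors' model): the system
#   dx/dt = -k * (x - x_star)
# has a stable fixed point at x_star. A perturbation decays away,
# like a ball returning to the bottom of a bowl.
k, x_star, dt = 1.0, 0.0, 0.01
x = x_star + 1.0              # perturb the state away from the fixed point
for _ in range(500):          # simulate 5 "seconds"
    x += dt * (-k * (x - x_star))

print(f"distance from fixed point after perturbation: {abs(x - x_star):.4f}")
# -> close to 0: the state has restored itself
```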
“To understand this delay, we note that maintaining information and updating information involve fundamentally conflicting demands,” says Muyuan Xu, the lead author of the study. He explains that while maintaining information requires stability, integrating new information requires sensitivity to the inputs that carry it (i.e., instability). The delay between the stable state that maintains the rule and the start of integration may therefore reflect a destabilization of the PFC in preparation for integration.
To test this hypothesis, the researchers trained a recurrent neural network (RNN) to perform a task analogous to the one performed by the monkeys, incorporating the delay as a strategy for integrating the information conveyed by a transient input. After training, they reverse-engineered the network to analyze its behavior. Indeed, they found that the network was rapidly destabilized during the delay and became sensitive to the input within a temporal window that coincided with the input's arrival.
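The press release does not spell out the reverse-engineering procedure. One standard way to ask whether an RNN's state is locally stable or sensitive is to linearize its update rule and inspect the eigenvalues of the Jacobian; the sketch below is a hypothetical illustration of that idea (random weights standing in for a trained network), not the authors' code.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): a vanilla RNN
#   h_{t+1} = tanh(W @ h_t + B @ u_t)
# and a local stability check via the Jacobian d h_{t+1} / d h_t.
rng = np.random.default_rng(0)
n = 50
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # recurrent weights (random here, trained in practice)
B = rng.normal(scale=0.1, size=(n, 2))               # input weights

def step(h, u):
    return np.tanh(W @ h + B @ u)

def jacobian(h, u):
    # For h_next = tanh(a), a = W h + B u, the Jacobian wrt h is diag(1 - tanh(a)^2) @ W
    a = W @ h + B @ u
    return np.diag(1.0 - np.tanh(a) ** 2) @ W

h = rng.normal(size=n) * 0.1
u = np.zeros(2)
h = step(h, u)                                       # advance one step from a random state
eigvals = np.linalg.eigvals(jacobian(h, u))
print("largest |eigenvalue|:", np.abs(eigvals).max())
# All |eigenvalues| < 1  -> perturbations shrink (locally stable state).
# An eigenvalue crossing 1 along one direction -> the state becomes
# sensitive to inputs aligned with that direction.
```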
These results reveal a previously unknown dynamic neural representation in the network’s state space, which the researchers call a “branching channel”. The network’s state “flows” along this channel and, far from the branching point, is stable in the sense that, when perturbed, it automatically returns to the original trajectory. Near the branching point, however, the state becomes unstable along one particular dimension, so any new input along that dimension is summed efficiently, driving the state into one of the branches. The researchers also suggested a mechanism by which such channels could be linked in series, giving rise to tree-structured representations.
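The geometry of a branching channel can be caricatured with a two-dimensional toy flow (illustrative only, not the paper's trained network): the state drifts along a channel coordinate, the transverse direction is attracting far from the branching point, and it becomes unstable near it, so a small input along that direction decides which branch is taken.

```python
import numpy as np

# Toy "branching channel" (illustrative only, not the paper's network).
# s: position along the channel, drifting forward at constant speed.
# x: transverse direction, whose stability flips at the branching point s_b:
#   dx/dt = (s - s_b) * x - x**3
# Before s_b the channel is attracting (x -> 0); after s_b, x = 0 is unstable
# and the state falls into one of two branches (x = +/- sqrt(s - s_b)).
def run(input_sign, s_b=1.0, dt=0.01, T=4.0):
    s, x = 0.0, 0.0
    for _ in np.arange(0.0, T, dt):
        pulse = 0.05 * input_sign if abs(s - s_b) < 0.1 else 0.0  # brief input near the branch point
        x += dt * ((s - s_b) * x - x**3 + pulse)
        s += dt * 1.0                                             # constant drift along the channel
    return x

print("branch chosen with +input:", np.sign(run(+1)))   # -> +1.0
print("branch chosen with -input:", np.sign(run(-1)))   # -> -1.0
```

In this caricature the same tiny input has no lasting effect far from the branching point but fully determines the outcome near it, matching the combination of stability and input sensitivity described above.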
This work provides a general framework for understanding cognitive control. More broadly, the question of whether our brain explicitly uses structured representations is central to an ongoing debate between the symbolic and connectionist paradigms of artificial intelligence (AI). The new work suggests that neural networks can not only use structured, rule-like representations but can also learn them. These findings thus open the possibility of reconciling the two AI paradigms and contribute to building higher cognitive functions into neural network models.
###
The article, “Dynamic tuning of neural stability for cognitive control,” was published in the Proceedings of the National Academy of Sciences (PNAS) at DOI: 10.1073/pnas.2409487121.
International Research Center for Neurointelligence (IRCN), The University of Tokyo
The IRCN was established at the University of Tokyo in 2017 as a research center under the WPI program to tackle the ultimate question, “How does human intelligence arise?” The IRCN aims to (1) elucidate fundamental principles of neural circuit maturation, (2) understand the emergence of psychiatric disorders underlying impaired human intelligence, and (3) drive the development of next-generation artificial intelligence based on these principles and on the function of multimodal neuronal connections in the brain.
Find out more at: https://ircn.jp/en/
About the World Premier International Research Center Initiative (WPI)
The WPI program was launched in 2007 by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) to foster globally visible research centers boasting the highest standards and outstanding research environments. Operating at institutions throughout Japan, the 18 centers selected under the program are given a high degree of autonomy, allowing them to engage in innovative modes of management and research. The program is administered by the Japan Society for the Promotion of Science (JSPS).
See the latest research news from the centers at the WPI News Portal: https://www.eurekalert.org/newsportal/WPI
Main WPI program site: www.jsps.go.jp/english/e-toplevel
Journal
Proceedings of the National Academy of Sciences
Method of Research
Computational simulation/modeling
Subject of Research
Animals
Article Title
Dynamic tuning of neural stability for cognitive control
Article Publication Date
25-Nov-2024