News Release

Multimodal fusion of brain imaging data: methods and applications

Peer-Reviewed Publication

Beijing Zhongke Journal Publishing Co. Ltd.

Four interrelated topics covered in this review


Credit: Beijing Zhongke Journal Publishing Co. Ltd.

Neuroimaging provides a means of identifying and measuring the structure and function of the brain. Different non-invasive imaging measurements reveal different characteristics of the nervous system, e.g., architecture, activation, or structural and functional connectivity. Magnetic resonance imaging (MRI) is one of the most important neuroimaging technologies and is widely used in neuroscience research and clinical settings. Structural MRI (sMRI) provides information about the tissue types of the brain. Functional MRI (fMRI) dynamically measures the hemodynamic response related to neural activity. Diffusion-weighted imaging (DWI) additionally provides information on structural connectivity among brain regions. Typically, these data are analyzed separately, in a single-modality fashion. More recently, however, collecting multiple types of brain data from the same individual using various imaging techniques has become common practice. Compared to a single modality, the fusion of multiple modalities, which may capture cross-modal (both shared and complementary) information, is envisioned to provide more insight into the underlying problem.

 

The common goal of data fusion is to extract, as fully as possible, both the joint information shared among modalities and the modality-specific complementary information. The past decades have witnessed significant improvements in learning-based fusion methods. Based on whether labels are used to guide the learning process, existing fusion technologies can be subdivided into unsupervised and supervised learning methods. For supervised learning, the objective is straightforward and consistent: learn the mapping between inputs and labels, optimizing the joint representations by reducing the difference between predicted and true labels. Unsupervised learning strategies can be further subdivided into three categories according to their objective functions: correlation-based fusion, multi-view clustering, and data reconstruction. Advanced methods in each category are systematically reviewed later. Conventional fusion methods commonly emphasize maximally exploiting the shared representation, whereas the modality-specific complementary information is often underutilized. Therefore, variants of those methods aimed at exploring the complementary information are discussed along the way.
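The supervised route described above can be sketched in a few lines. The following is a minimal illustrative example, not taken from the review: two synthetic "modalities" are concatenated into a joint representation and a classifier learns the mapping to labels. All data, dimensions, and the simple concatenation strategy are assumptions made purely for demonstration.

```python
# Minimal sketch of supervised fusion (illustrative; not from the review):
# concatenate features from two modalities and learn a mapping to labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects = 200
labels = rng.integers(0, 2, n_subjects)

# Two synthetic "modalities" whose features weakly encode the label.
mod_a = rng.standard_normal((n_subjects, 30)) + 0.4 * labels[:, None]
mod_b = rng.standard_normal((n_subjects, 40)) + 0.4 * labels[:, None]

# Joint representation by simple concatenation; the supervised model is
# then optimized by reducing the gap between predicted and true labels.
joint = np.hstack([mod_a, mod_b])
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, joint, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Real fusion models are usually more elaborate than feature concatenation, but the optimization target, agreement between predicted and true labels, is the same.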

 

Carrying out multimodal fusion analysis contributes to a cumulative understanding of complex brain networks on different temporal and spatial scales. First, a brain atlas is a prerequisite for studying brain networks and plays a central role in neuroscience and clinical practice. Though many extensively applied brain atlases segregate the brain into distinct areas based primarily on a single modality (cytoarchitecture, topography, function, or connectivity), a series of recent studies has shed light on more stable boundaries delineated by fusing various modalities. More importantly, constructing a reference brain atlas paves the way for fusing information across scales, from genes, proteins, synapses, and neurons to areas, pathways, and the whole brain, making it possible to comprehensively explore neuroscience questions in both healthy development and clinical pathology via data fusion technology. Furthermore, exploring the mysteries of cognition and development has always been a core topic in neuroscience. In recent studies, using multimodal data has achieved significantly higher accuracies than unimodal data when predicting individuals' behaviors and intelligence quotient scores. Finally, encouraging efforts have been devoted to the early diagnosis and prognosis of psychiatric disorders via multimodal fusion methods. During the period of growth, psychiatric symptoms frequently emerge for complex reasons. Psychiatric disorders typically develop over a long course, imposing a great socioeconomic burden. Consequently, increasing attention is focused on detecting early abnormalities, exploring potential subtypes, and revealing possible neuroimaging biomarkers for predicting treatment outcomes.

 

In this review, four interrelated topics are covered. The first topic concerns multimodal fusion methods. As brain imaging data are often three-dimensional (3D) or of higher dimension, it is difficult to determine cross-modal linkages by computing simple correlations. To fuse multimodal data effectively, various machine learning methodologies have been proposed. The common pipeline is to first transform the high-dimensional images into a 2D matrix. Supervised or unsupervised strategies are then adopted to reduce each modality's 2D matrix to a common latent space, in which the inner associations between modalities are explored. In Section 2, the authors review important advances in each category that have been successfully used for brain imaging data fusion, including correlation-based fusion, clustering-based fusion, data reconstruction, multi-task learning and its variants, and deep learning-based fusion.

 

The second topic introduces atlasing via multimodal brain imaging, reviewing brain parcellations at both the macro level and the micro level based on anatomical structure, functional activation, connectivity, or multiple modalities combined.

 

The third topic relates to multimodal fusion in studying cognition and development. This part includes representative applications showing how multimodal fusion methods help improve the prediction and understanding of behavioral phenotypes and brain aging.

 

The fourth topic concerns multimodal fusion in studying brain disorders, elaborating on important applications in which multimodal fusion helps accelerate the exploration of the underlying biological mechanisms of brain diseases. With the accumulation of clinical multimodal data, multimodal fusion technology has a wide range of applications in clinical scenarios. On the one hand, supervised fusion methods enable efficient computer-aided diagnosis and the identification of key biomarkers. On the other hand, unsupervised fusion methods are expected to help researchers explore, from multiple perspectives, potential disease-related factors of complex brain diseases whose pathogenesis is not yet fully understood, thus advancing clinical diagnosis and treatment.

 

Section 6 proposes challenges and future directions. With its ability to reveal cross-modal information that a single modality cannot, multimodal fusion of brain imaging has achieved promising performance in studying brain parcellation, cognition and development, and brain disorders. However, given the random variation inherent in some fusion strategies, small sample sizes may lead to false-positive linkages; the first common challenge is therefore producing big data. Second, advances in imaging technology are bringing brain imaging to finer scales, and bridging the gap of spatial alignment across scales is challenging. Third, most current multimodal fusion methods are applied to macro-scale problems. With the release of micro-scale and meso-scale data, how to adapt current fusion strategies to integrate multi-scale datasets is an important focus. Furthermore, the large cohorts, high-throughput and high-quality imaging, and new models discussed above all create substantial demands on resources for storing, analyzing, and visualizing the data, increasing the need for new platforms.

 

In this review, the authors conduct a comprehensive survey of the progress of multimodal brain imaging fusion studies in recent years, aiming to track advanced fusion strategies and remarkable applications. Compared to unimodal approaches, multimodal strategies can better exploit both the representation shared between modalities and the modality-specific complementary information, thereby improving applications in building brain atlases, predicting phenotypic outcomes, and classifying brain diseases. With the development of larger datasets, novel fusion models, and supercomputing facilities, multimodal fusion could help better unveil the underlying mechanisms of human brain cognition and clinical disorders in the future.

 


See the article:

Multimodal Fusion of Brain Imaging Data: Methods and Applications

http://doi.org/10.1007/s11633-023-1442-8


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.