Article Highlight | 7-Nov-2023

Predicting moral elevation conveyed in Danmaku comments using EEGs

Beijing Institute of Technology Press Co., Ltd

A research paper by scientists at Tsinghua University demonstrated the feasibility of decoding moral elevation conveyed in danmaku comments with electroencephalography (EEG) signals.

Published in the journal Cyborg and Bionic Systems on June 21, the paper demonstrated the feasibility of using EEG-based affective computing to predict moral elevation – the emotion that arises when individuals observe others' moral behaviors. The study showed the potential of neurally decoding the moral elevation experience conveyed in crowdsourced danmaku comments during video watching.

While recent research has demonstrated the potential to decode basic emotions from brain signals, there has been limited exploration of affective computing for emotions related to social cognition, such as moral elevation. Considering the differences in theoretical construction, neurocognitive mechanisms, and practical applications between basic emotions and emotions associated with social cognition, it is important to investigate the feasibility of affective computing for these social emotions.

“To address this gap, we predicted moral elevation conveyed in danmaku comments using EEGs,” explained study author Chenhao Bao from Tsinghua University. Leveraging the benefits of crowdsourced tagging, the study authors were able to create continuous labels of moral elevation from over 30,000 danmaku comments at a 1-second temporal resolution. Then, by extracting power spectra features from group-level EEGs and employing least absolute shrinkage and selection operator (LASSO) regularized regression analyses, they found promising prediction performance for moral elevation (prediction r = 0.44 ± 0.11). “Our findings indicate that it is possible to decode moral elevation using EEG signals,” said the study authors. “Future studies that integrate more advanced models are expected to further boost the decoding of moral elevation.”
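The release does not reproduce the full analysis pipeline, but the general approach can be sketched. In the Python snippet below, the data are synthetic, and the frequency bands, epoch counts, and hyperparameters are illustrative assumptions rather than the paper's settings; it simply shows what a band-power-plus-LASSO pipeline of this kind might look like:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Hypothetical setup: shapes, frequency bands, and hyperparameters here are
# illustrative assumptions, not the paper's actual configuration.
rng = np.random.default_rng(0)
fs = 250                          # sampling rate (Hz), assumed
n_epochs, n_channels = 600, 32    # 1-second epochs x EEG channels, assumed
eeg = rng.standard_normal((n_epochs, n_channels, fs))  # stand-in for group-level EEG

# Power spectra features: mean band power per channel via Welch's method.
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
freqs, psd = welch(eeg, fs=fs, nperseg=fs, axis=-1)  # psd: (epochs, channels, freqs)
features = np.concatenate(
    [psd[:, :, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands.values()],
    axis=1,
)
features = (features - features.mean(axis=0)) / features.std(axis=0)  # z-score

# Stand-in labels with a sparse linear dependence on the features so the
# sketch has signal to recover; in the study, labels came from danmaku comments.
w = np.zeros(features.shape[1])
w[rng.choice(features.shape[1], size=5, replace=False)] = 1.0
elevation = features @ w + 0.5 * rng.standard_normal(n_epochs)

# LASSO-regularized regression with a cross-validated penalty.
X_train, X_test, y_train, y_test = train_test_split(features, elevation, random_state=0)
model = LassoCV(cv=5).fit(X_train, y_train)

# Evaluate with the Pearson correlation between predicted and labeled elevation,
# the same style of metric as the reported prediction r.
pred_r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"prediction r = {pred_r:.2f}")
```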

Previous studies have often assumed that the affective state over a relatively long duration is stationary, for example by tagging a whole video with a single affective label in video-based paradigms. Because the affective state can change rapidly at a scale of seconds, this stationarity assumption may not always hold. It is therefore preferable to use continuous, dynamic affective labels that enable decoding at a higher temporal resolution, rather than assigning one label to an entire video.

“Internet-based crowdsourcing methods could offer a viable alternative for the continuous tagging of moral elevation. One such method is danmaku comments, a popular type of commentary among Internet video audiences in East Asia,” said Chenhao Bao. “While each viewer may only post danmaku comments at a few discrete time points, continuous emotion tagging for the whole video can be achieved by accumulating comments across a large audience (for example, when a video has more than 10,000 views).”
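As a simplified illustration of this pooling idea, discrete comment timestamps can be turned into a continuous per-second label series. The per-second averaging and smoothing rule below is an assumption made for the sketch, not the paper's exact aggregation procedure:

```python
import numpy as np

def continuous_labels(timestamps, scores, video_len_s, smooth_s=5):
    """Pool discrete danmaku comments into a per-second label series.

    timestamps: comment times in seconds; scores: per-comment elevation
    ratings (hypothetical; the paper's tagging scheme may differ).
    """
    sums = np.zeros(video_len_s)
    counts = np.zeros(video_len_s)
    for t, s in zip(timestamps, scores):
        sec = min(int(t), video_len_s - 1)
        sums[sec] += s
        counts[sec] += 1
    # Average the scores within each 1-second bin; empty bins stay at zero.
    labels = np.divide(sums, counts, out=np.zeros(video_len_s), where=counts > 0)
    # Moving-average smoothing fills sparse seconds from their neighbors.
    kernel = np.ones(smooth_s) / smooth_s
    return np.convolve(labels, kernel, mode="same")

# With tens of thousands of comments, most seconds receive multiple tags.
rng = np.random.default_rng(1)
ts = rng.uniform(0, 600, size=30_000)  # 30,000 comments over a 10-minute video
sc = rng.random(30_000)                # stand-in elevation scores
labels = continuous_labels(ts, sc, video_len_s=600)
print(labels.shape)                    # (600,) -> one label per second
```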

The study authors suggested that the continuous moral elevation experience conveyed in danmaku comments, accumulated from a large internet audience, can be decoded from small-sample EEG signals.

 

Authors of the paper include Chenhao Bao, Xin Hu, Dan Zhang, Zhao Lv, and Jingjing Chen.

This work was supported by the Undergraduate Innovation Foundation of Beijing (202010003058), the Student Research Training Program of Tsinghua University (2011T0450), the Open Project of the Anhui Provincial Key Laboratory of Multimodal Cognitive Computation at Anhui University (MMC202001), and the Open Project of the Key Laboratory of Intelligent Computing and Signal Processing, Ministry of Education (2020A005).

The paper, “Predicting Moral Elevation Conveyed in Danmaku Comments Using EEGs,” was published in the journal Cyborg and Bionic Systems on June 21, 2023 (DOI: 10.34133/cbsystems.0028).
