Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition
arXiv - CS - Human-Computer Interaction. Pub Date: 2024-04-15, DOI: arxiv-2404.09559
Qile Liu, Zhihao Zhou, Jiyuan Wang, Zhen Liang

The integration of human emotions into multimedia applications shows great potential for enriching user experiences and enhancing engagement across various digital platforms. Unlike traditional methods such as questionnaires, facial expressions, and voice analysis, brain signals offer a more direct and objective understanding of emotional states. However, in the field of electroencephalography (EEG)-based emotion recognition, previous studies have primarily concentrated on training and testing EEG models within a single dataset, overlooking the variability across different datasets. This oversight leads to significant performance degradation when applying EEG models to cross-corpus scenarios. In this study, we propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition. The JCFA model operates in two main stages. In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals, without the use of labeled data. It extracts robust time-based and frequency-based embeddings for each EEG sample, and then aligns them within a shared latent time-frequency space. In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered. This further enhances the model's capability for emotion detection and interpretation. Extensive experimental results on two well-recognized emotional datasets show that the proposed JCFA model achieves state-of-the-art (SOTA) performance, outperforming the second-best method by an average accuracy increase of 4.09% in cross-corpus EEG-based emotion recognition tasks.
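As a concrete illustration of the joint time-frequency contrastive idea described in the abstract, the Python sketch below aligns time-based and frequency-based embeddings of the same EEG samples with a symmetric InfoNCE objective. The toy encoders, the loss form, and all hyperparameters (batch size, temperature, embedding dimension, channel count) are assumptions made for illustration; they are not the exact JCFA implementation.

# Minimal sketch of cross-domain contrastive alignment between a time-domain
# view and a frequency-domain view of the same EEG segments. All architectural
# details are illustrative assumptions, not taken from the JCFA paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def cross_domain_contrastive_loss(z_time, z_freq, temperature=0.1):
    """Symmetric InfoNCE loss between time-based and frequency-based embeddings.

    z_time, z_freq: (batch, dim) embeddings of the same EEG samples.
    Embeddings of the same sample form positive pairs; every other
    cross-domain pair in the batch serves as a negative.
    """
    z_time = F.normalize(z_time, dim=1)
    z_freq = F.normalize(z_freq, dim=1)

    logits = z_time @ z_freq.t() / temperature                 # (batch, batch) similarity matrix
    targets = torch.arange(z_time.size(0), device=z_time.device)

    # Average the time->frequency and frequency->time directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    batch, channels, timesteps, dim = 8, 62, 200, 128          # hypothetical sizes
    freq_bins = timesteps // 2 + 1                              # rfft output length

    # Placeholder encoders standing in for the real time/frequency EEG encoders.
    time_encoder = nn.Sequential(nn.Flatten(), nn.Linear(channels * timesteps, dim))
    freq_encoder = nn.Sequential(nn.Flatten(), nn.Linear(channels * freq_bins, dim))

    eeg = torch.randn(batch, channels, timesteps)               # raw EEG segments
    eeg_spec = torch.fft.rfft(eeg, dim=-1).abs()                # crude frequency-domain view

    loss = cross_domain_contrastive_loss(time_encoder(eeg), freq_encoder(eeg_spec))
    loss.backward()

Treating the two views of one sample as positives and all other cross-domain pairs in the batch as negatives is what pulls the two encoders toward a shared latent time-frequency space during label-free pre-training.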

Updated: 2024-04-16