Music emotion recognition based on temporal convolutional attention network using EEG
Frontiers in Human Neuroscience ( IF 2.9 ) Pub Date : 2024-03-28 , DOI: 10.3389/fnhum.2024.1324897
Yinghao Qiao , Jiajia Mu , Jialan Xie , Binghui Hu , Guangyuan Liu

Music is one of the primary means of evoking human emotions. However, the experience of music is subjective, making it difficult to determine which emotions a piece of music triggers in a given individual. To correctly identify the emotions elicited by different types of music, we first created an electroencephalogram (EEG) dataset stimulated by four types of music (fear, happiness, calm, and sadness). We then extracted differential entropy features from the EEG signals and built the emotion recognition model CNN-SA-BiLSTM to capture the temporal features of the EEG, improving its recognition performance through the global perception ability of the self-attention mechanism. The effectiveness of the model was further verified by an ablation experiment. The classification accuracies of this method in the valence and arousal dimensions are 93.45% and 96.36%, respectively. By applying our method to the publicly available EEG dataset DEAP, we evaluated its generalization and reliability. In addition, we further investigated the effects of individual EEG bands and multi-band combinations on music emotion recognition, and the results are consistent with relevant neuroscience studies. Compared with other representative music emotion recognition work, this method achieves better classification performance and provides a promising framework for future research on emotion recognition systems based on brain-computer interfaces.
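The abstract outlines a two-stage pipeline: band-wise differential entropy (DE) features extracted from the EEG, followed by a CNN with self-attention and a BiLSTM (CNN-SA-BiLSTM) for valence/arousal classification. The sketch below is not the authors' released code; the band limits, segment length, layer sizes, and attention configuration are illustrative assumptions used only to show how such a pipeline fits together.

```python
# Minimal sketch of a DE-feature + CNN -> self-attention -> BiLSTM pipeline.
# All hyperparameters are assumptions, not values reported in the paper.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}           # common EEG bands (assumed)

def differential_entropy(segment: np.ndarray, fs: int) -> np.ndarray:
    """DE of each band for one EEG segment of shape (channels, samples).
    For an approximately Gaussian signal, DE = 0.5 * ln(2*pi*e*variance)."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, segment, axis=-1)
        var = filtered.var(axis=-1) + 1e-8
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.stack(feats, axis=0)                        # (bands, channels)

class CNNSABiLSTM(nn.Module):
    """CNN front end per segment, self-attention and BiLSTM across segments."""
    def __init__(self, n_bands=5, n_channels=32, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))                       # (B*T, 64, 1)
        self.attn = nn.MultiheadAttention(embed_dim=64, num_heads=4,
                                          batch_first=True)
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                  # x: (B, T, bands, channels)
        b, t, n_bands, n_ch = x.shape
        z = self.cnn(x.view(b * t, n_bands, n_ch)).view(b, t, 64)
        z, _ = self.attn(z, z, z)                          # global context over segments
        z, _ = self.bilstm(z)                              # temporal modeling
        return self.fc(z[:, -1])                           # logits for valence or arousal

# Toy forward pass: 8 trials, 10 one-second segments, 32 channels at 128 Hz.
fs, segs = 128, 10
eeg = np.random.randn(8, segs, 32, fs)
de = np.stack([[differential_entropy(s, fs) for s in trial] for trial in eeg])
logits = CNNSABiLSTM()(torch.tensor(de, dtype=torch.float32))
print(logits.shape)                                        # torch.Size([8, 2])
```

A separate binary classifier of this form would typically be trained for each of the valence and arousal dimensions; the ablation and band-combination analyses mentioned in the abstract would then compare such models with individual components or frequency bands removed.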
