Enhancing Emotion Detection with Non-invasive Multi-Channel EEG and Hybrid Deep Learning Architecture
Iranian Journal of Science and Technology, Transactions of Electrical Engineering (IF 2.4), Pub Date: 2024-03-18, DOI: 10.1007/s40998-024-00710-4
Durgesh Nandini, Jyoti Yadav, Asha Rani, Vijander Singh

Emotion recognition is vital for augmenting human–computer interaction, as integrating emotional contextual information enhances communication. The study therefore presents an intelligent emotion detection system built on a hybrid stacked gated recurrent unit (GRU)-recurrent neural network (RNN) deep learning architecture. Integrating GRU with RNN allows the system to exploit the capabilities of both models, improving its ability to capture complex emotional patterns and temporal correlations. The EEG signals are investigated in the time, frequency, and time–frequency domains, with features meticulously curated to capture intricate multi-domain patterns. The SMOTE-Tomek method then ensures a uniform class distribution, while the PCA technique optimizes the features by minimizing data redundancy. Comprehensive experiments on the well-established DEAP and AMIGOS emotion datasets assess the efficacy of the hybrid stacked GRU-RNN architecture against 1D convolutional neural network, RNN, and GRU models. Moreover, the "Hyperopt" technique fine-tunes the model's hyperparameters, improving the average accuracy by about 3.73%. Results show that the hybrid GRU-RNN model delivers the best performance, with the highest classification accuracies of 99.77% ± 0.13, 99.54% ± 0.16, 99.82% ± 0.14, and 99.68% ± 0.13 for the valence, arousal, dominance (3D VAD), and liking dimensions, respectively. Furthermore, the model's generalizability is examined through cross-subject and cross-database analyses on the DEAP and AMIGOS datasets, yielding average classification accuracies of about 99.75% ± 0.10 and 99.97% ± 0.03. Compared with existing methods in the literature, the obtained results demonstrate superior performance, highlighting the model's potential for emotion recognition.
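The class-balancing and feature-reduction steps map onto standard library calls. The sketch below is illustrative rather than the authors' code: it applies imblearn's SMOTETomek resampler followed by scikit-learn's PCA, and the feature matrix, label vector, and 95% variance threshold are assumptions.

```python
# Illustrative sketch (not the authors' code): SMOTE-Tomek class balancing
# followed by PCA-based feature reduction, as described in the abstract.
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def balance_and_reduce(X, y, variance_to_keep=0.95, seed=42):
    """Resample to a uniform class distribution, then project onto principal components."""
    X_res, y_res = SMOTETomek(random_state=seed).fit_resample(X, y)

    # Standardize before PCA so high-variance features do not dominate.
    X_std = StandardScaler().fit_transform(X_res)

    # Keep enough components to explain 95% of the variance (assumed threshold).
    pca = PCA(n_components=variance_to_keep)
    X_pca = pca.fit_transform(X_std)
    return X_pca, y_res

# Example with random placeholder features (rows = EEG trials, columns = multi-domain features)
X = np.random.randn(200, 128)
y = np.random.randint(0, 2, size=200)   # e.g. high vs. low valence
X_bal, y_bal = balance_and_reduce(X, y)
```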
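For the core architecture, a minimal Keras sketch of stacked GRU layers feeding a simple recurrent layer is shown below; the layer widths, dropout rate, and two-class output are assumptions for illustration, not the configuration reported in the paper.

```python
# Minimal sketch of a stacked GRU -> simple-RNN classifier, assuming inputs
# shaped (time_steps, n_features); layer sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gru_rnn(time_steps, n_features, n_classes=2):
    model = models.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        # Stacked GRU layers capture longer-range temporal dependencies.
        layers.GRU(128, return_sequences=True),
        layers.GRU(64, return_sequences=True),
        # A vanilla recurrent layer summarizes the GRU feature sequence.
        layers.SimpleRNN(32),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gru_rnn(time_steps=128, n_features=40)
model.summary()
```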
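Likewise, hyperparameter tuning with Hyperopt typically wraps model training in an objective minimized by the TPE algorithm; the search space and placeholder objective below are assumed for illustration and do not reflect the paper's actual search.

```python
# Sketch of hyperparameter tuning with Hyperopt's TPE search; the objective is a
# placeholder so the example stays self-contained.
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

space = {
    "units_gru": hp.choice("units_gru", [64, 128, 256]),
    "dropout": hp.uniform("dropout", 0.1, 0.5),
    "lr": hp.loguniform("lr", -9, -4),   # roughly 1e-4 .. 2e-2
}

def objective(params):
    # In a real run, train the model with these params and return validation error.
    val_accuracy = 0.9  # stand-in for a training/validation run
    return {"loss": 1.0 - val_accuracy, "status": STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print(best)
```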




Updated: 2024-03-19