Learning interactions across sentiment and emotion with graph attention network and position encodings
Pattern Recognition Letters ( IF 5.1 ) Pub Date : 2024-02-16 , DOI: 10.1016/j.patrec.2024.02.013
Ao Jia , Yazhou Zhang , Sagar Uprety , Dawei Song

Sentiment classification and emotion recognition are two closely related tasks in NLP. However, most recent studies have treated them as two separate tasks, neglecting their shared knowledge. In this paper, we propose a multi-task interactive graph attention network with position encodings, termed MIP-GAT, to improve the performance of each task by simultaneously leveraging their similarities and differences. The main proposal is a multi-interactive graph interaction layer in which three interaction structures are constructed and incorporated into a unified graphical structure. Empirical evaluation on two benchmark datasets, CMU-MOSEI and GoEmotions, shows the effectiveness of the proposed model over state-of-the-art baselines, with margins of 0.18% and 0.67% for sentiment analysis, and 1.77% and 0.89% for emotion recognition. In addition, we explore the strengths and limitations of the proposed model.
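To make the core building block concrete, below is a minimal, illustrative sketch of a single-head graph attention layer augmented with additive position encodings, in the general style of Veličković et al.'s GAT. This is not the authors' MIP-GAT implementation; the function names, the use of sinusoidal encodings, and the single-head design are assumptions made for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(H, A, W, a, pos_enc=None):
    """Illustrative single-head graph attention layer.

    H: (N, F) node features; A: (N, N) adjacency (1 = edge, incl. self-loops);
    W: (F, Fp) projection matrix; a: (2*Fp,) attention vector.
    pos_enc: optional (N, F) position encodings added to H before projection,
    a simple stand-in for the position-aware mechanism the abstract describes.
    """
    if pos_enc is not None:
        H = H + pos_enc
    Z = H @ W                              # (N, Fp) projected features
    N = Z.shape[0]
    out = np.zeros_like(Z)
    for i in range(N):
        nbrs = np.nonzero(A[i])[0]
        # LeakyReLU (slope 0.2) attention logits over i's neighbourhood
        e = np.array([max(0.2 * s, s)
                      for s in (np.concatenate([Z[i], Z[j]]) @ a for j in nbrs)])
        alpha = softmax(e)                 # normalised attention coefficients
        out[i] = alpha @ Z[nbrs]           # attention-weighted aggregation
    return out

# Usage on a hypothetical 4-node chain graph with sinusoidal position encodings
N, F, Fp = 4, 8, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(N, F))
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)   # chain edges + self-loops
pos = np.array([[np.sin(p / 10000 ** (i / F)) if i % 2 == 0
                 else np.cos(p / 10000 ** ((i - 1) / F))
                 for i in range(F)] for p in range(N)])
W = rng.normal(size=(F, Fp))
a = rng.normal(size=(2 * Fp,))
out = gat_layer(H, A, W, a, pos_enc=pos)   # (4, 8) updated node features
```

In a multi-task setting like the one the abstract describes, layers of this kind would typically operate over a shared graph so that sentiment and emotion nodes can attend to one another; the details of that interaction layer are specific to the paper.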

Updated: 2024-02-16