Enhancing visual question answering with a two-way co-attention mechanism and integrated multimodal features
Computational Intelligence (IF 2.8), Pub Date: 2023-12-21, DOI: 10.1111/coin.12624
Mayank Agrawal, Anand Singh Jalal, Himanshu Sharma

In Visual Question Answering (VQA), a natural language answer is generated for a given image and a question related to that image. VQA performance has grown significantly through the application of efficient attention mechanisms. However, current VQA models rely on region features or object features alone, which are not sufficient to improve the accuracy of the generated answers. To address this issue, we use a Two-way Co-Attention Mechanism (TCAM) that fuses different visual features (region, object, and concept) from diverse perspectives. These diverse features lead to different sets of answers, and there are inherent relationships among them. Our attention mechanism exploits these two critical aspects, using both bottom-up and top-down TCAM to extract discriminative feature information. We further propose a Collective Feature Integration Module (CFIM) that combines the multimodal attention features produced by the TCAM and thus captures the valuable information in these visual features. Specifically, we formulate a Vertical CFIM for fusing features belonging to the same class and a Horizontal CFIM for combining features belonging to different types, thereby balancing the influence of top-down and bottom-up co-attention. Experiments are conducted on two significant datasets, VQA 1.0 and VQA 2.0. On VQA 1.0, the overall accuracy of the proposed method is 71.23 on the test-dev set and 71.94 on the test-std set. On VQA 2.0, the overall accuracy is 75.89 on the test-dev set and 76.32 on the test-std set. These results clearly reflect the superiority of the proposed TCAM-based approach over existing methods.
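To make the described architecture concrete, the following is a minimal, illustrative sketch of two-way co-attention between question features and multiple visual streams (region, object, concept), followed by a simple concatenate-and-project fusion as a stand-in for the CFIM. All module names, dimensions, and the use of `nn.MultiheadAttention` are assumptions for illustration only; the abstract does not specify the paper's actual TCAM/CFIM layers.

```python
# Hypothetical sketch of two-way co-attention plus a simple fusion step.
# Not the authors' implementation; dimensions and layer choices are assumed.
import torch
import torch.nn as nn


class TwoWayCoAttention(nn.Module):
    """Question attends to visual features and visual features attend to the question."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.q_to_v = nn.MultiheadAttention(dim, heads, batch_first=True)  # top-down direction
        self.v_to_q = nn.MultiheadAttention(dim, heads, batch_first=True)  # bottom-up direction

    def forward(self, q_feats: torch.Tensor, v_feats: torch.Tensor):
        # q_feats: (B, Lq, dim) question token features
        # v_feats: (B, Lv, dim) one visual stream (region, object, or concept)
        q_att, _ = self.q_to_v(q_feats, v_feats, v_feats)  # question guided by vision
        v_att, _ = self.v_to_q(v_feats, q_feats, q_feats)  # vision guided by question
        return q_att, v_att


class SimpleFusion(nn.Module):
    """Illustrative stand-in for CFIM: pool each attended stream and fuse by concatenation."""

    def __init__(self, dim: int = 512, num_streams: int = 3, num_answers: int = 3000):
        super().__init__()
        self.proj = nn.Linear(dim * (num_streams + 1), dim)
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, q_att: torch.Tensor, v_atts: list):
        # Mean-pool the question stream and each attended visual stream, then concatenate.
        pooled = [q_att.mean(dim=1)] + [v.mean(dim=1) for v in v_atts]
        fused = torch.relu(self.proj(torch.cat(pooled, dim=-1)))
        return self.classifier(fused)  # answer logits


if __name__ == "__main__":
    B, dim = 2, 512
    q = torch.randn(B, 14, dim)                             # question tokens
    streams = [torch.randn(B, 36, dim) for _ in range(3)]   # region / object / concept features
    coatt = TwoWayCoAttention(dim)
    fusion = SimpleFusion(dim)
    q_att, v_atts = q, []
    for v in streams:
        q_att, v_att = coatt(q_att, v)  # refine the question representation per stream
        v_atts.append(v_att)
    logits = fusion(q_att, v_atts)
    print(logits.shape)  # torch.Size([2, 3000])
```

In this sketch a single co-attention block is reused across all three visual streams; the paper's vertical/horizontal CFIM would instead fuse same-class and different-type features through dedicated integration paths.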

Updated: 2023-12-24