Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
ACM Transactions on Intelligent Systems and Technology. Pub Date: 2024-04-15. DOI: 10.1145/3649458
Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač

Recent advancements in deep neural network performance have led to new state-of-the-art approaches in numerous areas. However, the black-box nature of neural networks often prohibits their use in areas where model explainability and transparency are crucial. Over the years, researchers have proposed many algorithms to aid neural network understanding and provide additional information to the human expert. One of the most popular methods is Layer-Wise Relevance Propagation (LRP), which assigns local relevance based on a pixel-wise decomposition of nonlinear classifiers. With the rise of attribution-method research, a pressing need has emerged to assess and evaluate their performance. Numerous metrics have been proposed, each assessing an individual property of attribution methods, such as faithfulness, robustness, or localization. Unfortunately, no single metric is deemed optimal for every case, and researchers often use several metrics to test the quality of attribution maps. In this work, we address the shortcomings of current LRP formulations and introduce a novel method for determining the relevance of input neurons through layer-wise relevance propagation. Furthermore, we apply this approach to the recently developed Vision Transformer architecture and evaluate its performance against existing methods on two image classification datasets, namely ImageNet and PascalVOC. Our results clearly demonstrate the advantage of our proposed method. We also discuss the insufficiencies of current evaluation metrics for attribution-based explainability and propose a new evaluation metric that combines the notions of faithfulness, robustness, and contrastiveness. We use this new metric to evaluate the performance of various attribution-based methods. Our code is available at: https://github.com/davor10105/relative-absolute-magnitude-propagation
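To make the LRP idea concrete, the sketch below implements the generic epsilon-rule for a toy fully-connected network. This illustrates standard LRP only, not the paper's Relative Absolute Magnitude variant; the network, weights, and the lrp_epsilon_linear helper are hypothetical.

    import numpy as np

    def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
        # Generic LRP epsilon-rule for one linear layer:
        # R_i = a_i * sum_j (w_ij / (z_j + eps * sign(z_j))) * R_j
        z = a @ W + b                                        # pre-activations, shape (out,)
        s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratios
        return a * (W @ s)                                   # relevance per input neuron

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                         # toy input "pixels"
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
    W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

    h = np.maximum(x @ W1 + b1, 0.0)               # hidden ReLU activations
    y = h @ W2 + b2                                # output logits

    R_y = np.where(y == y.max(), y, 0.0)           # relevance starts at predicted class
    R_h = lrp_epsilon_linear(h, W2, b2, R_y)       # ReLU acts as a pass-through in LRP
    R_x = lrp_epsilon_linear(x, W1, b1, R_h)
    print(R_x, R_x.sum())                          # input relevances; sum ~ explained logit

With zero biases, the total relevance is conserved across layers up to the epsilon stabilizer, which is the decomposition property the abstract refers to. The abstract also mentions faithfulness as one evaluated property; a minimal, generic deletion-style test (an illustrative helper, not the paper's multi-component metric) removes features in descending relevance order and records how quickly the model's score collapses:

    def deletion_curve(score_fn, x, relevance, steps=4, baseline=0.0):
        # Zero out features from most to least relevant; a faithful attribution
        # map makes the score drop quickly, i.e. the curve's area is small.
        order = np.argsort(relevance)[::-1]
        x_pert = x.copy()
        curve = [score_fn(x_pert)]
        chunk = max(1, len(order) // steps)
        for i in range(0, len(order), chunk):
            x_pert[order[i:i + chunk]] = baseline
            curve.append(score_fn(x_pert))
        return np.array(curve)

    k = y.argmax()  # score the originally predicted class throughout
    score_fn = lambda v: (np.maximum(v @ W1 + b1, 0.0) @ W2 + b2)[k]
    print(deletion_curve(score_fn, x, R_x))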


