Uncertainty Quantification for Gradient-based Explanations in Neural Networks
arXiv - CS - Artificial Intelligence. Pub Date: 2024-03-25, arXiv:2403.17224
Mihir Mulye, Matias Valdenegro-Toro

Explanation methods help users understand the reasons behind a model's predictions. They are increasingly used for model debugging, performance optimization, and gaining insight into how a model works. Given these critical applications, it is imperative to measure the uncertainty associated with the explanations such methods generate. In this paper, we propose a pipeline for estimating the explanation uncertainty of neural networks by combining uncertainty estimation methods with explanation methods. We use this pipeline to produce explanation distributions for models trained on the CIFAR-10, FER+, and California Housing datasets. By computing the coefficient of variation of these distributions, we evaluate the confidence in each explanation and find that explanations generated with Guided Backpropagation have low associated uncertainty. Additionally, we compute modified pixel insertion/deletion metrics to evaluate the quality of the generated explanations.
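The abstract leaves the pipeline's internals unspecified, so the following is a minimal sketch, assuming MC Dropout as the uncertainty estimator and the plain input gradient as the explanation method (the paper also considers Guided Backpropagation); the function names and parameters below are illustrative, not the authors' API.

    import torch
    import torch.nn as nn

    def mc_gradient_explanations(model: nn.Module, x: torch.Tensor,
                                 target: int, n_samples: int = 30) -> torch.Tensor:
        # Keep dropout layers stochastic at inference time (MC Dropout).
        # Note: train() also affects batch norm; a fuller implementation
        # would enable only the dropout modules.
        model.train()
        maps = []
        for _ in range(n_samples):
            x_in = x.detach().clone().requires_grad_(True)
            score = model(x_in)[0, target]  # logit of the target class
            score.backward()
            maps.append(x_in.grad.detach().abs())  # one saliency map per pass
        return torch.stack(maps)  # (n_samples, 1, C, H, W) for a single image

    def coefficient_of_variation(maps: torch.Tensor,
                                 eps: float = 1e-8) -> torch.Tensor:
        # Per-pixel CoV = std / |mean|; lower values indicate an explanation
        # that is stable across the sampled stochastic forward passes.
        mean = maps.mean(dim=0)
        return maps.std(dim=0) / (mean.abs() + eps)

For a CIFAR-10 classifier, mc_gradient_explanations returns a stack of saliency maps whose per-pixel coefficient of variation can then be compared across explanation methods, mirroring the evaluation described above.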

Updated: 2024-03-28