Searching for explanations of black-box classifiers in the space of semantic queries
Semantic Web ( IF 3 ) Pub Date : 2023-08-02 , DOI: 10.3233/sw-233469
Jason Liartis 1 , Edmund Dervakos 1 , Orfeas Menis-Mastromichalakis 1 , Alexandros Chortaras 1 , Giorgos Stamou 1
Abstract

Deep learning models have achieved impressive performance in various tasks, but they are usually opaque with regard to their inner workings, obscuring the reasons behind their decisions. This opacity raises ethical and legal concerns about the real-life use of such models, especially in critical domains such as medicine, and has led to the emergence of the field of eXplainable Artificial Intelligence (XAI), which aims to make the operation of opaque AI systems more comprehensible to humans. The problem of explaining a black-box classifier is often approached by feeding it data and observing its behaviour. In this work, we feed the classifier data that are part of a knowledge graph, and describe its behaviour with rules expressed in the terminology of the knowledge graph, which is understandable by humans. We first investigate the problem theoretically, to provide guarantees for the extracted rules, and then study the relation between "explanation rules for a specific class" and "semantic queries that collect from the knowledge graph the instances the black-box classifier assigns to that class". We thus approach the extraction of explanation rules as a semantic query reverse engineering problem. We develop algorithms that solve this inverse problem as a heuristic search in the space of semantic queries, evaluate them on four simulated use-cases, and discuss the results.
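The core idea of treating rule extraction as query reverse engineering can be illustrated with a toy sketch. The knowledge graph, class labels, and the simple intersection-based generalisation below are illustrative assumptions for this sketch, not the authors' actual data or algorithm (which performs a heuristic search over richer semantic queries):

```python
# Hypothetical sketch: reverse-engineer a conjunctive "explanation query"
# whose answers over a toy knowledge graph coincide with the instances the
# black-box classifier assigned to the target class. All names here are
# made up for illustration.

# Toy knowledge graph: each instance is described by a set of concepts.
kg = {
    "img1": {"Animal", "Striped", "Quadruped"},
    "img2": {"Animal", "Striped", "Biped"},
    "img3": {"Animal", "Spotted", "Quadruped"},
    "img4": {"Vehicle", "Striped"},
}

# Instances the black-box classifier assigned to the target class.
positives = {"img1", "img2"}

def reverse_engineer_query(kg, positives):
    """Most specific conjunction of concepts shared by all positives."""
    return set.intersection(*(kg[i] for i in positives))

def answers(kg, query):
    """Instances whose description satisfies every conjunct of the query."""
    return {i for i, concepts in kg.items() if query <= concepts}

rule = reverse_engineer_query(kg, positives)
print(sorted(rule))                    # → ['Animal', 'Striped']
print(answers(kg, rule) == positives)  # → True: the query recovers exactly the positives
```

When the most specific shared conjunction also excludes all negatives, as here, it is already a faithful explanation rule ("instances that are Animal and Striped are classified to this class"); in general the search must trade off such coverage guarantees against rule specificity.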




Updated: 2023-08-02