Building the hierarchical Choquet integral as an explainable AI classifier via neuroevolution and pruning
Fuzzy Optimization and Decision Making (IF 4.7) Pub Date: 2022-03-02, DOI: 10.1007/s10700-022-09384-1
Jih-Jeng Huang

Explainability is considered essential for enabling artificial intelligence (AI) in several critical industries, e.g., healthcare and banking. However, conventional algorithms suffer from a trade-off between readability and performance, which has encouraged the emergence of explainable AI. In this paper, we propose a novel method that forms the hierarchical Choquet integral (HCI) as an explainable AI classifier, retaining both the model's accuracy and its explainability. To this end, we first adopt neuroevolution, which combines a genetic algorithm (GA) with a neural network (NN), and prune the weights to obtain information about the hierarchical decomposition of the Choquet integral. We then fine-tune the weights of the HCI model for the classification problem. In addition, we use four datasets to illustrate the proposed algorithm and compare the results with those of conventional classifiers: decision tree, deep learning, and support vector machine (SVM). The empirical results indicate that the proposed algorithm outperforms the others in terms of accuracy while keeping the Choquet integral's explainable property, justifying this paper's contribution.
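For context, the discrete Choquet integral aggregates feature values with respect to a fuzzy measure defined on subsets of criteria, which is what makes feature interactions readable in such a classifier. The snippet below is a minimal sketch of that standard definition only; the function name, the toy fuzzy measure, and the example values are illustrative assumptions, not taken from the paper, and it does not implement the authors' neuroevolution, pruning, or hierarchical decomposition.

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of a feature vector x w.r.t. a fuzzy measure mu.

    x  : list of non-negative feature values x_1, ..., x_n
    mu : dict mapping frozensets of criterion indices to measure values,
         with mu[frozenset()] == 0, mu of the full index set == 1, and
         monotone non-decreasing with respect to set inclusion.
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # indices of x, sorted ascending
    total, prev = 0.0, 0.0
    for k, idx in enumerate(order):
        a_k = frozenset(order[k:])                 # criteria whose value is >= x[idx]
        total += (x[idx] - prev) * mu[a_k]
        prev = x[idx]
    return total

# Toy example with two criteria; the measure values are made up for illustration.
mu = {
    frozenset():       0.0,
    frozenset({0}):    0.3,
    frozenset({1}):    0.5,
    frozenset({0, 1}): 1.0,
}
print(choquet_integral([0.4, 0.7], mu))  # 0.4 * 1.0 + (0.7 - 0.4) * 0.5 = 0.55
```

In a hierarchical variant, such integrals would be composed in layers rather than applied once, but that structure is what the paper's neuroevolution-and-pruning procedure is designed to discover, so it is not sketched here.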


