FDKT: Towards an interpretable deep knowledge tracing via fuzzy reasoning
ACM Transactions on Information Systems (IF 5.6), Pub Date: 2024-04-05, DOI: 10.1145/3656167
Fei Liu, Chenyang Bu, Haotian Zhang, Le Wu, Kui Yu, Xuegang Hu

In educational data mining, knowledge tracing (KT) aims to model learning performance based on student knowledge mastery. Deep-learning-based KT models perform remarkably better than traditional KT models and have attracted considerable attention. However, most of them lack interpretability, making it challenging to explain why a model performs well in prediction. In this paper, we propose an interpretable deep KT model, referred to as fuzzy deep knowledge tracing (FDKT), built on fuzzy reasoning. Specifically, we formalize continuous scores into several fuzzy scores using a fuzzification module. Then, we feed the fuzzy scores into a fuzzy reasoning module (FRM). The FRM is designed to deduce a student's current cognitive ability, based on which future performance is predicted. FDKT greatly enhances the intrinsic interpretability of deep-learning-based KT by making the deduction of student cognition explicit. Furthermore, it broadens the application of KT to continuous scores. Improved performance with respect to both of these advantages is demonstrated through comparisons with state-of-the-art models.
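To make the fuzzification step more concrete, below is a minimal sketch of how a continuous score might be mapped to membership degrees over a few fuzzy score levels. The triangular membership functions, the level names (low/medium/high), and the helper names `triangular` and `fuzzify_score` are illustrative assumptions for exposition only, not the paper's actual fuzzification module.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_score(score):
    """Map a continuous score in [0, 1] to membership degrees over
    three hypothetical fuzzy score levels: low, medium, high."""
    levels = {
        "low":    (-0.5, 0.0, 0.5),   # peaks at a score of 0.0
        "medium": ( 0.0, 0.5, 1.0),   # peaks at a score of 0.5
        "high":   ( 0.5, 1.0, 1.5),   # peaks at a score of 1.0
    }
    return {name: triangular(score, *params) for name, params in levels.items()}

if __name__ == "__main__":
    # A normalized score of 0.7 partially belongs to "medium" and "high",
    # rather than being forced into a single discrete correct/incorrect bin.
    print(fuzzify_score(0.7))  # {'low': 0.0, 'medium': 0.6, 'high': 0.4}
```

In such a scheme, the resulting membership vector (rather than a raw score) would be what a fuzzy reasoning module consumes to infer cognitive ability; the actual rule base and inference procedure used by FDKT are defined in the paper.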




Updated: 2024-04-05