An Interpretable Explanation Approach for Signal Modulation Classification
IEEE Transactions on Instrumentation and Measurement ( IF 5.6 ) Pub Date : 2024-03-26 , DOI: 10.1109/tim.2024.3381706
Jing Bai, Yuheng Lian, Yiran Wang, Junjie Ren, Zhu Xiao, Huaji Zhou, Licheng Jiao

Signal modulation classification (SMC) has attracted extensive attention owing to its wide range of military and civilian applications. Combining deep-learning (DL) techniques with wireless communication is a rapidly growing research direction, and DL models dominate the SMC field thanks to their highly abstract feature-extraction capability. However, most DL models offer no insight into their decisions, which limits their use in critical applications. This article proposes combining traditional feature-based (FB) methods to define appropriate handcrafted features as interpretable representations for different modulation classification tasks. A fitted decision tree serves as the basis for explaining the original model's decision on the instance to be interpreted, and the trustworthiness of the original DL model is assessed by comparing the decision tree against prior knowledge from FB modulation classification algorithms. We apply this interpretable explanation method to the current leading DL models in the modulation classification field. The interpretation results show that, at high signal-to-noise ratio (SNR), the model's decision basis is consistent with expert knowledge from traditional SMC methods. Experiments show that our method is stable and guarantees local fidelity, and the decision tree, as an interpretation model, is intuitive and consistent with human reasoning.
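The surrogate idea described above can be illustrated with a minimal, self-contained sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `black_box` imitates an opaque DL classifier by thresholding one statistic, the fourth-order cumulant magnitude |C40| is one classic FB feature (its theoretical unit-power values are 2 for BPSK and 1 for QPSK), and the "decision tree" is a hand-fitted depth-1 stump whose agreement with the black box gives the local fidelity the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(mod, n=256, snr_db=20):
    """Synthesize a noisy baseband symbol stream: BPSK (2 phases) or QPSK (4)."""
    k = 2 if mod == "bpsk" else 4
    sym = rng.integers(0, k, n)
    s = np.exp(1j * (2 * np.pi * sym / k + (np.pi / 4 if k == 4 else 0.0)))
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return s + noise * 10 ** (-snr_db / 20)

def c40_mag(x):
    """Handcrafted FB feature: |C40| = |E[x^4] - 3 E[x^2]^2| on the
    power-normalized signal (~2 for BPSK, ~1 for QPSK)."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))
    return abs(np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2)

def black_box(x):
    """Stand-in for the DL model: thresholds a *different* statistic,
    |E[x^2]| (large for BPSK, near 0 for QPSK). Returns 0=BPSK, 1=QPSK."""
    return 0 if abs(np.mean(x ** 2)) > 0.5 else 1

# Local dataset: signals sampled around the operating point to be explained,
# labeled by the black-box model (not by the true modulation).
mods = rng.choice(["bpsk", "qpsk"], 200)
signals = [make_signal(m, snr_db=18 + 4 * rng.random()) for m in mods]
feats = np.array([c40_mag(x) for x in signals])
labels = np.array([black_box(x) for x in signals])

# Fit a depth-1 decision tree (stump) on the FB feature by exhaustive
# threshold search: predict BPSK (class 0) when |C40| exceeds the split.
f = np.sort(feats)
cands = (f[:-1] + f[1:]) / 2
fid = lambda t: np.mean((feats > t) == (labels == 0))
best_t = max(cands, key=fid)
fidelity = fid(best_t)
print(f"stump: |C40| > {best_t:.2f} -> BPSK; local fidelity = {fidelity:.2f}")
```

Because |C40| cleanly separates the two classes at these SNRs, the stump reproduces the black box's local decisions almost exactly, and its single split can be read against FB expert knowledge (the known BPSK/QPSK cumulant values) just as the paper compares its surrogate trees with prior knowledge.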

Updated: 2024-03-26