Revolutionizing flame detection: Novelization in flame detection through transferring distillation for knowledge to pruned model
Expert Systems with Applications (IF 8.5), Pub Date: 2024-03-20, DOI: 10.1016/j.eswa.2024.123787
Hongkang Tao, Jiansheng Liu, Zan Yang, Guhong Wang, Jie Shang, Haobo Qiu, Liang Gao

Traditional flame sensors have demonstrated suboptimal detection performance in complex environments, prompting researchers to integrate deep neural network algorithms into these sensors to improve detection accuracy. However, these algorithms usually rely on large neural networks with excessive parameter counts, which limits their adaptability to different devices. To compress such models effectively, this study proposes a flame detection framework that transfers distilled knowledge to a pruned model (DK-TPM) in order to reduce model parameters. The framework integrates CFasterNet (CFN) and RepConv as foundational components of the base model, YO-CR, to achieve comprehensive feature learning while maintaining good device compatibility. Structured pruning is then applied to the base model to construct a student model. Finally, a new technique called feature distillation with smoothing (FDS) is employed to transfer flame features from the teacher model to the student model. Experiments conducted on five public datasets using three V100 GPUs demonstrate the generalization capability of the algorithm. Moreover, systematic experiments on flame datasets yield significant results, achieving an average precision (AP) of 81.5% and a frame rate of 1048.6 frames per second (FPS), exceeding the requirements of existing devices. The proposed DK-TPM therefore provides reliable prediction accuracy and speed for flame detection tasks.
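To make the described pipeline concrete, below is a minimal PyTorch-style sketch of the core idea: a student obtained by structured (channel-level) pruning of the teacher is trained so that its intermediate features match smoothed teacher features. The tiny backbone, the average-pooling smoother, the 1x1 channel adapter, and the loss form are illustrative assumptions only; they are not the authors' published YO-CR architecture or FDS formulation.

# Hedged sketch of distilling knowledge from a teacher into a structurally
# pruned student. Module names and the smoothing operator are assumptions,
# not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    """Stand-in backbone; the paper's YO-CR uses CFasterNet/RepConv blocks."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(3, channels, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)

    def forward(self, x):
        feat = F.relu(self.conv1(x))
        return F.relu(self.conv2(feat))

def smooth(feat, kernel_size=3):
    """Average-pool smoothing of teacher features (assumed form of 'FDS')."""
    return F.avg_pool2d(feat, kernel_size, stride=1, padding=kernel_size // 2)

def feature_distillation_loss(student_feat, teacher_feat, adapter):
    """L2 distance between adapted student features and smoothed teacher features."""
    return F.mse_loss(adapter(student_feat), smooth(teacher_feat))

# Teacher (wider) and structurally pruned student (fewer channels per layer).
teacher = TinyBackbone(channels=64).eval()
student = TinyBackbone(channels=32)
# 1x1 adapter maps the pruned student's channels onto the teacher's.
adapter = nn.Conv2d(32, 64, kernel_size=1)

optimizer = torch.optim.SGD(
    list(student.parameters()) + list(adapter.parameters()), lr=0.01)
images = torch.randn(4, 3, 64, 64)  # dummy batch standing in for flame images

with torch.no_grad():
    t_feat = teacher(images)          # teacher features are fixed targets
s_feat = student(images)
loss = feature_distillation_loss(s_feat, t_feat, adapter)  # + detection loss in practice
loss.backward()
optimizer.step()

In a full training loop this distillation term would be added to the usual detection loss on the flame labels; the sketch shows only the feature-matching step.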
