Layer-by-Layer Knowledge Distillation for Training Simplified Bipolar Morphological Neural Networks
Programming and Computer Software (IF 0.7), Pub Date: 2024-03-12, DOI: 10.1134/s0361768823100080
M. V. Zingerenko, E. E. Limonova

Abstract

Various neuron approximations can be used to reduce the computational complexity of neural networks. One such approximation, based on summation and maximum operations, is the bipolar morphological neuron. This paper presents an improved structure of the bipolar morphological neuron that enhances its computational efficiency, together with a new training approach based on continuous approximations of the maximum and on knowledge distillation. Experiments were carried out on the MNIST dataset with a LeNet-like neural network architecture and on the CIFAR-10 dataset with a ResNet-22 architecture. The proposed training method achieves 99.45% classification accuracy with the LeNet-like model (matching the classical network) and 86.69% accuracy with the ResNet-22 model, compared with 86.43% for the classical model. The results show that the proposed method, combining a log-sum-exp (LSE) approximation of the maximum with layer-by-layer knowledge distillation, yields a simplified bipolar morphological network that is not inferior to its classical counterpart.
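
The two ingredients named in the abstract, the LSE smooth maximum and layer-by-layer activation matching, can be illustrated with a short sketch. This is not the authors' implementation: PyTorch is an assumed framework, the names smooth_max, BMNeuronLayer, and layerwise_distillation_loss are hypothetical, and BMNeuronLayer is a simplified two-branch dense variant (the full bipolar morphological construction handles positive and negative input parts in separate branches).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smooth_max(z, alpha=10.0, dim=-1):
    """LSE approximation of max: (1/alpha) * log(sum(exp(alpha * z))).
    Approaches the exact maximum as alpha -> infinity."""
    return torch.logsumexp(alpha * z, dim=dim) / alpha

class BMNeuronLayer(nn.Module):
    """Hypothetical dense bipolar morphological layer: each output is the
    difference of two max-plus ("tropical") terms over log-magnitudes,
    with the hard max replaced by smooth_max for gradient-based training."""
    def __init__(self, in_features, out_features, alpha=10.0, eps=1e-6):
        super().__init__()
        self.v_pos = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.v_neg = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.alpha, self.eps = alpha, eps

    def forward(self, x):
        # log of |x|; eps guards log(0). Sign handling is simplified here
        # relative to the paper's full positive/negative branch split.
        log_x = torch.log(x.abs() + self.eps)                 # (batch, in)
        z = log_x.unsqueeze(1)                                # (batch, 1, in)
        pos = smooth_max(z + self.v_pos, self.alpha, dim=-1)  # (batch, out)
        neg = smooth_max(z + self.v_neg, self.alpha, dim=-1)  # (batch, out)
        return torch.exp(pos) - torch.exp(neg)

def layerwise_distillation_loss(student_acts, teacher_acts):
    """Layer-by-layer distillation: match each converted (student) layer's
    activations to those of the corresponding classical (teacher) layer."""
    return sum(F.mse_loss(s, t) for s, t in zip(student_acts, teacher_acts))

# Usage sketch:
# layer = BMNeuronLayer(784, 120)
# y = layer(torch.randn(32, 784))
```

On one plausible reading of the abstract's training scheme, a small alpha gives smooth gradients early in training and can be annealed toward the hard-max network, while the layer-wise loss lets each converted layer be fitted against the corresponding layer of the classical teacher before end-to-end fine-tuning.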



Updated: 2024-03-13