Evolutionary approximation and neural architecture search
Genetic Programming and Evolvable Machines ( IF 2.6 ) Pub Date : 2022-06-11 , DOI: 10.1007/s10710-022-09441-z
Michal Pinos , Vojtech Mrazek , Lukas Sekanina

Automated neural architecture search (NAS) methods are now employed to routinely deliver high-quality neural network architectures for various challenging data sets and reduce the designer’s effort. The NAS methods utilizing multi-objective evolutionary algorithms are especially useful when the objective is not only to minimize the network error but also to reduce the number of parameters (weights) or power consumption of the inference phase. We propose a multi-objective NAS method based on Cartesian genetic programming for evolving convolutional neural networks (CNN). The method allows approximate operations to be used in CNNs to reduce the power consumption of a target hardware implementation. During the NAS process, a suitable CNN architecture is evolved together with selecting approximate multipliers to deliver the best trade-offs between accuracy, network size, and power consumption. The most suitable 8 × N-bit approximate multipliers are automatically selected from a library of approximate multipliers. Evolved CNNs are compared with CNNs developed by other NAS methods on the CIFAR-10 and SVHN benchmark problems.
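The core of the approach is multi-objective selection: candidate CNNs are compared on several objectives at once (classification error, parameter count, power consumption), and only non-dominated architectures survive. The sketch below illustrates Pareto-dominance filtering in this three-objective setting; it is not the authors' implementation, and the candidate tuples and values are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): Pareto-based
# filtering of candidate CNNs, each described by three objectives to
# minimize: classification error, parameter count, and estimated power.
from typing import List, Tuple

Candidate = Tuple[float, int, float]  # (error, parameters, power) - hypothetical

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop: List[Candidate]) -> List[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in pop if not any(dominates(o, c) for o in pop if o != c)]

# Hypothetical population of evolved CNN candidates:
population = [
    (0.08, 1_200_000, 0.90),  # accurate, but large and power-hungry
    (0.12,   400_000, 0.35),  # balanced trade-off
    (0.12,   500_000, 0.60),  # dominated by the candidate above
    (0.20,   150_000, 0.20),  # small and frugal, less accurate
]
front = pareto_front(population)
print(front)  # the dominated (0.12, 500_000, 0.60) candidate is removed
```

In the paper's setting, the genotype additionally encodes which 8 × N-bit approximate multiplier from the library each candidate uses, so the power objective reflects that choice; the dominance test itself is unchanged.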



Updated: 2022-06-12