A highly efficient ADMM-based algorithm for outlier-robust regression with Huber loss
Applied Intelligence (IF 5.3), Pub Date: 2024-04-15, DOI: 10.1007/s10489-024-05370-9
Tianlei Wang , Xiaoping Lai , Jiuwen Cao

Huber robust regression (HRR) has attracted much attention in machine learning due to its greater robustness to outliers compared with least-squares regression. However, existing algorithms for HRR are computationally much less efficient than those for least-squares regression. Based on a maximally split alternating direction method of multipliers (MS-ADMM) for model fitting, this article derives a highly computationally efficient algorithm for HRR, referred to as the modified MS-ADMM. After analyzing the convergence of the modified MS-ADMM, a parameter selection scheme is presented for the algorithm. With the parameter values calculated via this scheme, the modified MS-ADMM converges very rapidly, much faster than several typical HRR algorithms. Through applications to the training of stochastic neural networks and comparisons with existing algorithms, the modified MS-ADMM is shown to be computationally much more efficient than the convex quadratic programming method, the Newton method, the iterative reweighted least-squares method, and Nesterov’s accelerated gradient method. Implementation of the proposed algorithm on a GPU-based parallel computing platform demonstrates a higher GPU acceleration ratio than the competing methods and thus an even greater advantage in computational efficiency.
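For reference, the Huber loss with threshold delta is h_delta(r) = r^2/2 for |r| <= delta and delta(|r| - delta/2) otherwise; it penalizes large residuals only linearly, which is what gives HRR its robustness to outliers. The sketch below is a generic scaled ADMM for Huber-loss regression, obtained by splitting the residual into an auxiliary variable and applying the Huber proximal operator. It is not the paper's modified (maximally split) MS-ADMM; the function names, the penalty parameter rho, the threshold delta, and the small ridge term are illustrative assumptions.

```python
# Minimal sketch: standard scaled ADMM for Huber-loss regression.
# NOT the paper's modified MS-ADMM; all names/parameters are illustrative.
import numpy as np

def huber_prox(v, delta, t):
    """Elementwise proximal operator of t * huber_delta(.)."""
    quad = np.abs(v) <= delta * (1.0 + t)                # quadratic region
    return np.where(quad, v / (1.0 + t), v - t * delta * np.sign(v))

def admm_huber_regression(X, y, delta=1.0, rho=1.0, n_iter=200):
    """Solve min_w sum_i huber_delta(y_i - x_i^T w) via scaled ADMM.

    Splitting: introduce z = y - X w, i.e. the constraint X w + z = y.
    """
    n, d = X.shape
    w = np.zeros(d)
    z = np.zeros(n)
    u = np.zeros(n)                                      # scaled dual variable
    # Cache a Cholesky factor of X^T X (small ridge added for stability),
    # reused in every w-update.
    L = np.linalg.cholesky(X.T @ X + 1e-8 * np.eye(d))
    for _ in range(n_iter):
        # w-update: least-squares step, X^T X w = X^T (y - z - u)
        rhs = X.T @ (y - z - u)
        w = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: Huber proximal step with scaling 1/rho
        z = huber_prox(y - X @ w - u, delta, 1.0 / rho)
        # dual update on the residual of the constraint X w + z = y
        u = u + X @ w + z - y
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    y[:10] += 10.0                                       # inject outliers
    print(np.round(admm_huber_regression(X, y) - w_true, 3))
```

In this generic form, the w-update is a shared least-squares solve; the maximally split formulation studied in the paper instead decomposes the model-fitting step so that the updates parallelize across coordinates, which is what makes the GPU implementation attractive.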



Updated: 2024-04-16