Complementary composite minimization, small gradients in general norms, and applications
Mathematical Programming ( IF 2.7 ) Pub Date : 2024-01-05 , DOI: 10.1007/s10107-023-02040-5
Jelena Diakonikolas , Cristóbal Guzmán

Composite minimization is a powerful framework in large-scale convex optimization, based on decoupling the objective function into terms with structurally different properties, which allows for more flexible algorithmic design. We introduce a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term. This particular form of decoupling is pervasive in statistics and machine learning, due to its link to regularization. The main contributions of our work are summarized as follows. First, we introduce the problem of complementary composite minimization in general normed spaces; second, we provide a unified accelerated algorithmic framework to address broad classes of complementary composite minimization problems; and third, we prove that the algorithms resulting from our framework are near-optimal in most of the standard optimization settings. Additionally, we show that our algorithmic framework can be used to address the problem of making the gradients small in general normed spaces. As a concrete example, we obtain a nearly optimal method for the standard \(\ell _1\) setup (small gradients in the \(\ell _\infty \) norm), essentially matching the bound of Nesterov (Optima Math Optim Soc Newsl 88:10–11, 2012) that was previously known only for the Euclidean setup. Finally, we show that our composite methods are broadly applicable to a number of regression and other classes of optimization problems, where regularization plays a key role. Our methods lead to complexity bounds that are either new or match the best existing ones.
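To make the problem class concrete, the following is a minimal NumPy sketch of a complementary composite objective of the kind the abstract describes: a smooth least-squares term plus a uniformly convex regularizer \((\lambda /p)\Vert x\Vert _p^p\) with \(p \ge 2\). This sketch uses plain gradient descent as a baseline; it is not the authors' accelerated framework, and all data, names, and parameter values here are invented for illustration.

```python
import numpy as np

# Synthetic problem data (purely illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam, p = 0.1, 3.0  # regularization weight and uniform-convexity exponent (p >= 2)

def objective(x):
    # F(x) = f(x) + psi(x):
    #   f(x)   = (1/2) ||Ax - b||_2^2          (smooth term)
    #   psi(x) = (lam/p) ||x||_p^p             (uniformly convex term, p >= 2)
    return 0.5 * np.sum((A @ x - b) ** 2) + (lam / p) * np.sum(np.abs(x) ** p)

def gradient(x):
    # grad f(x) = A^T (Ax - b);  grad psi(x) = lam * |x|^(p-1) * sign(x)
    return A.T @ (A @ x - b) + lam * np.abs(x) ** (p - 1) * np.sign(x)

x = np.zeros(5)
# Conservative step size based on the smooth part's Lipschitz constant
# (largest singular value of A^T A).
step = 0.5 / np.linalg.norm(A.T @ A, 2)

history = [objective(x)]
for _ in range(200):
    x = x - step * gradient(x)
    history.append(objective(x))
```

For \(p \ge 2\), the regularizer \(\Vert x\Vert _p^p\) is uniformly convex of degree \(p\), which is exactly the structural property the complementary composite setting exploits; the accelerated methods of the paper obtain much better rates than this baseline descent loop.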




Updated: 2024-01-06