Frank–Wolfe-type methods for a class of nonconvex inequality-constrained problems
Mathematical Programming (IF 2.7), Pub Date: 2024-02-03, DOI: 10.1007/s10107-023-02055-y
Liaoyuan Zeng , Yongle Zhang , Guoyin Li , Ting Kei Pong , Xiaozhou Wang

Abstract

The Frank–Wolfe (FW) method, which relies on efficient linear oracles that minimize linear approximations of the objective function over a fixed compact convex set, has recently received much attention in the optimization and machine learning literature. In this paper, we propose a new FW-type method for minimizing a smooth function over a compact set defined as the level set of a single difference-of-convex function, based on new generalized linear-optimization oracles (LOs). We show that these LOs admit efficient closed-form solutions in some important optimization models arising in compressed sensing and machine learning. In addition, under a mild strict feasibility condition, we establish subsequential convergence of our nonconvex FW-type method. Since the feasible region of our generalized LO typically changes from iteration to iteration, our convergence analysis differs fundamentally from existing analyses of FW-type methods, which assume a fixed feasible region across subproblems. Finally, motivated by the away steps used to accelerate FW-type methods for convex problems, we further design an away-step oracle to supplement our nonconvex FW-type method, and establish subsequential convergence of this variant. Numerical results on the matrix completion problem with standard datasets demonstrate the efficiency of the proposed FW-type method and its away-step variant.
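For orientation, the classical FW template that the abstract builds on can be sketched in a few lines. The sketch below is a minimal illustration only, not the paper's method: it uses a fixed ℓ1-ball feasible set, whose linear oracle has a well-known closed-form solution, together with the standard 2/(k+2) step size. It does not implement the paper's generalized LO over level sets of difference-of-convex functions or its away-step oracle, and all function names here are our own.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear-minimization oracle over the l1 ball {x : ||x||_1 <= radius}.
    Returns argmin over the ball of <grad, s>, attained at a signed vertex."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe(grad_f, x0, lmo, max_iter=100):
    """Classical Frank-Wolfe iteration with the standard diminishing step size."""
    x = x0.copy()
    for k in range(max_iter):
        g = grad_f(x)
        s = lmo(g)                        # solve the linear subproblem
        gamma = 2.0 / (k + 2.0)           # standard 2/(k+2) step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Illustrative use: least squares min ||Ax - b||^2 over the l1 ball
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad_f = lambda x: 2 * A.T @ (A @ x - b)
x_hat = frank_wolfe(grad_f, np.zeros(10), lmo_l1_ball)
```

In the paper's setting the feasible set is the level set of a difference-of-convex function rather than a fixed convex set, so the subproblem solved by the oracle changes from iteration to iteration; the sketch above shows only the fixed-region template against which that departure is measured.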



