Newton and interior-point methods for (constrained) nonconvex–nonconcave minmax optimization with stability and instability guarantees
Mathematics of Control, Signals, and Systems (IF 1.2) Pub Date: 2023-10-10, DOI: 10.1007/s00498-023-00371-4
Raphael Chinchilla, Guosong Yang, João P. Hespanha

We address the problem of finding a local solution to a nonconvex–nonconcave minmax optimization using Newton-type methods, including primal-dual interior-point ones. The first step in our approach is to analyze the local convergence properties of Newton's method in nonconvex minimization. It is well established that Newton's method iterations are attracted to any point with a zero gradient, irrespective of whether it is a local minimum. From a dynamical systems standpoint, this occurs because every point with a zero gradient is a locally asymptotically stable equilibrium point. We show that by adding a multiple of the identity so that the Hessian matrix is always positive definite, we can ensure that every non-local-minimum equilibrium point becomes unstable (meaning that the iterations are no longer attracted to such points), while local minima remain locally asymptotically stable. Building on this foundation, we develop Newton-type algorithms for minmax optimization, conceptualized as a sequence of local quadratic approximations to the minmax problem. Each local quadratic approximation serves as a surrogate that guides the modified Newton's method toward a solution. For these local quadratic approximations to be well-defined, it is necessary to modify the Hessian matrix by adding a diagonal matrix. We demonstrate that, for an appropriate choice of this diagonal matrix, we can guarantee the instability of every non-local-minmax equilibrium point while maintaining stability for local minmax points. Using numerical examples, we illustrate the importance of guaranteeing the instability property. While our results concern local convergence, the numerical examples also indicate that our algorithm enjoys good global convergence properties.
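To make the minimization-side idea concrete, here is a minimal sketch of a Newton step with an identity shift: a multiple of the identity is added so that the shifted Hessian is positive definite, which (as the abstract argues) makes saddle points repel the iterates instead of attracting them. The shift rule (`tau = max(0, delta - min eigenvalue)`) and the value of `delta` are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def modified_newton_step(grad, hess, delta=0.5):
    """One Newton step with an identity shift tau * I.

    tau is chosen so that all eigenvalues of (hess + tau * I) are at
    least delta, hence the shifted Hessian is positive definite.
    This rule is a hypothetical stand-in for the paper's choice.
    """
    eigvals = np.linalg.eigvalsh(hess)          # hess assumed symmetric
    tau = max(0.0, delta - eigvals.min())       # shift only if needed
    n = len(grad)
    return -np.linalg.solve(hess + tau * np.eye(n), grad)

# Toy example: f(x, y) = x**2 - y**2 has a saddle at the origin.
# Plain Newton would jump straight to (0, 0); the shifted step instead
# contracts in x (the convex direction) and expands in y (the concave
# direction), so the iterate is pushed away from the saddle.
def grad_f(z):
    return np.array([2.0 * z[0], -2.0 * z[1]])

def hess_f(z):
    return np.diag([2.0, -2.0])

z = np.array([0.5, 0.1])
z = z + modified_newton_step(grad_f(z), hess_f(z))
```

After one step the x-coordinate shrinks toward zero while the y-coordinate grows, illustrating the instability of the non-minimum equilibrium that the paper's analysis guarantees.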




Updated: 2023-10-13