Adaptive minimax optimality in statistical inverse problems via SOLIT—Sharp Optimal Lepskiĭ-Inspired Tuning
Inverse Problems ( IF 2.1 ) Pub Date : 2023-12-27 , DOI: 10.1088/1361-6420/ad12e0 Housen Li , Frank Werner
We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form $\hat f_\alpha = q_\alpha(T^*T)T^*Y$, where $Y$ is the available data, $T$ the forward operator, $(q_\alpha)_{\alpha\in\mathcal{A}}$ an ordered filter, and $\alpha > 0$ a regularization parameter. Whenever such a method is used in practice, $\alpha$ has to be chosen appropriately. Typically, the aim is to find, or at least approximate, the best possible $\alpha$ in the sense that the mean squared error (MSE) $\mathbb{E}\left[\|\hat f_\alpha - f^\dagger\|^2\right]$ w.r.t. the true solution $f^\dagger$ is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an a posteriori parameter choice rule ensuring adaptive minimax rates of convergence. It depends only on $Y$ and the noise level $\sigma$ as well as the operator $T$ and the filter $(q_\alpha)_{\alpha\in\mathcal{A}}$, and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive the rates of convergence in different scenarios. By a careful analysis we show that no other a posteriori parameter choice rule can yield better performance in terms of the order of the convergence rate of the MSE. In particular, our results reveal that the common understanding that Lepskiĭ-type methods in inverse problems necessarily lose a log factor is wrong. In addition, the empirical performance of SOLIT is examined in simulations.
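To make the setting concrete, the following is a minimal sketch of a filter-based estimator with a generic Lepskiĭ-type balancing rule in a diagonal (sequence-space) model. This is NOT the SOLIT rule from the paper (SOLIT's contribution is precisely a sharp, tuning-free version of such a rule); the singular values, the Tikhonov filter $q_\alpha(s) = 1/(s+\alpha)$, the candidate grid, and the balancing constant `kappa` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mildly ill-posed diagonal model: Y_k = lam_k * f_k + sigma * xi_k.
# All concrete choices below (singular values, truth, grid, kappa) are
# illustrative assumptions, not taken from the paper.
n = 500
k = np.arange(1, n + 1)
lam = k ** -1.0                     # singular values of T
f_true = k ** -1.5 * np.sin(k)      # a smooth-ish true solution f_dagger
sigma = 1e-3
Y = lam * f_true + sigma * rng.standard_normal(n)

def f_hat(alpha):
    """Tikhonov filter q_alpha(s) = 1/(s + alpha) applied as q_alpha(T*T) T* Y."""
    q = 1.0 / (lam ** 2 + alpha)
    return q * lam * Y

def noise_sd(alpha):
    """Root of the variance term of f_hat(alpha) (stochastic error size)."""
    q = 1.0 / (lam ** 2 + alpha)
    return sigma * np.sqrt(np.sum((q * lam) ** 2))

# Ordered candidate grid, most regularized first.
alphas = np.geomspace(1e-1, 1e-8, 30)
kappa = 4.0  # generic balancing constant; SOLIT avoids hand-tuning this

# Lepskii-type balancing: pick the largest alpha whose estimate is
# consistent (up to noise size) with every less-regularized estimate.
chosen = alphas[-1]
for i, a in enumerate(alphas):
    ok = all(
        np.linalg.norm(f_hat(a) - f_hat(b)) <= kappa * noise_sd(b)
        for b in alphas[i + 1:]
    )
    if ok:
        chosen = a
        break

err = np.linalg.norm(f_hat(chosen) - f_true)
print(f"alpha = {chosen:.2e}, error = {err:.3e}")
```

The rule is fully a posteriori: it uses only `Y`, `sigma`, the operator (via `lam`), and the filter, mirroring the inputs the abstract lists for SOLIT.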
Updated: 2023-12-27