Stable Likelihood Computation for Machine Learning of Linear Differential Operators with Gaussian Processes
International Journal for Uncertainty Quantification (IF 1.7). Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022038966
O. Chatrabgoun, Mohsen Esmaeilbeigi, M. Cheraghi, A. Daneshkhah

In many applied sciences, the main aim is to learn the parameters of the operational equations that best fit the observed data. A framework for solving such problems is to employ Gaussian process (GP) emulators, which are well-known nonparametric Bayesian machine learning techniques. GPs belong to the class of methods known as kernel machines, which can approximate rather complex problems by tuning their hyperparameters. Maximum likelihood estimation (MLE) has been widely used to estimate the parameters of the operators and kernels. However, MLE-based and Bayesian inference in their standard forms involve setting up a covariance matrix that is generally ill-conditioned. As a result, constructing and inverting the covariance matrix in the standard form becomes an unstable way to learn the parameters of the operational equations. In this paper, we propose a novel approach that tackles these computational complexities and resolves the ill-conditioning by forming the covariance matrix in alternative bases via the Hilbert-Schmidt SVD (HS-SVD). This approach yields a novel factorization of the block-structured covariance matrix that can be implemented stably by isolating the main source of the ill-conditioning. In contrast to standard matrix decompositions, which start from a matrix and produce its factors, the HS-SVD is constructed directly from the Hilbert-Schmidt eigenvalues and eigenvectors, without ever forming the potentially ill-conditioned matrix. We also provide stable MLE and Bayesian inference to adaptively estimate the hyperparameters, and the corresponding operators can then be predicted efficiently at new points using the proposed HS-SVD bases.
The efficiency and stability of the proposed HS-SVD method are compared with existing methods on several illustrative parametric linear equations, including ordinary and partial differential equations as well as integro-differential and fractional-order operators.
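The instability described in the abstract is easy to reproduce. The sketch below, a minimal illustration rather than the paper's method, builds a Gaussian (squared-exponential) covariance matrix on equally spaced points and evaluates the standard GP negative log marginal likelihood via a Cholesky factorization; the kernel, the shape parameter `eps`, the test function, and all variable names are illustrative assumptions, not the paper's notation. As `eps` shrinks (a flatter kernel), the covariance matrix approaches rank one and its condition number explodes, destabilizing exactly the `K^{-1}` and `log|K|` terms that the standard-form MLE requires.

```python
import numpy as np

def gaussian_kernel(x, y, eps):
    # Gaussian (squared-exponential) kernel: K_ij = exp(-eps^2 (x_i - y_j)^2).
    # A common GP covariance choice; smaller eps means a flatter kernel.
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def neg_log_likelihood(K, yobs):
    # "Standard form" GP negative log marginal likelihood:
    #   0.5 * y^T K^{-1} y + 0.5 * log|K| + (n/2) log(2*pi),
    # computed via Cholesky; raises LinAlgError if K is numerically indefinite.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yobs))
    n = len(yobs)
    return 0.5 * yobs @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)

x = np.linspace(0.0, 1.0, 30)
yobs = np.sin(2 * np.pi * x)  # synthetic observations for illustration

for eps in (5.0, 1.0, 0.1):
    K = gaussian_kernel(x, x, eps)
    print(f"eps = {eps:4.1f}  cond(K) = {np.linalg.cond(K):9.2e}")
    try:
        print(f"            nll     = {neg_log_likelihood(K, yobs):.4f}")
    except np.linalg.LinAlgError:
        print("            nll     = Cholesky failed (K numerically indefinite)")
```

For the flattest kernel the condition number reaches the order of machine precision's reciprocal, so any result obtained by forming and factorizing `K` directly is untrustworthy; this is the regime the HS-SVD bases are designed to handle without ever assembling `K`.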
