Novel approaches for hyper-parameter tuning of physics-informed Gaussian processes: application to parametric PDEs

  • Original Article
  • Published: Engineering with Computers (2024)

Abstract

Physics-informed machine learning (PIML) methods are now among the most effective and flexible tools for solving inverse problems and operator equations. Among these methods, the physics-informed learning model built upon Gaussian processes (PIGP) holds a special place because it provides a posterior probability distribution for its predictions within the framework of Bayesian inference. In this method, the training phase, in which the optimal hyperparameters are determined, amounts to optimizing a non-convex objective known as the likelihood function. Because the gradient of this objective is available in explicit form, conjugate gradient (CG) optimization algorithms are a natural choice. Moreover, since every evaluation of the likelihood function requires computing the determinant and the inverse of the covariance matrix, the CG method should be designed to converge in as few function evaluations as possible. Previous studies have considered only one particular form of the CG method, which is naturally not very efficient. In this paper, the efficiency of CG methods for optimizing the likelihood function in PIGP is studied. Numerical simulations show that the initial step length and the search direction in CG methods have a significant effect on the number of likelihood evaluations and, consequently, on the efficiency of PIGP. Furthermore, exploiting the specific characteristics of the objective function, two modifications of the traditional CG methods are proposed: normalizing the initial step length to avoid getting stuck at ill-conditioned points, and improving the search direction via an angle condition to guarantee global convergence. Numerical results for seven improved CG methods, with four different angles in the angle condition and three different initial step lengths, demonstrate that the proposed modifications significantly reduce both the number of iterations and the number of function evaluations across the various CG methods. This substantially increases the efficiency of the PIGP method; in particular, the improved algorithms perform well in cases where the traditional CG algorithms fail. Finally, to make the methods studied in this paper applicable to other parametric equations, a software package implementing them is provided.
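To make the computational bottleneck concrete, the sketch below (a minimal illustration in Python with NumPy, not the authors' package) evaluates the negative log marginal likelihood of a Gaussian process. A single Cholesky factorization of the covariance matrix supplies both the determinant and the linear solve mentioned in the abstract, which is why each evaluation of the objective is expensive and why minimizing the number of evaluations matters. The kernel choice and all function names here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, theta):
    """Squared-exponential kernel; theta = (signal std, length scale).
    An illustrative choice, not necessarily the kernel used in the paper."""
    sigma_f, ell = theta
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def neg_log_likelihood(theta, X, y, noise=1e-6):
    """Negative log marginal likelihood. One O(n^3) Cholesky factorization
    K = L L^T yields both log det K = 2*sum(log diag L) and K^{-1} y."""
    n = len(y)
    K = rbf_kernel(X, theta) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha                  # data-fit term
            + np.sum(np.log(np.diag(L)))     # 0.5 * log det K
            + 0.5 * n * np.log(2 * np.pi))   # normalization constant
```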
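The second sketch illustrates, under the same caveats, the two safeguards proposed for traditional CG methods: the initial trial step is normalized by the norm of the search direction, and whenever the angle condition fails the direction is reset to steepest descent, restoring a guaranteed descent direction. A Fletcher–Reeves update and a finite-difference gradient are used purely to keep the example self-contained (the paper works with the explicit gradient and studies several CG variants); the toy usage reuses `neg_log_likelihood` from the previous sketch.

```python
def grad_fd(f, x, h=1e-6):
    """Forward-difference gradient (the paper uses the explicit gradient;
    finite differences keep this sketch self-contained)."""
    g, fx = np.zeros_like(x), f(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def cg_minimize(f, x0, max_iter=100, tol=1e-6, eta=1e-3):
    """Fletcher-Reeves CG with a normalized initial step length and an
    angle-condition restart: if -g.d < eta*|g|*|d|, reset d to -g."""
    x, g = x0.copy(), grad_fd(f, x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if -g @ d < eta * np.linalg.norm(g) * np.linalg.norm(d):
            d = -g                            # restart: steepest descent
        alpha = 1.0 / np.linalg.norm(d)       # normalized initial step
        fx = f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5                      # backtracking (Armijo) search
        x = x + alpha * d
        g_new = grad_fd(f, x)
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d   # Fletcher-Reeves update
        g = g_new
    return x

# Toy usage: tune (sigma_f, ell) on noisy samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)
theta = cg_minimize(lambda t: neg_log_likelihood(t, X, y, noise=0.05**2),
                    np.array([1.0, 1.0]))
print("tuned hyperparameters:", theta)
```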


Data availability

The source code that supports the findings of this study is available on GitHub at https://github.com/massezati/Classical-CG-Normalized-CG-Normalized-with-Restart-CG-Methods.git.


Acknowledgements

The authors would like to extend their sincere thanks to the editor and the anonymous referees, who spent their valuable time reviewing this paper.

Author information


Corresponding author

Correspondence to Mohsen Esmaeilbeigi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Cite this article

Ezati, M., Esmaeilbeigi, M. & Kamandi, A. Novel approaches for hyper-parameter tuning of physics-informed Gaussian processes: application to parametric PDEs. Engineering with Computers (2024). https://doi.org/10.1007/s00366-024-01970-8

