Abstract
In this work, we study the optimal control of stochastic Burgers equation perturbed by Gaussian and Lévy-type noises with distributed control process acting on the state equation. We use dynamic programming approach for the feedback synthesis to obtain an infinite-dimensional second-order Hamilton–Jacobi–Bellman (HJB) equation consisting of an integro-differential operator with Lévy measure associated with the stochastic control problem. Using the regularizing properties of the transition semigroup corresponding to the stochastic Burgers equation and compactness arguments, we solve the HJB equation and the resultant feedback control problem.
Data Sharing Statement
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
References
Agarwal, P., Manna, U., Mukherjee, D.: Stochastic control of tidal dynamics equation with Lévy noise. Appl. Math. Optim. 79(2), 327–396 (2019)
Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge University Press, Cambridge (2009)
Bardi, M., Capuzzo Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations. Birkhäuser, Berlin (1997)
Beck, C., Becker, S., Cheridito, P., Jentzen, A., Neufeld, A.: Deep splitting method for parabolic PDEs. SIAM J. Sci. Comput. 43(5), A3135–A3154 (2021)
Birnir, B.: The Kolmogorov–Obukhov Theory of Turbulence: A Mathematical Theory of Turbulence. Springer (2013)
Davis, M.H.A., Johansson, M.P.: Malliavin Monte Carlo Greeks for jump diffusions. Stoch. Process. Appl. 116, 101–129 (2006)
de Acosta, A.: Large deviations for vector-valued Lévy processes. Stoch. Process. Appl. 51, 75–115 (1994)
Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces. Cambridge University Press (2002)
Da Prato, G., Zabczyk, J.: Differentiability of the Feynman–Kac semigroup and a control application. Rend. Lincei—Mat. Appl. 8, 183–188 (1997)
Da Prato, G., Debussche, A.: Control of the stochastic Burgers model of turbulence. SIAM J. Control Optim. 37, 1123–1149 (1999)
Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Burgers equations. Ann. Mat. Pura Appl. 178, 143–174 (2000)
Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Navier–Stokes equations. Math. Model. Numer. Anal. 34, 459–475 (2000)
Debussche, A.: Ergodicity results for the stochastic Navier–Stokes equations: an introduction. In: Topics in Mathematical Fluid Mechanics, Volume 2073 of the Series Lecture Notes in Mathematics, pp. 23–108. Springer (2013)
Dong, Z., Xu, T.G.: One dimensional stochastic Burgers equation driven by Lévy processes. J. Funct. Anal. 243, 631–678 (2007)
Dong, Z., Xie, Y.: Ergodicity of stochastic 2D Navier–Stokes equation with Lévy noise. J. Differ. Equ. 251, 196–222 (2011)
Elworthy, K.D., Li, X.M.: Formulae for the derivative of heat semigroups. J. Funct. Anal. 125, 252–286 (1994)
Fabbri, G., Gozzi, F., Swiech, A.: Stochastic Optimal Control in Infinite Dimension. Springer, Cham (2017)
Gozzi, F., Sritharan, S.S., Swiech, A.: Viscosity solutions of dynamic programming equations for optimal control of Navier–Stokes equations. Arch. Ration. Mech. Anal. 163(4), 295–327 (2002)
Gozzi, F., Sritharan, S.S., Swiech, A.: Bellman equations associated to the optimal feedback control of stochastic Navier–Stokes equations. Commun. Pure Appl. Math. LVIII, 0001–0030 (2005)
Hirai, Y.: Itô–Föllmer calculus in Banach spaces I: the Itô formula. Electron. J. Probab. 28, 1–41 (2023)
Ichikawa, A.: Some inequalities for martingales and stochastic convolutions. Stoch. Anal. Appl. 4, 329–339 (1986)
Jacod, J., Protter, P.: Probability Essentials. Springer (2003)
Marinelli, C., Prévôt, C., Röckner, M.: Regular dependence on initial data for stochastic evolution equations with multiplicative Poisson noise. J. Funct. Anal. 258, 616–649 (2010)
Mohan, M.T., Sritharan, S.S.: Ergodic control of stochastic Navier–Stokes equation with Lévy noise. Commun. Stoch. Anal. 10, 389–404 (2016)
Mohan, M.T., Sakthivel, K., Sritharan, S.S.: Ergodicity for the 3D stochastic Navier–Stokes equations perturbed by Lévy noise. Math. Nachr. 292(5), 1056–1088 (2019)
Métivier, M.: Stochastic Partial Differential Equations in Infinite Dimensional Spaces. Quaderni, Scuola Normale Superiore, Pisa (1988)
Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise. Cambridge University Press, Cambridge (2007)
Priola, E., Zabczyk, J.: Liouville theorems for non-local operators. J. Funct. Anal. 216(2), 455–490 (2004)
Röckner, M., Zhang, T.: Stochastic evolution equations of jump type: existence, uniqueness and large deviation principles. Potential Anal. 26, 255–279 (2007)
Sakthivel, K., Sritharan, S.S.: Martingale solutions for stochastic Navier–Stokes equations driven by Lévy noise. Evol. Equ. Control Theory 1, 355–392 (2012)
Sritharan, S.S.: An introduction to deterministic and stochastic control of viscous flow. In: Sritharan, S.S. (ed.) Optimal Control of Viscous Flow, pp. 1–42. SIAM, Philadelphia (1998)
Sritharan, S.S.: Dynamic programming of the Navier–Stokes equations. Syst. Control Lett. 16, 299–307 (1991)
Sritharan, S.S.: Deterministic and stochastic control of Navier–Stokes equation with linear, monotone, and hyperviscosities. Appl. Math. Optim. 41, 255–308 (2000)
Swiech, A., Zabczyk, J.: Large deviations for stochastic PDE with Lévy noise. J. Funct. Anal. 260, 674–723 (2011)
Weinan, E., Hutzenthaler, M., Jentzen, A., Kruse, T.: On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations. J. Sci. Comput. 79(3), 1534–1571 (2019)
Weinan, E., Hutzenthaler, M., Jentzen, A., Kruse, T.: Linear scaling algorithms for solving high-dimensional nonlinear parabolic differential equations. SAM Res. Rep. (2017)
Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian systems and HJB Equations. Springer, New York (1999)
Zhu, J., Brzeźniak, Z., Liu, W.: Maximal inequalities and exponential estimates for stochastic convolutions driven by Lévy-type processes in Banach spaces with application to stochastic quasi-geostrophic equations. SIAM J. Math. Anal. 51(3), 2121–2167 (2019)
Acknowledgements
M. T. Mohan would like to thank the Department of Science and Technology (DST), India, for the Innovation in Science Pursuit for Inspired Research (INSPIRE) Faculty Award (IFA17-MA110), and the Indian Statistical Institute—Bangalore Centre (ISI-Bangalore), the Indian Institute of Technology Roorkee (IIT-Roorkee), and the Indian Institute of Space Science and Technology (IIST), Trivandrum, for providing a stimulating scientific environment and resources. The work of K. Sakthivel is supported by the National Board for Higher Mathematics, Govt. of India, through the research Grant No. 02011/13/2022/R&D-II/10206. S. S. Sritharan's work has been funded by the U. S. Army Research Office, Probability and Statistics program, and the NRC Senior Associateship of the Air Force Research Laboratory and the National Academies. The authors would also like to thank Prof. A. Debussche for useful suggestions in proving Theorem 6.1, and sincerely thank the reviewers for their valuable comments and suggestions, which led to the improvement of this paper.
Additional information
Communicated by Fausto Gozzi.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: Dynamic Programming Principle
We briefly give the key steps in deriving the stochastic Dynamic Programming Principle and the HJB equation. Let us define
Let \(T>0\) be given. Then for any \(t\in [0,T),\) consider the problem of minimizing the cost functional
over all controls \({\textrm{U}}\in {\mathscr {U}}^{t,T}_{\rho }\) and \({\textrm{X}}(\cdot )\) satisfying the following state equation
Here \({\mathbb {E}}_t[{\textrm{X}}(s)]={\mathbb {E}}[{\textrm{X}}(s) | {\textrm{X}}(t)=x]\), for \(s\ge t\) and the admissible control set is defined as
where \({\mathscr {F}}_{t,s}\) is the \(\sigma \)-algebra generated by the paths of \({\textrm{W}}\) and the random measures \({\textrm{N}}\) up to time s, i.e., \(\sigma \{{\textrm{W}}(r); t\le r\le s\}\) and \(\sigma \{{\textrm{N}}(S); S\in {\mathscr {B}}([t,s]\times {\mathcal {Z}})\}.\) The value function of the above control problem is defined as
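In its standard form, the value function is the infimum of the cost functional over the admissible controls (here \({\textrm{J}}\) is generic notation for the cost functional introduced above):

```latex
% Value function: infimum of the cost over admissible controls.
\mathscr{V}(t,x)
  = \inf_{\mathrm{U}\in\mathscr{U}^{t,T}_{\rho}} \mathrm{J}\big(t,x;\mathrm{U}\big).
```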
Following the dynamic programming strategy, any admissible control \({\textrm{U}}\in {\mathscr {U}}^{t,T}_{\rho }\) is a combination of controls in \( {\mathscr {U}}^{t,\tau }_{\rho }\) and \( {\mathscr {U}}^{\tau ,T}_{\rho }\) for any \(0\le t< \tau < T.\) More precisely, let the processes \({\textrm{U}}_1(s)\) and \({\textrm{U}}_2(s)\) be the restrictions of the control process \({\textrm{U}}(s)\) to the time intervals \([t,\tau ] \) and \([\tau ,T] \), respectively, i.e.,
Accordingly, the admissible control set is written as \({\mathscr {U}}^{t,T}_{\rho }={\mathscr {U}}^{t,\tau }_{\rho }\oplus {\mathscr {U}}^{\tau ,T}_{\rho }.\) Note also that the controls \({\textrm{U}}_1(s)\) and \( {\textrm{U}}_2(s)\) are adapted to \({\mathscr {F}}_{t,s}\) and \({\mathscr {F}}_{\tau ,s}\), respectively.
The system state \({\textrm{X}}(\cdot )\) is determined by (A.1) with \({\textrm{U}}(s)=({\textrm{U}}_1\oplus {\textrm{U}}_2)(s)\in {\mathscr {U}}^{t,T}_{\rho }.\) We decompose the system state as \({\textrm{X}}(s)=({\textrm{X}}_1\oplus {\textrm{X}}_2)(s),\) where \({\textrm{X}}_1\) and \({\textrm{X}}_2\) satisfy
and
By the tower property of conditional expectation
we write
Using the Markov property of the process \({\textrm{X}}(\cdot )\), one gets, for \(\tau \le s\le T,\)
and the same reasoning holds for \(\Psi ({\textrm{X}}_2(T))\) as well. This leads to
For more details in the case of continuous diffusions, one can refer to Yong and Zhou [37] (see also [19]); the arguments there can be adapted to the present setting. Hence
Thus, we have proved the following DPP (or Bellman’s principle of optimality)
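The principle just proved can be recorded schematically as follows; here \(\ell \) is generic notation for the running-cost integrand of the cost functional minimized above:

```latex
% Schematic form of the dynamic programming principle (Bellman's
% principle of optimality); \ell denotes the running cost.
\mathscr{V}(t,x)
  = \inf_{\mathrm{U}\in\mathscr{U}^{t,\tau}_{\rho}}
    \mathbb{E}_{t}\Big[\int_{t}^{\tau}
        \ell\big(\mathrm{X}(s),\mathrm{U}(s)\big)\,\mathrm{d}s
      + \mathscr{V}\big(\tau,\mathrm{X}(\tau)\big)\Big],
  \qquad 0\le t<\tau<T.
```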
A.1. Dynamic Programming Equation
Rewrite (A.2) as follows
Now the HJB equation can be formally obtained by applying the Itô formula for \(\mathscr {V}(\tau ,{\textrm{X}}(\tau )) - \mathscr {V}(t,x).\)
Let us define a class of functionals
Remark 7.1
(Itô formula) [20, 26] For any \(t\ge 0\) and \(\Phi \in {{\mathcal {D}}}_{{\mathcal {A}}}^{T}\) the following relation holds:
where \({\mathscr {A}}^{{\textrm{U}}}\Phi \) is the second-order partial integro-differential operator
and \({\textrm{M}}_{\tau }\) is the martingale given by
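For orientation, the operator \({\mathscr {A}}^{{\textrm{U}}}\) typically has the following structure for a controlled stochastic Burgers equation with Wiener and compensated Poisson parts; the notation below (linear part \(\mathrm{A}\), nonlinearity \(\mathrm{B}\), covariance \(\mathrm{Q}\), jump coefficient \(\gamma \) and Lévy measure \(\lambda \)) is generic, the precise coefficients being those of the state equation (A.1):

```latex
% Sketch of the second-order partial integro-differential generator:
% drift, trace (Gaussian) and nonlocal (Lévy) terms.
\mathscr{A}^{\mathrm{U}}\Phi(\tau,x)
  = \partial_{\tau}\Phi
  + \big(\mathrm{A}x - \mathrm{B}(x) + \mathrm{U},\, D_x\Phi\big)
  + \tfrac{1}{2}\,\mathrm{Tr}\big(\mathrm{Q}\,D_x^{2}\Phi\big)
  + \int_{\mathcal{Z}}\big[\Phi(\tau,x+\gamma(z))-\Phi(\tau,x)
      -\big(D_x\Phi,\gamma(z)\big)\big]\,\lambda(\mathrm{d}z).
```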
Taking \(\Phi (\tau ,{\textrm{X}}(\tau ))=\mathscr {V}(\tau ,{\textrm{X}}(\tau ))\) (assuming, of course, the required smoothness and boundedness of \(\mathscr {V}\)), plugging this back into (A.3), and using the fact that the martingale \({\textrm{M}}_{\tau }\) has zero mean, we obtain the following
Finally, take \(\tau =t+h,\ h>0,\) and divide by h. Letting \(h\rightarrow 0\) and using the properties of conditional expectation, we formally obtain the HJB equation
Moreover, setting \(v(t,x)=\mathscr {V}(T-t,x),\) one can obtain the following initial value problem
where \(\mathscr {H}\) is given by
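Schematically, and assuming purely for illustration a quadratic control cost \(\tfrac{1}{2}\Vert {\textrm{U}}\Vert ^2\) in the functional, the initial value problem and the Hamiltonian take the familiar form below, where \({\mathscr {L}}_x\) collects the control-free (linear, nonlinear, trace and jump) terms of \({\mathscr {A}}^{{\textrm{U}}}\):

```latex
% Schematic initial value problem for v(t,x)=\mathscr{V}(T-t,x); the
% Hamiltonian arises by minimizing the control-dependent terms over U
% (quadratic control cost assumed here for illustration only).
\partial_t v = \mathscr{L}_x v + \mathscr{H}(D_x v),
\qquad v(0,x)=\Psi(x),
\qquad
\mathscr{H}(p)
  = \inf_{\mathrm{U}}\Big[(\mathrm{U},p)+\tfrac12\Vert\mathrm{U}\Vert^{2}\Big]
  = -\tfrac12\Vert p\Vert^{2}.
```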
Appendix B: Bismut–Elworthy–Li Formula (BEL Formula)
The BEL formula in the Gaussian case has been derived in [8, 9, 16]. Further, this formula has been obtained for some general stochastic evolution equations with Lévy noise, in both finite and infinite dimensions, in [6, 23, 28]. For the reader's convenience, we give a formal derivation in the case of the Burgers equation with Lévy noise.
Let us consider the following Kolmogorov equation associated with the uncontrolled stochastic Burgers equation (3.9):
where \({\mathscr {L}}_x v\) is the operator defined in (3.5). For \(\Phi \in {{\mathcal {D}}}_{{\mathcal {A}}}^{T},\) Itô’s formula given by Remark 7.1 yields
For \(f\in {\mathcal {D}}\left( \mathscr {A}^{{\textrm{U}}}\right) \), we can prove that \(v(t,x)={\mathbb {E}}\left[ f({\textrm{Y}}(t,x))\right] \) is a solution of (B.1) by using Itô's formula (B.2). Let us take \(\Phi (s,{\textrm{Y}}(s,x))=v(t-s,{\textrm{Y}}(s,x))\in {{\mathcal {D}}}_{{\mathcal {A}}}^{T}\). Applying (B.2) then yields
Taking expectation on both sides of (B.3) and noting that the final two terms on the right-hand side of (B.3) are martingales, we obtain \(v(t,x)={\mathbb {E}}\left[ f({\textrm{Y}}(t,x))\right] \). Now we multiply both sides of (B.3) by \(\textrm{M}(t)={\int \limits _0^t\left( {\textrm{Q}}^{-1/2}\eta ^h(s,x),\varepsilon _1{\textrm{dW}}(s)\right) },\) where \(\eta ^h(t,x):=(D_x{\textrm{Y}}(t,x),h)\) is the solution of the equation:
Then, taking expectation and using the Markov property of the semigroup, we get the Bismut–Elworthy–Li formula for the stochastic Burgers equation perturbed by Lévy noise as follows:
In order to get (B.4), we used Itô's product rule to obtain \({\mathbb {E}}[\textrm{M}(t){\textrm{P}}(t)]=0\), where \({\textrm{P}}(t)\) is the final term on the right-hand side of the equality (B.3). This follows from the fact that the stochastic integrals \(\textrm{M}(t)\) and \({\textrm{P}}(t)\) are uncorrelated, since \({\textrm{W}}(\cdot )\) and \(\widetilde{{\textrm{N}}}(\cdot ,\cdot )\) are independent.
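The resulting identity has, schematically, the familiar Elworthy–Li form (up to a constant depending on the noise normalization \(\varepsilon _1\)); the point is that the derivative of the transition semigroup in the direction h is expressed without differentiating f:

```latex
% Schematic Bismut--Elworthy--Li identity: the jump part contributes
% nothing since M(t) and P(t) are uncorrelated.
\big(D_x v(t,x), h\big)
  = \frac{1}{t}\,\mathbb{E}\Big[f\big(\mathrm{Y}(t,x)\big)
      \int_0^t \big(\mathrm{Q}^{-1/2}\eta^{h}(s,x),\,
      \varepsilon_1\,\mathrm{dW}(s)\big)\Big].
```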
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Mohan, M.T., Sakthivel, K. & Sritharan, S.S. Dynamic Programming of the Stochastic Burgers Equation Driven by Lévy Noise. J Optim Theory Appl (2024). https://doi.org/10.1007/s10957-024-02387-5