
Dynamic Programming of the Stochastic Burgers Equation Driven by Lévy Noise

Journal of Optimization Theory and Applications

Abstract

In this work, we study the optimal control of the stochastic Burgers equation perturbed by Gaussian and Lévy-type noises, with a distributed control process acting on the state equation. We use the dynamic programming approach for the feedback synthesis to obtain an infinite-dimensional second-order Hamilton–Jacobi–Bellman (HJB) equation, involving an integro-differential operator with the Lévy measure, associated with the stochastic control problem. Using the regularizing properties of the transition semigroup corresponding to the stochastic Burgers equation and compactness arguments, we solve the HJB equation and the resultant feedback control problem.


Data Sharing Statement

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Agarwal, P., Manna, U., Mukherjee, D.: Stochastic control of tidal dynamics equation with Lévy noise. Appl. Math. Optim. 79(2), 327–396 (2019)

  2. Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge University Press, Cambridge (2009)

  3. Bardi, M., Capuzzo Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations. Birkhäuser, Berlin (1997)

  4. Beck, C., Becker, S., Cheridito, P., Jentzen, A., Neufeld, A.: Deep splitting method for parabolic PDEs. SIAM J. Sci. Comput. 43(5), A3135–A3154 (2021)

  5. Birnir, B.: The Kolmogorov–Obukhov Theory of Turbulence: A Mathematical Theory of Turbulence. Springer (2013)

  6. Davis, M.H.A., Johansson, M.P.: Malliavin Monte Carlo Greeks for jump diffusions. Stoch. Process. Appl. 116, 101–129 (2006)

  7. de Acosta, A.: Large deviations for vector-valued Lévy processes. Stoch. Process. Appl. 51, 75–115 (1994)

  8. Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces. Cambridge University Press (2002)

  9. Da Prato, G., Zabczyk, J.: Differentiability of the Feynman–Kac semigroup and a control application. Rend. Lincei Mat. Appl. 8, 183–188 (1997)

  10. Da Prato, G., Debussche, A.: Control of the stochastic Burgers model of turbulence. SIAM J. Control Optim. 37, 1123–1149 (1999)

  11. Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Burgers equations. Ann. Mat. Pura Appl. 178, 143–174 (2000)

  12. Da Prato, G., Debussche, A.: Dynamic programming for the stochastic Navier–Stokes equations. Math. Model. Numer. Anal. 34, 459–475 (2000)

  13. Debussche, A.: Ergodicity results for the stochastic Navier–Stokes equations: an introduction. In: Topics in Mathematical Fluid Mechanics, Lecture Notes in Mathematics, vol. 2073, pp. 23–108. Springer (2013)

  14. Dong, Z., Xu, T.G.: One-dimensional stochastic Burgers equation driven by Lévy processes. J. Funct. Anal. 243, 631–678 (2007)

  15. Dong, Z., Xie, Y.: Ergodicity of stochastic 2D Navier–Stokes equation with Lévy noise. J. Differ. Equ. 251, 196–222 (2011)

  16. Elworthy, K.D., Li, X.M.: Formulae for the derivative of heat semigroups. J. Funct. Anal. 125, 252–286 (1994)

  17. Fabbri, G., Gozzi, F., Swiech, A.: Stochastic Optimal Control in Infinite Dimension. Springer, Cham (2017)

  18. Gozzi, F., Sritharan, S.S., Swiech, A.: Viscosity solutions of dynamic programming equations for optimal control of Navier–Stokes equations. Arch. Ration. Mech. Anal. 163(4), 295–327 (2002)

  19. Gozzi, F., Sritharan, S.S., Swiech, A.: Bellman equations associated to the optimal feedback control of stochastic Navier–Stokes equations. Commun. Pure Appl. Math. LVIII, 0001–0030 (2005)

  20. Hirai, Y.: Itô–Föllmer calculus in Banach spaces I: the Itô formula. Electron. J. Probab. 28, 1–41 (2023)

  21. Ichikawa, A.: Some inequalities for martingales and stochastic convolutions. Stoch. Anal. Appl. 4, 329–339 (1986)

  22. Jacod, J., Protter, P.: Probability Essentials. Springer (2003)

  23. Marinelli, C., Prévôt, C., Röckner, M.: Regular dependence on initial data for stochastic evolution equations with multiplicative Poisson noise. J. Funct. Anal. 258, 616–649 (2010)

  24. Mohan, M.T., Sritharan, S.S.: Ergodic control of stochastic Navier–Stokes equation with Lévy noise. Commun. Stoch. Anal. 10, 389–404 (2016)

  25. Mohan, M.T., Sakthivel, K., Sritharan, S.S.: Ergodicity for the 3D stochastic Navier–Stokes equations perturbed by Lévy noise. Math. Nachr. 292(5), 1056–1088 (2019)

  26. Métivier, M.: Stochastic Partial Differential Equations in Infinite Dimensional Spaces. Quaderni, Scuola Normale Superiore, Pisa (1988)

  27. Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise. Cambridge University Press, Cambridge (2007)

  28. Priola, E., Zabczyk, J.: Liouville theorems for non-local operators. J. Funct. Anal. 216(2), 455–490 (2004)

  29. Röckner, M., Zhang, T.: Stochastic evolution equations of jump type: existence, uniqueness and large deviation principles. Potential Anal. 26, 255–279 (2007)

  30. Sakthivel, K., Sritharan, S.S.: Martingale solutions for stochastic Navier–Stokes equations driven by Lévy noise. Evol. Equ. Control Theory 1, 355–392 (2012)

  31. Sritharan, S.S.: An introduction to deterministic and stochastic control of viscous flow. In: Sritharan, S.S. (ed.) Optimal Control of Viscous Flow, pp. 1–42. SIAM, Philadelphia (1998)

  32. Sritharan, S.S.: Dynamic programming of the Navier–Stokes equations. Syst. Control Lett. 16, 299–307 (1991)

  33. Sritharan, S.S.: Deterministic and stochastic control of Navier–Stokes equation with linear, monotone, and hyperviscosities. Appl. Math. Optim. 41, 255–308 (2000)

  34. Swiech, A., Zabczyk, J.: Large deviations for stochastic PDE with Lévy noise. J. Funct. Anal. 260, 674–723 (2011)

  35. Weinan, E., Hutzenthaler, M., Jentzen, A., Kruse, T.: On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations. J. Sci. Comput. 79(3), 1534–1571 (2019)

  36. Weinan, E., Hutzenthaler, M., Jentzen, A., Kruse, T.: Linear scaling algorithms for solving high-dimensional nonlinear parabolic differential equations. SAM Res. Rep. (2017)

  37. Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)

  38. Zhu, J., Brzeźniak, Z., Liu, W.: Maximal inequalities and exponential estimates for stochastic convolutions driven by Lévy-type processes in Banach spaces with application to stochastic quasi-geostrophic equations. SIAM J. Math. Anal. 51(3), 2121–2167 (2019)


Acknowledgements

M. T. Mohan would like to thank the Department of Science and Technology (DST), India, for the Innovation in Science Pursuit for Inspired Research (INSPIRE) Faculty Award (IFA17-MA110), and the Indian Statistical Institute, Bangalore Centre (ISI-Bangalore), the Indian Institute of Technology Roorkee (IIT-Roorkee) and the Indian Institute of Space Science and Technology (IIST), Trivandrum, for providing a stimulating scientific environment and resources. The work of K. Sakthivel is supported by the National Board for Higher Mathematics, Govt. of India, through research Grant No. 02011/13/2022/R&D-II/10206. S. S. Sritharan's work has been funded by the U. S. Army Research Office, Probability and Statistics program, and the NRC Senior Associateship of the Air Force Research Laboratory and the National Academies. The authors would also like to thank Prof. A. Debussche for useful suggestions in proving Theorem 6.1. The authors sincerely thank the reviewers for their valuable comments and suggestions, which led to the improvement of this paper.

Author information

Correspondence to Sivaguru S. Sritharan.

Additional information

Communicated by Fausto Gozzi.


Appendices

Appendix A: Dynamic Programming Principle

We briefly give the key steps in deriving the stochastic dynamic programming principle (DPP) and the HJB equation. Let us define

$$\begin{aligned} \mathfrak {L}({\textrm{X}},{\textrm{U}})=&\Vert {\textrm{D}}_\xi {\textrm{X}}\Vert ^2+\frac{1}{2}\Vert {\textrm{U}}\Vert ^2, \\ \Psi ({\textrm{X}}(T))=&\Vert {\textrm{X}}(T)\Vert ^2, \quad {\textrm{F}}({\textrm{X}},{\textrm{U}})=-\textrm{AX}+{\textrm{B}}({\textrm{X}})+{\textrm{U}}. \end{aligned}$$

Let \(T>0\) be given. Then for any \(t\in [0,T),\) consider the problem of minimizing the cost functional

$$\begin{aligned} {{\mathcal {J}}}(t,T;x,{\textrm{U}})= {\mathbb {E}}_t\left[ \int _t^T\mathfrak {L}({\textrm{X}}(s),{\textrm{U}}(s))ds+ \Psi ({\textrm{X}}(T)) \right] , \end{aligned}$$

over all controls \({\textrm{U}}\in {\mathscr {U}}^{t,T}_{\rho }\) and \({\textrm{X}}(\cdot )\) satisfying the following state equation

$$\begin{aligned}{} & {} {\textrm{dX}}(s)={\textrm{F}}({\textrm{X}}(s),{\textrm{U}}(s)){\textrm{d}} s+\varepsilon _1{\textrm{Q}}^{1/2}{\textrm{dW}}(s) +\varepsilon _2\int _{{\mathcal {Z}}}{\textrm{G}}(s,z) \widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z), \quad s\in [t,T],\nonumber \\{} & {} {\textrm{X}}(t)=x, \quad x\in {\textrm{H}}. \end{aligned}$$
(A.1)

Here \({\mathbb {E}}_t[{\textrm{X}}(s)]={\mathbb {E}}[{\textrm{X}}(s) | {\textrm{X}}(t)=x]\), for \(s\ge t\) and the admissible control set is defined as

$$\begin{aligned} {\mathscr {U}}^{t,T}_{\rho } =\Big \{{\textrm{U}}\in {\textrm{L}}^2(\Omega , {\textrm{L}}^2(t,T;{\textrm{H}})):\Vert {\textrm{U}}\Vert \le \rho , \ \text{ and } \ {\textrm{U}} \ \text{ is } \text{ adapted } \text{ to } \ {\mathscr {F}}_{t,s}\Big \}, \end{aligned}$$

where \({\mathscr {F}}_{t,s}\) is the \(\sigma \)-algebra generated by the paths of \({\textrm{W}}\) and the random measure \({\textrm{N}}\) up to time s,  i.e., by \(\sigma \{{\textrm{W}}(r); t\le r\le s\}\) and \(\sigma \{{\textrm{N}}(S); S\in {\mathscr {B}}([t,s]\times {\mathcal {Z}})\}.\) The value function of the above control problem is defined as

$$\begin{aligned} \mathscr {V}(t,x)= & {} \inf _{{\textrm{U}}\in {\mathscr {U}}^{t,T}_{\rho }} {\mathcal J}(t,T;x,{\textrm{U}}), \text { for all }t\in [0,T), \quad x\in {\textrm{H}}, \\ \mathscr {V}(T,x)= & {} \Vert x\Vert ^2. \end{aligned}$$
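Before turning to the dynamic programming argument, it may help to see these objects in coordinates. The sketch below (our own illustration, not part of the paper's analysis) discretizes a controlled viscous Burgers equation by finite differences, keeps only the Gaussian noise (i.e., \(\varepsilon _2=0\)), and estimates the cost \({{\mathcal {J}}}\) by Monte Carlo with Euler–Maruyama time stepping; the grid sizes, noise level, and the damping feedback are arbitrary choices.

```python
import numpy as np

# Finite-difference analogue of the controlled state equation (A.1) on (0, 1)
# with Dirichlet boundary conditions; the jump term is omitted (eps2 = 0).
rng = np.random.default_rng(0)
n = 50                         # interior grid points
dx = 1.0 / (n + 1)
dt, T, eps1 = 1e-4, 0.01, 0.1
nsteps = int(T / dt)

def drift(x, u):
    # F(X, U) = -AX + B(X) + U, with -AX the discrete Laplacian and B(X) = -X X_xi
    xp = np.concatenate(([0.0], x, [0.0]))
    lap = (xp[2:] - 2.0 * xp[1:-1] + xp[:-2]) / dx**2
    adv = xp[1:-1] * (xp[2:] - xp[:-2]) / (2.0 * dx)
    return lap - adv + u

def cost(x0, policy, nsamples=200):
    # Monte Carlo estimate of J = E[ int_t^T (|D_xi X|^2 + 1/2 |U|^2) ds + |X(T)|^2 ]
    total = 0.0
    for _ in range(nsamples):
        x, J = x0.copy(), 0.0
        for _ in range(nsteps):
            u = policy(x)
            grad = np.diff(np.concatenate(([0.0], x, [0.0]))) / dx
            J += (np.sum(grad**2) * dx + 0.5 * np.sum(u**2) * dx) * dt
            x = x + drift(x, u) * dt + eps1 * np.sqrt(dt) * rng.standard_normal(n)
        total += J + np.sum(x**2) * dx
    return total / nsamples

xi = np.linspace(dx, 1.0 - dx, n)
x0 = np.sin(np.pi * xi)
J_zero = cost(x0, lambda x: np.zeros_like(x))   # uncontrolled
J_damp = cost(x0, lambda x: -x)                 # a crude damping feedback
```

Comparing `J_zero` with `J_damp` (and with other feedback laws) gives a rough numerical feel for the infimum defining \(\mathscr {V}\).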

Following the dynamic programming strategy, any admissible control \({\textrm{U}}\in {\mathscr {U}}^{t,T}_{\rho }\) is the concatenation of controls in \( {\mathscr {U}}^{t,\tau }_{\rho }\) and \( {\mathscr {U}}^{\tau ,T}_{\rho }\) for any \(0\le t< \tau < T.\) More precisely, let \({\textrm{U}}_1(s)\) and \({\textrm{U}}_2(s)\) be the restrictions of the control process \({\textrm{U}}(s)\) to the time intervals \([t,\tau ] \) and \([\tau ,T] \), respectively, i.e.,

$$\begin{aligned} {\textrm{U}}(s)= ({\textrm{U}}_1\oplus {\textrm{U}}_2)(s) =\left\{ \begin{array}{ll} {\textrm{U}}_1(s),&{} s\in [t,\tau ], \\ {\textrm{U}}_2(s),&{} s\in [\tau , T]. \\ \end{array}\right. \end{aligned}$$

Accordingly, the admissible control set is written as \({\mathscr {U}}^{t,T}_{\rho }={\mathscr {U}}^{t,\tau }_{\rho }\oplus {\mathscr {U}}^{\tau ,T}_{\rho }.\) Note also that the controls \({\textrm{U}}_1(s)\) and \( {\textrm{U}}_2(s)\) are adapted to \({\mathscr {F}}_{t,s}\) and \({\mathscr {F}}_{\tau ,s}\), respectively.

The system state \({\textrm{X}}(\cdot )\) is determined by (A.1) with \({\textrm{U}}(s)=({\textrm{U}}_1\oplus {\textrm{U}}_2)(s)\in {\mathscr {U}}^{t,T}_{\rho }.\) We decompose the system state as \({\textrm{X}}(s)=({\textrm{X}}_1\oplus {\textrm{X}}_2)(s),\) where \({\textrm{X}}_1\) and \({\textrm{X}}_2\) satisfy

$$\begin{aligned} {\textrm{dX}}_1(s)= & {} {\textrm{F}}({\textrm{X}}_1(s),{\textrm{U}}_1(s)){\textrm{d}} s+\varepsilon _1{\textrm{Q}}^{1/2}{\textrm{dW}}(s) +\varepsilon _2\int _{{\mathcal {Z}}}{\textrm{G}}(s,z) \widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z), \quad s\in [t,\tau ], \\ {\textrm{X}}_1(t)= & {} {\textrm{X}}(t)=x\in {\textrm{H}}, \end{aligned}$$

and

$$\begin{aligned} {\textrm{dX}}_2(s)= & {} {\textrm{F}}({\textrm{X}}_2(s),{\textrm{U}}_2(s)){\textrm{d}} s+\varepsilon _1{\textrm{Q}}^{1/2}{\textrm{dW}}(s) +\varepsilon _2\int _{{\mathcal {Z}}}{\textrm{G}}(s,z) \widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z), \ s\in [\tau ,T],\\ {\textrm{X}}_2(\tau )= & {} {\textrm{X}}_1(\tau )={\textrm{X}}(\tau ). \end{aligned}$$

By the tower property of conditional expectation

$$\begin{aligned} {\mathbb {E}}_t\Big [{\mathbb {E}}_t\big (\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s))\ | {\mathscr {F}}_{t,\tau }\big )\Big ] ={\mathbb {E}}_t\big [\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s))\big ], \tau \le s\le T, \end{aligned}$$

we write

$$\begin{aligned} \mathscr {V}(t,x)&= \inf _{{\textrm{U}}\in {\mathscr {U}}^{t,T}_{\rho } }{\mathbb {E}}_t\left\{ \int _t^\tau \mathfrak {L}({\textrm{X}}(s),{\textrm{U}}(s)){\textrm{d}} s + \int _\tau ^T\mathfrak {L}({\textrm{X}}(s),{\textrm{U}}(s)){\textrm{d}} s+\Psi ({\textrm{X}}(T))\right\} \\&= \inf _{\begin{array}{c} {\textrm{U}}_1\in {\mathscr {U}}^{t,\tau }_{\rho }, {\textrm{U}}_2\in {\mathscr {U}}^{\tau ,T}_{\rho } \\ {\textrm{X}}_2(\tau )={\textrm{X}}_1(\tau ) \end{array}} {\mathbb {E}}_t\left\{ \int _t^\tau \mathfrak {L}({\textrm{X}}_1(s),{\textrm{U}}_1(s)){\textrm{d}} s \right. \\&\left. \quad +\, {\mathbb {E}}_t\left[ \int _\tau ^T\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s)){\textrm{d}} s+\Psi ({\textrm{X}}_2(T))\big |{\mathscr {F}}_{t,\tau }\right] \right\} . \end{aligned}$$

Using the Markov property of the process \({\textrm{X}}(\cdot )\), one can get, for \(\tau \le s\le T,\)

$$\begin{aligned} {\mathbb {E}}_t\Big [\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s))\ | {\mathscr {F}}_{t,\tau }\Big ] ={\mathbb {E}}_{\tau }\Big [\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s))\ | \ {\textrm{X}}_2(\tau )={\textrm{X}}(\tau ) \Big ] \end{aligned}$$

and the same reasoning applies to \(\Psi ({\textrm{X}}_2(T))\). This leads to

$$\begin{aligned} {{\mathcal {J}}}(\tau ,T;{\textrm{X}}_2(\tau ),{\textrm{U}}_2)= {\mathbb {E}}_t\left\{ \int _\tau ^T\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s)){\textrm{d}} s+\Psi ({\textrm{X}}_2(T)) \ \Big | {\mathscr {F}}_{t,\tau }\right\} . \end{aligned}$$

For more details in the case of continuous diffusions, one can refer to Yong and Zhou [37] (and also [19]); the arguments there can be adapted to the present jump case. Hence

$$\begin{aligned} \mathscr {V}(t,x)&= \inf _{{\textrm{U}}_1\in {\mathscr {U}}^{t,\tau }_{\rho }} {\mathbb {E}}_t\left\{ \int _t^\tau \mathfrak {L}({\textrm{X}}_1(s),{\textrm{U}}_1(s)){\textrm{d}} s\right\} \nonumber \\&\quad +\, {\mathbb {E}}_t\left\{ \inf _{\begin{array}{c} {\textrm{U}}_2\in {\mathscr {U}}^{\tau ,T}_{\rho } \\ {\textrm{X}}_2(\tau )={\textrm{X}}_1(\tau ) \end{array}}{\mathbb {E}}_t\left[ \int _\tau ^T\mathfrak {L}({\textrm{X}}_2(s),{\textrm{U}}_2(s)){\textrm{d}} s+\Psi ({\textrm{X}}_2(T))\Big | {\mathscr {F}}_{t,\tau }\right] \right\} \nonumber \\&= \inf _{{\textrm{U}}_1\in {\mathscr {U}}^{t,\tau }_{\rho }} {\mathbb {E}}_t\left\{ \int _t^\tau \mathfrak {L}({\textrm{X}}_1(s),{\textrm{U}}_1(s)){\textrm{d}} s +\mathscr {V}(\tau ,{\textrm{X}}_1(\tau )) \right\} . \end{aligned}$$

Thus, we have proved the following DPP (Bellman's principle of optimality):

$$\begin{aligned} \mathscr {V}(t,x)=\inf _{{\textrm{U}}\in {\mathscr {U}}^{t,\tau }_{\rho }} {\mathbb {E}}\left\{ \int _t^\tau \mathfrak {L}({\textrm{X}}(s),{\textrm{U}}(s)){\textrm{d}} s +\mathscr {V}(\tau ,{\textrm{X}}(\tau )) \big | {\textrm{X}}(t)=x\right\} . \end{aligned}$$
(A.2)
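The identity (A.2) is easy to verify computationally in a toy model. The following sketch (our own discrete-time, deterministic analogue with an arbitrary scalar state, not the paper's setting) checks that backward induction, i.e., repeated application of the principle of optimality, reproduces the brute-force minimum over all control sequences.

```python
import itertools

# Discrete-time analogue of (A.2): V(t, x) = min_u { L(x, u) + V(t+1, x+u) },
# with terminal condition V(T, x) = Psi(x) and dynamics x_{k+1} = x_k + u_k.
T = 3
controls = [-1, 0, 1]
L = lambda x, u: x**2 + 0.5 * u**2      # mirrors the running cost |D X|^2 + 1/2 |U|^2
Psi = lambda x: x**2                    # mirrors the terminal cost |X(T)|^2

def brute_force(x0):
    # minimize the total cost over every open-loop control sequence
    best = float("inf")
    for seq in itertools.product(controls, repeat=T):
        x, cost = x0, 0.0
        for u in seq:
            cost += L(x, u)
            x += u
        best = min(best, cost + Psi(x))
    return best

def bellman(x0):
    # backward induction via the dynamic programming principle
    V = {x: Psi(x) for x in range(-10, 11)}
    for _ in range(T):
        V = {x: min(L(x, u) + V[x + u] for u in controls if x + u in V)
             for x in range(-7, 8)}
    return V[x0]

assert abs(brute_force(2) - bellman(2)) < 1e-12   # the two minima coincide
```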

A.1 Dynamic Programming Equation

Rewrite (A.2) as follows

$$\begin{aligned} \inf _{{\textrm{U}}\in {\mathscr {U}}^{t,\tau }_{\rho }} {\mathbb {E}}\left\{ \int _t^\tau \mathfrak {L}({\textrm{X}}(s),{\textrm{U}}(s))ds +\mathscr {V}(\tau ,{\textrm{X}}(\tau )) - \mathscr {V}(t,x) \big | {\textrm{X}}(t)=x\right\} =0. \end{aligned}$$
(A.3)

Now the HJB equation can be formally obtained by applying the Itô formula for \(\mathscr {V}(\tau ,{\textrm{X}}(\tau )) - \mathscr {V}(t,x).\)

Let us define a class of functionals

$$\begin{aligned} {{\mathcal {D}}}_{{\mathcal {A}}}^{T}=\left\{ \Phi :[0,T]\times {\textrm{H}} \rightarrow {\mathbb {R}}; \Phi (\cdot , x)\in C^{1}[0,T] \text{ and } \Phi (t,\cdot )\in \mathcal {D}(\mathscr {A}^{{\textrm{U}}})\right\} . \end{aligned}$$

Remark 7.1

(Itô formula) [20, 26] For any \(t\ge 0\) and \(\Phi \in {{\mathcal {D}}}_{{\mathcal {A}}}^{T}\) the following relation holds:

$$\begin{aligned} \Phi (\tau ,{\textrm{X}}(\tau ))-\Phi (t,x)=\int _t^\tau \big [{\textrm{D}}_s\Phi (s,{\textrm{X}}(s))+{\mathscr {A}}^{{\textrm{U}}}\Phi (s,{\textrm{X}}(s))\big ]{\textrm{d}} s + {\textrm{M}}_\tau , \ {\mathbb {P}} \text {-a.s.}, \end{aligned}$$

where \({\mathscr {A}}^{{\textrm{U}}}\Phi \) is the second-order partial integro-differential operator

$$\begin{aligned} {\mathscr {A}}^{{\textrm{U}}}\Phi (s,{\textrm{X}}(s))&= \left( {\textrm{D}}_x\Phi (s,{\textrm{X}}(s)), {\textrm{F}}({\textrm{X}}(s),{\textrm{U}}(s))\right) \\&\quad +\,\frac{1}{2} {\textrm{Tr}}(\varepsilon _1^2{\textrm{QD}}^2_x\Phi (s,{\textrm{X}}(s))) \\&\quad +\,\int _{{\mathcal {Z}}}\big [\Phi (s,{\textrm{X}}(s)+\varepsilon _2{\textrm{G}}(s,z))-\Phi (s,{\textrm{X}}(s))\\&\quad -\,\left( \varepsilon _2{\textrm{G}}(s,z),{\textrm{D}}_x\Phi (s,{\textrm{X}}(s))\right) \big ]\mu ({\textrm{d}} z) \end{aligned}$$

and \({\textrm{M}}_{\tau }\) is the martingale given by

$$\begin{aligned} {\textrm{M}}_\tau&=\int _t^\tau \left( {\textrm{D}}_x\Phi (s,{\textrm{X}}(s)), \varepsilon _1\sqrt{{\textrm{Q}}}{\textrm{dW}}(s)\right) \\&\quad +\, \int _t^\tau \int _{{\mathcal {Z}}} \big [\Phi (s,{\textrm{X}}(s-)+\varepsilon _2{\textrm{G}}(s,z))-\Phi (s,{\textrm{X}}(s-))\big ]\widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z). \end{aligned}$$

Take \(\Phi (\tau ,{\textrm{X}}(\tau ))=\mathscr {V}(\tau ,{\textrm{X}}(\tau ))\) (assuming the required smoothness and boundedness of \(\mathscr {V}\)), plug it back into (A.3), and use the fact that the martingale \({\textrm{M}}_{\tau }\) has zero mean, to obtain the following

$$\begin{aligned} \inf _{{\textrm{U}}\in {\mathscr {U}}^{t,\tau }_{\rho }} {\mathbb {E}}\left\{ \int _t^\tau \big [\mathfrak {L}({\textrm{X}}(s),{\textrm{U}}(s))+{\textrm{D}}_s\mathscr {V}(s,{\textrm{X}}(s))+{\mathscr {A}}^{{\textrm{U}}}\mathscr {V}(s,{\textrm{X}}(s))\big ]{\textrm{d}} s \big | {\textrm{X}}(t)=x\right\} =0. \end{aligned}$$

Finally, take \(\tau =t+h\), \(h>0\), and divide by h. Letting \(h\rightarrow 0\) and using the properties of conditional expectation, we formally obtain the HJB equation

$$\begin{aligned}{} & {} {\textrm{D}}_t\mathscr {V}(t,x)+\inf _{\Vert {\textrm{U}}\Vert \le \rho } \big \{{\mathscr {A}}^{{\textrm{U}}}\mathscr {V}(t,x) +\mathfrak {L}(x,{\textrm{U}}(t)) \big \}=0, \quad t\in [0,T), \\{} & {} \mathscr {V}(T,x)=\Vert x\Vert ^2, \quad x\in {\textrm{H}}. \end{aligned}$$

Moreover, setting \(v(t,x)=\mathscr {V}(T-t,x),\) one can obtain the following initial value problem

$$\begin{aligned}{} & {} {\textrm{D}}_tv(t,x)= \mathscr {H}(x,t,{\textrm{D}}_xv,{\textrm{D}}^2_xv), \quad t\in (0,T), \\{} & {} v(0,x)= \Vert x\Vert ^2, \quad x\in {\textrm{H}}, \end{aligned}$$

where \(\mathscr {H}\) is given by

$$\begin{aligned} \mathscr {H}(x,t,{\textrm{D}}_xv,{\textrm{D}}^2_xv)&=\frac{1}{2}{\textrm{Tr}}(\varepsilon _1^2{\textrm{Q}} {\textrm{D}}_x^2v)+\left( - {\textrm{A}} x+{\textrm{B}}(x),{\textrm{D}}_xv\right) \\&\quad +\,\int _{{\mathcal {Z}}}\big [v(t,x+\varepsilon _2{\textrm{G}}(t,z))-v(t,x)-\left( \varepsilon _2{\textrm{G}}(t,z),{\textrm{D}}_xv\right) \big ]\mu ({\textrm{d}} z) \\&\quad +\,\Vert {\textrm{D}}_\xi x\Vert ^2+\inf _{\Vert {\textrm{U}}\Vert \le \rho }\left\{ \left( {\textrm{U}}(t),{\textrm{D}}_xv\right) +\frac{1}{2}\Vert {\textrm{U}}(t)\Vert ^2\right\} . \end{aligned}$$
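Although not spelled out above, the inner infimum in \(\mathscr {H}\) can be evaluated in closed form by an elementary computation: writing \(p={\textrm{D}}_xv\),

$$\begin{aligned} \inf _{\Vert {\textrm{U}}\Vert \le \rho }\left\{ \left( {\textrm{U}},p\right) +\frac{1}{2}\Vert {\textrm{U}}\Vert ^2\right\} =\left\{ \begin{array}{ll} -\frac{1}{2}\Vert p\Vert ^2, &{} \Vert p\Vert \le \rho , \\ -\rho \Vert p\Vert +\frac{1}{2}\rho ^2, &{} \Vert p\Vert >\rho , \end{array}\right. \end{aligned}$$

attained at \({\textrm{U}}^*=-p\) when \(\Vert p\Vert \le \rho \) and at \({\textrm{U}}^*=-\rho \, p/\Vert p\Vert \) otherwise. This identifies the formal candidate feedback control \({\textrm{U}}^*=-{\textrm{D}}_xv\), truncated to the ball of radius \(\rho .\)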

Appendix B: Bismut–Elworthy–Li Formula (BEL Formula)

The BEL formula in the Gaussian case has been derived in [8, 9, 16]. It has also been obtained for some general stochastic evolution equations with Lévy noise, in finite and infinite dimensions, in [6, 23, 28]. For the reader's convenience, we give a formal derivation in the case of the Burgers equation with Lévy noise.

Let us consider the following Kolmogorov equation associated with the uncontrolled stochastic Burgers equation (3.9):

$$\begin{aligned}{} & {} {\textrm{D}}_tv(t,x)={\mathscr {L}}_x v(t,x), \quad t\in (0,T), \nonumber \\{} & {} v(0,x)= f(x), \quad x\in {\textrm{H}}, \end{aligned}$$
(B.1)

where \({\mathscr {L}}_x v\) is the operator defined in (3.5). For \(\Phi \in {{\mathcal {D}}}_{{\mathcal {A}}}^{T},\) Itô’s formula given by Remark 7.1 yields

$$\begin{aligned} \Phi \left( t,{\textrm{Y}}(t,x)\right)&=\Phi (0,x)+\int _0^t\left[ \textrm{D}_s\Phi \left( s,{\textrm{Y}}(s,x)\right) \right. \nonumber \\&\quad \left. +\,\mathscr {L}_x\Phi \left( s,{\textrm{Y}}(s,x)\right) \right] {\textrm{d}} s+\int _0^t\left( {\textrm{D}}_x\Phi \left( s,{\textrm{Y}}(s,x)\right) ,\varepsilon _1{\textrm{Q}}^{1/2}{\textrm{dW}}(s)\right) \nonumber \\&\quad +\,\int _0^t\int _{{\mathcal {Z}}}\left[ \Phi \left( s-,{\textrm{Y}}(s-,x)+\varepsilon _2\textrm{G}(s-,z)\right) \right. \nonumber \\&\quad \left. -\,\Phi \left( s,{\textrm{Y}}(s-,x)\right) \right] \widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z). \end{aligned}$$
(B.2)

For \(f\in {\mathcal {D}}\left( \mathscr {A}^{{\textrm{U}}}\right) \), we can prove that \(v(t,x)={\mathbb {E}}\left[ f({\textrm{Y}}(t,x))\right] \) is a solution of (B.1) by using Itô’s formula (B.2). Let us take \(\Phi (s,{\textrm{Y}}(s,x))=v(t-s,{\textrm{Y}}(s,x))\in {{\mathcal {D}}}_{{\mathcal {A}}}^{T}\). Then applying (B.2) yields

$$\begin{aligned} f\left( {\textrm{Y}}(t,x)\right)&=v(t,x)+\int _0^t\left[ -\textrm{D}_s v\left( t-s,{\textrm{Y}}(s,x)\right) +\mathscr {L}_xv\left( t-s,{\textrm{Y}}(s,x)\right) \right] {\textrm{d}} s\nonumber \\&\quad +\,\int _0^t\left( {\textrm{D}}_x\left[ v\left( t-s,{\textrm{Y}}(s,x)\right) \right] ,\varepsilon _1{\textrm{Q}}^{1/2}{\textrm{dW}}(s)\right) \nonumber \\&\quad +\,\int _0^t\int _{{\mathcal {Z}}}\left[ v\left( t-s,{\textrm{Y}}(s-,x)+\varepsilon _2{\textrm{G}}(s-,z)\right) \right. \nonumber \\&\left. \quad -\,v\left( t-s,{\textrm{Y}}(s-,x)\right) \right] \widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z)\nonumber \\&= v(t,x) +\int _0^t\left( {\textrm{D}}_x[v\left( t-s,{\textrm{Y}}(s,x)\right) ],\varepsilon _1{\textrm{Q}}^{1/2}{\textrm{dW}}(s)\right) \nonumber \\&\quad +\,\int _0^t\int _{{\mathcal {Z}}}\left[ v\left( t-s,{\textrm{Y}}(s-,x)+\varepsilon _2\textrm{G}(s-,z)\right) \right. \nonumber \\&\quad \left. -\,v\left( t-s,{\textrm{Y}}(s-,x)\right) \right] \widetilde{{\textrm{N}}}({\textrm{d}} s,{\textrm{d}} z). \end{aligned}$$
(B.3)

Taking expectations on both sides of (B.3) and noting that the final two terms on the right-hand side of (B.3) are martingales with zero mean, we obtain \(v(t,x)={\mathbb {E}}\left[ f({\textrm{Y}}(t,x))\right] \). Now we multiply both sides of (B.3) by \(\textrm{M}(t)={\int \limits _0^t\left( {\textrm{Q}}^{-1/2}\eta ^h(s,x),\varepsilon _1{\textrm{dW}}(s)\right) },\) where \(\eta ^h(t,x):=({\textrm{D}}_x{\textrm{Y}}(t,x),h)\) is the solution of the first-variation equation:

$$\begin{aligned}{} & {} \frac{\partial }{\partial t}\eta ^h(t,x)=\left[ -{\textrm{A}}\eta ^h(t,x)+\textrm{B}({\textrm{Y}}(t,x),\eta ^h(t,x))+ \textrm{B}(\eta ^h(t,x),{\textrm{Y}}(t,x))\right] ,\\{} & {} \eta ^h(0,x)=h. \end{aligned}$$

Then, taking expectations and using the Markov property of the transition semigroup, we obtain the Bismut–Elworthy–Li formula for the stochastic Burgers equation perturbed by Lévy noise as follows:

$$\begin{aligned} \left( {\textrm{D}}_xv(t,x), h\right) =\frac{1}{t}{\mathbb {E}}\left[ f\left( \textrm{Y}(t,x)\right) \int _0^t\left( \textrm{Q}^{-1/2}\eta ^h(s,x),\varepsilon _1 {\textrm{dW}}(s)\right) \right] . \end{aligned}$$
(B.4)

To obtain (B.4), we used Itô’s product rule to get \({\mathbb {E}}[\textrm{M}(t){\textrm{P}}(t)]=0\), where \({\textrm{P}}(t)\) is the final term on the right-hand side of (B.3). This follows from the fact that the stochastic integrals \(\textrm{M}(t)\) and \({\textrm{P}}(t)\) are uncorrelated, since \({\textrm{W}}(\cdot )\) and \(\widetilde{{\textrm{N}}}(\cdot ,\cdot )\) are independent.
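The structure of (B.4) can be sanity-checked numerically in one dimension. The sketch below (our own toy example: an Ornstein–Uhlenbeck process with arbitrary parameters, in place of the Burgers dynamics) compares the BEL-type Monte Carlo estimator with the exact derivative of \(v(t,x)={\mathbb {E}}[f({\textrm{X}}(t,x))]\) for \(f(x)=x^2\), using the closed-form first variation \(\eta ^h(s)=he^{-as}\).

```python
import numpy as np

# 1-D analogue of (B.4) for dX = -a X dt + sigma dW:
#   (D_x v(t, x), h)  ~  (1/t) E[ f(X(t)) int_0^t sigma^{-1} eta^h(s) dW(s) ]
rng = np.random.default_rng(1)
a, sigma, t, x0, h = 1.0, 0.5, 1.0, 0.7, 1.0
nsteps, npaths = 200, 200_000
dt = t / nsteps
f = lambda x: x**2

X = np.full(npaths, x0)
M = np.zeros(npaths)                 # stochastic integral int_0^t sigma^{-1} eta^h dW
for k in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal(npaths)
    M += (h * np.exp(-a * k * dt) / sigma) * dW   # first variation eta^h(s) = h e^{-a s}
    X += -a * X * dt + sigma * dW                 # Euler-Maruyama step

bel = np.mean(f(X) * M) / t
# Exact: E[X(t)^2] = x^2 e^{-2 a t} + sigma^2 (1 - e^{-2 a t}) / (2 a),
# hence d/dx E[X(t)^2] = 2 x e^{-2 a t}.
exact = 2.0 * x0 * np.exp(-2.0 * a * t)
```

Up to Monte Carlo and Euler discretization error, `bel` agrees with `exact`; note that the estimator never differentiates \(f\), which is the point of the formula.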


About this article


Cite this article

Mohan, M.T., Sakthivel, K. & Sritharan, S.S. Dynamic Programming of the Stochastic Burgers Equation Driven by Lévy Noise. J Optim Theory Appl (2024). https://doi.org/10.1007/s10957-024-02387-5

