
Concentration in Lotka–Volterra parabolic equations: an asymptotic-preserving scheme


Abstract

In this paper, we introduce and analyze an asymptotic-preserving scheme for Lotka–Volterra parabolic equations. These are nonlinear, nonlocal and stiff equations, which describe the evolution of a population structured by a phenotypic trait. In a regime of large time scale and small mutations, the population concentrates at a set of dominant traits. The dynamics of this concentration is described by a constrained Hamilton–Jacobi equation, which is a system coupling a Hamilton–Jacobi equation with a Lagrange multiplier determined by a constraint. This coupling makes the equation nonlocal. Moreover, the constraint does not enjoy much regularity, since it can have jumps. The scheme we propose is convergent in all regimes, and enjoys stability in the long-time and small-mutation limit. Moreover, we prove that the limiting scheme converges towards the viscosity solution of the constrained Hamilton–Jacobi equation, despite the lack of regularity of the constraint. The theoretical analysis of the schemes is illustrated and complemented with numerical simulations.


References

  1. Almeida, L., Perthame, B., Ruan, X.: An asymptotic preserving scheme for capturing concentrations in age-structured models arising in adaptive dynamics. J. Comput. Phys. 464, 111335 (2022)


  2. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press, Oxford (2000)


  3. Barles, G.: Solutions de Viscosité des Équations de Hamilton–Jacobi. Mathématiques & Applications (Berlin) [Mathematics & Applications], vol. 17. Springer-Verlag, Paris (1994)


  4. Barles, G., Mirrahimi, S., Perthame, B.: Concentration in Lotka–Volterra parabolic or integral equations: a general convergence result. Methods Appl. Anal. 16(3), 321–340 (2009)


  5. Barles, G., Perthame, B.: Concentrations and constrained Hamilton–Jacobi equations arising in adaptive dynamics. Contemp. Math. 439, 57–68 (2007)


  6. Calvez, V., Figueroa Iglesias, S., Hivert, H., Méléard, S., Melnykova, A., Nordmann, S.: Horizontal gene transfer: numerical comparison between stochastic and deterministic approaches. In: CEMRACS 2018—Numerical and Mathematical Modeling for Biological and Medical Applications: Deterministic, Probabilistic and Statistical Descriptions, volume 67 of ESAIM Proceedings and Surveys, pp. 135–160. EDP Sciences, Les Ulis (2020)

  7. Calvez, V., Hivert, H., Yoldaş, H.: Concentration in Lotka–Volterra parabolic equations: codes of the asymptotic preserving scheme. https://plmlab.math.cnrs.fr/hivert/parabolic-lotka-volterra (2022)

  8. Calvez, V., Lam, K.-Y.: Uniqueness of the viscosity solution of a constrained Hamilton–Jacobi equation. Calc. Var. Partial. Differ. Equ. 59(5), 163 (2020)


  9. Carrillo, J.A., Cuadrado, S., Perthame, B.: Adaptive dynamics via Hamilton–Jacobi approach and entropy methods for a juvenile-adult model. Math. Biosci. 205(1), 137–161 (2007)


  10. Crandall, M.G., Ishii, H., Lions, P.-L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. (N.S.) 27(1), 1–67 (1992)


  11. Crandall, M.G., Lions, P.L.: Two approximations of solutions of Hamilton–Jacobi equations. Math. Comput. 43(167), 1–19 (1984)


  12. Desvillettes, L., Jabin, P.-E., Mischler, S., Raoul, G.: On selection dynamics for continuous structured populations. Commun. Math. Sci. 6(3), 729–747 (2008)


  13. Diekmann, O.: A Beginner’s Guide to Adaptive Dynamics, Volume 63 of Banach Center Publications, pp. 47–86. Institute of Mathematics of the Polish Academy of Sciences, Warsaw (2004)

  14. Diekmann, O., Jabin, P.-E., Mischler, S., Perthame, B.: The dynamics of adaptation: an illuminating example and a Hamilton–Jacobi approach. Theor. Popul. Biol. 67(4), 257–271 (2005)


  15. Dimarco, G., Pareschi, L.: Numerical methods for kinetic equations. Acta Numer. 23, 369–520 (2014)


  16. Evans, L.C.: Partial Differential Equations. American Mathematical Society, Providence (2010)


  17. Geritz, S.A.H., Kisdi, E., Meszéna, G., Metz, J.A.J.: Evolutionarily singular strategies and the adaptive growth and branching of the evolutionary tree. Evol. Ecol. 12, 35–57 (1998)


  18. Geritz, S.A.H., Metz, J.A.J., Kisdi, E., Meszéna, G.: Dynamics of adaptation and evolutionary branching. Phys. Rev. Lett. 78, 2024–2027 (1997)


  19. Guerand, J., Koumaiha, M.: Error estimates for a finite difference scheme associated with Hamilton–Jacobi equations on a junction. Numer. Math. 142(3), 525–575 (2019)


  20. Hivert, H.: A first-order asymptotic preserving scheme for front propagation in a one-dimensional kinetic reaction-transport equation. J. Comput. Phys. 367, 253–278 (2018)


  21. Jin, S.: Efficient asymptotic-preserving (AP) schemes for some multiscale kinetic equations. SIAM J. Sci. Comput. 21(2), 441–454 (1999)


  22. Jin, S.: Asymptotic preserving (AP) schemes for multiscale kinetic and hyperbolic equations: a review. Riv. Math. Univ. Parma (N.S.) 3(2), 177–216 (2012)


  23. Kim, Y.: On the uniqueness of solutions to one-dimensional constrained Hamilton–Jacobi equations. Minimax Theory Appl. 6(1), 145–154 (2021)


  24. Klar, A.: An asymptotic-induced scheme for nonstationary transport equations in the diffusive limit. SIAM J. Numer. Anal. 35(3), 1073–1094 (1998)


  25. Klar, A.: An asymptotic preserving numerical scheme for kinetic equations in the low Mach number limit. SIAM J. Numer. Anal. 36(5), 1507–1527 (1999)


  26. Lorenzi, T., Pouchol, C.: Asymptotic analysis of selection-mutation models in the presence of multiple fitness peaks. Nonlinearity 33(11), 5791–5816 (2020)


  27. Lorz, A., Lorenzi, T., Clairambault, J., Escargueil, A., Perthame, B.: Modeling the effects of space structure and combination therapies on phenotypic heterogeneity and drug resistance in solid tumors. Bull. Math. Biol. 77(1), 1–22 (2015)


  28. Lorz, A., Mirrahimi, S., Perthame, B.: Dirac mass dynamics in multidimensional nonlocal parabolic equations. Commun. Partial Differ. Equ. 36(6), 1071–1098 (2011)


  29. Meszéna, G., Gyllenberg, M., Jacobs, F.J., Metz, J.A.J.: Link between population dynamics and dynamics of Darwinian evolution. Phys. Rev. Lett. 95, 078105 (2005)


  30. Metz, J.A.J., Geritz, S.A.H., Meszéna, G., Jacobs, F.J.A., van Heerwaarden, J.S.: Adaptive dynamics, a geometrical study of the consequences of nearly faithful reproduction. In: Stochastic and Spatial Structures of Dynamical Systems (Amsterdam, 1995), volume 45 of Koninklijke Nederlandse Akademie van Wetenschappen. Verhandelingen, Afd. Natuurkunde. Eerste Reeks, pp. 183–231. North-Holland, Amsterdam (1996)

  31. Mirrahimi, S., Roquejoffre, J.-M.: A class of Hamilton–Jacobi equations with constraint: uniqueness and constructive approach. J. Differ. Equ. 260(5), 4717–4738 (2016)


  32. Nordmann, S., Perthame, B., Taing, C.: Dynamics of concentration in a population model structured by age and a phenotypical trait. Acta Appl. Math. 155, 197–225 (2018)


  33. Perthame, B.: Transport Equations in Biology. Frontiers in Mathematics, Birkhäuser, Basel (2007)


  34. Perthame, B., Barles, G.: Dirac concentrations in Lotka–Volterra parabolic PDEs. Indiana Univ. Math. J. 57(7), 3275–3301 (2008)


  35. Shu, C.-W.: High order numerical methods for time dependent Hamilton–Jacobi equations. In: Mathematics and Computation in Imaging Science and Information Processing, Volume 11 of Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore, pp. 47–91. World Scientific Publishing, Hackensack (2007)

  36. Souganidis, P.E.: Approximation schemes for viscosity solutions of Hamilton–Jacobi equations. J. Differ. Equ. 59(1), 1–43 (1985)



Acknowledgements

The authors wish to thank Benoît Gaudeul for the proofreading of this paper. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (ERC consolidator grant WACONDY no 865711). HY was partially supported by the Vienna Science and Technology Fund (WWTF) with a Vienna Research Groups for Young Investigators project, grant VRG17-014 (since October 2021). The third author would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme “Frontiers in kinetic theory: connecting microscopic to macroscopic scales—KineCon 2022” when work on this paper was undertaken. This work was supported by EPSRC Grant Number EP/R014604/1.


Corresponding author

Correspondence to Hélène Hivert.


Appendices

A Proof of Lemma 3.2

In this appendix, we prove Lemma 3.2. We proceed by induction. Thanks to the assumptions, the initial data \(u^0=(u^0_i)_{i\in {{\mathbb {Z}}}}\) satisfies the properties (i)-(ii) and (iv) of Lemma 3.2. Let us suppose that the items (i)-(ii)-(iv) of Lemma 3.2 hold true for a given \(n\in [\![ 0,N_t-1]\!]\), and prove that \(u^{n+1}=(u^{n+1}_i)_{i\in {{\mathbb {Z}}}}\) enjoys these properties, while \(I^{n+1}\) satisfies (iii).
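For orientation, the one-step computation analyzed in this proof can be summarized in a short code sketch: the explicit operator \({\mathcal {M}}_{\Delta t}^\varepsilon \) of (15), the resolution of the scalar nonlocal constraint defining \(I^{n+1}\), and the update (14a). It is only an illustration on a truncated periodic grid: the numerical Hamiltonian and the functions R and psi are placeholder choices, consistent with the monotonicity and symmetry properties used in this appendix, but not necessarily those of (5) or of the reference implementation [7].

```python
import numpy as np
from scipy.optimize import brentq

def numerical_hamiltonian(p_minus, p_plus):
    # Monotone (Godunov-type) discretization of the Hamiltonian p -> p^2:
    # nondecreasing in the backward slope, nonincreasing in the forward slope,
    # with H(a, a) = H(-a, -a) and H(a, -a) = a^2 for a >= 0.
    # Illustrative choice only, not necessarily the H of equation (5).
    return np.maximum(np.maximum(p_minus, 0.0), np.maximum(-p_plus, 0.0)) ** 2

def ap_scheme_step(u, x, dt, dx, eps, R, psi):
    """One step of the epsilon-dependent scheme: explicit operator M^eps_dt
    (centered diffusion + monotone Hamiltonian), then implicit treatment of the
    reaction term through the scalar nonlocal constraint defining I^{n+1}.
    Periodic boundary conditions are used on the truncated grid, for illustration."""
    # M^eps_dt(u)_i = u_i + eps*dt*(u_{i+1}-2u_i+u_{i-1})/dx^2 - dt*H(D^- u_i, D^+ u_i)
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    p_minus = (u - np.roll(u, 1)) / dx
    p_plus = (np.roll(u, -1) - u) / dx
    M = u + eps * dt * lap - dt * numerical_hamiltonian(p_minus, p_plus)

    # I^{n+1} is the unique root of the increasing function
    # phi(I) = I - dx * sum_i psi(x_i) exp(-M_i/eps) exp(dt*R(x_i, I)/eps).
    def phi(I):
        return I - dx * np.sum(psi(x) * np.exp((-M + dt * R(x, I)) / eps))

    # phi(0) <= 0; expand the upper end of the bracket until phi changes sign.
    I_hi = 1.0
    while phi(I_hi) <= 0.0:
        I_hi *= 2.0
    I_new = brentq(phi, 0.0, I_hi)

    # Update (14a): u^{n+1} = M^eps_dt(u^n) - dt * R(x, I^{n+1}).
    return M - dt * R(x, I_new), I_new
```

Iterating this step in n produces, for a fixed \(\varepsilon \), the sequences \((u^n)_n\) and \((I^n)_n\) whose bounds are established below.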

  • First of all, we recall that \(I^{n+1}\) is well defined for all \(\varepsilon \in (0,1]\), see Remark 2.3. We now prove that, if \(\varepsilon \leqslant \varepsilon _0\), then \(I^{n+1} \geqslant I_m/2\). For a given \(j\in {{\mathbb {Z}}}\), the following inequality holds

    $$\begin{aligned} I-{\Delta x} \sum \limits _{i\in {{\mathbb {Z}}}} \psi (x_i){\textrm{e}}^{-{\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _i/\varepsilon } {\textrm{e}}^{{\Delta t} R(x_i,I)/\varepsilon } \leqslant I-{\Delta x} \psi _m {\textrm{e}}^{- {\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _j /\varepsilon } {\textrm{e}}^{{\Delta t} R(x_j,I)/\varepsilon }, \end{aligned}$$

    where we used (A1). An upper bound for \({\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _j\) is obtained thanks to the positivity of H and to (i)

    $$\begin{aligned} {\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _j&= u^n_j + \varepsilon \frac{{\Delta t}}{{\Delta x}} \frac{u^{n}_{j+1}-2u^n_{j} + u^n_{j-1}}{{\Delta x}} -{\Delta t} H\left( \frac{u^n_{j}-u^n_{j-1}}{{\Delta x}}, \frac{u^n_{j+1}-u^n_{j}}{{\Delta x}} \right) \\&\leqslant u^n_j + 2\varepsilon \frac{{\Delta t}}{{\Delta x}}L_n, \end{aligned}$$

    using the upper bound \(L_0+TK\) of \(L_n\), and with the choice of j such that \(u^n_j=\min _{i\in {{\mathbb {Z}}}} u^n_i\), property (iv) provides

    $$\begin{aligned} {\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _j \leqslant \varepsilon \left( c_M + 2\frac{L_0 +TK}{C_H(L_0+TK)} \right) , \end{aligned}$$

    where the estimate independent of \({\Delta t}\) and \({\Delta x}\) comes from (\(\textrm{CFL}_{\varepsilon \rightarrow 0}\)). This estimate yields

    $$\begin{aligned} \varphi (I) \leqslant I - {\Delta x} \psi _m {\textrm{e}}^{ -c_M -2(L_0+TK)/C_H(L_0+TK) } {\textrm{e}}^{{\Delta t} R(x_j,I)/\varepsilon }, \end{aligned}$$

    and the bound from below for \(I^{n+1}\) in (iii) is then obtained by a contradiction argument. Indeed, since \(\varphi \) is an increasing function, for all \(I<I_m/2\), the following inequality holds true

    $$\begin{aligned} \varphi (I)\leqslant \varphi (I_m/2) \leqslant \frac{I_m}{2} - {\Delta x} \psi _m {\textrm{e}}^{ -c_M -2(L_0+TK)/C_H(L_0+TK) } {\textrm{e}}^{{\Delta t} R(x_j,I_m/2)/\varepsilon },\nonumber \\ \end{aligned}$$
    (42)

    and thanks to the strict monotonicity of R with respect to its second argument, together with (A2), one can show that \(R(x_j,I_m/2)\) is uniformly positive with respect to \(j\in {{\mathbb {Z}}}\). Indeed, one has

    $$\begin{aligned} R(x_j,I_m/2) \geqslant R(x_j,I_m) + \frac{ I_m}{2K}, \end{aligned}$$

    thanks to (A3), and assumption (A2) eventually yields

    $$\begin{aligned} R(x_j,I_m/2) \geqslant \frac{I_m}{2K}. \end{aligned}$$

    Coming back to (42), one has for all \(I\leqslant I_m/2\)

    $$\begin{aligned} \varphi (I)\leqslant \varphi (I_m/2) \leqslant \frac{I_m}{2} - {\Delta x}\psi _m {\textrm{e}}^{-c_M-2(L_0+TK)/C_H(L_0+TK)}{\textrm{e}}^{{\Delta t} I_m/{2K\varepsilon }} \underset{\varepsilon \rightarrow 0}{\longrightarrow }-\infty , \end{aligned}$$

    hence there exists an \(\varepsilon _1>0\), depending only on \(I_m\), \(\psi _m\), \(c_M\), \(C_H\), \(L_0\), T, K, \({\Delta t}\), and \({\Delta x}\), such that

    $$\begin{aligned} \forall \varepsilon \in (0,\varepsilon _1),\; \forall I\leqslant I_m/2, \; \varphi (I)\leqslant -1. \end{aligned}$$

    Since \(I^{n+1}\) is defined as the solution of \(\varphi (I^{n+1})=0\), the first inequality in (iii) holds true.

  • The bound from below (ii) for \((u^{n+1}_i)_{i\in {{\mathbb {Z}}}}\) is a consequence of the monotonicity of the first step of the scheme (\(\textrm{S}_\varepsilon \)). Indeed, if we denote \(v^n_i= {\underline{a}}|x_i-x_0|+{\underline{b}}_n\), the scheme (15) applied to \(v^n=(v^n_i)_{i\in {{\mathbb {Z}}}}\) gives

    $$\begin{aligned} {\mathcal {M}}_{\Delta t}^\varepsilon \left( v^n \right) _i = \left\{ \begin{array}{l l} \displaystyle {\underline{a}} |x_i-x_0|+{\underline{b}}_n -{\Delta t} H({\underline{a}},{\underline{a}}) &{} \displaystyle \;\;\textrm{if}\; i\ne 0 \\ \displaystyle {\underline{b}}_n + 2{\underline{a}} \frac{\varepsilon {\Delta t}}{{\Delta x}} &{} \displaystyle \;\;\textrm{if}\; i =0, \end{array} \right. \end{aligned}$$

    since \(H({\underline{a}},{\underline{a}})=H(-{\underline{a}},-{\underline{a}})\). Therefore, \({\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _i \geqslant {\mathcal {M}}_{\Delta t}^\varepsilon \left( v^n \right) _i\), for all \(i\in {{\mathbb {Z}}}\), thanks to Lemma 3.1, so that

    $$\begin{aligned} \left\{ \begin{array}{l l} \displaystyle u^{n+1}_i\geqslant {\underline{a}}|x_i-x_0|+{\underline{b}}_n -{\Delta t} H({\underline{a}},{\underline{a}}) -{\Delta t} R(x_i,I^{n+1}) &{} \displaystyle \;\;\textrm{if}\; i \ne 0 \\ \displaystyle u^{n+1}_0 \geqslant {\underline{b}}_n + 2{\underline{a}} \frac{\varepsilon {\Delta t}}{{\Delta x}} -{\Delta t} R(x_0,I^{n+1}). &{} \end{array} \right. \end{aligned}$$

    Since \(I^{n+1}\geqslant I_m/2\), and \(I\mapsto R(x_i,I)\) is decreasing for all \(i\in {{\mathbb {Z}}}\), the choice \({\underline{b}}_{n+1}= {\underline{b}}_n - {\Delta t} H({\underline{a}},{\underline{a}}) - {\Delta t} K\) yields (ii) thanks to (A3).

  • The second inequality in (iv) is a consequence of the bound from below of \((u^{n+1}_i)_{i\in {{\mathbb {Z}}}}\), as well as the one for \(I^{n+1}\). Indeed, considering \(\varepsilon \in (0,\varepsilon _1)\), the definition of \(I^{n+1}\) yields

    $$\begin{aligned} I_m/2\leqslant I^{n+1} \leqslant {\Delta x} \sum \limits _{i\in {{\mathbb {Z}}}} \psi (x_i) {\textrm{e}}^{-u^{n+1}_i/\varepsilon }, \end{aligned}$$

    and for an integer \(N\geqslant 1\), which will be determined later, the following inequality holds true

    $$\begin{aligned} I_m/2\leqslant {\Delta x}\sum \limits _{|i|< N} \psi (x_i)\; {\textrm{e}}^{-u^{n+1}_i/\varepsilon }+ {\Delta x} \sum \limits _{|i|\geqslant N} \psi (x_i)\; {\textrm{e}}^{-\left( {\underline{b}}_{N_t}+ {\underline{a}} |x_i-x_0|\right) /\varepsilon }, \end{aligned}$$

    because of (ii). In both terms, we use (A1), and since \(x_i-x_0=i{\Delta x}\), we have

    $$\begin{aligned} I_m/2 \leqslant (2N-1){\Delta x}\;\psi _M \;{\textrm{e}}^{-\min \limits _{i\in {{\mathbb {Z}}}} u^{n+1}_i/\varepsilon } + 2{\Delta x}\; \psi _M\; {\textrm{e}}^{-\left( {\underline{b}}_{N_t}+{\underline{a}}N{\Delta x}\right) /\varepsilon }\sum \limits _{i\geqslant 0} {\textrm{e}}^{-{\underline{a}} i {\Delta x}/\varepsilon }. \end{aligned}$$

    Therefore, N is chosen such that \({\underline{b}}_{N_t}+{\underline{a}}N{\Delta x}\geqslant 1\). Note that this choice is independent of n, and that it depends only on the assumptions, and \({\Delta x}\). Hence, the previous inequality can be simplified as

    $$\begin{aligned} I_m/2\leqslant (2N-1) {\Delta x}\;\psi _M\; {\textrm{e}}^{-\min \limits _{i\in {{\mathbb {Z}}}}u^{n+1}_i/\varepsilon } + \frac{2{\Delta x}\;\psi _M \;{\textrm{e}}^{-1/\varepsilon }}{1- {\textrm{e}}^{-{\underline{a}}{\Delta x}/\varepsilon } }, \end{aligned}$$

    and \(\varepsilon _2\) can be defined, as a function of the parameters arising in the assumptions and of \({\Delta x}\), but independently of n such that

    $$\begin{aligned} \forall \varepsilon \in (0,\varepsilon _2),\; \frac{2\;{\Delta x} \;\psi _M\;{\textrm{e}}^{-1/\varepsilon }}{1- {\textrm{e}}^{-{\underline{a}}{\Delta x}/\varepsilon } }\leqslant I_m/4, \end{aligned}$$

    so that for \(0<\varepsilon <\min (\varepsilon _1,\varepsilon _2)\), the second inequality in (iv) is satisfied, with

    $$\begin{aligned} c_M = \max \left\{ -\ln \left( \frac{I_m}{4(2N-1) {\Delta x}\psi _M}\right) , c_M^{\textrm{in}}\right\} . \end{aligned}$$

    Once again, it is worth noticing that this choice is independent of n.

  • The previous results yield the inequality \(I^{n+1}\leqslant 2I_M\) in (iii). Indeed, thanks to (iv), \(u^n_i\geqslant c_m \varepsilon \) for all \(i\in {{\mathbb {Z}}}\), and because of the monotonicity of (15), this implies that

    $$\begin{aligned} \forall i\in {{\mathbb {Z}}}, \; {\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _i \geqslant c_m \varepsilon . \end{aligned}$$

    Moreover, the bound from below (ii) satisfied by \(u^{n+1}=(u^{n+1}_i)_{i\in {{\mathbb {Z}}}}\) ensures that its minimum is attained: there exists an index \(k\in {{\mathbb {Z}}}\) such that \(u^{n+1}_k=\min _{i\in {{\mathbb {Z}}}} u^{n+1}_i\), and the previous step gives \(u^{n+1}_k\leqslant c_M \varepsilon \). The definition of \(u^{n+1}_i\) in (14a) yields

    $$\begin{aligned} {\Delta t} R(x_k, I^{n+1}) = {\mathcal {M}}_{\Delta t}^\varepsilon \left( u^n \right) _k - u^{n+1}_k\geqslant -(c_M - c_m)\varepsilon \geqslant {\Delta t} R(x_k,I_M) -(c_M-c_m) \varepsilon , \end{aligned}$$

    where the last inequality follows from \(R(x_k,I_M)\leqslant 0\), according to (A2). One can conclude as above, using (A3) to write

    $$\begin{aligned} {\Delta t} R(x_k,I^{n+1}) \geqslant {\Delta t} R(x_k,2I_M) +\frac{{\Delta t} I_M}{K}-\varepsilon (c_M-c_m). \end{aligned}$$

    Denoting \(\varepsilon _3 = {\Delta t} I_M / (K(c_M-c_m))\), the previous inequality shows that for all \(\varepsilon \in (0,\varepsilon _3)\), \(R(x_k,I^{n+1})\geqslant R(x_k,2I_M)\). Once again, we emphasize that \(\varepsilon _3\) is defined once and for all, since it depends only on the assumptions and on \({\Delta t}\), and is independent of n. The second inequality in (iii) follows, since R is decreasing with respect to I.

  • The first inequality in (iv) is a consequence of the previous result. Indeed, thanks to the definition of \(I^{n+1}\) in (14b) and to (A1), one has

    $$\begin{aligned} \forall i\in {{\mathbb {Z}}},\; \psi _m \;{\Delta x}\; {\textrm{e}}^{-u^{n+1}_i/\varepsilon } \leqslant {\Delta x}\sum \limits _{i\in {{\mathbb {Z}}}} \psi (x_i) {\textrm{e}}^{-u^{n+1}_i/\varepsilon }=I^{n+1}\leqslant 2I_M. \end{aligned}$$

    It gives that \(\forall i\in {{\mathbb {Z}}}\), \(u^{n+1}_i\geqslant c_m\varepsilon \), where \( c_m=\min \left\{ -\ln \left( \frac{2I_M}{\psi _m {\Delta x}} \right) , c_m^{\textrm{in}}\right\} \), depends only on the constants defined in the assumptions and on \({\Delta x}\), and is independent of n.

  • Finally, Lemma 3.1 yields that \({\mathcal {M}}_{\Delta t}^\varepsilon (u^n)\) enjoys the \(L_n\)-Lipschitz property. The \(L_{n+1}\)-Lipschitz bound (i) of \(u^{n+1}\) is then a consequence of (iii) and (A3).

Finally, we set \(\varepsilon _0=\min (\varepsilon _1,\varepsilon _2,\varepsilon _3)\), so that Lemma 3.2 holds.

B Proof of Lemma 4.2

In this appendix, we prove Lemma 4.2. We start by proving (i)-(iii) and (iv) by induction. Let \(n\in [\![ 0,N_t-1]\!]\) and \(s\in (0,{\Delta t}]\), and suppose that (i)-(iii) are true for \(v_{\Delta t}(t_n,\cdot )\), as is the case for the initial data \(v^{\textrm{in}}\) thanks to (A5)-(A6). In what follows, we show that \(v_{\Delta t}(t_n+s,\cdot )\) satisfies (i)-(iii), and that \({J}_{\Delta t}(t_{n+1})\) satisfies (iv):

  • Let j realize the minimum of \((v_{\Delta t}(t_n,x_i))_{i\in {{\mathbb {Z}}}}\). Hence \(v_{\Delta t}(t_n,x_j)=0\) thanks to (12c), and the definition of H in (5) yields

    $$\begin{aligned} {\mathcal {M}}_{\Delta t}\left( v_{\Delta t}(t_n,\cdot ) \right) (x_j)=0. \end{aligned}$$

    Coming back to (12a), we have

    $$\begin{aligned} -{\Delta t} R\left( x_j, {J}_{\Delta t}(t_{n+1})\right) = v_{\Delta t}(t_{n+1},x_j) \geqslant 0, \end{aligned}$$

    and we obtain that \(I_m\leqslant {J}_{\Delta t}(t_{n+1})\) thanks to (A2) and (A3).

  • Notice that since (\(\textrm{CFL}_0\)) is satisfied, the first step (\({\mathcal {M}}_s^{0}\)) of the scheme (\(\textrm{S}_0\)) is monotonic. Hence, Lemma 4.1 gives

    $$\begin{aligned} \forall x\in {{\mathbb {R}}}, \; {\underline{a}}|x-x_0|+{\underline{b}}_{t_n}-sH({\underline{a}},{\underline{a}}) \leqslant {\mathcal {M}}_s^{0}\left( v_{\Delta t}(t_n,\cdot )\right) (x), \end{aligned}$$

    so that \(v_{\Delta t}(t_n+s,x) \geqslant {\underline{a}}|x-x_0| + {\underline{b}}_{t_n} - sH({\underline{a}},{\underline{a}}) - sR(x,J_{\Delta t}(t_n+s))\). The monotonicity of \(R(x,\cdot )\) as well as the lower bound for \(J_{\Delta t}(t_n+s)\) yield

    $$\begin{aligned} {\underline{a}}|x-x_0| +{\underline{b}}_{t_n+s} \leqslant v_{\Delta t}(t_n+s,x), \end{aligned}$$

    with \({\underline{b}}_{t_n+s} ={\underline{b}}_{t_n}-sH({\underline{a}},{\underline{a}}) - sK\), thanks to (A3).

  • Since \(v_{\Delta t}(t_n,x_i)\geqslant 0\) for all \(i\in {{\mathbb {Z}}}\), Lemma 4.1 yields that

    $$\begin{aligned} \forall i\in {{\mathbb {Z}}},\; {\mathcal {M}}_{\Delta t}\left( v_{\Delta t}(t_n,\cdot ) \right) (x_i)\geqslant 0. \end{aligned}$$

    Consider then \(k\in {{\mathbb {Z}}}\) such that

    $$\begin{aligned} v_{\Delta t}(t_{n+1}, x_k)=\min \limits _{i\in {{\mathbb {Z}}}}v_{\Delta t}(t_{n+1},x_i) = 0. \end{aligned}$$

    Note that such a k exists, thanks to the previous step of the proof. We obtain

    $$\begin{aligned} {\Delta t} R\left( x_k, {J}_{\Delta t}(t_{n+1})\right) = {\mathcal {M}}_{\Delta t}\left( v_{\Delta t}(t_n,\cdot ) \right) (x_k)\geqslant 0, \end{aligned}$$

    thanks to (12a). The inequality \({J}_{\Delta t}(t_{n+1})\leqslant I_M\) is then a consequence of assumptions (A2) and (A3). The bounds for \(J_{\Delta t}\) in (iv) follow, since it is constant on \((t_n,t_{n+1}]\).

  • Once again, the monotonicity of the first step (\({\mathcal {M}}_s^{0}\)) of scheme (\(\textrm{S}_0\)), yields

    $$\begin{aligned} \forall x\in {{\mathbb {R}}}, {\mathcal {M}}_s^{0}\left( v_{\Delta t}(t_n,\cdot )\right) (x)\leqslant {\overline{a}}|x-x_0|+{\overline{b}}_{t_n}, \end{aligned}$$

    so that property (iii) is proved with \({\overline{b}}_{t_n+s} = {\overline{b}}_{t_n}+sK,\) thanks to (A3).

  • Similarly, \(v_{\Delta t}(t_n+s,\cdot )\) is \(L_{t_n+s}\)-Lipschitz continuous thanks to Lemma 4.1 and (A3).

The Lipschitz-in-time property (ii) is a consequence of (i). We now show that \({J}_{\Delta t}(t_{n+1})\geqslant {J}_{\Delta t}(t_n)\). Recalling that \(J_{\Delta t}\) is constant on \((0,{\Delta t}]\) and that it is not defined at \(t=0\), we suppose that \(n\in [\![ 1,N_t-1]\!]\). Considering an index j such that \(v_{\Delta t}(t_n,x_j)=\min _{i\in {{\mathbb {Z}}}} v_{\Delta t}(t_n,x_{i})=0\), (12a) yields

$$\begin{aligned} R(x_j, {J}_{\Delta t}(t_{n+1})) \leqslant 0, \end{aligned}$$

as previously. Let us now consider the previous step of the scheme, at the same index j. As this part of the proof only uses grid points, we rather use the formulation (\(\textrm{S}_0\)), for the sake of simplicity. We have

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{v^n_j-v^{n-1}_j}{{\Delta t}} + H\left( \frac{v^{n-1}_j-v^{n-1}_{j-1}}{{\Delta x}}, \frac{v^{n-1}_{j+1}-v^{n-1}_j}{{\Delta x}}\right) + R(x_j,J^n)=0 \\ \displaystyle v^n_j=\min \limits _{i\in {{\mathbb {Z}}}} v^n_i=0. \end{array} \right. \end{aligned}$$
(43)

Since all the \(v^{n-1}_i\) for \(i\in {{\mathbb {Z}}}\) are nonnegative, one has

$$\begin{aligned} \frac{v^{n-1}_j-v^{n-1}_{j-1}}{{\Delta x}} \leqslant \frac{v^{n-1}_j}{{\Delta x}}, \;\;\;\;\textrm{and}\;\;\;\; \frac{v^{n-1}_{j+1}-v^{n-1}_j}{{\Delta x}}\geqslant \frac{-v^{n-1}_j}{{\Delta x}}. \end{aligned}$$

Moreover, because H is increasing with respect to its first variable and decreasing with respect to the second one, the following inequality holds

$$\begin{aligned} H\left( \frac{v^{n-1}_j-v^{n-1}_{j-1}}{{\Delta x}}, \frac{v^{n-1}_{j+1}-v^{n-1}_j}{{\Delta x}}\right) \leqslant H\left( \frac{v^{n-1}_j}{{\Delta x}}, \frac{-v^{n-1}_j}{{\Delta x}}\right) = \left( \frac{v^{n-1}_j}{{\Delta x}} \right) ^2, \end{aligned}$$

where the last equality comes from the expression of H, see (5). Injecting this into (43), we obtain

$$\begin{aligned} R(x_j,J^n) \geqslant \frac{v^{n-1}_j}{{\Delta t}}\left( 1-\frac{{\Delta t}}{{\Delta x}^2}\;v^{n-1}_j \right) , \end{aligned}$$

and the right-hand side of this inequality is nonnegative. Indeed, thanks to the Lipschitz-in-time property (ii), we have

$$\begin{aligned} \left| \frac{v^n_j-v^{n-1}_j}{{\Delta t}}\right| = \frac{v^{n-1}_j}{{\Delta t}}\leqslant L_T^2+K, \end{aligned}$$

and the condition (\(\textrm{CFL}_0\)) yields the result. To conclude, let us remark that

$$\begin{aligned} R\left( x_j, {J}_{\Delta t}(t_{n+1})\right) \leqslant 0 \leqslant R\left( x_j,J^n\right) =R\left( x_j,{J}_{\Delta t}(t_n)\right) , \end{aligned}$$
(44)

and use the fact that R is decreasing with respect to its second variable. The monotonicity of \(J_{\Delta t}\) in (v) follows immediately since it is constant on the interval \((t_n,t_{n+1}]\).

Let us emphasize the fact that the above proof strongly relies on considerations on the minimum of \((v^n_j)_j\). This bears similarities with [34], where the relation \(R({\overline{x}}(t), J(t))=0\), with \({\overline{x}}(t)= \arg \min v(t,\cdot )\), is used to study J. In the discrete setting, (44) is the equivalent of this relation.
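As a complement, here is a minimal Python sketch of one step of the limit scheme (\(\textrm{S}_0\)) as it is used in this appendix: an explicit monotone Hamilton–Jacobi update, followed by the selection of \(J^{n+1}\) through the constraint \(\min _i v^{n+1}_i=0\). The numerical Hamiltonian and the bracketing of the root are again illustrative assumptions (in particular, \(R(\cdot ,0)\geqslant 0\) is assumed so that 0 is a valid lower bracket); they are not taken verbatim from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def limit_scheme_step(v, x, dt, dx, R):
    """One step of the limit scheme (S_0): the diffusion term has disappeared in
    the limit eps -> 0, and J^{n+1} is selected so that min_i v^{n+1}_i = 0.
    Periodic boundary conditions on a truncated grid, for illustration."""
    p_minus = (v - np.roll(v, 1)) / dx
    p_plus = (np.roll(v, -1) - v) / dx
    # Same illustrative monotone Hamiltonian as in the sketch of Appendix A.
    H = np.maximum(np.maximum(p_minus, 0.0), np.maximum(-p_plus, 0.0)) ** 2
    M = v - dt * H                     # M^0_dt(v^n)

    # Since R is decreasing in J, g is increasing and g(J) = 0 enforces the
    # constraint min_i v^{n+1}_i = 0, which determines J^{n+1} uniquely.
    def g(J):
        return np.min(M - dt * R(x, J))

    J_hi = 1.0
    while g(J_hi) <= 0.0:              # expand the bracket until g changes sign
        J_hi *= 2.0
    J_new = brentq(g, 0.0, J_hi)       # g(0) <= 0 provided R(., 0) >= 0

    return M - dt * R(x, J_new), J_new
```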

C Proof of Lemma 4.6

In this appendix, we prove the second point of Lemma 4.6.

Step (i). Since \(\psi (t^*,{x^*},\tau ^*,{\xi ^*}) \geqslant \psi (t^*,0,\tau ^*,0)\), we have

$$\begin{aligned} \alpha \frac{{\textrm{e}}^{ t^*}}{2}({x^*}^2+{\xi ^*}^2) \leqslant v^k_{\Delta t}(\tau ^*,0)-v^k_{\Delta t}(\tau ^*,{\xi ^*}) +v^k(t^*,{x^*})-v^k(t^*,0). \end{aligned}$$

Then, Lemma 4.5-(i) gives

$$\begin{aligned} \alpha \frac{{\textrm{e}}^{ t^*}}{2}\max \left\{ |{x^*}|,|{\xi ^*}| \right\} ^2 \leqslant L_T |{\xi ^*}|+L_T|{x^*}|, \end{aligned}$$

which yields

$$\begin{aligned} \alpha {\textrm{e}}^{ t^*}\max \left\{ |{\xi ^*}|,|{x^*}| \right\} \leqslant 4L_T. \end{aligned}$$
(45)

Step (ii). We proceed as in the previous step. Comparing the values of \(\psi \) at \((t^*,{x^*},\tau ^*,{\xi ^*})\) and \((t^*,{x^*},\tau ^*,{x^*})\), we obtain

$$\begin{aligned} \frac{({x^*}-{\xi ^*})^2}{2{\Delta x}^{1/2}}\leqslant v^k_{\Delta t}(\tau ^*,{x^*}) - v^k_{\Delta t}(\tau ^*,{\xi ^*}) + \alpha \frac{{\textrm{e}}^{t^*}}{2}({x^*}^2-{\xi ^*}^2), \end{aligned}$$

and Lemma 4.5-(i) and (45) give

$$\begin{aligned} \; |t^*-\tau ^*|\leqslant 2(L_T^2+K){\Delta t}^{1/2}, \; \; |x^*-\xi ^*|\leqslant 10L_T{\Delta x}^{1/2}. \end{aligned}$$
(46)

Note that the bound for \(|\tau ^*-t^*|\) is obtained similarly, starting from \(\psi (t^*,{x^*},\tau ^*,{\xi ^*})\geqslant \psi (t^*,{x^*},t^*,{\xi ^*})\) and using Lemma 4.5-(ii).

Step (iii). We aim to show that \(t^*\leqslant 2(L_T^2+K){\Delta t}^{1/2}\), provided that \(\sigma \) is appropriately chosen. We argue by contradiction, and suppose that \(t^*> 2(L_T^2+K){\Delta t}^{1/2}\). This implies that \(\tau ^*>0\), thanks to (46). Let us start by considering

$$\begin{aligned} (t,x) \mapsto \psi (t,x,\tau ^*,\xi ^*) =v^k(t,x) - \varphi (t,x), \end{aligned}$$

on \([0,T[\times {{\mathbb {R}}}\), with

$$\begin{aligned} \varphi (t,x)&= v^k_{\Delta t}(\tau ^*,\xi ^*)+ \left( \sigma +4{\mathcal {C}}_H^2\alpha {\textrm{e}}^T\right) t + \frac{(x-\xi ^*)^2}{2{\Delta x}^{1/2}}\\&\quad + \frac{(t-\tau ^*)^2}{2{\Delta t}^{1/2}} +\alpha \frac{{\textrm{e}}^{ t}}{2}(x^2+{\xi ^*}^2) + \frac{\alpha }{T-t}. \end{aligned}$$

It admits a maximum, precisely at \((t^*,x^*)\), with \(t^*\in (0,T)\). Since \(v^k\) is the viscosity solution of (19), we deduce

$$\begin{aligned} \partial _t \varphi (t^*,x^*) + {\mathcal {H}}\left( \nabla _x \varphi (t^*,x^*)\right) + R(x^*, J^k_0(t^*)) \leqslant 0, \end{aligned}$$

that is

$$\begin{aligned} \sigma +4{\mathcal {C}}_H^2\alpha {\textrm{e}}^T&+ \frac{t^*-\tau ^*}{{\Delta t}^{1/2}} +\alpha \frac{{\textrm{e}}^{ t^*}}{2}({x^*}^2+{\xi ^*}^2)+\frac{\alpha }{(T-t^*)^2}\nonumber \\&+ {\mathcal {H}}\left( \frac{x^*-\xi ^*}{{\Delta x}^{1/2}} + \alpha {\textrm{e}}^{ t^*}x^*\right) + R(x^*,J^k_0(t^*))\leqslant 0, \end{aligned}$$
(47)

where \({\mathcal {H}}\) is defined in (3). Next, let us consider

$$\begin{aligned} (\tau ,\xi )\mapsto \psi (t^*,x^*,\tau ,\xi ), \end{aligned}$$

on \([0,T]\times {{\mathbb {R}}}\). As previously, it admits a maximum, precisely at \((\tau ^*,\xi ^*)\), so that for all \((\tau ,\xi )\in [0,T]\times {{\mathbb {R}}}\)

$$\begin{aligned} v^k_{\Delta t}(\tau ,\xi ) \geqslant w(\tau ,\xi ) + k^*, \end{aligned}$$
(48)

with

$$\begin{aligned} w(\tau ,\xi )&= -\frac{(x^*-\xi )^2}{2{\Delta x}^{1/2}}-\frac{(t^*-\tau )^2}{2{\Delta t}^{1/2}} -\alpha \frac{{\textrm{e}}^{ t^*}}{2}\xi ^2,\\ k^*&= v_{\Delta t}^k(\tau ^*,\xi ^*) + \frac{(x^*-\xi ^*)^2}{2{\Delta x}^{1/2}} +\frac{(t^*-\tau ^*)^2}{2{\Delta t}^{1/2}} + \alpha \frac{{\textrm{e}}^{ t^*}}{2}{\xi ^*}^2. \end{aligned}$$

Remark that \(\tau ^*=t_{n^*}+s^*\) with \(n^*\in [\![ 0,N_t-1]\!]\) and \(s^*\in (0,{\Delta t}]\). The previous inequality yields

$$\begin{aligned} v^k_{\Delta t}(t_{n^*},\xi ^*) \geqslant w(t_{n^*},\xi ^*) + k^*. \end{aligned}$$
(49)

The next step consists in applying the scheme (20) to this inequality. To do so, one has to make sure that

$$\begin{aligned} |w(t_{n^*},\xi ^*\pm {\Delta x})-w(t_{n^*}, \xi ^*)|\leqslant (14L_T+1){\Delta x}, \end{aligned}$$
(50)

so that (\(\textrm{CFL}_0\)) ensures that scheme (\({\mathcal {M}}_s^{0}\)) enjoys monotonicity. From the expression of \(w(\tau ,\xi )\), we have

$$\begin{aligned} \left| \frac{w(t_{n^*},{\xi ^*}) - w(t_{n^*},{\xi ^*}\pm {\Delta x})}{{\Delta x}} \right| \leqslant \alpha {\textrm{e}}^{ t^*} |{\xi ^*}| + \frac{{\Delta x}^{1/2}}{2} + \frac{|{x^*}-{\xi ^*}|}{{\Delta x}^{1/2}} +\alpha \frac{{\textrm{e}}^{ T}}{2}{\Delta x}. \end{aligned}$$

Hence, if \({\Delta x}\) is chosen small enough, (50) holds, thanks to (45) and (46). Since the ratio \({\Delta t}/{\Delta x}\) is fixed, this condition on \({\Delta x}\) implies that the result holds for all \({\Delta t}\leqslant {\Delta t}_0\), for some \({\Delta t}_0>0\). Since (\(\textrm{CFL}_0\)) is satisfied, the first step of the scheme (\({\mathcal {M}}_s^{0}\)) is monotonic and can hence be applied to the inequality (48), using (49). As \({\mathcal {M}}_{s^*}\) commutes with constants, it gives

$$\begin{aligned} {\mathcal {M}}_{s^*}(v_{\Delta t}^k(t_{n^*},\cdot ))(\xi ^*) - s^*R(\xi ^*, J_{\Delta t}^k(t_{n^*}+s^*))&\geqslant {\mathcal {M}}_{s^*}(w(t_{n^*},\cdot ))(\xi ^*)\\&\quad +k^* - s^* R(\xi ^*,J_{\Delta t}^k(t_{n^*}+s^*)), \end{aligned}$$

that is

$$\begin{aligned} v_{\Delta t}^k(\tau ^*, \xi ^*)&\geqslant w(t_{n^*},\xi ^*) -s^* H\left( \frac{w(t_{n^*},\xi ^*) - w(t_{n^*},\xi ^*-{\Delta x})}{{\Delta x}},\frac{w(t_{n^*},\xi ^*+{\Delta x})-w(t_{n^*},\xi ^*)}{{\Delta x}}\right) \\&\quad +k^* - s^* R(\xi ^*, J_{\Delta t}^k(\tau ^*)). \end{aligned}$$

The latter yields the counterpart of (47)

$$\begin{aligned} 0&\leqslant H\left( \frac{x^*-{\xi ^*}}{{\Delta x}^{1/2}} - \alpha {\textrm{e}}^{ t^*} {\xi ^*} + \frac{{\Delta x}^{1/2}}{2}+\alpha \frac{{\textrm{e}}^{ t^*}}{2} {\Delta x}, \frac{x^*-{\xi ^*}}{{\Delta x}^{1/2}}-\alpha {\textrm{e}}^{ t^*}{\xi ^*}-\frac{{\Delta x}^{1/2}}{2} -\alpha \frac{{\textrm{e}}^{ t^*}}{2}{\Delta x} \right) \nonumber \\&+ R(\xi ^*, J_{\Delta t}^k(\tau ^*)) +\frac{t^*-\tau ^* + s^*/2}{{\Delta t}^{1/2}}. \end{aligned}$$
(51)

Inequalities (47) and (51) are now gathered, so that

$$\begin{aligned}&\sigma +4{\mathcal {C}}_H^2\alpha {\textrm{e}}^T+ {\frac{t^*-\tau ^*}{{\Delta t}^{1/2}}} +\alpha \frac{{\textrm{e}}^{ t^*}}{2}({x^*}^2+{\xi ^*}^2)+{\frac{\alpha }{(T-t^*)^2}} + {\mathcal {H}}\left( \frac{x^*-\xi ^*}{{\Delta x}^{1/2}} + \alpha {\textrm{e}}^{ t^*}x^*\right) + R(x^*,J^k_0(t^*))\\ \leqslant&H\left( \frac{x^*-{\xi ^*}}{{\Delta x}^{1/2}} - \alpha {\textrm{e}}^{t^*} {\xi ^*} + \frac{{\Delta x}^{1/2}}{2}+\alpha \frac{{\textrm{e}}^{ t^*}}{2} {\Delta x}, \frac{x^*-{\xi ^*}}{{\Delta x}^{1/2}}-\alpha {\textrm{e}}^{ t^*}{\xi ^*}-\frac{{\Delta x}^{1/2}}{2} -\alpha \frac{{\textrm{e}}^{ t^*}}{2}{\Delta x} \right) \\&+ R(\xi ^*, J_{\Delta t}^k(\tau ^*)) +{\frac{t^*-\tau ^*}{{\Delta t}^{1/2}}} + \frac{s^*/2}{{\Delta t}^{1/2}}, \end{aligned}$$

and hence

$$\begin{aligned}&\sigma +4{\mathcal {C}}_H^2\alpha {\textrm{e}}^T +\alpha \frac{{\textrm{e}}^{ t^*}}{2}({x^*}^2+{\xi ^*}^2) + H\left( \frac{x^*-\xi ^*}{{\Delta x}^{1/2}} + \alpha {\textrm{e}}^{ t^*}x^*, \frac{x^*-\xi ^*}{{\Delta x}^{1/2}} + \alpha {\textrm{e}}^{ t^*}x^*\right) \nonumber \\&- H\left( \frac{x^*-{\xi ^*}}{{\Delta x}^{1/2}} - \alpha {\textrm{e}}^{ t^*} {\xi ^*} + \frac{{\Delta x}^{1/2}}{2}+\alpha \frac{{\textrm{e}}^{ t^*}}{2} {\Delta x}, \frac{x^*-{\xi ^*}}{{\Delta x}^{1/2}}-\alpha {\textrm{e}}^{ t^*}{\xi ^*}-\frac{{\Delta x}^{1/2}}{2} -\alpha \frac{{\textrm{e}}^{ t^*}}{2}{\Delta x} \right) \nonumber \\ \leqslant&\; R(\xi ^*, J_{\Delta t}^k(\tau ^*))-R(x^*,J^k_0(t^*)) + \frac{{\Delta t}^{1/2}}{2}, \end{aligned}$$
(52)

since \({\mathcal {H}}\) defined in (3) and the numerical Hamiltonian satisfy \(H(p,p)={\mathcal {H}}(p)\) for any \(p\in {{\mathbb {R}}}\). An upper bound for the right hand side is obtained from (A3), and from the k-Lipschitz regularity of \(J_0^k\) in Lemma 4.4

$$\begin{aligned} R(\xi ^*, J_{\Delta t}^k(\tau ^*))-R(x^*,J^k_0(t^*)) \leqslant K|{\xi ^*}-{x^*}| + K \Vert J_{\Delta t}^k-J_0^k\Vert _\infty + Kk|t^*-\tau ^*|, \end{aligned}$$

that can, once again, be estimated using (46) and the fact that the ratio \({\Delta t}/{\Delta x}\) is fixed. On the other hand, the Lipschitz property of H gives a lower bound for the left hand side of (52). Indeed, all the arguments of the functions H in the inequality are bounded in absolute value by \(14L_T+1\). It yields

$$\begin{aligned}&\sigma +4{\mathcal {C}}_H^2\alpha {\textrm{e}}^T +\alpha \frac{{\textrm{e}}^{ t^*}}{2}({x^*}^2+{\xi ^*}^2) -2{\mathcal {C}}_H \left( \alpha {\textrm{e}}^{ t^*} (|{x^*}|+|{\xi ^*}|) + \frac{{\Delta x}^{1/2}}{2} + \alpha \frac{{\textrm{e}}^{ T}}{2}{\Delta x} \right) \\ \leqslant&\;{\mathcal {C}}(k)\left( {\Delta t}^{1/2} +{\Delta x}^{1/2}+ \Vert J_{\Delta t}^k-J_0^k\Vert _\infty \right) , \end{aligned}$$

where \({\mathcal {C}}(k)\) is a constant depending on k, and on the parameters K and \(L_T\). We remark now that the left-hand side of the inequality is bounded from below independently of \(|{x^*}|\) and \(|{\xi ^*}|\). Hence, as \(x\mapsto x^2 - 4 {\mathcal {C}}_H x\) is minimal at \(x=2{\mathcal {C}}_H\), and as its minimum is equal to \(-4{\mathcal {C}}_H^2\),

$$\begin{aligned} \sigma \leqslant \sigma +4{\mathcal {C}}_H^2\alpha {\textrm{e}}^T - 4{\mathcal {C}}_H^2 \alpha {\textrm{e}}^{ t^*} \leqslant {\mathcal {C}}(k) \left( {\Delta t}^{1/2} +{\Delta x}^{1/2}+ \Vert J_0^k-J_{\Delta t}^k\Vert _\infty \right) +\alpha {\mathcal {C}}_H {\textrm{e}}^{ T} {\Delta x}, \end{aligned}$$

and \(\sigma =\sigma ({\Delta t},k)\) is defined so that the previous inequality cannot hold, and so that \(\sigma ({\Delta t},k)\rightarrow _{{\Delta t}\rightarrow 0} 0\) for fixed k, as does the right-hand side of the inequality. Because of the term \(\Vert J^k_0 - J^k_{\Delta t}\Vert _\infty \), there is no indication of the rate of the convergence \(\sigma ({\Delta t},k)\rightarrow _{{\Delta t}\rightarrow 0} 0\) for fixed k. Indeed, Lemma 4.4-(iii) is obtained by a compactness argument, which does not give a quantitative estimate.

D Proof of Lemma 5.1

In this appendix, we prove Lemma 5.1. The proof is done by induction. The initial data \(u^0=(u^0_i)_{i\in {{\mathbb {Z}}}}\) enjoys the properties of Lemma 5.1. Let \(\varepsilon >0\) be fixed, and let us suppose that the items (i)-(ii) are satisfied by \(u^n=(u^n_i)_{i\in {{\mathbb {Z}}}}\) for a given \(n\in [\![ 0,N_t-1]\!]\), and prove that \(I^{n+1}\) and \(u^{n+1}=(u^{n+1}_i)_{i\in {{\mathbb {Z}}}}\) are well defined, and satisfy (i)-(ii)-(iii).

First of all, let us remark that \(I^{n+1}\) is the solution of \(\Phi (I)=0\), with

$$\begin{aligned} \Phi (I)=I-{\Delta x}\sum \limits _{i\in {{\mathbb {Z}}}} \psi (x_i) {\textrm{e}}^{-{\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i/\varepsilon }{\textrm{e}}^{{\Delta t} R(x_i,I)/\varepsilon }, \end{aligned}$$
(53)

where \({\mathcal {M}}^\varepsilon _{\Delta t}\) is defined in (14). Thanks to the first point of Lemma 3.1, and because of (\(\text {CFL}_\varepsilon \)),

$$\begin{aligned} \forall i \in {{\mathbb {Z}}}, \; {\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i\geqslant {\underline{a}}|x_i-x_0| + {\underline{\beta }}_n - {\Delta t} H({\underline{a}},{\underline{a}})\geqslant {\underline{a}}|x_i-x_0| + {\underline{\beta }}_{N_t}, \end{aligned}$$
(54)

so that the sum in (53) is well-defined for all \(I\in {{{\mathbb {R}}}}\). Indeed, for fixed \(I\in {{\mathbb {R}}}\), \(x\mapsto R(x,I)\) is bounded, thanks to the remark after (A3), and the coercivity of \({\mathcal {M}}_{\Delta t}^\varepsilon (u^n)\) makes the series in (53) convergent. Since \(\Phi \) is the difference between an increasing and a decreasing function, it is increasing; since moreover \(\Phi (0)<0\) and \(\Phi (I)\rightarrow +\infty \) as \(I\rightarrow +\infty \), there exists a unique \(I^{n+1}\in {{\mathbb {R}}}\) such that \(\Phi (I^{n+1})=0\). Therefore, \(u^{n+1}\) is uniquely determined too. Moreover, the inequality \(\Phi (I)\leqslant I\) immediately yields that \(I^{n+1}\geqslant 0\). As \(R(x,\cdot )\) is decreasing for all x,

$$\begin{aligned} \forall i\in {{\mathbb {Z}}}, \; u^{n+1}_i\geqslant {\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i -{\Delta t} R(x_i,0), \end{aligned}$$

which gives the lower estimate in (ii), with \({\underline{\beta }}_{n+1}={\underline{\beta }}_n - {\Delta t} H({\underline{a}},{\underline{a}}) - {\Delta t} \Vert R(\cdot ,0)\Vert _\infty .\)

Let us suppose that \(I>I_M\), with \(I_M\) defined in (A2). Then \(R(x_i,I)\leqslant R(x_i,I_M)\leqslant 0\) for all \(i\in {{\mathbb {Z}}}\), thanks to (A3) and (A2), so that \({\textrm{e}}^{{\Delta t} R(x_i,I)/\varepsilon }\leqslant 1\), and a bound from below for \(\Phi (I)\) comes from

$$\begin{aligned} \Phi _-= {\Delta x}\sum \limits _{i\in {{\mathbb {Z}}}} \psi (x_i) {\textrm{e}}^{-{\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i/\varepsilon }{\textrm{e}}^{{\Delta t} R(x_i,I)/\varepsilon }\leqslant {\Delta x}\psi _M\sum \limits _{i\in {{\mathbb {Z}}}} {\textrm{e}}^{-{\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i/\varepsilon }, \end{aligned}$$

where we also used (A1). Now, remark that \({\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i \geqslant {\underline{a}}|x_i-x_0| + {\underline{\beta }}_{N_t}, \) and hence

$$\begin{aligned} \Phi _-&\leqslant 2 \psi _M {\textrm{e}}^{-{\underline{\beta }}_{N_t}/\varepsilon } {\Delta x} \sum \limits _{i\in {{\mathbb {N}}}} {\textrm{e}}^{-{\underline{a}}i{\Delta x}/\varepsilon }. \end{aligned}$$

As a consequence, if \(I>I_M\),

$$\begin{aligned} \Phi (I)\geqslant I-2 \psi _M {\textrm{e}}^{-{\underline{\beta }}_{N_t}/\varepsilon } {\Delta x} \frac{1}{1-{\textrm{e}}^{-{\underline{a}}{\Delta x}/\varepsilon }} \underset{{\Delta x}\rightarrow 0}{\longrightarrow }I- \frac{2\varepsilon }{{\underline{a}}} \psi _M {\textrm{e}}^{-{\underline{\beta }}_{N_t}/\varepsilon }. \end{aligned}$$
(55)

Hence, there exist \({\Delta x}_0>0\) and \(I_{M'}>0\) such that for all \({\Delta x}\leqslant {\Delta x}_0\), \(\Phi (I_{M'})>0\). Since \(\Phi \) is increasing, \(I^{n+1}\leqslant I_{M'}\). Finally, Lemma 3.1 yields (i), and

$$\begin{aligned} \forall i\in {{\mathbb {Z}}}, \;{\mathcal {M}}^\varepsilon _{\Delta t}(u^n)_i\leqslant {\overline{a}}|x_i-x_0|+ {\overline{\beta }}_n + 2\varepsilon {\overline{a}} \frac{{\Delta t}}{{\Delta x}}, \end{aligned}$$

so that (\(\text {CFL}_\varepsilon \)) gives (ii).



Cite this article

Calvez, V., Hivert, H. & Yoldaş, H. Concentration in Lotka–Volterra parabolic equations: an asymptotic-preserving scheme. Numer. Math. 154, 103–153 (2023). https://doi.org/10.1007/s00211-023-01362-y
