1 Introduction

In this work, we present criteria for non-existence of global solutions (that we will frequently refer to as finite time blow-up) to a stochastic partial differential equation (SPDE) model of chemotaxis on \(\mathbb {R}^2\). The model we consider,

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathop {}\!\textrm{d}u_t = \left( \Delta u_t -\chi \nabla \cdot (u_t\nabla c_t)\right) \mathop {}\!\textrm{d}t + \sqrt{2\gamma } \sum _{k=1}^\infty \nabla \cdot \left( \sigma _k u_t \right) \circ d W^k_t,&{}\text { on }\mathbb {R}_+\times \mathbb {R}^2,\\ -\Delta c_t = u_t, &{}\text { on } \mathbb {R}_+\times \mathbb {R}^2,\\ u|_{t=0}=u_0 \in \mathcal {P}(\mathbb {R}^2), &{}\text { on }\mathbb {R}^2, \end{array}\right. } \nonumber \\ \end{aligned}$$
(1.1)

is based on the parabolic-elliptic Patlak–Keller–Segel model of chemotaxis (\(\gamma =0\)) with the addition of a stochastic transport term (\(\gamma >0\)), where \(\{W^k\}_{k\ge 1}\) is a family of i.i.d. standard Brownian motions on a filtered probability space, \((\Omega ,\mathcal {F},(\mathcal {F}_t)_{t\ge 0},\mathbb {P})\), satisfying the usual conditions. Here \(\mathcal {P}(\mathbb {R}^2)\) denotes the set of probability measures on \(\mathbb {R}^2\). We will give detailed assumptions on the vector fields \(\sigma _k:\mathbb {R}^2\rightarrow \mathbb {R}^2\) below (see (H1)–(H3)), but for now simply stipulate that they are divergence free and such that \(\varvec{\sigma }:=\{\sigma _k\}_{k\ge 1}\in \ell ^2(\mathbb {N};L^\infty (\mathbb {R}^2))\).

The noiseless model (\(\gamma =0\)) is a well-known PDE system modeling chemotaxis: the collective movement of a population of cells (represented by its time-space density u) in the presence of an attractive chemical substance (represented by its time-space concentration c). The chemical sensitivity is encoded by the parameter \(\chi >0\). The most notable feature of the model is that solutions may become unbounded in finite time even though the total mass is preserved. This is the so-called finite time blow-up, and whether it occurs depends on the spatial dimension of the problem and the size of the parameter \(\chi \). In particular, on \(\mathbb {R}^2\) blow-up occurs in finite time for \(\chi >8\pi \) and at \(t=\infty \) for \(\chi =8\pi \), see [2], while global existence holds for \(\chi <8\pi \); see, for example, the survey [32]. For results in other dimensions, we refer to [4, 20, 30, 32].

Since the scenario described by the noiseless model often occurs within an external environment, it is natural to take into account additional environmental effects. In some cases, this can be done by coupling additional equations into the system, such as the Navier–Stokes equations of fluid mechanics [27, 37, 38]. With particular relevance to our work, we note the results of [22, 23], where it was shown that transport by sufficiently strong relaxation-enhancing flows can have a regularizing effect on the Keller–Segel equation. However, for both modeling and analysis purposes it is also relevant to study the effect of random environments. Such random terms may model a rough background, accumulated measurement errors, or emergent noise from micro-scale phenomena that are not explicitly resolved.

The noise introduced in (1.1) is related to stochastic models of turbulence [6, 8, 24, 26], and we refer to the monograph [12] for a broader overview of its relevance to SPDE models. Noise satisfying either our assumptions, or closely related ones, has been applied in a number of related settings: interacting particle systems [7, 11, 15]; regularization, stabilization and enhanced mixing of general parabolic and transport PDEs [14, 17, 19]; and, with particular applications to the Keller–Segel and Navier–Stokes equations among others, [13, 16, 18].

The motivation of the present work is to understand the persistence of blow-up for stochastic chemotactic models driven by conservative noise. Our main result is that if \(\chi >(1+\gamma )8\pi \) then finite time blow-up occurs almost surely, while if \(\chi >(1+ \gamma V[u_0] C_{\varvec{\sigma }})8\pi \) then finite time blow-up occurs with positive probability. Here \(V[u_0]\) denotes one half of the spatial variance of the initial data, and \(C_{\varvec{\sigma }}\) is a type of Lipschitz semi-norm of the vector fields \(\varvec{\sigma }\) which measures the spatial decorrelation of the noise; we refer to (2.2) for a precise definition. Furthermore, if \(\chi \) satisfies either of the above conditions and blow-up does occur, then it must do so before a deterministic time \(T^*>0\) (see Theorem 2.8).

Note that when \(\gamma =0\), we recover the usual conditions for blow-up of the deterministic equation, see [32].

Three interesting regimes emerge from our criteria. On the one hand, if we let \(C_{\varvec{\sigma }}\) increase to infinity, the second condition becomes the first and blow-up must occur almost surely, albeit for larger and larger \(\chi \). On the other hand, when \(C_{\varvec{\sigma }}\) is arbitrarily small (which is the case for spatially homogeneous noise), one again recovers the deterministic criterion. In the third regime, where the noise and the initial variance are reciprocally of the same order, i.e., \(V[u_0] C_{\varvec{\sigma }}<1\), we are only able to show blow-up with positive probability. It is an interesting question, which we leave for future work, to obtain more information on the probability of blow-up in this case. See Remark 2.9 for a longer discussion of these points.

The study of blow-up of solutions to SPDEs is a large topic of which we only mention some examples. It was shown by [3] that additive noise can eliminate global well-posedness for stochastic reaction–diffusion equations, while a similar statement has been shown for both additive and multiplicative noise in the case of stochastic nonlinear Schrödinger equations by [9, 10]. In addition, non-uniqueness results for stochastic fluid equations have been studied by [21] and [35].

In the case of SPDE models of chemotaxis, the study of blow-up phenomena has only recently begun, and we mention here two very recent works, [16] and [28]. In [16], the authors show that, under a particular choice of the vector fields \(\varvec{\sigma }\), a model similar to (1.1) on \(\mathbb {T}^d\) for \(d=2,\,3\) enjoys delayed blow-up with probability at least \(1-\varepsilon \), after choosing \(\gamma \) and \(\varvec{\sigma }\) in terms of \(\chi \) and \(\varepsilon \in (0,1)\). In [28], the authors study global well-posedness and blow-up of a conservative model similar to (1.1) with a constant family of vector fields \(\sigma _k(x) = \sigma \) and a single common Brownian motion. Translating their parameters into ours, they establish global well-posedness of solutions to (1.1), with \(\sigma _k(x)\equiv 1\) and \(\chi <8\pi \), as well as finite time blow-up when \(\chi >(1+\gamma )8\pi \).

The main contribution of this paper is the above-mentioned blow-up criterion for an SPDE version of the Keller–Segel model in the case of a spatially inhomogeneous noise term. To the best of our knowledge, this is a new result. An interesting point is that, unlike the deterministic criterion, it relates the chemotactic sensitivity to the initial variance and to the regularity and intensity of the noise term. In addition, we close the gap in [28]: in the case of constant vector fields we show that finite time blow-up occurs for \(\chi >8\pi \) (see Remark 2.9). Moreover, we show that \(\chi >(1+\gamma )8\pi \) cannot be a sharp blow-up threshold for all sufficiently regular initial data.

Our technique of proof follows the deterministic approach of tracking, a priori, the evolution in time of the spatial variance of solutions to (1.1). We derive an SDE satisfied by this quantity, which we analyze both pathwise and probabilistically to obtain criteria for blow-up.

Notation

  • For \(n\ge 1\) and \(p\in [1,\infty )\) (resp. \(p=\infty \)), we write \(L^p(\mathbb {R}^2;\mathbb {R}^n)\) for the space of p-integrable (resp. essentially bounded) \(\mathbb {R}^n\)-valued functions on \(\mathbb {R}^2\).

    For \(\alpha \in \mathbb {R}\), we write \(H^\alpha (\mathbb {R}^2;\mathbb {R}^n)\) for the inhomogeneous Sobolev spaces of order \(\alpha \)—a full definition and some useful facts are given in Appendix A.

    For \(k\ge 0\) and \(\alpha \in (0,1)\), we write \(C^k(\mathbb {R}^2;\mathbb {R}^n)\) for the k-times continuously differentiable maps and \(\mathcal {C}^{k,\alpha }(\mathbb {R}^2;\mathbb {R}^n)\) for the k-times continuously differentiable maps with \(\alpha \)-Hölder continuous \(k^{\text {th}}\) derivatives.

    When the context is clear, we remove notation for the target space, simply writing \(L^p(\mathbb {R}^2)\), \(H^\alpha (\mathbb {R}^2)\). We equip these spaces with the requisite norms writing \(\Vert \,\cdot \,\Vert _{L^p},\, \Vert \,\cdot \,\Vert _{H^\alpha }\) removing the domain as well when it will not cause confusion.

  • We write \(\mathcal {P}(\mathbb {R}^2)\) for the space of probability measures on \(\mathbb {R}^2\) and, for \(m\ge 1\), \(\mathcal {P}_m(\mathbb {R}^2)\) for the space of probability measures with finite \(m^{\text {th}}\) moment. By an abuse of notation we write, for example, \(\mathcal {P}(\mathbb {R}^2)\cap L^p(\mathbb {R}^2)\) to indicate the space of probability measures with densities in \(L^p(\mathbb {R}^2)\).

  • For \(\mu \in \mathcal {P}(\mathbb {R}^2)\) and when they are finite, we define the following quantities:

    $$\begin{aligned} C[\mu ]&:= \int _{\mathbb {R}^2} x \mathop {}\!\textrm{d}\mu (x),\\ V[\mu ]&:= \frac{1}{2}\int _{\mathbb {R}^2}|x-C[\mu ]|^2 \mathop {}\!\textrm{d}\mu (x) = \frac{1}{2}\int _{\mathbb {R}^2}|x|^2\mathop {}\!\textrm{d}\mu (x)- \frac{1}{2}|C[\mu ]|^2. \end{aligned}$$

    Note that \(V[\mu ]\) is one half of the usual variance; we define it in this way for computational ease.

  • For \(T>0\), X a Banach space, and \(p\in [1,\infty )\) (resp. \(p=\infty \)), we write \({L^p_TX:=L^p([0,T];X)}\) for the space of p-integrable (resp. essentially bounded) maps \(f:[0,T]\rightarrow X\). Similarly we write \(C_TX:=C([0,T];X)\) for the space of continuous maps \({f:[0,T]\rightarrow X}\), which we equip with the supremum norm \({\Vert f\Vert _{C_TX}:= \sup _{t\in [0,T]}\Vert f(t)\Vert _X}\). We define the function space \(S_T:= C_TL^2(\mathbb {R}^2)\cap L^2_TH^1(\mathbb {R}^2)\).

  • We write \(\nabla \) for the usual gradient operator on Euclidean space while for \(k\ge 2\), \(\nabla ^k\) denotes the matrix of k-fold derivatives. We denote the divergence operator by \(\nabla \cdot \) and we write \(\Delta := \nabla \cdot \nabla \) for the Laplace operator.

  • If we write \(a\,\lesssim \, b\), we mean that the inequality holds up to a constant which we do not keep track of. Alternatively, we write \(a\,\le \, C b\) for some \(C >0\) which is allowed to vary from line to line.

  • Given \(a,\,b\in \mathbb {R}\), we write \(a\wedge b :=\min \{a,b\}\) and \(a\vee b :=\max \{a,b\}\).

Plan of the paper In Sect. 2, we give the precise assumptions on the noise term and formulate our main result. Then, in Sect. 3 we establish some important properties of weak solutions to (1.1), which are made use of in Sect. 4, where we prove our main theorem. Appendix A is devoted to a brief recap of the fractional Sobolev spaces on \(\mathbb {R}^2\) along with some useful properties. Appendix B gives a sketch of the proof of the equivalence between (1.1) and a comparable Itô equation. Finally, in Appendix C, for the reader's convenience, we provide a relatively detailed proof of local existence of weak solutions in the sense of Definition 2.4.

2 Main result

Before stating our main results, we reformulate (1.1) into a closed form and state our standing assumptions on the noise.

It is classical that c is uniquely defined up to a harmonic function; hence it can be written as \(c = K *u\) with \( K(x) = -\frac{1}{2\pi }\ln \left( |x|\right) \). Therefore, from now on, for \(t>0\), we work with the expression

$$\begin{aligned} \nabla c_t(x):= \nabla K *u_t (x) = -\frac{1}{2\pi } \int _{\mathbb {R}^2} \frac{x-y}{|x-y|^2} u_t(y)\mathop {}\!\textrm{d}y. \end{aligned}$$
(2.1)

Throughout we fix a complete filtered probability space, \((\Omega ,\mathcal {F},(\mathcal {F}_t)_{t\ge 0},\mathbb {P})\), satisfying the usual conditions and carrying a family of i.i.d. Brownian motions \(\{W^k\}_{k\ge 1}\). Furthermore, we consider a family of vector fields \(\varvec{\sigma }:= \{\sigma _k\}_{k\ge 1}\) satisfying the following assumptions.

  1. (H1)

    For \(k\ge 1\), \(\sigma _k:\mathbb {R}^2 \rightarrow \mathbb {R}^2\) are measurable and such that \( \sum _{k=1}^\infty \Vert \sigma _k\Vert _{L^\infty }^2 <\infty .\)

  2. (H2)

    For every \(k\ge 1\), \(\sigma _k \in C^2(\mathbb {R}^2;\mathbb {R}^2)\) and \( \nabla \cdot \sigma _k =0.\)

  3. (H3)

    Defining \(q:\mathbb {R}^2\times \mathbb {R}^2 \rightarrow \mathbb {R}^2 \otimes \mathbb {R}^2\) by

    $$\begin{aligned} q^{ij}(x,y)= \sum _{k=1}^\infty \sigma _k^i(x)\sigma _k^j(y),\quad \forall \, i,j =1,2, \, x,y\in \mathbb {R}^2; \end{aligned}$$
    1. (a)

      The mapping \((x,y)\mapsto q(x,y)=:Q(x-y)\in \mathbb {R}^2 \otimes \mathbb {R}^2\) depends only on the difference \(x-y\).

    2. (b)

      \(Q(0)=q(x,x)=\text {Id}\) for any \(x \in \mathbb {R}^2\).

    3. (c)

      We have \(Q \in C^2(\mathbb {R}^2;\mathbb {R}^2\otimes \mathbb {R}^2)\) and \( \sup _{x\in \mathbb {R}^2} |\nabla ^2Q(x)|<\infty .\)

Remark 2.1

For \(\varvec{\sigma }\) satisfying Assumption (H3), it is possible to show that the quantity

$$\begin{aligned} C_{\varvec{\sigma }}:= \sup _{x\ne y \in \mathbb {R}^2} \sum _{k=1}^\infty \frac{|\sigma _k(x)-\sigma _k(y)|^2}{|x-y|^2} \end{aligned}$$
(2.2)

is finite. See [7, Rem. 4] for details. Note that due to (H3)-(b) one cannot re-scale \(\varvec{\sigma }\) so as to remove \(\gamma \) from (1.1).
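
Informally, the reason is the following. If one attempts to absorb \(\gamma \) by rescaling the vector fields, setting \(\tilde{\sigma }_k:= \lambda \sigma _k\) for some \(\lambda >0\), then the noise in (1.1) may be rewritten as \(\sqrt{2\gamma /\lambda ^2}\sum _{k\ge 1}\nabla \cdot (\tilde{\sigma }_k u_t)\circ \mathop {}\!\textrm{d}W^k_t\), but the rescaled fields satisfy

$$\begin{aligned} \tilde{q}^{ij}(x,x) = \sum _{k=1}^\infty \tilde{\sigma }^i_k(x)\tilde{\sigma }^j_k(x) = \lambda ^2\delta ^{ij}, \quad x\in \mathbb {R}^2, \end{aligned}$$

which violates (H3)-(b) unless \(\lambda =1\). The normalization \(Q(0)=\text {Id}\) therefore fixes the intensity of the noise, and \(\gamma \) is a genuine parameter of the model.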

Remark 2.2

It is important to note that one can instead specify the covariance matrix Q first. In fact, due to [25, Thm. 4.2.5] any matrix-valued map \(Q:\mathbb {R}^{2}\times \mathbb {R}^2\rightarrow \mathbb {R}^{2}\otimes \mathbb {R}^2\) satisfying the analogue of (2.2),

$$\begin{aligned} \sup _{x\ne y \in \mathbb {R}^2}\sum _{i=1}^2 \frac{q^{ii}(x,x)-2q^{ii}(x,y)+ q^{ii}(y,y)}{|x-y|^2} <\infty \end{aligned}$$

can be expressed as a family of vector fields \(\{\sigma _k\}_{k\ge 1}\) satisfying (H1)–(H3).

Analyses and examples of vector fields satisfying these assumptions can be found in [7, 19, Sec. 5] and [11, 15]. For the reader’s convenience, we give an explicit example here in the spirit of Remark 2.2, based on [7, Ex. 5], but adapted to our precise setting.

Example 2.3

Let \(f\in L^1(\mathbb {R}_+)\) be such that \(\int _{\mathbb {R}_+} rf(r) \mathop {}\!\textrm{d}r =\pi ^{-1}\) and \(\int _{\mathbb {R}^2}|u|^2f(|u|)\mathop {}\!\textrm{d}u <\infty \). Then, let \(\Pi : \mathbb {R}^2\rightarrow M_{2\times 2}(\mathbb {R})\) be the \(2\times 2\)-matrix-valued map defined by,

$$\begin{aligned} \Pi (u) = (1-p)\text {Id} + (2p-1) \frac{u\otimes u}{|u|^2}, \quad \text {for } p\in [0,1]. \end{aligned}$$

Then, we define the covariance function,

$$\begin{aligned} Q(z):= \int _{\mathbb {R}^2} \begin{pmatrix} \cos (u\cdot z)&{}-\sin (u\cdot z)\\ \sin (u \cdot z)&{} \cos (u \cdot z) \end{pmatrix} \Pi (u) f(|u|) \mathop {}\!\textrm{d}u. \end{aligned}$$

Property (H3) (a) is satisfied by definition, after setting \(q(x,y):= Q(x-y)\). Since

$$\begin{aligned} Q(0)= \int _{\mathbb {R}^2} \Pi (u)f(|u|)\mathop {}\!\textrm{d}u, \end{aligned}$$

property (H3) (b) is easily checked by moving to polar coordinates, making use of elementary trigonometric identities and the normalization \(\int _{\mathbb {R}_+} r f(r)\mathop {}\!\textrm{d}r = \pi ^{-1}\). Finally, (H3) (c) can be checked by a straightforward computation using smoothness of the trigonometric functions and the moment assumption on f.
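
In more detail, one can check (H3) (b) as follows. Writing \(u = r(\cos \theta ,\sin \theta )\) in polar coordinates and using that \(\int _0^{2\pi }\frac{u\otimes u}{|u|^2}\mathop {}\!\textrm{d}\theta = \pi \,\text {Id}\), we find

$$\begin{aligned} Q(0)&= \int _0^\infty \int _0^{2\pi }\left( (1-p)\text {Id} + (2p-1)\frac{u\otimes u}{|u|^2}\right) f(r)\, r\mathop {}\!\textrm{d}\theta \mathop {}\!\textrm{d}r \\&= \left( (1-p)2\pi + (2p-1)\pi \right) \text {Id}\int _0^\infty rf(r)\mathop {}\!\textrm{d}r = \text {Id}, \end{aligned}$$

since \((1-p)2\pi +(2p-1)\pi = \pi \) and \(\int _0^\infty rf(r)\mathop {}\!\textrm{d}r = \pi ^{-1}\).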

We now define our notion of weak solutions.

Definition 2.4

Let \(\chi , \gamma >0\). Then, given \(u_0 \in \mathcal {P}(\mathbb {R}^2)\cap L^2(\mathbb {R}^2)\), we say that a weak solution to (1.1) is a pair \((u,\bar{T})\) where

  • \(\bar{T}\) is an \(\{\mathcal {F}_t\}_{t\ge 0}\) stopping time taking values in \(\mathbb {R}_+\cup \{\infty \}\),

  • For \(T< \bar{T}\), u is an \(S_T:=C_TL^2 \cap L^2_T H^1\)-valued random variable such that

    $$\begin{aligned} \mathbb {E}\left[ \Vert u\Vert ^2_{L^\infty _TL^1}+\Vert u\Vert ^2_{L^\infty _T L^2} + \Vert u\Vert ^2_{L^2_TH^1}\right] <\infty . \end{aligned}$$

In addition, for any \(t \in [0,T]\) and \(\phi \in H^1(\mathbb {R}^2)\), \(\mathbb {P}\)-a.s. the following identity holds,

$$\begin{aligned} \begin{aligned} \langle u_t,\phi \rangle =&\langle u_0,\phi \rangle -\int _0^t \left( \langle \nabla u_s,\nabla \phi \rangle -\chi \langle u_s (\nabla K*u_s),\nabla \phi \rangle \right) \mathop {}\!\textrm{d}s \\&- \sqrt{2 \gamma } \sum _{k\ge 1} \int _0^t \langle \sigma _k u_s,\nabla \phi \rangle \circ \mathop {}\!\textrm{d}W_s^k.\\ \end{aligned} \end{aligned}$$
(2.3)

In Appendix C, we detail a standard argument to show that there exists a deterministic, positive time \(T>0\) such that (u, T) is a weak solution in the above sense. This is due to the particular structure of the noise, and we stress that in general the maximal time of existence may be random.

Applying the standard Itô–Stratonovich correction, one can prove the following remark; a sketch of the argument is given in Appendix B.

Remark 2.5

Let \((u,\bar{T})\) be a solution, in the sense of Definition 2.4, to (1.1). Then it also holds that \((u,\bar{T})\) is a solution to the following Itô equation: For every \(\phi \in H^1(\mathbb {R}^2)\), \(t\in [0,\bar{T}]\), \(\mathbb {P}\)-a.s.

$$\begin{aligned} \begin{aligned} \langle u_t,\phi \rangle = \,&\langle u_0,\phi \rangle -\int _0^t \left( (1+\gamma )\langle \nabla u_s,\nabla \phi \rangle - \chi \langle u_s(\nabla K*u_s),\nabla \phi \rangle \right) \mathop {}\!\textrm{d}s\\&{-}\sqrt{2\gamma } \sum _{k\ge 1} \int _0^t \langle \sigma _k u_s,\nabla \phi \rangle \mathop {}\!\textrm{d}W_s^k. \end{aligned} \end{aligned}$$
(2.4)

Remark 2.6

It follows from Definition 2.4 and the standard chain rule obeyed by the Stratonovich integral that, for u a weak (Stratonovich) solution to (1.1) and \(F \in C^3(L^2(\mathbb {R}^2);\mathbb {R})\),

$$\begin{aligned} \begin{aligned} F[u_t] =&F[u_0] + \int _0^t DF[u_s][\Delta u_s -\chi \nabla \cdot (u_s\nabla K*u_s )]\mathop {}\!\textrm{d}s \\&+ \sqrt{2\gamma }\sum _{k=1}^\infty \int _0^t DF[u_s][\nabla \cdot (\sigma _k u_s)]\circ \mathop {}\!\textrm{d}W^k_s, \end{aligned} \end{aligned}$$
(2.5)

where \(DF[u_s][\varphi ]\) denotes the Gâteaux derivative of \(F[u_s]\) in the direction \(\varphi \in H^{1}(\mathbb {R}^2)\). An equivalent Itô formula for nonlinear functionals of solutions to (2.4) also holds; see for example [31, Sec. 2].

Remark 2.7

Note that under assumption (H1), for any \(T>0\) and any weak solution on [0, T], the stochastic integral is well defined as an element of \(L^2(\Omega \times [0,T];L^2(\mathbb {R}^2))\subset L^2(\Omega \times [0,T];H^{-1}(\mathbb {R}^2))\), since for any \(t\in (0,T]\), we have

$$\begin{aligned} \mathbb {E}\left[ \sum _{k=1}^\infty \int _0^t \Vert \nabla \cdot (\sigma _k(x)u_s(x))\Vert ^2_{L^2}\mathop {}\!\textrm{d}s\right]&=\mathbb {E}\left[ \sum _{k=1}^\infty \int _0^t \Vert \sigma _k(x)\cdot \nabla u_s(x)\Vert ^2_{L^2}\mathop {}\!\textrm{d}s\right] \\&\,\le \, \sum _{k=1}^\infty \Vert \sigma _k\Vert _{L^\infty }^2 \mathbb {E}\left[ \int _0^t \Vert \nabla u_s\Vert ^2_{L^2}\mathop {}\!\textrm{d}s\right] <\infty . \end{aligned}$$

We are ready to state our main result.

Theorem 2.8

(Blow-up in finite time) Let \(\chi ,\,\gamma >0\) and let \(u_0\in \mathcal {P}_{2}(\mathbb {R}^2)\cap L^2(\mathbb {R}^2)\) be such that \(\int x u_0(x)\mathop {}\!\textrm{d}x =0\). Assume that \(\varvec{\sigma }=\{\sigma _k\}_{k\ge 1}\) satisfies (H1)–(H3). Let \((u,{\bar{T}})\) be a weak solution to (1.1). Then

  1. (i)

    Under the condition

    $$\begin{aligned} \chi > (1+\gamma )8\pi , \end{aligned}$$
    (2.6)

    we have

    $$\begin{aligned} \mathbb {P}({\bar{T}} < T^*_1 )=1, \end{aligned}$$

    for \(T^*_1:= \frac{4\pi V[u_0]}{\chi -(1+\gamma )8\pi }\).

  2. (ii)

    Under the condition

    $$\begin{aligned} \chi >(1+ \gamma V[u_0] C_{\varvec{\sigma }})8\pi \end{aligned}$$
    (2.7)

    we have

    $$\begin{aligned} \mathbb {P}({\bar{T}} < T^*_2 )>0, \end{aligned}$$

    for \(T^*_2:= \frac{\log (\chi -8\pi ) - \log \left( \chi -V[u_0]8\pi \gamma C_{\varvec{\sigma }} -8\pi \right) }{2\gamma C_{\varvec{\sigma }}}\).

Remark 2.9

  • If \(V[u_0]C_{\varvec{\sigma }}>1\) and \(\chi \) satisfies (2.7), then \(\chi \) also satisfies (2.6), in which case blow-up occurs almost surely before \(T^*_1\). This has relevance to the setting of [16] in which a model similar to (1.1) is considered on \(\mathbb {T}^d\) for \(d=2,\,3\) where formally \(C_{\varvec{\sigma }}\) can be taken arbitrarily large.

  • In the case \(C_{\varvec{\sigma }}=0\), which corresponds to noise that is independent of the spatial variable, criterion (2.7) becomes \(\chi >8\pi \), which is exactly the criterion for blow-up of solutions to the deterministic PDE. Applying Theorem 2.8, one would only recover blow-up with positive probability in this case. However, using the spatial independence of the noise we can instead implement a change of variables, setting \(v(t,x):= u(t,x-\sqrt{2\gamma }\sigma W_t)\). It follows from the Leibniz rule, obeyed by the Stratonovich integral, that v solves a deterministic version of the PDE with viscosity equal to one; a formal sketch of this computation is given at the end of this remark. Hence, v blows up in finite time with probability one for \(\chi >8\pi \). Note that in [28] a similar model was treated, among others, with spatially homogeneous noise, and positive probability of blow-up was shown only for \(\chi >(1+\gamma )8\pi \).

  • Observe that the second half of Theorem 2.8 demonstrates that (2.6) cannot be a sharp threshold for almost sure global well-posedness of (1.1) for all initial data (or all families of suitable vector fields \(\{\sigma _k\}_{k\ge 1}\)). Given any \(8\pi<\chi <(1+\gamma )8\pi \), initial data \(u_0\) (resp. family of vector fields \(\{\sigma _k\}_{k\ge 1}\)) one can always choose suitable vector fields (resp. initial data) such that \(\chi >(1+\gamma V[u_0]C_{\varvec{\sigma }})8\pi \) so that there is at least a positive probability that solutions cannot live for all time. However, the results of this paper leave open any quantitative information on this probability.
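
To expand on the second point above, here is a formal sketch of the change of variables in the spatially homogeneous case, written for a single constant vector field \(\sigma \in \mathbb {R}^2\) and a single Brownian motion W (finitely many constant fields are handled identically). Since \(\nabla \cdot (\sigma u_t) = \sigma \cdot \nabla u_t\), equation (1.1) reads \(\mathop {}\!\textrm{d}u_t = (\Delta u_t - \chi \nabla \cdot (u_t\nabla c_t))\mathop {}\!\textrm{d}t + \sqrt{2\gamma }\,\sigma \cdot \nabla u_t\circ \mathop {}\!\textrm{d}W_t\). Setting \(v_t(x):= u_t(x-\sqrt{2\gamma }\sigma W_t)\), the Stratonovich chain rule formally gives

$$\begin{aligned} \mathop {}\!\textrm{d}v_t(x)&= \left( \Delta u_t -\chi \nabla \cdot (u_t\nabla K*u_t)\right) (x-\sqrt{2\gamma }\sigma W_t)\mathop {}\!\textrm{d}t \\&\quad + \sqrt{2\gamma }\,\sigma \cdot \nabla u_t(x-\sqrt{2\gamma }\sigma W_t)\circ \mathop {}\!\textrm{d}W_t - \sqrt{2\gamma }\,\sigma \cdot \nabla u_t(x-\sqrt{2\gamma }\sigma W_t)\circ \mathop {}\!\textrm{d}W_t \\&= \left( \Delta v_t - \chi \nabla \cdot (v_t\nabla K *v_t)\right) (x)\mathop {}\!\textrm{d}t, \end{aligned}$$

where the last equality uses the translation invariance of the convolution, \((\nabla K*u_t)(x-\sqrt{2\gamma }\sigma W_t) = (\nabla K*v_t)(x)\). Hence v solves the deterministic Patlak–Keller–Segel equation with unit viscosity and, for \(\chi >8\pi \), blows up in finite time with probability one.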

Remark 2.10

If we set \(T^*:= T^*_1 \wedge T^*_2\), then it is possible to show that \(T^*\) respects the ordering of \( V[u_0]C_{\varvec{\sigma }}\) and 1. That is,

$$\begin{aligned} T^* = {\left\{ \begin{array}{ll} \frac{\log (\chi -8\pi ) - \log \left( \chi -V[u_0]8\pi \gamma C_{\varvec{\sigma }} -8\pi \right) }{2\gamma C_{\varvec{\sigma }}}, &{} V[u_0]C_{\varvec{\sigma }}<1,\\ \frac{4\pi V[u_0]}{\chi -(1+\gamma )8\pi }, &{} V[u_0]C_{\varvec{\sigma }}>1. \end{array}\right. } \end{aligned}$$

As mentioned before, in the PDE case blow-up occurs for \(\chi >8\pi \), and weak solutions cannot exist beyond \(T^* = \frac{4\pi V[u_0]}{\chi -8\pi }\). It follows that, in all parameter regions, both the threshold for \(\chi \) and the definition of \(T^*\) in Theorem 2.8 agree with the corresponding deterministic quantities in the limit \(\gamma \rightarrow 0\).
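
Indeed, for fixed \(V[u_0]\) and \(C_{\varvec{\sigma }}\), a first order expansion of the logarithm shows that

$$\begin{aligned} T^*_2 = \frac{-\log \left( 1- \frac{8\pi \gamma C_{\varvec{\sigma }}V[u_0]}{\chi -8\pi }\right) }{2\gamma C_{\varvec{\sigma }}} = \frac{4\pi V[u_0]}{\chi -8\pi } + O(\gamma ), \qquad \text {as } \gamma \rightarrow 0, \end{aligned}$$

while \(T^*_1\) converges to the same limit directly from its definition.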

The proof of Theorem 2.8 is completed in Sect. 4 after establishing some preliminary results in Sect. 3. The central point is to analyze an SDE satisfied by \(t\mapsto V[u_t]\).

3 A priori properties of weak solutions

The following lemma demonstrates that the expression \(\nabla c_t:= \nabla K *u_t\) is well-defined Lebesgue almost everywhere.

Lemma 3.1

Let \((u,\bar{T})\) be a weak solution to (1.1) in the sense of Definition 2.4. Then, there exists a \(C>0\) such that for all \(t\in (0,\bar{T}]\),

$$\begin{aligned} \Vert \nabla c_t\Vert _{L^\infty }\le C \Vert u_t\Vert ^{\frac{1}{4}}_{L^1} \Vert u_t\Vert ^{\frac{1}{2}}_{L^2}\Vert u_t\Vert ^{\frac{1}{4}}_{H^1}. \end{aligned}$$
(3.1)

Proof

First, applying [29, Lem. 2.5] with \(q=3\) gives,

$$\begin{aligned} \Vert \nabla c_t\Vert _{L^\infty } \lesssim \Vert u_t\Vert ^{\frac{1}{4}}_{L^1} \Vert u_t\Vert ^{\frac{3}{4}}_{L^3}. \end{aligned}$$
(3.2)

Interpolation between \(L^2(\mathbb {R}^2)\) and \(L^\infty (\mathbb {R}^2)\) gives,

$$\begin{aligned} \Vert u_t\Vert _{L^3}\,\le \, \Vert u_t\Vert ^{\frac{2}{3}}_{L^2}\Vert u_t\Vert ^{\frac{1}{3}}_{L^\infty }. \end{aligned}$$
(3.3)

Combined with the embedding \(H^1(\mathbb {R}^2)\hookrightarrow \mathcal {C}^{0,0}(\mathbb {R}^2)\) (see Lemma A.3), the required estimate is obtained. \(\square \)

Remark 3.2

Note that the choice of \(q=3\) in the proof of Lemma 3.1 and the resulting exponents are essentially arbitrary; the only restriction is that a non-zero power of \(\Vert u_t\Vert _{L^p}\), for some \(p \in [1,2)\), must appear on the right-hand side. The choice of \(L^1\) is convenient since we will shortly demonstrate that \(\Vert u_t\Vert _{L^1}\) is conserved for all weak solutions.

Remark 3.3

Exploiting the symmetry of the kernel K in (2.1) and following [36], we can rewrite the advection term of (2.3) in a different form that will become useful later on. We note that

$$\begin{aligned} \langle u_s\nabla c_s,\nabla \phi \rangle = \iint _{\mathbb {R}^{4}} u_s(x)\nabla _xK(x-y)\cdot \nabla \phi (x)u_s(y) \mathop {}\!\textrm{d}y\mathop {}\!\textrm{d}x. \end{aligned}$$
(3.4)

Renaming the dummy variables in the double integral and applying Fubini’s theorem, we also have

$$\begin{aligned} \langle u_s\nabla c_s,\nabla \phi \rangle = \iint _{\mathbb {R}^{4}} u_s(y)\nabla _yK(y-x)\cdot \nabla \phi (y)u_s(x) \mathop {}\!\textrm{d}y\mathop {}\!\textrm{d}x. \end{aligned}$$
(3.5)

Combining (3.4) and (3.5) gives

$$\begin{aligned} \langle u_s\nabla c_s,\nabla \phi \rangle&= \frac{1}{2} \iint _{\mathbb {R}^{4}} u_s(x) u_s(y) \left( \nabla _xK(x-y)\cdot \nabla \phi (x) + \nabla _yK(y-x)\cdot \nabla \phi (y)\right) \mathop {}\!\textrm{d}y\mathop {}\!\textrm{d}x. \end{aligned}$$

Therefore, in view of (2.1) we may re-write \(\langle u_s\nabla c_s,\nabla \phi \rangle \) as

$$\begin{aligned} \begin{aligned} \langle u_s\nabla c_s,\nabla \phi \rangle =&-\frac{1}{ 4\pi } \iint _{\mathbb {R}^{4}} \frac{(\nabla \phi (x)-\nabla \phi (y))\cdot (x-y)}{|x-y|^{2}}u_s(x)u_s(y) \mathop {}\!\textrm{d}y \mathop {}\!\textrm{d}x \end{aligned} \end{aligned}$$
(3.6)
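
As an illustration of how (3.6) will be used, taking formally \(\phi (x) = |x|^2\) (which is not an admissible \(H^1\) test function, hence the cut-offs introduced below), so that \(\nabla \phi (x)-\nabla \phi (y) = 2(x-y)\), gives

$$\begin{aligned} \langle u_s\nabla c_s,\nabla \phi \rangle = -\frac{1}{4\pi }\iint _{\mathbb {R}^4}\frac{2(x-y)\cdot (x-y)}{|x-y|^2}u_s(x)u_s(y)\mathop {}\!\textrm{d}y\mathop {}\!\textrm{d}x = -\frac{1}{2\pi }\left( \int _{\mathbb {R}^2}u_s(x)\mathop {}\!\textrm{d}x\right) ^2, \end{aligned}$$

which, since \(u_s\) has unit mass, is (after the factor \(\frac{1}{2}\) in the definition of V) the source of the constant \(-\frac{\chi }{4\pi }\) in the drift of (3.13) below.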

In order to prove our main result, we will need to manipulate the zeroth, first, and second moments of weak solutions. To do so, we define a family of radial cut-off functions \(\{\Psi _\varepsilon \}_{\varepsilon \in (0,1)}\), such that for some \(C>0\)

$$\begin{aligned} \Psi _\varepsilon (x) = {\left\{ \begin{array}{ll} 1, &{} \text { for } |x|< \varepsilon ^{-1},\\ 0, &{} \text { for } |x|>2\varepsilon ^{-1}, \end{array}\right. } \quad \Vert \nabla \Psi _\varepsilon \Vert _{L^\infty }\,\le \, C \varepsilon , \quad \Vert \nabla ^2 \Psi _\varepsilon \Vert _{L^\infty } \,\le \, C\varepsilon ^2. \end{aligned}$$
(3.7)

For any family of cut-off functions satisfying (3.7), it is straightforward to show that there exists a \(C>0\) such that

$$\begin{aligned} \begin{aligned} \sup _{\begin{array}{c} x\in \mathbb {R}^2\\ \varepsilon \in (0,1) \end{array}}|\nabla ^2(x\Psi _\varepsilon (x))|\,\,\le \, C, \quad \sup _{\begin{array}{c} x\in \mathbb {R}^2\\ \varepsilon \in (0,1) \end{array}}|\nabla ^2(|x|^2\Psi _\varepsilon (x))|\,\,\le \, C. \end{aligned} \end{aligned}$$
(3.8)

Note also that, since \(\text {supp}(\nabla \Psi _\varepsilon ) = \text {supp}(\Delta \Psi _\varepsilon ) = B_{2\varepsilon ^{-1}}(0){\setminus } B_{\varepsilon ^{-1}}(0)\), we have

$$\begin{aligned} \Vert \nabla \Psi _\varepsilon \Vert _{L^2}\,\le \, C \varepsilon ^{1/2} \quad \text { and } \quad \Vert \Delta \Psi _\varepsilon \Vert _{L^2} \,\le \, C \varepsilon ^{3/2}. \end{aligned}$$
(3.9)

We start with sign and mass preservation.

Proposition 3.4

Let \((u,\bar{T})\) be a weak solution to (1.1). If \(u_0\ge 0\), then \(\mathbb {P}\)-a.s.

  1. (i)

    \(u_t\ge 0\) for all \(t\in [0,\bar{T})\),

  2. (ii)

    \(\Vert u_t\Vert _{L^1}= \Vert u_0\Vert _{L^1} =1\) for all \(t\in [0,\bar{T})\).

Proof

Let us define

$$\begin{aligned} S[u_t]= \int u_t(x) u^-_t(x)\,\mathop {}\!\textrm{d}x = \Vert u_t^-\Vert ^2_{L^2}, \end{aligned}$$

on \(L^2(\mathbb {R}^2)\), where \(u_t^- = u_t \mathbbm {1}_{\{u_t<0\}}\). The computations below can be properly justified by first defining an \(H^1\) approximation of the indicator function, obtaining bounds which are uniform in the approximation parameter using that \(u\in H^1\), and then passing to the limit using dominated convergence. For ease of exposition, we work directly with \(S[u_t]\), keeping these considerations in mind, so that the following calculations should be understood only formally.

Applying (2.5) gives

$$\begin{aligned} \begin{aligned} S[u_t]&= S[u_0] + 2\int _0^t \int _{\{u_s<0\}} \nabla u_s(x)\cdot \left( -\nabla u_s(x) +\chi \nabla c_s(x)u_s(x) \right) \,\mathop {}\!\textrm{d}x\,\mathop {}\!\textrm{d}s \\&\quad + \sqrt{2\gamma }\sum _{k=1}^\infty \int _0^t \int _{\{ u_s<0 \}}u_s(x)\nabla \cdot (\sigma _ku_s(x))\,\mathop {}\!\textrm{d}x \circ d W^k_s. \end{aligned} \end{aligned}$$
(3.10)

Regarding the stochastic integral term, using that \(\nabla \cdot \sigma _k =0\), \(u_s \big |_{\partial \{u_s<0 \}} =0\) and integrating by parts, we have

$$\begin{aligned} \int _{\{ u_s<0 \}}u_s(x)\nabla \cdot (\sigma _ku_s(x))\,\mathop {}\!\textrm{d}x =-\frac{1}{2}\int _{\{ u_s<0 \}}\nabla \cdot (u^2_s(x) \sigma _k) \,\mathop {}\!\textrm{d}x =0. \end{aligned}$$

Regarding the finite variation integral,

$$\begin{aligned}&\int _0^t \int _{\{u_s<0\}} \nabla u_s(x)\cdot (-\nabla u_s(x) +\chi \nabla c_s(x)u_s(x) )\,\mathop {}\!\textrm{d}x\,\mathop {}\!\textrm{d}s\\&=-\int _0^t\int _{\{u_s<0\}} |\nabla u_s(x)|^2\mathop {}\!\textrm{d}x + \chi \int _0^t\int _{\{u_s<0\}} \nabla u_s(x)\cdot u_s(x)\nabla c_s(x)\mathop {}\!\textrm{d}x, \end{aligned}$$

we apply Young’s inequality in the second term, to give

$$\begin{aligned} \chi \int _{\{u_s<0\}} \nabla u_s(x)\cdot u_s(x)\nabla c_s(x)\mathop {}\!\textrm{d}x \,&\le \, \frac{\chi }{2\varepsilon } \int _{\{u_s<0\}} |\nabla u_s(x)|^2\mathop {}\!\textrm{d}x\\&\quad + \Vert \nabla c_s\Vert ^2_{L^\infty } \frac{\varepsilon \chi }{2}\int _{\{u_s<0\}} |u_s(x)|^2\mathop {}\!\textrm{d}x. \end{aligned}$$

So choosing \(\varepsilon = \chi \), we have

$$\begin{aligned}&-\int _{\{u_s<0\}} |\nabla u_s(x)|^2\mathop {}\!\textrm{d}x + \chi \int _{\{u_s<0\}} \nabla u_s(x)\cdot u_s(x)\nabla c_s(x)\mathop {}\!\textrm{d}x \\&\quad \,\le \, -\frac{1}{2}\int _{\{u_s<0\}} |\nabla u_s(x)|^2\mathop {}\!\textrm{d}x + \frac{\Vert \nabla c_s\Vert ^2_{ L^\infty }\chi ^2}{2} \int _{\{u_s<0\}} |u_s(x)|^2\mathop {}\!\textrm{d}x \end{aligned}$$

Putting all this together in (3.10) and using that \(\nabla u_s \in L^2(\mathbb {R}^2)\) for almost every \(s\in [0,\bar{T}]\),

$$\begin{aligned} S[u_t]&\,\le \, S[u_0] + \chi ^2 \int _0^t \Vert \nabla c_s\Vert ^2_{ L^\infty }S[u_s]\mathop {}\!\textrm{d}s. \end{aligned}$$

So, noting that by Lemma 3.1 and Definition 2.4 the map \(s\mapsto \Vert \nabla c_s\Vert ^2_{L^\infty }\) is \(\mathbb {P}\)-a.s. integrable on [0, T] for every \(T<\bar{T}\), and applying Grönwall’s inequality, we almost surely have

$$\begin{aligned} S[u_t] \,\le \, S[u_0] \exp \left( \chi ^2 \int _0^t \Vert \nabla c_s\Vert ^2_{ L^\infty }\mathop {}\!\textrm{d}s \right) . \end{aligned}$$

Since \(S[u_0] =0\), it follows that \(\mathbb {P}\)-a.s. \(S[u_t]=0\) for all \(t\in [0,\bar{T})\), which shows the first claim.

To show the second claim, for \(\varepsilon \in (0,1)\), we define \(M_\varepsilon [u_t]:= \int _{\mathbb {R}^2} \Psi _\varepsilon (x)u_t(x)\mathop {}\!\textrm{d}x\), where the cut-off functions \(\Psi _\varepsilon \) are given in (3.7). Using the Itô form (2.4) of the equation and integrating by parts where necessary, we see that

$$\begin{aligned} \begin{aligned} M_\varepsilon [u_t]&= M_\varepsilon [u_0] + (1+\gamma )\int _0^t \int _{\mathbb {R}^2} \Delta \Psi _\varepsilon (x) u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}s\\&\quad - \chi \int _0^t \int _{\mathbb {R}^2} \nabla \Psi _\varepsilon (x) \cdot \nabla c_s(x)u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}s\\&\quad - \sqrt{2\gamma } \sum _{k=1}^\infty \int _0^t \int _{\mathbb {R}^2} \nabla \Psi _\varepsilon (x) \cdot (u_s(x)\sigma _k(x))\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}W^k_s. \end{aligned} \end{aligned}$$
(3.11)

Applying the Cauchy–Schwarz inequality, using the fact that the Itô integral vanishes under the expectation, and in view of (3.9), there exists a \(C>0\) such that

$$\begin{aligned}&\mathbb {E}\left[ \int _{\mathbb {R}^2}\Psi _\varepsilon (x)u_t(x)\mathop {}\!\textrm{d}x\right] \,\le \, \int _{\mathbb {R}^2} \Psi _\varepsilon (x)u_0(x)\mathop {}\!\textrm{d}x \\&\quad +(1+\gamma ) C\varepsilon ^{3/2} \mathbb {E}\left[ \Vert u\Vert _{L^\infty _TL^2}\right] + \chi C\varepsilon ^{1/2}\mathbb {E}\left[ \Vert \nabla c\Vert _{L^2_TL^\infty }\Vert u\Vert _{L^\infty _TL^2}\right] . \end{aligned}$$

Applying Fatou’s lemma,

$$\begin{aligned} \mathbb {E}\left[ \int _{\mathbb {R}^2} u_t(x) \mathop {}\!\textrm{d}x\right]&\,\le \, \liminf _{\varepsilon \rightarrow 0} \mathbb {E}\left[ \int _{\mathbb {R}^2}\Psi _\varepsilon (x)u_t(x)\,\mathop {}\!\textrm{d}x\right] \,\le \, \int _{\mathbb {R}^2} u_0(x)\mathop {}\!\textrm{d}x. \end{aligned}$$

Hence, \(\int _{\mathbb {R}^2} u_t(x)\mathop {}\!\textrm{d}x<\infty \) \(\mathbb {P}\)-a.s. for every \(t\in [0,\bar{T})\). We may now apply dominated convergence to each term in (3.11). In particular, stochastic dominated convergence is used for the last term on the right-hand side. Thus, to obtain almost sure convergence all the limits should be taken up to a suitable subsequence. Finally, noting that \(\Delta \Psi _\varepsilon \) and \(\nabla \Psi _\varepsilon \) converge to zero pointwise almost everywhere, we conclude

$$\begin{aligned} M[u_t] =\int _{\mathbb {R}^2}u_t(x)\mathop {}\!\textrm{d}x = \int _{\mathbb {R}^2} u_0(x)\mathop {}\!\textrm{d}x. \end{aligned}$$

In combination with the first statement of the lemma, this proves the second claim. \(\square \)

The following corollary to Proposition 3.4 will be crucial to obtaining our central contradiction in the proof of Theorem 2.8.

Corollary 3.5

Let \((u,\bar{T})\) be a weak solution to (1.1). Then for any \(f:\mathbb {R}^2\rightarrow \mathbb {R}\) such that \(f>0\) Lebesgue almost everywhere and any \(t\in [0,\bar{T})\),

$$\begin{aligned} \int _{\mathbb {R}^2} f(x)u_t(x)\mathop {}\!\textrm{d}x >0 \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Proof

We first show that any weak solution must have support of positive Lebesgue measure. Let us fix an almost sure realization of the solution, choose any \(t\in [0,\bar{T})\) and assume, for a contradiction, that \(u_t(\omega )\) is supported on a set of zero measure. However, since \(\Vert u_t(\omega )\Vert _{L^1}=1\), we find that for any \(p> 1\),

$$\begin{aligned} 1= \int _{\mathbb {R}^2} u_t(x,\omega )\mathop {}\!\textrm{d}x \,\le \, \left( \int _{\mathbb {R}^2}|u_t(x,\omega )|^p\mathop {}\!\textrm{d}x \right) ^{\frac{1}{p}}\left( \int _{\text {supp}(u_t(\omega ))}1\,\mathop {}\!\textrm{d}x \right) ^{\frac{p-1}{p}} = 0, \end{aligned}$$

which is a contradiction. Since f is assumed to be strictly positive Lebesgue almost everywhere, the conclusion follows. \(\square \)

In the following proposition, we derive the evolution for the center of mass and the variance of a weak solution to (1.1).

Proposition 3.6

Let us assume that \(u_0\in L^2(\mathbb {R}^2)\cap \mathcal {P}(\mathbb {R}^2)\) is such that \(V[u_0] <\infty \). Then for any weak solution to (1.1) in the sense of Definition 2.4, \(\mathbb {P}\)-a.s. for any \(t\in [0,\bar{T})\),

$$\begin{aligned}&C[u_t]= -\sqrt{2\gamma } \sum _{k=1}^\infty \int _0^t \int _{\mathbb {R}^2} \sigma _k(x)u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}W^k_s, \end{aligned}$$
(3.12)
$$\begin{aligned} V[u_t]&=V[u_0]+ \left( 2 (1+\gamma )- \frac{\chi }{4\pi }\right) t - \frac{1}{2}|C[u_t]|^2\nonumber \\&\quad -\sqrt{2\gamma } \sum _{k\ge 1} \int _0^t \int _{\mathbb {R}^2} x \cdot \sigma _k(x)u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}W^k_s \end{aligned}$$
(3.13)

Proof

Without loss of generality, we may assume that \(C[u_0] = \int _{\mathbb {R}^2}x u_0(x)\mathop {}\!\textrm{d}x =0\). Indeed, given a non-centred initial condition \(\tilde{u}_0\) with \(C[\tilde{u}_0]=c\ne 0\) one may redefine \({C[u_t]:= \int _{\mathbb {R}^2}(x-c)u_t(x)\mathop {}\!\textrm{d}x}\) whose evolution along weak solutions to (1.1) will again, using the argument given below, satisfy the identity (3.12). The rest of our analysis therefore holds without further change.

Let \(p\in \{1,2\}\), where we use the convention that for \(p=2\), \(x^p:= |x|^2\). Since \(x^p\Psi _{\varepsilon }(x)\) is an \(H^1(\mathbb {R}^2)\) function (componentwise when \(p=1\)), we may apply (2.4) along with Remark 3.3 and integrate by parts where necessary to obtain

$$\begin{aligned}&\int x^p\Psi _\varepsilon (x)u_t(x)\mathop {}\!\textrm{d}x \nonumber \\&\quad = \int _{\mathbb {R}^2} x^p\Psi _\varepsilon (x)u_0(x)\mathop {}\!\textrm{d}x +(1+\gamma )\int _0^t\int _{\mathbb {R}^2} \Delta (x^p\Psi _\varepsilon (x))u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}s \nonumber \\&\qquad -\frac{\chi }{4\pi }\int _0^t \iint _{\mathbb {R}^4} \frac{(\nabla (x^p\Psi _\varepsilon (x))-\nabla (y^p\Psi _\varepsilon (y)))\cdot (x-y)}{|x-y|^2}u_s(x)u_s(y)\,\mathop {}\!\textrm{d}y\,\mathop {}\!\textrm{d}x \,\mathop {}\!\textrm{d}s \\&\qquad - \sqrt{2\gamma } \sum _{k\ge 1}\int _0^t \int _{\mathbb {R}^2} \nabla (x^p\Psi _\varepsilon (x))\cdot \sigma _k(x)u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}W^k_s. \nonumber \end{aligned}$$
(3.14)

From (3.8), it follows that uniformly across \(x\in \mathbb {R}^2\) and \(\varepsilon \in (0,1)\), \(\Delta (x^p\Psi _\varepsilon (x))\) is bounded and \(\nabla (x^p\Psi _\varepsilon (x))\) is Lipschitz continuous. Hence, using that \(\Vert u_t\Vert _{L^1}=1\) for all \(t\in [0,\bar{T}]\) there exists a \(C>0\) such that, for all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb {E}\left[ \int _{\mathbb {R}^2}x^p\Psi _\varepsilon (x)u_t(x)\,\mathop {}\!\textrm{d}x\right] \,\le \, \int _{\mathbb {R}^2} x^p \Psi _\varepsilon (x)u_0(x)\mathop {}\!\textrm{d}x + \left( (1+\gamma ) + \frac{\chi }{4\pi } \right) tC. \end{aligned}$$

Note that we may directly apply Lebesgue’s dominated convergence to the initial data term, since \(|x^p\Psi _\varepsilon (x)u_0(x)|\,\le \, |x^pu_0(x)|\) where the latter is assumed to be integrable. Now, let us for the moment take only \(p=2\). Applying Fatou’s lemma,

$$\begin{aligned} \mathbb {E}\left[ \int _{\mathbb {R}^2} |x|^2 u_t(x) \mathop {}\!\textrm{d}x\right]&\,\le \, \liminf _{\varepsilon \rightarrow 0} \mathbb {E}\left[ \int _{\mathbb {R}^2}|x|^2\Psi _\varepsilon (x)u_t(x)\,\mathop {}\!\textrm{d}x\right] \\&\,\le \, \int _{\mathbb {R}^2} |x|^2 u_0(x)\mathop {}\!\textrm{d}x + \left( (1+\gamma ) + \frac{\chi }{4\pi } \right) tC<\infty . \end{aligned}$$

Hence, \(\int _{\mathbb {R}^2} |x|^2 u_t(x)\mathop {}\!\textrm{d}x<\infty \) \(\mathbb {P}\)-a.s. From Proposition 3.4, \(u_t\) is a probability measure on \(\mathbb {R}^2\), so we have the bound

$$\begin{aligned} \int _{\mathbb {R}^2} |x| u_t(x)\mathop {}\!\textrm{d}x \,\le \, \left( \int _{\mathbb {R}^2} |x|^2 u_t(x)\mathop {}\!\textrm{d}x\right) ^{1/2}. \end{aligned}$$

It follows that for \(p\in \{1,2\}\), \(\int _{\mathbb {R}^2} x^p u_t(x)\mathop {}\!\textrm{d}x<\infty \) \(\mathbb {P}\)-a.s. Since by definition we also have,

$$\begin{aligned} |x^p\Psi _\varepsilon (x)u_t(x)| \,\le \, |x^pu_t(x)|, \end{aligned}$$

as in the proof of Proposition 3.4, we may apply dominated convergence in each integral of (3.14). Using that for \(p\in \{1,2\}\) and Lebesgue almost every \(x,\,y\in \mathbb {R}^2\)

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \Delta (x^p\Psi _\varepsilon (x))&= 0, \\ \lim _{\varepsilon \rightarrow 0} \nabla (x^p\Psi _\varepsilon (x)-y^p\Psi _{\varepsilon }(y))&= {\left\{ \begin{array}{ll} 0, &{} \text { if } p= 1, \\ 2(x-y), &{} \text { if } p=2, \end{array}\right. } \\ \lim _{\varepsilon \rightarrow 0} \nabla (x^p\Psi _\varepsilon (x))&={\left\{ \begin{array}{ll} \textrm{Id},&{}\text { if } p=1, \\ 2x, &{} \text { if } p=2. \end{array}\right. } \end{aligned}$$

we directly find the claimed identities for \(C[u_t]\) and \(\frac{1}{2}\int _{\mathbb {R}^2}|x|^2u_t(x)\mathop {}\!\textrm{d}x\). To conclude it only remains to note that

$$\begin{aligned} V[u_t] = \frac{1}{2} \int _{\mathbb {R}^2} |x|^2u_t(x)\mathop {}\!\textrm{d}x - \frac{1}{2}|C[u_t]|^2. \end{aligned}$$

\(\square \)

4 Proof of Theorem 2.8

We will prove both statements by demonstrating that in each case the a priori properties of any weak solution proved in Proposition 3.4 will be violated at some finite time, either almost surely or with positive probability. Furthermore, we will make use of the identities shown in Proposition 3.6. Notice that our proofs of both of these propositions rely heavily on assumptions (H1)–(H3).

To prove (i), let us assume, for a contradiction, that given \(u_0 \in \mathcal {P}_2(\mathbb {R}^2)\cap L^2(\mathbb {R}^2)\) and an associated weak solution \((u,\bar{T})\), it holds that \(\mathbb {P}(\bar{T}=\infty )>0\), and let us choose any \(\omega \in \{\bar{T}=\infty \}\). We may in addition assume that \(\omega \) is a member of the full measure set on which the solution lies in \(C_TL^2(\mathbb {R}^2)\cap L^2_TH^1(\mathbb {R}^2)\) for any \(T>0\) and on which the Itô integral is well defined. Applying (3.13) of Proposition 3.6, for any \(0<t<\infty \) and the above \(\omega \), we have that

$$\begin{aligned} \begin{aligned} V[u_t](\omega ) =&V[u_0] + \left( 2(1+\gamma ) - \frac{\chi }{4\pi }\right) t - \frac{1}{2}\left| C[u_t]\right| ^2(\omega )\\&- \sqrt{2\gamma }\sum _{k\ge 1} \left( \int _0^t \int _{\mathbb {R}^2} x\cdot \sigma _k(x)u_s(x) \mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}W_s^k\right) (\omega ). \end{aligned} \end{aligned}$$
(4.1)

Now, since the stochastic integral is by definition a local martingale, by the Dambis–Dubins–Schwarz theorem [34, Ch. V, Thm. 1.6], there exists a random time change \(t\mapsto q(t)\), with q(t) being the quadratic variation of the local martingale at time t, and a real-valued Brownian motion B such that for all t in the range of q,

$$\begin{aligned} V[u_t](\omega )=\, V[u_0] + \left( 2(1+\gamma ) -\frac{\chi }{4\pi }\right) t -\frac{1}{2}|C[u_t]|^2(\omega )-\sqrt{2\gamma } B_{q(t)(\omega )}(\omega ). \end{aligned}$$

Either the range of q is \([0,\infty )\) or q is bounded. If \(\omega \) is such that the first case holds, then there exists a \(T>0\) such that \(\sqrt{2\gamma }B_{q(T)(\omega )}(\omega )=V[u_0]\), at which point, since by assumption \(\left( 2(\gamma +1)-\frac{\chi }{4\pi }\right) T-\frac{1}{2}|C[u_T]|^2(\omega )<0\) for any \(T>0\), we have

$$\begin{aligned} V[u_T](\omega )<0. \end{aligned}$$

The latter is in contradiction with Corollary 3.5. Alternatively, if \(\omega \) is such that \(t\mapsto q(t)(\omega )\) is bounded, then there exists a \(B_\infty (\omega )\in \mathbb {R}\) such that \(\lim _{t\nearrow \infty }B_{q(t)(\omega )}(\omega ) = B_\infty (\omega )\), see [34, Ch. V, Prop. 1.8]. In this case, for all \(t>0\),

$$\begin{aligned} V[u_t](\omega )&\,\le \, V[u_0] +\left( 2(1+\gamma ) -\frac{\chi }{4\pi }\right) t\\&\quad -\frac{1}{2}|C[u_t]|^2(\omega )+\sqrt{2\gamma }\left( |B_\infty |(\omega ) + |B_\infty -B_{q(t)}|(\omega )\right) . \end{aligned}$$

Since the final term vanishes as \(t\rightarrow \infty \) and using again the fact that \(2(1+\gamma ) -\frac{\chi }{4\pi }<0\), there exists a \(T>0\) sufficiently large such that once more

$$\begin{aligned} V[u_T](\omega ) <0. \end{aligned}$$

Again this contradicts Corollary 3.5 and as such our initial assumption that \(\mathbb {P}(\bar{T}=\infty )>0\) must be false. Hence, \(\mathbb {P}(\bar{T}<\infty )=1\).

Finally, taking the expectation on both sides of (4.1) we see that for

$$\begin{aligned} t\ge T^*_1:= \frac{4\pi V[u_0]}{\chi -(1+\gamma )8\pi } \end{aligned}$$

one has

$$\begin{aligned} \mathbb {E}[V[u_t]] \,\le \, 0. \end{aligned}$$
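
Indeed, the stochastic integral in (4.1) is a local martingale which, at least formally, has zero expectation, while \(\frac{1}{2}\mathbb {E}\left[ |C[u_t]|^2\right] \ge 0\), so that

$$\begin{aligned} \mathbb {E}[V[u_t]] \,\le \, V[u_0] + \left( 2(1+\gamma )-\frac{\chi }{4\pi }\right) t = V[u_0] - \frac{\chi -(1+\gamma )8\pi }{4\pi }\,t, \end{aligned}$$

and the right-hand side is non-positive precisely when \(t\ge T^*_1\).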

Since \(V[u_t]\) is non-negative, we must in addition have \(\mathbb {P}(\bar{T}< T^*_1) =1\).

To prove (ii), let us instead assume, for a contradiction, that for the given initial data the associated weak solution satisfies \(\mathbb {P}(\bar{T}=\infty )=1\). Now, taking expectations on both sides of (3.13), we have

$$\begin{aligned} \mathbb {E}\left[ V[u_t]\right]&= V[u_0] +\left( 2(1+\gamma ) -\frac{\chi }{4\pi }\right) t - \frac{1}{2}\mathbb {E}\left[ \left| C[u_t]\right| ^2\right] . \end{aligned}$$
(4.2)

Using (3.12) and Itô’s isometry, we see that

$$\begin{aligned} \mathbb {E}\left[ \left| C[u_t]\right| ^2\right]&= 2\gamma \sum _{k=1}^\infty \int _0^t\mathbb {E}\left[ \left| \int \sigma _k(x) u_s(x)\mathop {}\!\textrm{d}x \right| ^2\right] \mathop {}\!\textrm{d}s \nonumber \\&=2\gamma \int _0^t\mathbb {E}\left[ \left( \iint \sum _{k=1}^\infty \sigma _k(x)\cdot \sigma _k(y) u_s(y) u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y \right) \right] \mathop {}\!\textrm{d}s \end{aligned}$$
(4.3)

where we exchanged summation and expectation using dominated convergence and recalling that \(u_s\in \mathcal {P}(\mathbb {R}^2)\) \(\mathbb {P}\)-a.s. for all \(s\in [0,T]\) and applying (H1).

Now the game is to estimate \(\sum _{k=1}^\infty \sigma _k(x)\cdot \sigma _k(y)\) from below, remembering that \(u_t\ge 0\) \(\mathbb {P}\)-a.s. for all \(t\in [0,T]\). From Remark 2.1,

$$\begin{aligned} \sum _{k=1}^\infty |\sigma _k(x)-\sigma _k(y)|^2 \,\le \, C_{\varvec{\sigma }}|x-y|^2,\quad \text { for all }x,\,y \in \mathbb {R}^2. \end{aligned}$$

As a direct consequence, and in view of (H3)-(b),

$$\begin{aligned} C_{\varvec{\sigma }} |x-y|^2 \ge \sum _{k=1}^\infty |\sigma _k(x)-\sigma _k(y)|^2&= \sum _{k=1}^\infty [|\sigma _k(x)|^2 -2\sigma _k(x)\cdot \sigma _k(y)+ |\sigma _k(y)|^2] \\&= 2\text {Tr}(Q(0))-2\sum _{k=1}^\infty \sigma _k(x)\cdot \sigma _k(y)\\&= 4 -2\sum _{k=1}^\infty \sigma _k(x)\cdot \sigma _k(y). \end{aligned}$$

Rearranging the above inequality gives

$$\begin{aligned} \sum _{k=1}^\infty \sigma _k(x)\cdot \sigma _k(y) \ge 2 - \frac{1}{2}C_{\varvec{\sigma }}|x-y|^2. \end{aligned}$$
(4.4)

Plugging (4.4) into (4.3) and using the fact that \(u_t\in \mathcal {P}(\mathbb {R}^2)\) \(\mathbb {P}\)-a.s. for all \(t\in [0,T]\), we find

$$\begin{aligned} \mathbb {E}[|C[u_t]|^2]&\ge 4\gamma \int _0^t \mathbb {E}\left[ \iint u_s(x)u_s(y)\mathop {}\!\textrm{d}x\mathop {}\!\textrm{d}y \right] \mathop {}\!\textrm{d}s \\&\quad - C_{\varvec{\sigma }} \gamma \int _0^t \mathbb {E}\left[ \iint |x-y|^2 u_s(x)u_s(y)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y \right] \mathop {}\!\textrm{d}s\\&= 4\gamma t - C_{\varvec{\sigma }} \gamma \int _0^t \mathbb {E}\left[ \iint |x-y|^2 u_s(x)u_s(y)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y \right] \mathop {}\!\textrm{d}s. \end{aligned}$$

The integrand in the second term can easily be rewritten as

$$\begin{aligned} \iint |x-y|^2 u_s(y)u_s(x)\mathop {}\!\textrm{d}x \mathop {}\!\textrm{d}y&= 4V[u_s]. \end{aligned}$$

Thus, we establish the lower bound

$$\begin{aligned} \begin{aligned} \mathbb {E}[|C[u_t]|^2] \ge 4\gamma t - 4\gamma C_{\varvec{\sigma }} \int _0^t\mathbb {E}[V[u_s]]\,\mathop {}\!\textrm{d}s. \end{aligned} \end{aligned}$$
(4.5)

Therefore, inserting (4.5) into (4.2), and noting that the \(-2\gamma t\) contribution from (4.5) cancels the extra \(2\gamma t\) in the drift of (4.2), we find

$$\begin{aligned} \mathbb {E}[ V[u_t]]&\,\le \, V[u_0] + \left( 2-\frac{\chi }{4\pi }\right) t + 2\gamma C_{\varvec{\sigma }} \int _0^t \mathbb {E}[V[u_s]]\,\mathop {}\!\textrm{d}s. \end{aligned}$$

Applying Grönwall's lemma,

$$\begin{aligned} \mathbb {E}[V[u_t]]&\,\le \, V[u_0] + \left( 2-\frac{\chi }{4\pi }\right) t +2\gamma C_{\varvec{\sigma }} \int _0^t \left( V[u_0] + \left( 2-\frac{\chi }{4\pi }\right) s\right) e^{(t-s)2\gamma C_{\varvec{\sigma }} }\,\mathop {}\!\textrm{d}s. \end{aligned}$$

Evaluating the integrals,

$$\begin{aligned} \mathbb {E}[V[u_t]] \,\le \, \left( V[u_0] - \frac{1}{2\gamma C_{\varvec{\sigma }}} \left( \frac{\chi }{4\pi }-2\right) \right) e^{2\gamma C_{\varvec{\sigma }} t} +\frac{1}{2\gamma C_{\varvec{\sigma }}}\left( \frac{\chi }{4\pi }-2\right) , \end{aligned}$$

which implies that if \(\chi >8\pi \left( 1+\gamma C_{\varvec{\sigma }} V[u_0]\right) \) and \(t\ge T^*_2\), for

$$\begin{aligned} T^*_2:= \frac{1}{2\gamma C_{\varvec{\sigma }}} \left( \log (\chi -8\pi ) - \log \left( \chi -V[u_0]8\pi \gamma C_{\varvec{\sigma }} -8\pi \right) \right) , \end{aligned}$$

then

$$\begin{aligned} \mathbb {E}[V[u_t]]\,\le \, 0. \end{aligned}$$
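
Indeed, under (2.7) the prefactor \(V[u_0] - \frac{1}{2\gamma C_{\varvec{\sigma }}}\left( \frac{\chi }{4\pi }-2\right) \) in the exponential bound above is strictly negative, and that bound is non-positive exactly when

$$\begin{aligned} e^{2\gamma C_{\varvec{\sigma }}t} \ge \frac{\chi -8\pi }{\chi -V[u_0]8\pi \gamma C_{\varvec{\sigma }} -8\pi }, \end{aligned}$$

that is, precisely for \(t\ge T^*_2\).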

This is again in contradiction with Corollary 3.5, which shows \(\mathbb {P}\)-a.s. positivity of \(V[u_t]\) for u a weak solution to (1.1). Hence, our initial assumption must have been false, and so for any weak solution the probability of global existence must be strictly less than 1. Furthermore, using again the fact that \(V[u_t]\) is almost surely non-negative, we must in fact have \(\mathbb {P}(\bar{T}< T^*_2)>0\). \(\square \)