1 Introduction

Observability and null-controllability results for (non-)autonomous Cauchy problems are particularly relevant in the control theory of partial differential equations and have recently attracted a lot of attention in the literature. Here, the most common approach towards final-state observability is a so-called Lebeau-Robbiano strategy, which combines a suitable uncertainty principle with a corresponding dissipation estimate for the evolution family describing the system, see (essUCP) and (DE) below, respectively. Certain null-controllability results can then be inferred from final-state observability via a standard duality argument, see, e.g., [4] for more information and also [9] for a holistic overview of duality theory for control systems.

Such a Lebeau-Robbiano strategy has been considered, for instance, in [1, 2, 4, 8, 10, 11, 13, 14], see also [5] for a review of other related results in this context. The two most general results in this direction so far are [4, Theorem 3.3] and [1, Theorem 13], each highlighting different aspects and exhibiting certain advantages and disadvantages over the other, both with regard to the hypotheses and the asserted conclusion, see the discussion below. The aim of the present work is to present a unified extension of both mentioned results, taking the best of each, thus making it possible to apply the Lebeau-Robbiano strategy to a broader range of observation problems and, at the same time, providing a streamlined proof.

2 Lebeau-Robbiano strategy for non-autonomous observation problems

For the reader’s convenience, let us fix the following notational setup.

Hypothesis 2.1

Let X and Y be Banach spaces, \(T>0\), \(E \subseteq [0,T]\) be measurable with positive Lebesgue measure, and \((U(t,s))_{0\le s\le t \le T}\) be an exponentially bounded evolution family on X. Let \(C:[0,T]\rightarrow {\mathcal {L}}(X,Y)\) be essentially bounded on E such that \([0,T] \ni t\mapsto \Vert C(t) U(t,0) x_0\Vert _Y\) is measurable for all \(x_0\in X\).

Here, we denote by \({\mathcal {L}}(X,Y)\) the space of bounded operators from X to Y. Also recall that \((U(t,s))_{0\le s\le t \le T} \subseteq {\mathcal {L}}(X) {:}{=}{\mathcal {L}}(X,X)\) is called an evolution family of bounded operators on X if

$$\begin{aligned} U(s,s) = {{\,\textrm{Id}\,}}\quad \text { and }\quad U(t,s)U(s,r) = U(t,r) \quad \text { for }\ 0\le r\le s \le t \le T. \end{aligned}$$
(2.1)

It is called exponentially bounded if there exist \(M \ge 1\) and \(\omega \in {\mathbb {R}}\) such that for all \(0 \le s \le t \le T,\) we have the bound \(\Vert U(t,s) \Vert _{{\mathcal {L}}(X)} \le M \mathrm e^{\omega (t - s)}\).
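As a minimal sketch of these two defining properties, consider the (assumed, purely illustrative) scalar non-autonomous problem \(u'(t) = \sin (t)\, u(t)\), whose solution operators \(U(t,s) = \exp (\cos s - \cos t)\) form an exponentially bounded evolution family; both the composition law (2.1) and the exponential bound can be checked directly:

```python
import math

# Toy scalar example (assumed, not from the text): u'(t) = sin(t) u(t).
# Its solution operators U(t, s) = exp(∫_s^t sin(r) dr) = exp(cos(s) - cos(t))
# form an evolution family in the sense of (2.1).

def U(t, s):
    return math.exp(math.cos(s) - math.cos(t))

r, s, t = 0.3, 0.9, 1.7
assert abs(U(s, s) - 1.0) < 1e-12                 # U(s, s) = Id
assert abs(U(t, s) * U(s, r) - U(t, r)) < 1e-12   # U(t, s) U(s, r) = U(t, r)

# Exponential boundedness: cos(s) - cos(t) <= t - s for t >= s, hence
# |U(t, s)| <= M e^{omega (t - s)} with M = 1 and omega = 1.
assert U(t, s) <= math.exp(t - s) + 1e-12
print("evolution family laws (2.1) and exponential bound verified")
```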

Evolution families are often used to describe the evolution of non-autonomous Cauchy problems, see, e.g., [4, Section 2] and the references cited therein. The family \((C(t))_{t\in [0,T]}\) in the mapping \(t\mapsto \Vert C(t) U(t,0) x_0\Vert _Y\) can be understood as a family of observation operators through which the state of the system is observed at each time \(t \in [0,T]\). In the context of \(L^p\)-spaces, these are often chosen as operators of multiplication by characteristic functions of some (time-dependent) sensor sets, see, e.g., Example 2.5 below.

The following theorem now covers and extends all known previous results in this context, see the discussion below.

Theorem 2.2

Assume Hypothesis 2.1. Let \((P_\lambda )_{\lambda >0}\) be a family in \({\mathcal {L}}(X)\) such that for some constants \(d_0,d_1,\gamma _1 > 0,\) we have

$$\begin{aligned} \forall \lambda > 0, \forall x\in X :\ \Vert P_\lambda x\Vert _{X} \le d_0 \mathrm e^{d_1 \lambda ^{\gamma _1}} \mathop {\mathrm {ess\,inf}}\limits \bigl \{ E \ni \tau \mapsto \Vert C(\tau ) P_\lambda x \Vert _{Y} \bigr \}. \end{aligned}$$
(essUCP)

Suppose also that for some constants \(d_2\ge 1\), \(d_3,\gamma _2,\gamma _3 >0\) with \(\gamma _2 > \gamma _1\) and \(\gamma _4 \ge 0,\) we have

$$\begin{aligned}&\forall \lambda > 0, \forall 0\le s < t\le T, \forall x\in X :\\&\Vert (\textrm{Id}-P_\lambda ) U(t,s) x \Vert _{X} \le d_2 \max \bigl \{ 1, (t - s)^{-\gamma _4} \bigr \} \mathrm e^{-d_3 \lambda ^{\gamma _2} (t-s)^{\gamma _3}} \Vert x\Vert _{X}. \end{aligned}$$
(DE)

Then, there exists a constant \(C_{\textrm{obs}} > 0\) such that for each \(r \in [1,\infty ]\) and all \(x_0\in X,\) we have the final-state observability estimate

$$\begin{aligned} \Vert U(T,0) x_0 \Vert _{X} \le C_\textrm{obs}{\left\{ \begin{array}{ll} \left( \int _E \Vert C(t)U(t,0) x_0 \Vert _Y^r \,{{\text {d}}\!{t}} \right) ^{1/r}, &{} r < \infty ,\\ \mathop {\mathrm {ess\,sup}}\limits _{t\in E} \Vert C(t)U(t,0)x_0 \Vert _{Y}, &{} r=\infty . \end{array}\right. } \end{aligned}$$
(OBS)

Moreover, if for some interval \((\tau _1,\tau _2) \subseteq [0,T]\) with \(\tau _1 < \tau _2\) we have \(|(\tau _1,\tau _2) \cap E| = \tau _2 - \tau _1\), then, depending on the value of r, the constant \(C_{\textrm{obs}}\) can be bounded as

$$\begin{aligned} C_\textrm{obs}\le \frac{C_1}{(\tau _2-\tau _1)^{1/r}} \exp \Biggl ( \frac{C_2}{(\tau _2-\tau _1)^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} + C_3T \Biggr ), \end{aligned}$$
(2.2)

with the usual convention \(1/r = 0\) if \(r=\infty \), where \(C_1,C_2,C_3 > 0\) are constants not depending on r, T, E, \(\tau _1\), or \(\tau _2\).

The above theorem represents the established Lebeau-Robbiano strategy, in which an uncertainty principle (essUCP), here with respect to the given family \((P_\lambda )_{\lambda >0}\) and uniform only on the subset \(E \subseteq [0,T]\), and a corresponding dissipation estimate in the form (DE) are used as an input; it should be emphasized that the requirement \(\gamma _1 < \gamma _2\) is essential here. The output in the form of (OBS) then constitutes a so-called final-state observability estimate for the evolution family \((U(t,s))_{0\le s\le t \le T} \subseteq {\mathcal {L}}(X)\) with respect to the family \((C(t))_{t\in [0,T]}\) of observation operators. The corresponding constant \(C_\textrm{obs}\) in (OBS) is called the observability constant. An explicit form of the constants \(C_1,C_2,C_3\) in (2.2) is given in Remark 3.1 below for easier reference.
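To make the dissipation input (DE) concrete, it can be verified numerically in a finite-dimensional toy model (assumed here purely for illustration, not taken from the results above): a discrete heat flow on finitely many Fourier modes with spectral cutoffs \(P_\lambda \), for which (DE) holds with \(d_2 = 1\), \(\gamma _2 = 2\), \(\gamma _3 = 1\), and \(\gamma _4 = 0\).

```python
import math
import random

# Toy model (assumed): X = R^n with "Fourier modes" k = 1..n, discrete heat
# flow U(t, s) x = (e^{-k^2 (t-s)} x_k)_k, and spectral cutoff
# P_lam x = (x_k if k <= lam else 0)_k.  Then (DE) holds with
# d2 = 1, gamma2 = 2, gamma3 = 1, gamma4 = 0:
#     ||(Id - P_lam) U(t, s) x|| <= e^{-lam^2 (t - s)} ||x||.
n = 50
norm = lambda x: math.sqrt(sum(v * v for v in x))

def U(t, s, x):
    return [math.exp(-(k + 1) ** 2 * (t - s)) * v for k, v in enumerate(x)]

def P(lam, x):
    return [v if k + 1 <= lam else 0.0 for k, v in enumerate(x)]

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(n)]
    s, t = sorted(random.uniform(0, 1) for _ in range(2))
    lam = random.uniform(0, n)
    if t == s:
        continue
    y = U(t, s, x)
    tail = [a - b for a, b in zip(y, P(lam, y))]   # (Id - P_lam) U(t, s) x
    assert norm(tail) <= math.exp(-lam ** 2 * (t - s)) * norm(x) + 1e-12
print("dissipation estimate (DE) verified on the toy model")
```

The same skeleton could be reused to probe (essUCP) numerically once an observation operator \(C(t)\) is fixed.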

2.1 Discussion and extensions

We first comment on two minor extensions of Theorem 2.2.

Remark 2.3

(1) It becomes clear from the proof, see (3.7) below, that instead of the polynomial blow-up in the dissipation estimate (DE) for small differences \(t-s\) one can also allow a certain (sub-)exponential blow-up. More precisely, one may replace the term \(\max \{1,(t-s)^{-\gamma _4}\}\) in estimate (DE) by a factor of the form \(\exp (c(t-s)^{-\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}})\) with some constant \(c > 0\).

(2) If \(|(\tau _1,\tau _2) \cap E| = \tau _2 - \tau _1\) for some interval \((\tau _1,\tau _2) \subseteq [0,T]\) with \(\tau _1 < \tau _2\), then the dissipation estimate (DE) is actually needed only for \(\tau _1< s < t \le \tau _2\), cf. part (1) of Remark 3.2 below.

Let us now compare Theorem 2.2 to earlier results in the literature.

Remark 2.4

(1) In the particular case where \([\tau _1,\tau _2] = [0,T]\), the bound on \(C_\textrm{obs}\) in (2.2) is completely consistent with all bounds obtained earlier for \(E = [0,T]\) in [3, 8, 14] in the autonomous case (i.e., \(C(t) \equiv C\) and the evolution family being actually a semigroup) and in [4] in the non-autonomous case, except perhaps for some minor differences in the explicit form of the constants \(C_1,C_2,C_3\) in Remark 3.1 below.

(2) Theorem 2.2 covers [4, Theorem 3.3], while allowing a polynomial blow-up for small differences \(t - s\) in the dissipation estimate (DE). Such a blow-up was first considered in [1, Theorem 13], but under much more restrictive assumptions, see item (4) below. Moreover, in contrast to [4, Theorem 3.3], Theorem 2.2 requires the uncertainty relation (essUCP) only on a subset of [0, T] of positive measure, instead of the whole interval [0, T], and thus allows far more general families of observation operators. These families also need to be uniformly bounded only on this measurable subset and not on the whole interval [0, T].

(3) The results from [3, Theorem A.1] and [8, Theorem 2.1] formulate a variant of our Theorem 2.2 in the autonomous case with \(E = [0,T]\), but assume (essUCP) and (DE) only for \(\lambda > \lambda ^*\) with some \(\lambda ^* \ge 0\). Our current formulation with the whole range \(\lambda > 0\), just as in [4, Theorem 3.3], is not actually more restrictive. Indeed, by a change of variable, one may then simply consider the family \((P_{\lambda +\lambda ^*})_{\lambda >0}\) instead, with a straightforward adaptation of the parameters \(d_0\) and \(d_1\) in (essUCP). In this sense, Theorem 2.2 completely covers [3, Theorem A.1] and [8, Theorem 2.1].

(4) Let \(\tau _1 \in [0,T)\) be such that \(| (\tau _1,T) \cap E | > 0\). By a change of variable, namely via considering \(C(\cdot + \tau _1)\) on \([0,T-\tau _1]\) and the evolution family \((U(t+\tau _1,s+\tau _1))_{0\le s\le t\le T-\tau _1}\), one may replace U(T, 0), U(t, 0), and E in (OBS) by \(U(T,\tau _1)\), \(U(t,\tau _1)\), and \((\tau _1,T) \cap E\), respectively; note that (essUCP) and (DE) then remain valid with the same constants. In this sense, Theorem 2.2 entirely covers [1, Theorem 13], while leaving the Hilbert space setting and not requiring strong continuity or contractivity of the evolution family. At the same time, our bound on \(C_\textrm{obs}\) in (2.2) contains an additional prefactor \(1/(\tau _2-\tau _1)^{1/r}\) in front of the exponential term, which significantly improves the asymptotics of the estimate as \(\tau _2-\tau _1\) (and thus also T) gets large. Such improved asymptotics have proved extremely useful in the past, for instance, when considering homogenization limits as in [14].

In order to support the above comparison, we briefly revisit [4, Theorem 4.8] in the following example.

Example 2.5

Let \({\mathfrak {a}}\) be a uniformly strongly elliptic polynomial of degree \(m \ge 2\) in \({\mathbb {R}}^d\) with coefficients \(a_\alpha \in L^\infty ([0,T])\), that is, \({\mathfrak {a}}:[0,T] \times {\mathbb {R}}^d \rightarrow {\mathbb {C}}\) with

$$\begin{aligned} {\mathfrak {a}}(t,\xi ) = \sum _{|\alpha |\le m} a_\alpha (t) (\mathrm i\xi )^\alpha ,\quad t \in [0,T],\ \xi \in {\mathbb {R}}^d, \end{aligned}$$

such that for some \(c > 0,\) we have

$$\begin{aligned} {\mathrm {Re\,}}\sum _{|\alpha | = m} a_\alpha (t) (\mathrm i\xi )^\alpha \ge c|\xi |^m,\quad t \in [0,T],\ \xi \in {\mathbb {R}}^d. \end{aligned}$$

Let \(p \in [1,\infty ]\). It was shown in [4, Theorem 4.4] that there is an exponentially bounded evolution family \((U_p(t,s))_{0\le s\le t\le T}\) in \(L^p({\mathbb {R}}^d)\) associated to \({\mathfrak {a}}\). Let \((\Omega (t))_{t\in [0,T]}\) be a family of measurable subsets of \({\mathbb {R}}^d\) such that the mapping \([0,T] \times {\mathbb {R}}^d \ni (t,x) \mapsto {\textbf{1}}_{\Omega (t)}(x)\) is measurable. Then, \(\Vert {\textbf{1}}_{\Omega (\cdot )}U_p(\cdot ,0)u_0 \Vert _{L^p({\mathbb {R}}^d)}\) is measurable on [0, T] for all \(u_0 \in L^p({\mathbb {R}}^d)\) by [4, Lemma 4.7], so that Hypothesis 2.1 with \(C(t) = {\textbf{1}}_{\Omega (t)}\) is satisfied for every choice of measurable \(E \subseteq [0,T]\) with positive measure. Moreover, a dissipation estimate as in (DE) with \(\gamma _2 = m\) (but without blow-up, i.e., \(\gamma _4 = 0\)) was established in the proof of [4, Theorem 4.8] with \(P_\lambda \) being suitable smooth frequency cutoffs. It remains to consider a corresponding essential uncertainty principle (essUCP).

Suppose that the family \((\Omega (t))_{t\in [0,T]}\) of subsets is uniformly thick on E in the sense that there are \(L,\rho > 0\) such that for all \(x \in {\mathbb {R}}^d\) and all \(t \in E,\) we have

$$\begin{aligned} | \Omega (t) \cap ((0,L)^d + x) | \ge \rho L^d. \end{aligned}$$
(2.3)

Following the proof of [4, Theorem 4.8], we see that an essential uncertainty principle as in (essUCP) holds with \(\gamma _1 = 1 < \gamma _2\), so that Theorem 2.2 can be applied. Here, the set E for which (2.3) needs to hold could be any measurable subset of [0, T] with positive measure, for instance, \(E = [0,T] \setminus {\mathbb {Q}}\) (satisfying \(|E| = T\)) or even some fractal set, say of Smith-Volterra-Cantor type. In particular, this allows for completely arbitrary choices of measurable \(\Omega (t)\) for \(t \notin E\), even \(\Omega (t) = \emptyset \). By contrast, these choices for \(\Omega (t)\) would not be allowed in [4, Theorem 4.8], where (2.3) is required to hold for all \(x \in {\mathbb {R}}^d\) and all \(t \in [0,T]\) and is thus much more restrictive on the choice of \((\Omega (t))_{t\in [0,T]}\).
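The thickness condition (2.3) can be illustrated numerically. A minimal sketch (assumed example, not from the text): the 1-periodic set \(\Omega = \bigcup _{j \in {\mathbb {Z}}} (j, j + 1/2)\) in \({\mathbb {R}}\) is uniformly thick with \(L = 1\) and \(\rho = 1/2\), since every unit interval \((x, x+1)\) meets \(\Omega \) in measure exactly 1/2.

```python
# Numerical check (assumed example) that the 1-periodic set
# Omega = union over j of (j, j + 1/2) satisfies the thickness bound (2.3)
# with d = 1, L = 1, rho = 1/2: |Omega ∩ (x, x + 1)| >= 1/2 for every x.
L, rho = 1.0, 0.5

def measure_in_window(x, num=100_000):
    # Riemann-sum approximation of |Omega ∩ (x, x + L)|,
    # using that y lies in Omega iff the fractional part of y is < 1/2.
    h = L / num
    return sum(h for i in range(num) if ((x + (i + 0.5) * h) % 1.0) < 0.5)

for x in [-3.7, -0.25, 0.0, 0.49, 1.5, 12.34]:
    m = measure_in_window(x)
    assert m >= rho * L - 1e-3, (x, m)
print("uniform thickness (2.3) verified for the sampled window positions")
```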

Remark 2.6

In the situation of Example 2.5 with \(p < \infty \), it was shown in [4, Theorem 4.10] that an observability estimate as in (OBS) can hold with \(r < \infty \) only if the family \((\Omega (t))_{t\in [0,T]}\) is mean thick in the sense that for some \(L,\rho > 0,\) we have

$$\begin{aligned} \frac{1}{T} \mathop \int \limits _0^T | \Omega (t) \cap ((0,L)^d + x) | \,{{\text {d}}\!{t}} \ge \rho L^d \quad \text { for all }\ x \in {\mathbb {R}}^d. \end{aligned}$$

It is easy to see that families which are uniformly thick on a subset of [0, T] of positive measure as in (2.3) are also mean thick in the above sense (with possibly different parameters), but the converse need not be true. A corresponding example in \({\mathbb {R}}\) is the family \((\Omega (t))_{t\in [0,T]}\) with \(\Omega (t) = (0,\infty )\) for \(t \le T/2\) and \(\Omega (t) = (-\infty ,0)\) for \(T/2 < t \le T\). It is as yet unclear whether such choices also lead to an observability estimate as in (OBS) or anything similar. In this sense, Example 2.5 and Theorem 2.2 still leave a gap between the necessary and sufficient conditions on the family \((\Omega (t))_{t\in [0,T]}\) towards final-state observability.
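The mean thickness of this half-line example can be checked by a short computation (illustrative sketch; the parameter values are assumptions): for every \(x\), the two intersection measures add up to exactly L, so the time average is L/2 and \(\rho = 1/2\) works.

```python
# Check (illustration) that Omega(t) = (0, inf) for t <= T/2 and
# Omega(t) = (-inf, 0) for T/2 < t <= T is mean thick with L = 1, rho = 1/2:
# for every x, the time average of |Omega(t) ∩ (x, x + L)| equals L/2.
T, L = 2.0, 1.0

def meas_pos(x):
    # |(0, inf) ∩ (x, x + L)|
    return min(L, max(0.0, x + L))

def meas_neg(x):
    # |(-inf, 0) ∩ (x, x + L)|
    return min(L, max(0.0, -x))

for x in [-5.0, -1.0, -0.5, -0.3, 0.0, 0.7, 4.2]:
    time_average = (meas_pos(x) * (T / 2) + meas_neg(x) * (T / 2)) / T
    assert abs(time_average - 0.5 * L) < 1e-12
print("mean thickness holds with rho = 1/2 for every sampled x")
```

Note that no single \(\Omega (t)\) here is thick for all \(x\) simultaneously, which is exactly why uniform thickness on a positive-measure set fails.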

3 Proof of Theorem 2.2

Our proof of Theorem 2.2 is a streamlined adaptation of earlier approaches, especially of that from [4] and its predecessors [8, 14], and of [1]. It avoids the interpolation argument in [4] and is thus much more direct and, at the same time, requires an uncertainty relation only on a measurable subset of [0, T] of positive measure.

Proof of Theorem 2.2

Let us fix \(x_0 \in X\). For \(0 \le t \le T,\) we abbreviate

$$\begin{aligned} F(t) {:}{=} \Vert U(t,0)x_0 \Vert _X,\quad G(t) {:}{=} \Vert C(t) U(t,0)x_0 \Vert _Y. \end{aligned}$$

By Hölder’s inequality, we clearly have

$$\begin{aligned} \Vert G(\cdot ) \Vert _{L^1(E)} \le |E|^{1-\frac{1}{r}} \Vert G(\cdot ) \Vert _{L^r(E)} \end{aligned}$$
(3.1)

with the usual convention \(1/\infty = 0\). Hence, estimate (OBS) for \(r > 1\) follows from the one for \(r = 1\) by multiplying the corresponding constant \(C_\textrm{obs}\) by \(|E|^{1-\frac{1}{r}} \le \max \{ 1, |E| \}\). It therefore suffices to show (OBS) for \(r = 1\), which in the new notation reads

$$\begin{aligned} F(T) \le C_\textrm{obs}\mathop \int \limits _E G(t) \,{{\text {d}}\!{t}}. \end{aligned}$$
(3.2)

Upon possibly removing from E a set of measure zero, we may assume without loss of generality that (essUCP) holds with \(\mathop {\mathrm {ess\,inf}}\limits \) replaced by \(\inf \) and that \(C(\cdot )\) is uniformly bounded on E. Let us then show that there exist constants \(c_1,c_2 > 0\) such that for all \(0 \le s < t \le T\) with \(t \in E\) and all \(\varepsilon \in (0,1),\) we have

$$\begin{aligned} F(t) \le c_1 \exp \biggl ( c_2 \Bigl ( \frac{1}{t-s} \Bigr )^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}} \biggr ) \bigl ( \varepsilon ^{-1} G(t) + \varepsilon F(s) \bigr ). \end{aligned}$$
(3.3)

To this end, let \(\varepsilon \in (0,1)\) and fix \(0 \le s < t \le T\) with \(t \in E\). For \(\lambda > 0,\) we introduce

$$\begin{aligned} F_\lambda {:}{=} \Vert P_\lambda U(t,0)x_0 \Vert _X,\quad F_\lambda ^\perp {:}{=} \Vert (\text {Id}- P_\lambda )U(t,0)x_0 \Vert _X, \end{aligned}$$

as well as

$$\begin{aligned} G_\lambda {:}{=} \Vert C(t)P_\lambda U(t,0)x_0 \Vert _Y,\quad G_\lambda ^\perp {:}{=} \Vert C(t)(\text {Id}- P_\lambda )U(t,0)x_0 \Vert _Y. \end{aligned}$$

The uncertainty relation (essUCP) and the uniform boundedness of \(C(\cdot )\) on E then give

$$\begin{aligned} F_\lambda \le d_0 \mathrm e^{d_1 \lambda ^{\gamma _1}} G_\lambda \quad \text { and }\quad G_\lambda ^\perp \le \Vert C(\cdot ) \Vert _{E,\infty } F_\lambda ^\perp \quad \text { for all }\ \lambda > 0, \end{aligned}$$

where \(\Vert C(\cdot )\Vert _{E,\infty } {:}{=} \sup _{\tau \in E} \Vert C(\tau )\Vert _{{\mathcal {L}}(X,Y)} < \infty \). Since by the triangle inequality \(F(t) \le F_\lambda + F_\lambda ^\perp \) and \(G_\lambda \le G(t) + G_\lambda ^\perp \), the latter implies that for all \(\lambda > 0\), we have

$$\begin{aligned} F(t)&\le d_0 \mathrm e^{d_1\lambda ^{\gamma _1}} \bigl ( G(t) + \Vert C(\cdot )\Vert _{E,\infty } F_\lambda ^\perp \bigr ) + F_\lambda ^\perp \\&\le \mathrm e^{d_1\lambda ^{\gamma _1}} \bigl ( d_0 G(t) + (d_0\Vert C(\cdot )\Vert _{E,\infty } + 1)F_\lambda ^\perp \bigr ). \end{aligned}$$
(3.4)

Now, writing \(U(t,0)x_0 = U(t,s)U(s,0)x_0\) by (2.1), we obtain from (DE) with \(x = U(s,0)x_0\) that

$$\begin{aligned} F_\lambda ^\perp \le d_2 \max \{ 1, (t-s)^{-\gamma _4} \} \mathrm e^{-d_3\lambda ^{\gamma _2}(t-s)^{\gamma _3}} F(s), \end{aligned}$$

and inserting this into the preceding estimate (3.4) yields for all \(\lambda > 0\) that

$$\begin{aligned} F(t)&\le c_1 \mathrm e^{d_1\lambda ^{\gamma _1}} \max \{ 1, (t-s)^{-\gamma _4} \} \bigl ( G(t) + \mathrm e^{-d_3\lambda ^{\gamma _2}(t-s)^{\gamma _3}} F(s) \bigr )\\&= c_1 \mathrm e^{f(\lambda )} \max \{ 1, (t-s)^{-\gamma _4} \} \bigl ( \mathrm e^{\frac{d_3}{2}\lambda ^{\gamma _2}(t-s)^{\gamma _3}}G(t) + \mathrm e^{-\frac{d_3}{2}\lambda ^{\gamma _2}(t-s)^{\gamma _3}} F(s) \bigr ) \end{aligned}$$
(3.5)

with

$$\begin{aligned} c_1 {:}{=}\max \{ d_0, (d_0\Vert C(\cdot )\Vert _{E,\infty }+1)d_2 \} \ge 1 \quad \text { and }\quad f(\lambda ) {:}{=}d_1\lambda ^{\gamma _1} - \frac{d_3}{2}\lambda ^{\gamma _2}(t-s)^{\gamma _3}. \end{aligned}$$
(3.6)

Let us maximize \(f(\lambda )\) with respect to \(\lambda \). In light of \(\gamma _2 > \gamma _1\) by hypothesis, a straightforward calculation reveals that f takes its maximal value on \((0,\infty )\) at the point

$$\begin{aligned} \lambda ^* {:}{=}\Bigl ( \frac{2d_1\gamma _1}{d_3\gamma _2} \Bigr )^{\frac{1}{\gamma _2-\gamma _1}} \Bigl ( \frac{1}{t-s} \Bigr )^{\frac{\gamma _3}{\gamma _2-\gamma _1}} > 0. \end{aligned}$$

Taking into account the relation \(\frac{\gamma _1}{\gamma _2-\gamma _1} + 1 = \frac{\gamma _2}{\gamma _2-\gamma _1}\), we observe that

$$\begin{aligned} \frac{d_3}{2} (\lambda ^*)^{\gamma _2} (t-s)^{\gamma _3}&= \frac{d_3}{2} \Bigl ( \frac{2d_1\gamma _1}{d_3\gamma _2} \Bigr )^{\frac{\gamma _2}{\gamma _2-\gamma _1}} \Bigl ( \frac{1}{t-s} \Bigr )^{\frac{\gamma _2\gamma _3}{\gamma _2-\gamma _1}-\gamma _3}\\&= \frac{d_1\gamma _1}{\gamma _2} \Bigl ( \frac{2d_1\gamma _1}{d_3\gamma _2} \Bigr )^{\frac{\gamma _1}{\gamma _2-\gamma _1}} \Bigl ( \frac{1}{t-s} \Bigr )^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}} = \frac{d_1\gamma _1}{\gamma _2} (\lambda ^*)^{\gamma _1} . \end{aligned}$$

We may therefore estimate \(f(\lambda )\) as

$$\begin{aligned} f(\lambda ) \le f(\lambda ^*) = d_1 \Bigl ( 1-\frac{\gamma _1}{\gamma _2} \Bigr ) (\lambda ^*)^{\gamma _1} = d_1 \Bigl ( 1-\frac{\gamma _1}{\gamma _2} \Bigr ) \Bigl ( \frac{2d_1\gamma _1}{d_3\gamma _2} \Bigr )^{\frac{\gamma _1}{\gamma _2-\gamma _1}} \Bigl ( \frac{1}{t-s} \Bigr )^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}. \end{aligned}$$
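This maximization can be sanity-checked numerically; the parameter values below are assumed purely for illustration and play the roles of \(d_1\), \(d_3\), \(\gamma _1\), \(\gamma _2\), \(\gamma _3\), and \(t - s\):

```python
# Check (assumed sample parameters) that lambda* from the text maximizes
# f(lam) = d1 * lam**g1 - (d3/2) * lam**g2 * (t - s)**g3 and that the closed
# form of f(lambda*) matches.
d1, d3, g1, g2, g3 = 1.3, 0.7, 1.0, 2.0, 1.0
ts = 0.25  # plays the role of t - s

f = lambda lam: d1 * lam ** g1 - 0.5 * d3 * lam ** g2 * ts ** g3
lam_star = (2 * d1 * g1 / (d3 * g2)) ** (1 / (g2 - g1)) * (1 / ts) ** (g3 / (g2 - g1))
f_star = d1 * (1 - g1 / g2) * lam_star ** g1   # closed form from the text

# A grid search confirms lambda* is the maximizer and the closed form is correct.
grid_max = max(f(0.001 * k) for k in range(1, 50_000))
assert grid_max <= f(lam_star) + 1e-9
assert abs(f(lam_star) - f_star) < 1e-9
print("f attains its maximum at lambda* with value", round(f_star, 6))
```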

Moreover, using the elementary bound \(\xi ^\alpha \le \mathrm e^{\alpha \xi }\) for \(\alpha ,\xi > 0\), we have

$$\begin{aligned} \max \{ 1, (t - s)^{-\gamma _4} \} \le \exp \biggl ( \frac{\gamma _4 (\gamma _2 - \gamma _1)}{\gamma _1\gamma _3} \Bigl (\frac{1}{t - s} \Bigr )^{\frac{\gamma _1\gamma _3}{\gamma _2 - \gamma _1}} \biggr ). \end{aligned}$$
(3.7)

Inserting this and the preceding bound on \(f(\lambda )\) into (3.5), we arrive for all \(\lambda > 0\) at

$$\begin{aligned} F(t) \le c_1 \exp \biggl ( c_2 \Bigl (\frac{1}{t-s}\Bigr )^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}} \biggr ) \Bigl ( \mathrm e^{\frac{d_3}{2} \lambda ^{\gamma _2}(t-s)^{\gamma _3}} G(t) + \mathrm e^{-\frac{d_3}{2} \lambda ^{\gamma _2}(t-s)^{\gamma _3}} F(s)\Bigr ) \end{aligned}$$

with

$$\begin{aligned} c_2 {:}{=}d_1 \Bigl (1-\frac{\gamma _1}{\gamma _2}\Bigr )\Bigl ( \frac{2 d_1\gamma _1}{d_3\gamma _2} \Bigr )^{\frac{\gamma _1}{\gamma _2-\gamma _1}} + \frac{\gamma _4 ( \gamma _2 - \gamma _1)}{\gamma _1 \gamma _3}. \end{aligned}$$
(3.8)

We finally choose \(\lambda > 0\) such that \(\varepsilon = \mathrm e^{-\frac{d_3}{2}\lambda ^{\gamma _2}(t-s)^{\gamma _3}}\), which shows that (3.3) is valid; note that indeed neither \(c_1\) nor \(c_2\) depend on s or t.

Let \(q {:}{=}\bigl (\frac{3}{4}\bigr )^{\frac{\gamma _2-\gamma _1}{\gamma _1\gamma _3}} < 1\), and choose by Lebesgue’s differentiation theorem a (right) density point \(\ell \in [0,T) \cap E\) of E in the sense of Appendix A. Proposition A.1 then guarantees that there is a strictly decreasing sequence \((\ell _m)_{m\in {\mathbb {N}}}\) in \((\ell ,T]\) of the form \(\ell _m = \ell + q^{m-1}(\ell _1-\ell )\), \(m \in {\mathbb {N}}\), satisfying

$$\begin{aligned} |(\ell _{m+1},\ell _m) \cap E| \ge \frac{\delta _m}{3},\quad m \in {\mathbb {N}}, \end{aligned}$$
(3.9)

where \(\delta _{m} {:}{=}\ell _m - \ell _{m+1}\), \(m \in {\mathbb {N}}\). It is also easy to see that

$$\begin{aligned} \delta _{m + 1} = q\delta _{m},\quad m \in {\mathbb {N}}. \end{aligned}$$
(3.10)
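The specific choice of q was made so that one geometric step in (3.10) converts a factor 4 into a 3 in the exponents appearing in the telescoping step below: with \(\theta = \frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}\) and \(q = (3/4)^{1/\theta }\), one has \(4/\delta _m^\theta = 3/\delta _{m+1}^\theta \). A quick numerical check with assumed sample exponents:

```python
# Check (illustrative exponents) of the identity behind the choice of q:
# with theta = g1*g3/(g2 - g1) and q = (3/4)**(1/theta), the geometric step
# delta_{m+1} = q * delta_m from (3.10) gives
#     4 / delta_m**theta == 3 / delta_{m+1}**theta,
# so exp(4*c4/delta_m**theta) == exp(3*c4/delta_{m+1}**theta) for any c4.
g1, g2, g3 = 1.0, 2.0, 1.0
theta = g1 * g3 / (g2 - g1)
q = (3 / 4) ** (1 / theta)

delta = 0.8
for _ in range(10):
    delta_next = q * delta
    assert abs(4 / delta ** theta - 3 / delta_next ** theta) < 1e-9
    delta = delta_next
print("geometric step identity verified along the sequence (delta_m)")
```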

Since the evolution family \((U(t,s))_{0\le s\le t\le T}\) is exponentially bounded by hypothesis, there exist \(M \ge 1\) and \(\omega \in {\mathbb {R}}\) such that

$$\begin{aligned} F(t) = \Vert U(t,s)U(s,0)x_0 \Vert _X \le M\mathrm e^{\omega (t-s)} F(s) \quad \text { for all }\ 0\le s\le t\le T. \end{aligned}$$

Setting \(\omega _+ {:}{=}\max \{\omega ,0\}\), this in particular implies for each \(m \in {\mathbb {N}}\) and all \(t \in (\ell _{m+1},\ell _m)\) that

$$\begin{aligned} F(\ell _m) \le M\mathrm e^{\omega (\ell _m-t)} F(t) \le M\mathrm e^{\omega _+\delta _1} F(t). \end{aligned}$$
(3.11)

Define

$$\begin{aligned} \xi _m {:}{=}\ell _{m+1} + \frac{\delta _m}{6} \in (\ell _{m+1},\ell _m),\quad m \in {\mathbb {N}}, \end{aligned}$$

which in light of (3.9) satisfies

$$\begin{aligned} | (\xi _m,\ell _m) \cap E |&= | (\ell _{m+1},\ell _m) \cap E | - | (\ell _{m+1},\xi _m) \cap E |\\&\ge \frac{\delta _m}{3} - (\xi _m - \ell _{m+1}) = \frac{\delta _m}{6} > 0. \end{aligned}$$
(3.12)

Combining (3.11) and (3.3) with \(s = \ell _{m+1}\), we obtain for all \(m \in {\mathbb {N}}\), \(t \in (\xi _m,\ell _m) \cap E\), and \(\varepsilon \in (0,1)\) that

$$\begin{aligned} F(\ell _m) \le c_3 \exp \Biggl ( \frac{c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) \bigl ( \varepsilon ^{-1} G(t) + \varepsilon F(\ell _{m+1}) \bigr ) \end{aligned}$$

with

$$\begin{aligned} c_3 {:}{=}Mc_1\mathrm e^{\omega _+\delta _1} \ge 1 \quad \text { and }\quad c_4 {:}{=}c_2 6^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}} > 0, \end{aligned}$$

where we have taken into account that \(t-\ell _{m+1} \ge \xi _m-\ell _{m+1} = \delta _m / 6\). With the particular choice

$$\begin{aligned} \varepsilon = c_3^{-1} q \exp \Biggl ( -\frac{2c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) \in (0,1), \end{aligned}$$

the latter turns into

$$\begin{aligned} F(\ell _m) \le c_3^2 q^{-1} \exp \Biggl ( \frac{3c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) G(t) + q \exp \Biggl ( -\frac{c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) F(\ell _{m+1}). \end{aligned}$$
(3.13)

Observing that

$$\begin{aligned} \exp \Biggl ( \frac{4c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) = \exp \Biggl ( \frac{3c_4}{\delta _{m+1}^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) \end{aligned}$$

by (3.10) and the choice of q, multiplying (3.13) by \(\delta _m\) and rearranging terms yields

$$\begin{aligned} \delta _m \exp \Biggl ( -\frac{3c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) F(\ell _m) - \delta _{m+1} \exp \Biggl ( -\frac{3c_4}{\delta _{m+1}^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) F(\ell _{m+1}) \le c_3^2 q^{-1} \delta _m G(t) \end{aligned}$$

for all \(t \in (\xi _m,\ell _m) \cap E\), \(m \in {\mathbb {N}}\). Taking into account (3.12), integrating the latter with respect to \(t \in (\xi _m,\ell _m) \cap E\) leads to

$$\begin{aligned} \delta _m \exp \Biggl ( -\frac{3c_4}{\delta _m^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) F(\ell _m)&- \delta _{m+1} \exp \Biggl ( - \frac{3c_4}{\delta _{m+1}^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) F(\ell _{m+1})\\&\le 6c_3^2 q^{-1} \mathop \int \limits _{\ell _{m+1}}^{\ell _m} {\textbf{1}}_E(t) G(t) \,{{\text {d}}\!{t}} \end{aligned}$$

for all \(m \in {\mathbb {N}}\). Note here that the exponential boundedness of the evolution family guarantees that the sequence \((F(\ell _m))_{m\in {\mathbb {N}}}\) is bounded. Since also \(\delta _m \rightarrow 0\) and \(\ell _m \rightarrow \ell \) as \(m \rightarrow \infty \), summing the last inequality over all \(m \in {\mathbb {N}}\) implies by a telescoping sum argument that

$$\begin{aligned} \delta _1 \exp \Biggl ( -\frac{3c_4}{\delta _1^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) F(\ell _1) \le 6c_3^2 q^{-1} \mathop \int \limits _{\ell }^{\ell _1} {\textbf{1}}_E(t) G(t) \,{{\text {d}}\!{t}} \le 6c_3^2 q^{-1} \mathop \int \limits _E G(t) \,{{\text {d}}\!{t}}, \end{aligned}$$

which can be rewritten as

$$\begin{aligned} F(\ell _1) \le 6c_3^2 q^{-1} \delta _1^{-1} \exp \Biggl ( \frac{3c_4}{\delta _1^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) \mathop \int \limits _E G(t) \,{{\text {d}}\!{t}}. \end{aligned}$$

Now, we have \(F(T) \le M\mathrm e^{\omega (T-\ell _1)}F(\ell _1)\) by using once more the exponential boundedness of the evolution family, which shows that (3.2) holds with

$$\begin{aligned} C_\textrm{obs}= 6c_3^2 q^{-1} \delta _1^{-1} \exp \Biggl ( \frac{3c_4}{\delta _1^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} \Biggr ) M\mathrm e^{\omega (T-\ell _1)}. \end{aligned}$$
(3.14)

Finally, suppose that \(| (\tau _1,\tau _2) \cap E | = \tau _2 - \tau _1\) for some interval \((\tau _1,\tau _2) \subseteq [0,T]\) with \(\tau _1 < \tau _2\). We may then simply choose \(\ell = \tau _1\) and \(\ell _1 = \tau _2\) in the above reasoning, leading to \(\delta _1 = (1-q)(\tau _2-\tau _1)\). For \(r \ge 1\), in light of (3.1) with E replaced by \((\tau _1,\tau _2) \cap E\), we conclude that \(C_\textrm{obs}\) in (OBS) can be bounded as in (2.2), which completes the proof. \(\square \)

For organizational purposes, we extract from the above proof the following more explicit bound on the observability constant.

Remark 3.1

In the case where \(| (\tau _1,\tau _2) \cap E | = \tau _2 - \tau _1\) for some interval \((\tau _1,\tau _2) \subseteq [0,T]\) with \(\tau _1 < \tau _2\) in (the proof of) Theorem 2.2, we actually have \(|(\ell _{m+1},\ell _m) \cap E| = \delta _m\) instead of the weaker (3.9). Consequently, one may choose \(\xi _m {:}{=}\ell _{m+1} + \delta _m/2\), resulting in \(|(\xi _m,\ell _m) \cap E| = \delta _m/2\) instead of (3.12). We may therefore replace the numerical factor 6 in (3.14) and the constant \(c_4\) by 2, so that

$$\begin{aligned} C_\textrm{obs}\le \frac{C_1}{(\tau _2-\tau _1)^{1/r}} \exp \Biggl ( \frac{C_2}{(\tau _2-\tau _1)^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}}} + C_3T \Biggr ) \end{aligned}$$

with

$$\begin{aligned} C_1 {:}{=} \frac{2M^3c_1^2}{q(1-q)},\quad C_2 {:}{=}3c_2 \Bigl ( \frac{2}{1-q} \Bigr )^{\frac{\gamma _1\gamma _3}{\gamma _2-\gamma _1}},\quad C_3 {:}{=}3\omega _+, \end{aligned}$$

where

$$\begin{aligned} c_1= & {} \max \{ d_0, (d_0\Vert C(\cdot )\Vert _{E,\infty }+1)d_2 \} \ge 1,\quad \\ c_2= & {} d_1 \Bigl (1-\frac{\gamma _1}{\gamma _2}\Bigr )\Bigl ( \frac{2 d_1\gamma _1}{d_3\gamma _2} \Bigr )^{\frac{\gamma _1}{\gamma _2-\gamma _1}} + \frac{\gamma _4 ( \gamma _2 - \gamma _1)}{\gamma _1 \gamma _3} \end{aligned}$$

are as in (3.6) and (3.8), respectively, and where \(M \ge 1\) and \(\omega _+ = \max \{ \omega , 0 \}\), \(\omega \in {\mathbb {R}}\), are such that \(\Vert U(t,s) \Vert _{{\mathcal {L}}(X)} \le M\mathrm e^{\omega (t-s)}\) for all \(0 \le s \le t \le T\).

Remark 3.2

(1) The proof of Theorem 2.2 actually requires the dissipation estimate (DE) only for \(\ell \le s < t \le \ell _1\).

(2) It is worth emphasizing that, in contrast to [1, Theorem 13], our proof of Theorem 2.2 does not require the sequence \((\ell _m)_{m\in {\mathbb {N}}}\) to belong to the set E, but only to \((\ell ,T]\). This relaxed requirement is much easier to satisfy than the stricter one in [1, Proposition 14] and is reviewed in Proposition A.1 in the appendix.