1 The physical model

Within the theory of uniaxial deformations in isothermal viscoelasticity, the dynamics of a homogeneous isotropic linearly viscoelastic solid, occupying a bounded volume \(\Omega \subset {\mathbb {R}}^3\) at rest, is described by the integrodifferential evolution equation (see, e.g., [3, 10, 18, 23, 25, 29, 37])

$$\begin{aligned} {\partial _{tt}}u(t)-\kappa (0)\Delta u(t) -\int _{0}^t \kappa '(s)\Delta u(t-s)d s=0, \end{aligned}$$
(1.1)

subject to the homogeneous Dirichlet boundary condition

$$\begin{aligned} u(t)_{|\partial \Omega } = 0. \end{aligned}$$
(1.2)

The unknown \(u = u({\varvec{x}}, t): \Omega \times [0,\infty )\rightarrow {\mathbb {R}}\) describes the axial displacement field relative to the reference configuration \(\Omega \). Throughout the paper, the space variable \({\varvec{x}}\) will always be omitted. The function \(\kappa \) appearing in the convolution integral has the form

$$\begin{aligned} \kappa (s)=\kappa _\infty +\int _s^\infty \mu (y)dy, \end{aligned}$$

where \(\kappa _\infty >0\), and the memory kernel \(\mu \) is a (nonnull) nonnegative, decreasing, piecewise smooth, and summable function defined on \({\mathbb {R}}^+=(0,\infty )\). In particular, we allow \(\mu \) to have a finite number of discontinuities (downward jumps). These assumptions imply the convexity of \(\kappa \). The values

$$\begin{aligned} \kappa (0)>\kappa (\infty )>0 \end{aligned}$$

represent the instantaneous elastic modulus, and the relaxation modulus of the material, respectively.

In more generality, in this work we will consider an abstract version of (1.1), obtained by replacing the operator \(-\Delta \) with Dirichlet boundary conditions, acting on the Hilbert space \(L^2(\Omega )\), with a strictly positive selfadjoint linear operator A, acting on some real Hilbert space H, with dense domain \({{\mathfrak {D}}}(A)\subset H\).

1.1 Notation

We agree to denote by \(\langle \cdot ,\cdot \rangle \) and \(\Vert \cdot \Vert \) the inner product and norm on H, respectively. We also introduce the Hilbert space

$$\begin{aligned} V={{\mathfrak {D}}}(A^{1/2}), \end{aligned}$$

endowed with the inner product and norm

$$\begin{aligned} \langle u,v\rangle _1=\langle A^{1/2} u,A^{1/2} v\rangle \quad \text {and} \quad \Vert u\Vert _1=\Vert A^{1/2}u\Vert . \end{aligned}$$

Observe that for the particular case of (1.1)–(1.2), the space V is nothing but the Sobolev space \(H_0^1(\Omega )\).

Setting for simplicity \(\kappa (0)=1\), which translates into the constraint

$$\begin{aligned} \ell =\int _0^\infty \mu (s)ds<1, \end{aligned}$$
(1.3)

we end up with the abstract Volterra evolution equation

$$\begin{aligned} \ddot{u}(t)+ A u(t)- \int _0^t \mu (s)Au(t-s) ds=0, \end{aligned}$$
(1.4)

of which the boundary value problem (1.1)–(1.2) is just a particular instance. Nonetheless, the abstract equation (1.4) also covers different models: for example, the one employed in the description of the vibrations of thin viscoelastic rods, namely,

$$\begin{aligned} {\partial _{tt}}u(t)-\nu \Delta \partial _{tt}u(t)-\Delta u(t) +\int _{0}^t \mu (s)\Delta u(t-s)d s=0, \end{aligned}$$

or of viscoelastic plates, namely,

$$\begin{aligned} {\partial _{tt}}u(t)-\nu \Delta \partial _{tt}u(t)+\Delta ^2 u(t) -\int _{0}^t \mu (s)\Delta ^2 u(t-s)d s=0, \end{aligned}$$

with \(\nu >0\) small (see [29, 31]). Both equations can be given the form (1.4) by setting in the first case

$$\begin{aligned} A=-(1-\nu \Delta )^{-1}\Delta , \end{aligned}$$

and in the second one

$$\begin{aligned} A=(1-\nu \Delta )^{-1}\Delta ^2. \end{aligned}$$

Incidentally, the first occurrence of A is (more precisely, extends to) a bounded linear operator on \(L^2(\Omega )\).

The abstract equation (1.4) is complemented with the initial conditions at time \(t=0\)

$$\begin{aligned} {\left\{ \begin{array}{ll} u(0)=u_0,\\ \dot{u}(0)=v_0, \end{array}\right. } \end{aligned}$$
(1.5)

where \(u_0\) and \(v_0\) are assigned data. It is well known that for all initial data \((u_0,v_0)\in V\times H\), problem (1.4)–(1.5) has a unique weak solution

$$\begin{aligned} u\in {{\mathcal {C}}}^0([0,\infty ),V)\cap {\mathcal C}^1([0,\infty ),H). \end{aligned}$$

Such a solution is actually a strong one, namely,

$$\begin{aligned} u\in {{\mathcal {C}}}^0([0,\infty ),{{\mathfrak {D}}}(A))\cap {\mathcal C}^1([0,\infty ),V), \end{aligned}$$

whenever the initial data \((u_0,v_0)\) belong to the more regular space \({{\mathfrak {D}}}(A)\times V\). The natural energy of the system is defined by

$$\begin{aligned} {{\textsf{E}}}(t)=\Big (1-\int _0^t\mu (s)ds\Big )\Vert u(t)\Vert ^2_1+\Vert \dot{u}(t)\Vert ^2 +\int _0^t\mu (s)\Vert u(t)-u(t-s)\Vert ^2_1ds, \end{aligned}$$

and it can be interpreted as the sum of the mechanical energy

$$\begin{aligned} {{\textsf{E}}}_0(t)=\Big (1-\int _0^t\mu (s)ds\Big )\Vert u(t)\Vert ^2_1+\Vert \dot{u}(t)\Vert ^2, \end{aligned}$$
(1.6)

and the memory energy

$$\begin{aligned} {{\textsf{E}}}_1(t)=\int _0^t\mu (s)\Vert u(t)-u(t-s)\Vert ^2_1ds, \end{aligned}$$
(1.7)

which is ultimately responsible for the dissipation. Note that, for large t,

$$\begin{aligned} {{\textsf{E}}}_0(t) \approx (1-\ell )\Vert u(t)\Vert ^2_1 + \Vert \dot{u}(t)\Vert ^2. \end{aligned}$$

The dissipativity of (1.4) is witnessed by the fact that \({{\textsf{E}}}(t)\) is a decreasing function of t (see, e.g., [8, 10, 18]; see also formula (1.11) below).

Equation (1.4) is sometimes referred to as a model of (linear) viscoelasticity of Volterra type. A different and perhaps more accurate model is the one of viscoelasticity with infinite memory, which amounts to considering the convolution integral on the whole \({\mathbb {R}}^+\), to wit,

$$\begin{aligned} \ddot{u}(t)+ A u(t)- \int _0^\infty \mu (s)Au(t-s) ds=0. \end{aligned}$$
(1.8)

In this case, however, in order to compute the convolution integral one has to know the values of u at all times before the current time t. Accordingly, since the initial time is conventionally set at \(t=0\), the function u for negative times is regarded as a known initial datum, which need not solve the equation. Indeed, (1.8) is assumed to hold only for \(t>0\). As pointed out in the seminal work [12] (see also [22]), one way to handle (1.8) is to introduce the auxiliary variable

$$\begin{aligned} \eta ^t(s)=u(t)-u(t-s),\quad t\ge 0,\,\,s>0, \end{aligned}$$

which keeps track of the past history of u, and is assigned as an initial datum at \(t=0\). Hence, assuming the initial condition

$$\begin{aligned} \eta ^0(s)=\eta _0(s), \end{aligned}$$

where \(\eta _0=\eta _0(s)\) is an assigned datum, we immediately see that

$$\begin{aligned} \eta ^t(s)= {\left\{ \begin{array}{ll} u(t)-u(t-s) &{} 0<s\le t,\\ \eta _0(s-t)+u(t)-u_0 &{} s>t. \end{array}\right. } \end{aligned}$$
(1.9)

With this choice, and recalling (1.3), equation (1.8) transforms into

$$\begin{aligned} \ddot{u}(t)+ (1-\ell )A u(t)+ \int _0^{\infty }\mu (s)A\eta ^t(s) ds=0. \end{aligned}$$
(1.10)
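In detail, writing \(u(t-s)=u(t)-\eta ^t(s)\) inside the convolution of (1.8) and recalling (1.3),

$$\begin{aligned} \int _0^\infty \mu (s)Au(t-s)ds=\int _0^\infty \mu (s)A\big [u(t)-\eta ^t(s)\big ]ds =\ell A u(t)-\int _0^\infty \mu (s)A\eta ^t(s)ds, \end{aligned}$$

and substituting this identity into (1.8) yields exactly (1.10).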

Calling

$$\begin{aligned} {{\mathcal {M}}}=L^2_\mu ({\mathbb {R}}^+,V) \end{aligned}$$

the Lebesgue space of square summable V-valued functions on \({\mathbb {R}}^+\) with respect to the measure \(\mu (s)ds\), we introduce the Hilbert space

$$\begin{aligned} {{\mathcal {H}}}=H\times V\times {{\mathcal {M}}}, \end{aligned}$$

endowed with the product norm

$$\begin{aligned} \Vert (u,v,\eta )\Vert ^2_{{\mathcal {H}}}=(1-\ell )\Vert u\Vert _1^2 +\Vert v\Vert ^2+\int _0^\infty \mu (s)\Vert \eta (s)\Vert _1^2 ds. \end{aligned}$$

Then, system (1.9)–(1.10) is known to generate a strongly continuous linear semigroup S(t) of solutions acting on \({{\mathcal {H}}}\), in the sense of [15, 35]. This means that, for any initial datum \({\varvec{u}}_0=(u_0,v_0,\eta _0)\in {{\mathcal {H}}}\), there is a unique solution

$$\begin{aligned} {\varvec{u}}(t)=(u(t),\dot{u}(t),\eta ^t)=S(t){\varvec{u}}_0 \in {{\mathcal {C}}}([0,\infty ),{{\mathcal {H}}}). \end{aligned}$$

Remark 1.1

The infinitesimal generator of S(t) is the linear operator \({{\mathbb {A}}}\) on \({{\mathcal {H}}}\) acting as

$$\begin{aligned} {{\mathbb {A}}}(u,v,\eta )= \textstyle \big ( v, -A\big ((1-\ell )u+\int _0^\infty \mu (s) \eta (s) d s\big ), T\eta +v\big ), \end{aligned}$$

with domain

$$\begin{aligned} \textstyle {{\mathfrak {D}}}({{\mathbb {A}}})=\big \{ (u,v,\eta )\in {{\mathcal {H}}}:\, v\in V,\, (1-\ell ) u+\int _0^{\infty }\mu (s)\eta (s)d s\in {{\mathfrak {D}}}(A),\, \eta \in {{\mathfrak {D}}}(T)\big \}. \end{aligned}$$

Here, T is the infinitesimal generator of the right-translation semigroup on \({{\mathcal {M}}}\), defined as

$$\begin{aligned} (T\eta )(s)=-\eta '(s)\quad \text {with domain}\quad {{\mathfrak {D}}}(T) =\{\eta \in {{{\mathcal {M}}}}:\,\eta '\in {\mathcal M},\,\,\eta (0)=0\}, \end{aligned}$$

where the prime denotes the distributional derivative. Accordingly, system (1.9)–(1.10) can be equivalently viewed as the differential equation on \({{\mathcal {H}}}\)

$$\begin{aligned} \frac{d}{dt} {\varvec{u}}(t)={{\mathbb {A}}}{\varvec{u}}(t). \end{aligned}$$

The energy at time t corresponding to the initial datum \({\varvec{u}}_0\) reads

$$\begin{aligned} {{\textsf{F}}}(t)=\Vert S(t){\varvec{u}}_0\Vert ^2_{{\mathcal {H}}}=(1-\ell )\Vert u(t)\Vert _1^2 +\Vert \dot{u}(t)\Vert ^2+\int _0^\infty \mu (s)\Vert \eta ^t(s)\Vert _1^2 ds, \end{aligned}$$

and is a decreasing function of t, for it satisfies (for any regular initial datum) the differential inequality

$$\begin{aligned} \frac{d}{d t}{{\textsf{F}}}(t) =\int _0^\infty \mu '(s)\Vert \eta ^t(s)\Vert ^2_1 d s +\sum _{n}[\mu (s_n^+)-\mu (s_n^-)]\Vert \eta ^t(s_n)\Vert _1^2, \end{aligned}$$
(1.11)

where \(s_n\) are the discontinuity points (if any) of \(\mu \) (see [12, 33]). In the semigroup language, this means that S(t) is a contraction semigroup, i.e., the operator norm of the semigroup is less than or equal to 1 for all \(t\ge 0\). The analysis of S(t) and its asymptotic features has been carried out by several authors since the Seventies of the last century. To mention just a few of them, we quote the works [2, 12, 19–21, 30, 32–34] (stability and exponential stability issues), [4, 13] (spectral properties), [6, 7] (models with time dependent kernels), [14, 16, 17] (modeling aspects), [26–28] (semigroup approach).

The link between the semigroup S(t) and the original Volterra equation (1.4) is expressed by the following proposition.

Proposition 1.2

The solution u(t) to (1.4) with initial data \((u_0,v_0)\in V\times H\) coincides with the first component of the solution \({\varvec{u}}(t)=S(t){\varvec{u}}_0\) to (1.9)–(1.10) with initial datum \({\varvec{u}}_0=(u_0,v_0,u_0)\).

Indeed, defining the function

$$\begin{aligned} G(t)=\int _0^\infty \mu (t+s) A\big [\eta _0(s)-u_0\big ]ds, \end{aligned}$$

it is readily seen that system (1.9)–(1.10) can be equivalently written as

$$\begin{aligned} \ddot{u}(t)+ A u(t)- \int _0^t \mu (s)Au(t-s) ds=G(t). \end{aligned}$$
(1.12)

This is nothing but (1.4), in the presence of a time-dependent forcing term. Notice that the function G is identically zero whenever \({\varvec{u}}_0=(u_0,v_0,u_0)\). In this case, we recover viscoelasticity of Volterra type. Besides, observing that the third (or memory) component \(\eta ^t\) of \({\varvec{u}}(t)\) now reads

$$\begin{aligned} \eta ^t(s)= {\left\{ \begin{array}{ll} u(t)-u(t-s) &{} 0<s\le t,\\ u(t) &{} s>t, \end{array}\right. } \end{aligned}$$

we obtain (as it should be) the equality \({{\textsf{E}}}(t)={{\textsf{F}}}(t)\).

2 The decay rate of the energy

2.1 Exponential stability

Since S(t) is a contraction semigroup, either its operator norm is always equal to 1, or it decays to zero as \(t\rightarrow \infty \), in which case the decay is necessarily (at least) of exponential type (see [15, 35]). In terms of the energy \({{\textsf{F}}}\), this means that there exist constants \(M\ge 1\) and \(\omega >0\) such that, for all initial data in \({{\mathcal {H}}}\),

$$\begin{aligned} {{\textsf{F}}}(t)\le M{{\textsf{F}}}(0)e^{-\omega t}. \end{aligned}$$
(2.1)

When (2.1) holds, the semigroup is said to be exponentially stable. The issue of the exponential stability of S(t) has attracted the attention of several authors since the appearance of [12], and it was first proved under the sufficient condition that the memory kernel \(\mu \), besides our general hypotheses, satisfies the differential inequality

$$\begin{aligned} \mu '(s)+\delta \mu (s)\le 0, \end{aligned}$$
(2.2)

for some \(\delta >0\) (see [17, 26, 27, 30]). Later, in [2], it has been shown that a necessary condition in order for S(t) to be exponentially stable is that

$$\begin{aligned} \mu (t+s)\le C e^{-\delta t}\mu (s), \end{aligned}$$
(2.3)

for some \(C\ge 1\) and \(\delta >0\), which turns out to be equivalent to (2.2) when \(C=1\). The necessary and sufficient condition has been established only in the more recent work [34], where the following theorem is proved.

Theorem 2.1

If A is an unbounded operator, then the semigroup S(t) is exponentially stable if and only if

  (i) condition (2.3) holds; and

  (ii) the kernel \(\mu \) is not completely flat, that is, the set \(\{s>0:\mu '(s)<0\}\) has positive measure.

Remark 2.2

If the operator A is bounded, the result is slightly different: exponential stability occurs if and only if (i) holds, with the only exception of a certain class of flat kernels, called resonant, where trajectories with conserved energy arise (see [2, 34]).

Summarizing, with regard to the exponential stability of the semigroup, the picture is nowadays completely clear. It is worth noting that (2.3) implies an exponential decay of the kernel \(\mu \) of rate \(\delta \) (at least). The situation is a little bit more complicated if we restrict our attention to the decay of the energy \({{\textsf{E}}}\) of the Volterra equation (1.4). Here, lacking a semigroup structure, decay types other than exponential are possible. Without stating the result in full detail, roughly speaking what happens is that the energy \({{\textsf{E}}}\) and the kernel \(\mu \) share the same decay type, but only up to exponential decay (see [5, 8]). For faster decays, no results are available, as remarked in [5].

2.2 Superstability

We now come to the core of the present work, that is, the existence of nontrivial trajectories which converge to zero faster than any exponential. To this end, we define the (exponential) decay rate of the energy of the semigroup S(t) to be the nonnegative number

$$\begin{aligned} \omega _\star =\sup \big \{\omega \ge 0:\, (2.1)\, \hbox {holds for some}\ M=M(\omega )\big \}. \end{aligned}$$

Thus, S(t) is exponentially stable if and only if \(\omega _\star >0\), whereas if \(\omega _\star =0\) then it has unitary operator norm for all t.

Definition 2.3

We say that S(t) is superstable if \(\omega _\star =\infty \).

Remark 2.4

As shown in [4], it is actually possible to give a quantitative version of Theorem 2.1, which says that if (2.1) holds for some \(\omega >0\), then condition (2.3) is necessarily satisfied for some \(\delta \ge \omega \).

This motivates the following definition.

Definition 2.5

The memory kernel \(\mu \) is superexponential if condition (2.3) holds for every \(\delta >0\).

Accordingly, in view of the remark above, if S(t) is superstable, then the memory kernel \(\mu \) is superexponential. Paradigmatic examples of superexponential kernels are \(\mu (s)=e^{-s^2}\) and any compactly supported kernel (complying with the general assumptions of the preceding section).
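For instance, for \(\mu (s)=e^{-s^2}\) condition (2.3) can be checked directly: for every \(\delta >0\) and all \(t,s>0\),

$$\begin{aligned} \mu (t+s)=e^{-t^2}e^{-2ts}\mu (s)\le e^{-t^2}\mu (s)\le e^{\delta ^2/4}e^{-\delta t}\mu (s), \end{aligned}$$

since \(\delta t-t^2\le \delta ^2/4\); hence (2.3) holds for every \(\delta >0\) with \(C=e^{\delta ^2/4}\). Similarly, if \(\mu \) is (nonincreasing and) supported in [0, S], then \(\mu (t+s)\le \mu (s)\le e^{\delta S}e^{-\delta t}\mu (s)\) for \(t\le S\), while \(\mu (t+s)=0\) for \(t>S\), so (2.3) holds for every \(\delta >0\) with \(C=e^{\delta S}\).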

The main question now is:

Are there superexponential \(\mu \) which render the corresponding S(t) superstable?

The answer to this question is negative, and this is perhaps the main result of the very recent paper [4]. From a physical viewpoint, this fact says something about the persistence of the elastic force against viscous effects. Nevertheless, it might still be possible that individual nontrivial trajectories exist which go to zero in a superexponential fashion. In light of (1.12), the trajectories which are more likely to decay fast are the ones corresponding to \(G\equiv 0\), i.e., the solutions to the Volterra equation (1.4). Our claim is that, for any nonzero initial data, the corresponding \({{\textsf{E}}}\) cannot decay arbitrarily fast. More precisely, we have the following conjecture.

Conjecture 2.6

For any given memory kernel \(\mu \), there exists a structural constant \(\varkappa >0\) with the following property: for any \((u_0,v_0)\ne (0,0)\), the corresponding \({{\textsf{E}}}\) fulfills the relation

$$\begin{aligned} \limsup _{t\rightarrow \infty }\,{{\textsf{E}}}(t)e^{\omega t}=\infty \quad \text {whenever} \quad \omega >\varkappa . \end{aligned}$$

Unfortunately, an analytic proof of such a conjecture seems to be out of reach. Indeed, although we have refined techniques to study the global behavior of the semigroup, we do not have at our disposal significant tools enabling us to discuss in detail the behavior of a single trajectory. This is where numerics steps in.

3 The numerical approach

3.1 The equation

We assume for simplicity that the domain \({{\mathfrak {D}}}(A)\) of A is compactly embedded into H. This assumption, which is satisfied in the concrete case of the Laplace-Dirichlet operator occurring in viscoelasticity, guarantees that the spectrum of A consists of eigenvalues only. Besides, denoting the eigenvalues of A (each counted with its own multiplicity) by

$$\begin{aligned} 0 < \lambda _1 \le \lambda _2 \le \cdots \le \lambda _j \rightarrow \infty , \end{aligned}$$

the (normalized) eigenvectors \(w_j\) corresponding to the eigenvalues \(\lambda _j\) form a complete orthonormal basis of H (see, e.g., [38]). This allows us to decompose the solution of (1.4) into the infinite sum

$$\begin{aligned} u(t) = \sum _{j=1}^{\infty } u_j(t)w_j, \end{aligned}$$

where \(u_j(t)\) is the solution to the ordinary differential equation obtained by projecting (1.4) along with the initial data (1.5) onto the eigenspace relative to the eigenvalue \(\lambda _j\), that is (up to a straightforward change of variables)

$$\begin{aligned} \ddot{u}_j(t)+ \lambda _j u_j(t)- \lambda _j \int _0^t \mu (t-s)u_j(s) ds=0, \end{aligned}$$
(3.1)

with initial values

$$\begin{aligned} {\left\{ \begin{array}{ll} u_j(0)=u_{0,j},\\ \dot{u}_j(0) = v_{0,j}, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} u_{0,j}= \langle u_0, w_j\rangle \quad \text {and}\quad v_{0,j} = \langle v_0, w_j\rangle . \end{aligned}$$
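Explicitly, taking the inner product of (1.4) with \(w_j\), using the selfadjointness of A, i.e. \(\langle Au(t),w_j\rangle =\langle u(t),Aw_j\rangle =\lambda _j u_j(t)\), and performing the change of variable \(s\mapsto t-s\) in the convolution,

$$\begin{aligned} \int _0^t \mu (s)\langle Au(t-s),w_j\rangle ds=\lambda _j\int _0^t \mu (s)u_j(t-s) ds=\lambda _j\int _0^t \mu (t-s)u_j(s) ds, \end{aligned}$$

which is exactly the memory term in (3.1).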

By the classical Plancherel Theorem, the energy \({{\textsf{E}}}(t)\) of (1.4) splits into the infinite sum

$$\begin{aligned} {{\textsf{E}}}(t)=\sum _{j=1}^{\infty }{{\textsf{E}}}_j(t), \end{aligned}$$

where

$$\begin{aligned} {{\textsf{E}}}_j(t)=\Big (1-\int _0^t\mu (s)ds\Big )\lambda _j|u_j(t)|^2+|\dot{u}_j(t)|^2 +\lambda _j\int _0^t\mu (s)|u_j(t)-u_j(t-s)|^2ds \end{aligned}$$

is the energy corresponding to the \(j{\textrm{th}}\)-mode. Accordingly,

$$\begin{aligned} {{\textsf{E}}}(t)\ge {{\textsf{E}}}_j(t),\quad \forall j\in {\mathbb {N}}. \end{aligned}$$

This means that the decay rate of the energy of a particular trajectory is certainly smaller than or equal to the decay rate of the fastest (nonzero) mode. This is the reason why we can shift our focus to the study of equation (3.1), which rules the evolution of the single mode associated to \(\lambda _j\). Theorem 2.1 implies that, if the memory kernel complies with assumption (2.3), then \({{\textsf{E}}}_j(t)\) decays at least exponentially for every \(j \in {\mathbb {N}}\). The validity of our Conjecture 2.6 translates into the fact that every \({{\textsf{E}}}_j(t)\), for different choices of the initial data, should not decay at an exponential rate higher than a certain threshold \(\varkappa >0\). To this end, we will solve numerically equation (3.1) for different values of \(\lambda _j\) over a time lapse [0, T], with T sufficiently large for the energy to stabilize on a specific decay pattern.

Clearly, we are interested in choosing a kernel \(\mu (s)\) which is superexponential, in the sense of Definition 2.5. Here a problem arises, namely, the fact that after a certain period of time (possibly much smaller than T) a superexponential kernel becomes so small that some approximation issues may occur in the numerical scheme. To overcome this difficulty, we introduce the truncated kernel

$$\begin{aligned} \mu ^{\varepsilon }(s) = {\left\{ \begin{array}{ll} \mu (s) &{}0<s<\frac{1}{{\varepsilon }}, \\ 0 &{}s \ge \frac{1}{{\varepsilon }}, \end{array}\right. } \end{aligned}$$
(3.2)

where \({\varepsilon }>0\) will be chosen suitably small. Then, we consider the problem

$$\begin{aligned} \ddot{u}_{j}^{\varepsilon }(t)+ \lambda _j u_{j}^{\varepsilon }(t)- \lambda _j \int _0^t \mu ^{{\varepsilon }}(t-s)u_{j}^{\varepsilon }(s) ds=0. \end{aligned}$$
(3.3)

The following result, borrowed from [9], ensures that the solutions \(u_{j}^{\varepsilon }\) of (3.3) converge to \(u_j\) as \({\varepsilon }\rightarrow 0\).

Proposition 3.1

Let \(u_j\) and \(u_{j}^{\varepsilon }\) be the solutions to (3.1) and (3.3), respectively, with the same initial data \(u_{0,j},v_{0,j}\). Then there exists a constant \(C>0\) such that

$$\begin{aligned} |u_j(t) - u_{j}^{\varepsilon }(t)| + |\dot{u}_j(t) - \dot{u}_{j}^{\varepsilon }(t)| \le C [\omega _{{\varepsilon }} + \sqrt{\omega _{{\varepsilon }}}\,] (|u_{0,j}| + |v_{0,j}|), \end{aligned}$$

where

$$\begin{aligned} \omega _{{\varepsilon }} = \int _0^{\infty }( \mu (s) - \mu ^{{\varepsilon }}(s) )ds. \end{aligned}$$

To summarize, having fixed a superexponential kernel \(\mu (s)\), for suitably chosen T and \({\varepsilon }\) we will perform a numerical simulation of the ordinary integrodifferential equation (where we write just u in place of \(u_{j}^{\varepsilon }\) and \(\lambda \) in place of \(\lambda _j\))

$$\begin{aligned} \ddot{u}(t)+ \lambda u(t)- \lambda \int _0^t \mu ^{{\varepsilon }}(t-s)u(s) ds=0, \quad t \in [0,T], \end{aligned}$$
(3.4)

and study the behavior of the related energy in dependence of \(\lambda \) and of the initial data \(u_0,v_0\).

3.2 The numerical scheme

In this section we briefly describe the approach to numerically approximate a second-order integrodifferential equation of the form

$$\begin{aligned} \ddot{u}(t)=-\lambda u(t) + \lambda \int _{a(t)}^{b(t)}k(t,s)F(u(s)) ds, \quad t \in [0,T], \end{aligned}$$
(3.5)

for given \(\lambda \), and for integration limits a(t) and b(t) such that \(0 \le a(t) \le b(t) \le T\). Notice that (3.4) can be written in the form (3.5) by choosing

$$\begin{aligned} k(t,s)=\mu ^{{\varepsilon }}(t-s), \quad F(u(s))=u(s), \quad a(t)=0, \quad b(t)=t. \end{aligned}$$

Over the years many numerical techniques have been developed to tackle problems of this kind. We recall, among others, perturbation methods, as well as spectral and Chebyshev collocation methods (see, e.g., [1, 11, 40] and references therein), but the list is far from being exhaustive. Here we employ a numerical approach that suitably combines an ODE solver with an iterative scheme, as proposed in [24].

For a given integer \(N_h\), let \(t_n\), with \(n=0,\ldots , N_h\), be a set of time snapshots within the interval [0, T], with the convention that \(t_0=0\). We define \(y_i\) to be the approximate solution of u(t) at \(t=t_i\), i.e.,

$$\begin{aligned} y_i\approx u(t_i),\quad i=0,\ldots , N_h, \end{aligned}$$

and collect the values \(\left\{ y_i\right\} _{i=0, \ldots , N_h}\) into the vector \({\varvec{y}}_n \in {\mathbb {R}}^{N_h+1}\). To compute \({\varvec{y}}_n\) we proceed iteratively, i.e. given \({\varvec{y}}_n^{(k)}\) we compute \({\varvec{y}}_n^{(k+1)}\), for \(k=0,1,2,\ldots \).

To set up the algorithm, we consider an initial guess \({\varvec{y}}_n^{(0)}\). This is obtained by solving (3.5) numerically with the integral term on the right-hand side neglected, using, for example, the variable-step, variable-order Adams-Bashforth-Moulton (ABM) predictor-corrector scheme (see, e.g., [36, 39]). Next, we compute the update \({\varvec{y}}_n^{(k)}\mapsto {\varvec{y}}_n^{(k+1)}\) as described in Algorithm 1.

Notice that in the first step of Algorithm 1, the integral appearing on the right-hand side is evaluated numerically by a Lobatto quadrature rule. Furthermore, \(\alpha \) is a suitable smoothing parameter, which plays the role of a relaxation parameter in the context of iterative methods for linear systems. The iteration is terminated whenever two consecutive iterates differ by less than a prescribed (user-defined) tolerance \(\texttt{TOL}\), i.e.,

$$\begin{aligned} \Vert {\varvec{y}}_n^{(k+1)}-{\varvec{y}}_n^{(k)}\Vert ^2<\texttt{TOL} \end{aligned}$$

or whenever a maximum number of iterations \(\mathtt {N_{it}}\) is reached.

Algorithm 1
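Since the precise pseudocode is given in Algorithm 1, we only provide a minimal Python sketch of the iteration just described. It is not the MATLAB code used for the experiments: scipy's solve_ivp (RK45) stands in for the ABM predictor-corrector, a composite trapezoidal rule stands in for the Lobatto quadrature, the update rule is paraphrased from the description above, and the helper names (solve_volterra, mu1_eps) are introduced here only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def solve_volterra(lam, mu_eps, u0, v0, T=12.0, Nh=2000, alpha=0.5,
                   tol=1e-8, max_it=50):
    """Iteratively solve  u'' + lam*u - lam*int_0^t mu_eps(t-s) u(s) ds = 0."""
    t = np.linspace(0.0, T, Nh + 1)                  # equispaced snapshots
    K = np.tril(mu_eps(t[:, None] - t[None, :]))     # K[i, k] = mu_eps(t_i - t_k), k <= i

    def run_ode(forcing):
        # solve u'' = -lam*u + g(t), with g given on the grid and interpolated in between
        rhs = lambda s, z: [z[1], -lam * z[0] + np.interp(s, t, forcing)]
        sol = solve_ivp(rhs, (0.0, T), [u0, v0], t_eval=t, rtol=1e-10, atol=1e-12)
        return sol.y[0], sol.y[1]

    # initial guess: neglect the memory integral altogether
    y, dy = run_ode(np.zeros_like(t))
    for _ in range(max_it):
        # memory term  lam * int_0^{t_i} mu_eps(t_i - s) y(s) ds  (trapezoidal rule)
        g = lam * np.array([trapezoid(K[i, :i + 1] * y[:i + 1], t[:i + 1])
                            for i in range(Nh + 1)])
        y_new, dy_new = run_ode(g)
        y_next = alpha * y_new + (1.0 - alpha) * y        # relaxation step
        dy_next = alpha * dy_new + (1.0 - alpha) * dy
        done = np.sum((y_next - y) ** 2) < tol            # stopping criterion
        y, dy = y_next, dy_next
        if done:
            break
    return t, y, dy

# example: mode lambda = 2, kernel mu_1 truncated at s = 1/eps = 6, data (4.1)
mu1_eps = lambda s: np.exp(-s ** 2) * (s >= 0.0) * (s < 6.0)
t, y, dy = solve_volterra(lam=2.0, mu_eps=mu1_eps, u0=0.1, v0=0.1)
```

The triple (t, y, dy) collects the time grid, the approximate displacement and its derivative; the sketches below reuse it to evaluate the modal energies.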

3.3 Numerical verification

For a fixed value of the tolerance \(\texttt{TOL} = 10^{-8}\), we are interested in validating the numerical scheme, showing that the approximate solution \({\varvec{y}}_n\) converges to the exact solution \(u(t_n)\) as \(N_h \rightarrow \infty \) (or, equivalently, as the mesh size \(h \rightarrow 0\)). To this end, we consider the following Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{u}(t) = t(1+\sqrt{t})e^{-\sqrt{t}} - (t^2 + t + 1)e^{-t} + \int _{t}^{\sqrt{t}} tsu(s)ds, \\ u(0) = 1, \end{array}\right. } \end{aligned}$$
(3.6)

where \(t \in [0,1]\). This is the test case considered in [24]. For such an equation, the exact solution is known and is equal to

$$\begin{aligned} u(t) = e^{-t}. \end{aligned}$$
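As a quick sanity check of the test problem (ours, not taken from [24]), one can verify numerically that \(u(t)=e^{-t}\) indeed satisfies (3.6): the right-hand side evaluated along \(e^{-t}\) must reproduce \(\dot{u}(t)=-e^{-t}\). A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

def rhs(t):
    # right-hand side of (3.6) evaluated along u(s) = exp(-s)
    integral, _ = quad(lambda s: t * s * np.exp(-s), t, np.sqrt(t))
    return (t * (1 + np.sqrt(t)) * np.exp(-np.sqrt(t))
            - (t ** 2 + t + 1) * np.exp(-t) + integral)

for t in (0.1, 0.5, 0.9):
    print(rhs(t), -np.exp(-t))   # the two values agree to quadrature accuracy
```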

We compute the numerical solution \({\varvec{y}}_n\) of (3.6) using increasingly fine equispaced partitions of the time span [0, 1] made of \(N_h = 50 \cdot 2^j\) intervals, for \(j = 6,8,10\). Then, for each value of \(N_h\), we evaluate the pointwise error at the discretization nodes

$$\begin{aligned} (\texttt{err}^h)_i = |({\varvec{y}}_n)_i - u(t_i)|, \quad i = 1, \ldots , N_h. \end{aligned}$$

In Fig. 1 we report \(\texttt{err}^h\) at the discretization points for our choices of \(N_h\). As we can see, the numerical performance of the scheme improves as \(N_h\) increases.

Fig. 1 Computed \(\texttt{err}^h\) at the discretization points for different choices of \(N_h\)

4 Numerical results

We investigate numerically (3.5) for three different superexponential kernels, namely,

$$\begin{aligned} \mu _1(s)&= e^{-s^2}, \\ \mu _2(s)&= e^{-e^s}, \\ \mu _3(s)&= (1-s)\chi _{[0,1]}(s), \end{aligned}$$

where \(\chi \) denotes the characteristic function. Without claiming to be complete, we believe that these functions provide a fairly comprehensive overview of the most important features which a superexponential kernel can enjoy. Indeed, \(\mu _1(s)\) is the prototype of a superexponential kernel, \(\mu _2(s)\) is a function which decays extremely fast (faster than any function of the form \(e^{-s^p}\), with \(p>0\)), and \(\mu _3(s)\) is superexponential simply because it is compactly supported.

Remark 4.1

We also tested other types of memory kernels, obtaining comparable results. Among others, we mention piecewise continuous kernels exhibiting a finite number of (downward) jumps, as well as kernels possessing a weak singularity at zero, such as

$$\begin{aligned} \mu (s)=k_p\frac{e^{-s^2}}{s^p}\quad \text {or}\quad k_p\frac{(1-s)}{s^p}\chi _{[0,1]}(s), \end{aligned}$$

with \(p\in (0,1)\) and \(k_p\) sufficiently small in order to guarantee that the total mass \(\ell \) is less than 1, in compliance with (1.3).

For all the choices of superexponential kernels \(\mu _i(s)\), \(i=1,2,3\), we consider two test cases.

  • Test Case 1 In the first set of numerical experiments, we fix the initial data \((u_0,v_0)\) and compute numerically the energy \({{\textsf{E}}}_j\) for the choice \(\lambda _j = j\), with \(j = 1,\ldots ,20\). Then, we plot \(\log ({{\textsf{E}}}_j)\) against the time t. If Conjecture 2.6 were true, we would expect to observe the following facts:

    1. The single energy profile associated to a certain \(\lambda _j\) is not log-concave. Accordingly, since by Theorem 2.1 we know that the energy decays (at least) exponentially, its profile should eventually become a straight line in the logarithmic plot.

    2. The exponential decay rates, which in this context correspond to the slopes of the lines in the logarithmic plot, are bounded from above. This is witnessed by the fact that either such rates reach a maximum (recall that the rate is positive by definition), or they stabilize on a certain value as \(\lambda _j\) increases.

  • Test Case 2 In the second set of numerical experiments, we investigate what happens by changing the initial data. For every choice of \((u_0,v_0)\) we compute the exponential decay rate of \({{\textsf{E}}}_j\) for increasing choices of \(\lambda _j\), and look for differences in the behavior depending on the initial data. Once again, if Conjecture 2.6 were true, barring minor distinctions, we should observe that for every choice of the initial data the decay rates either reach a maximum or stabilize for large \(\lambda _j\).

All the numerical computations have been performed in MATLAB™, version R2021a. Throughout the section the parameter \(\alpha \) appearing in Algorithm 1 is chosen to be \(\alpha =1/2\).

Remark 4.2

In our Test Cases, we assumed the first eigenvalue \(\lambda _1\) of A to be equal to 1. No difference occurs in the results if one starts from a different (but of course strictly positive) \(\lambda _1\).

Before getting into the details of the results, we highlight a crucial issue. We would like to perform a sufficiently accurate simulation of (3.5), up to a final time T large enough to observe how the energy \({{\textsf{E}}}_j\) behaves asymptotically. However, since we are dealing with exponentially and superexponentially decaying objects, in our computations we must take care not to fall below machine precision, and we must avoid possible round-off errors. In particular, whenever T becomes too large, both the energy and the superexponential kernel might become too small, making the simulation inaccurate. This is precisely the reason why we introduced the truncated kernel \(\mu ^{{\varepsilon }}(s)\). Now let \({\varepsilon }>0\) be small enough that

$$\begin{aligned} \mu (1/{\varepsilon }) \le \textsc {L}, \end{aligned}$$

where \(\textsc {L}\) is a user-defined tolerance; in practice we choose \(\textsc {L}\) equal to \(10^{-16}\). With \(\mu ^{{\varepsilon }}(s)\) defined as in (3.2), and recalling Proposition 3.1, we have

$$\begin{aligned} \omega _{{\varepsilon }}&= \int _0^{\infty }( \mu (s) - \mu ^{{\varepsilon }}(s) )ds = \int _{1/{\varepsilon }}^{\infty } \mu (s) ds. \end{aligned}$$

In the case of \(\mu _1(s)\), we have

$$\begin{aligned} \int _{1/{\varepsilon }}^{\infty } \mu _1(s) ds = \int _0^{\infty } e^{-(s+1/{\varepsilon })^2}ds \le e^{-1/{\varepsilon }^2} \ell \le \textsc {L}, \end{aligned}$$

where we have used (1.3) in the last inequality. Besides, since \(e^s>s^2\) for every \(s \in {\mathbb {R}}^+\), it is clear that the latter inequality holds also for \(\mu _2(s)\). In light of Proposition 3.1, this translates into an error of the order of \(\sqrt{\textsc {L}}\). As we will see, this numerical error is of a lower order than the computed energy, and therefore it does not affect the final result of the numerical simulations. We do not need to define an \({\varepsilon }\) for \(\mu _3(s)\), as the function is already compactly supported (and the support is reasonably small).
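As a quick cross-check of these bounds (ours, not in the paper): with \(\varepsilon =1/6\), the value used below for \(\mu _1\), the tail mass \(\omega _{{\varepsilon }}\) is a Gaussian tail integral and can be evaluated through the complementary error function.

```python
# Tail mass of mu_1 beyond the truncation point 1/eps = 6 (a sanity check)
import numpy as np
from scipy.special import erfc

eps = 1.0 / 6.0
omega_eps = 0.5 * np.sqrt(np.pi) * erfc(1.0 / eps)   # = int_{6}^{inf} e^{-s^2} ds
print(omega_eps, np.sqrt(omega_eps))   # roughly 1.9e-17 and 4.4e-9
```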

4.1 The kernel \(\varvec{\mu _1}\)

For the first kernel, we choose \({\varepsilon }= 1/6\) and \(T = 12\). In particular, we truncate the kernel at \(s = 6\). In the first numerical test we work with fixed initial data

$$\begin{aligned} u_0 = 0.1, \quad v_0 = 0.1, \end{aligned}$$
(4.1)

and we compute the energy \({{\textsf{E}}}_j(t)\) for \(\lambda _j = 1,\ldots , 20\). The tolerance in Algorithm 1 is set at \(\texttt{TOL} = 10^{-8}\) and \(N_h = 2000\). In Fig. 2 we report the computed \({{\textsf{E}}}_j\) as a function of time (semi-logarithmic scale). For visualization purposes, we only selected the profiles corresponding to the eigenvalues \(\lambda _j = 2, 5, 9, 14, 20\). We refer to Table 1 for all the (least-squares) computed exponential decay rates. We observe that the energy decays exponentially fast for every \(\lambda _j\), but not superexponentially, as highlighted by the fact that none of the decay profiles is log-concave. Furthermore, we see that, as \(\lambda _j\) increases, the rate settles on a value of about 1.05, whereas the fastest decay, about 1.1, is obtained at \(\lambda _{11}\). Also observe that, in accordance with our theoretical expectations, the numerically computed energy is indeed decreasing.

Fig. 2 Test Case 1, kernel \(\mu _1(s)\). Computed energy \({{\textsf{E}}}_j\) for initial data given in (4.1)

Table 1 Test Case 1, kernel \(\mu _1(s)\)
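The decay rates collected in Table 1 are obtained by a least-squares fit of \(\log {{\textsf{E}}}_j\). A minimal sketch of how the modal energy and the fitted rate can be computed from the grid solution is given below; it assumes the equispaced grid and reuses the hypothetical solve_volterra and mu1_eps helpers from Section 3.2, and the fitting window \(t\ge 6\) is our choice, not necessarily the one used for the table.

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

def modal_energy(t, u, du, mu, lam):
    """Discrete version of E_j(t) on an equispaced grid (trapezoidal quadrature)."""
    mu_vals = mu(t)
    cum_mu = np.concatenate(([0.0], cumulative_trapezoid(mu_vals, t)))
    E = np.empty_like(t)
    for i in range(len(t)):
        diff2 = (u[i] - u[i::-1]) ** 2          # |u(t_i) - u(t_i - s_k)|^2, s_k = t_k
        memory = trapezoid(mu_vals[:i + 1] * diff2, t[:i + 1])
        E[i] = (1.0 - cum_mu[i]) * lam * u[i] ** 2 + du[i] ** 2 + lam * memory
    return E

def decay_rate(t, E, t_min=6.0):
    """Least-squares slope of log E over the tail [t_min, T]; returns the rate."""
    mask = t >= t_min
    slope, _ = np.polyfit(t[mask], np.log(E[mask]), 1)
    return -slope

# example: the rate for lambda_j = 5, kernel mu_1, initial data (4.1)
t, y, dy = solve_volterra(lam=5.0, mu_eps=mu1_eps, u0=0.1, v0=0.1)
print(decay_rate(t, modal_energy(t, y, dy, mu1_eps, lam=5.0)))
```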

In the second numerical investigation (Test Case 2), we perform the same test as before, except that now we try different initial data. The only difference in the numerical parameters is the number of intervals, which is now set at \(N_h = 1600\). In view of the linearity of (3.5), we restrict ourselves to initial data of Euclidean norm equal to 1. Specifically, we choose

$$\begin{aligned} u_0 = \cos \left( \theta _k\right) \quad \text {and}\quad v_0 = \sin \left( \theta _k\right) , \end{aligned}$$
(4.2)

where

$$\begin{aligned} \theta _k = \frac{k\pi }{20}, \quad k=0,\ldots , 20. \end{aligned}$$

For every initial datum, we verify that the energy profile is not log-concave (hence it is log-linear). Then, we compute the decay rate (i.e., the slope) of the energy \({{\textsf{E}}}_j\) for each \(\lambda _j\). In Fig. 3 we can see the plot of the results. To wit, on the x-axis we have the different initial data, while on the y-axis we have \(\lambda _j\). For small \(\lambda _j\), the decay rate is also small, for every choice of the initial data. Then, as we increase \(\lambda _j\), the rate starts to increase and stabilizes on a certain profile. This limit profile attains its minimum for the choice \((u_0,v_0) = (0,1)\).
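For reference, the whole sweep behind Fig. 3 can be reproduced with the hypothetical helpers sketched above (solve_volterra, modal_energy, decay_rate); a sketch:

```python
import numpy as np

thetas = np.pi * np.arange(21) / 20              # the angles theta_k of (4.2)
lambdas = np.arange(1, 21, dtype=float)          # lambda_j = 1,...,20
rates = np.zeros((len(thetas), len(lambdas)))
for a, th in enumerate(thetas):
    for b, lam in enumerate(lambdas):
        t, y, dy = solve_volterra(lam=lam, mu_eps=mu1_eps,
                                  u0=np.cos(th), v0=np.sin(th), Nh=1600)
        rates[a, b] = decay_rate(t, modal_energy(t, y, dy, mu1_eps, lam))
# rates[a, b] is then displayed as in Fig. 3 (initial data vs. lambda_j)
```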

Fig. 3 Test Case 2, kernel \(\mu _1(s)\). Computed decay rate for variable initial data defined as in (4.2)

Fig. 4 Test Case 1, kernel \(\mu _2(s)\). Energy \({{\textsf{E}}}_j\) for initial data defined as in (4.1)

4.2 The kernel \(\varvec{\mu _2}\)

Here we take \({\varepsilon }=5/18\) and \(T = 12\). Again, we choose \(\texttt{TOL} = 10^{-8}\), with \(N_h = 2000\) for Test Case 1 and \(N_h = 1600\) for Test Case 2. We perform the same set of numerical experiments as before, obtaining similar results. Indeed, in Fig. 4 we see that the energy corresponding to the initial data in (4.1) decays exponentially. Moreover, the results reported in Table 2 give evidence that the decay rate settles around 0.38 for large \(\lambda _j\), while Fig. 5 shows that the situation does not change for different initial data. In fact, for this particular kernel, the experimental results exhibit a less pronounced dependence on the initial data.

Table 2 Test Case 1, kernel \(\mu _2(s)\)
Fig. 5 Test Case 2, kernel \(\mu _2(s)\). Computed decay rate for variable initial data defined as in (4.2)

4.3 The kernel \(\varvec{\mu _3}\)

The last set of numerical experiments features a compactly supported kernel. As already mentioned, in this case there is no need to use Proposition 3.1 and, therefore, to define \({\varepsilon }\). For these experiments we set \(T=6\), which is already enough for the energy to stabilize. The other numerical parameters are exactly the same as before. Concerning Test Case 1, we see that every single energy profile is, again, log-linear. The main difference with respect to the preceding kernels is that now the exponential decay rate does not stabilize for large \(\lambda _j\). Indeed, it reaches its maximum at \(\lambda = 13\), and then starts to decrease for larger values (see Fig. 6 and Table 3).

Fig. 6 Test Case 1, kernel \(\mu _3(s)\). Computed exponential decay rates for initial data defined as in (4.1)

Table 3 Test Case 1, kernel \(\mu _3(s)\)

Such a behavior is also observed in Test Case 2. Here, with very minor differences depending on the initial data, the decay rate decreases as \(\lambda _j\) increases. To confirm this pattern, we actually performed other experiments with the values of \(\lambda _j\) up to 40, without noting significant differences (see Fig. 7).

Fig. 7 Test Case 2, kernel \(\mu _3(s)\). Computed decay rate for variable initial data defined as in (4.2)

5 Completely flat kernels

We conclude this work by exploiting our numerical approach to examine an issue that, although not pertinent to the verification of Conjecture 2.6, is of independent interest in the context of the theory of viscoelasticity. As shown in [2], for a completely flat kernel \(\mu \) (i.e., with \(\mu '=0\) almost everywhere) the contraction semigroup S(t) generated by (1.10) is not exponentially stable. This is the case, for instance, if we take the step kernel

$$\begin{aligned} \mu (s)=\chi _{[0,1/2]}(s), \end{aligned}$$

for which \(\ell =1/2<1\). On the other hand, we know from [32] that S(t) is exponentially stable for any kernel of the form

$$\begin{aligned} \mu (s)=(1-\rho s)\chi _{[0,1/2]}(s), \quad \rho >0. \end{aligned}$$
(5.1)

We aim to demonstrate, by employing our arguments, that the same feature is reflected in the corresponding energy \({{\textsf{E}}}(t)\) of the Volterra equation. To this end, we perform a final numerical simulation by taking \(\mu \) as in (5.1) with

$$\begin{aligned} \rho = 1,\quad \rho = \frac{1}{8},\quad \rho = \frac{1}{32},\quad \rho = 0. \end{aligned}$$

We expect the energy to decay exponentially for \(\rho \ne 0\), but not for \(\rho =0\). We simulate equation (3.5), with initial data (4.1), for the chosen kernels, taking \(\lambda = 2\), \(N_h = 2000\) and \(T=12\). The logarithm of the computed energies is plotted in Fig. 8. Notably, each profile is a decreasing line whose slope tends to zero as \(\rho \rightarrow 0\).
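This experiment can also be reproduced with the hypothetical helpers sketched in Sections 3.2 and 4.1; a sketch (the kernels (5.1) are supported in [0, 1/2], so no truncation is needed):

```python
import numpy as np

for rho in (1.0, 1/8, 1/32, 0.0):
    # kernel (5.1); rho = 0 gives the completely flat step kernel
    mu = lambda s, r=rho: (1.0 - r * s) * ((s >= 0.0) & (s <= 0.5))
    t, y, dy = solve_volterra(lam=2.0, mu_eps=mu, u0=0.1, v0=0.1, T=12.0, Nh=2000)
    E = modal_energy(t, y, dy, mu, lam=2.0)
    print(rho, decay_rate(t, E))    # the fitted rate should shrink as rho -> 0
```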

Fig. 8 Computed energies \({{\textsf{E}}}_j\) for memory kernels defined as in (5.1)

Fig. 9 Kernel \(e^{-s^2}\), mechanical energy

6 Conclusions

For several choices of initial data, we have shown that the (numerical) energy of the solution is never log-concave. Furthermore, its decay rate remains bounded as \(\lambda _j\) increases. Such numerical results provide strong evidence in support of Conjecture 2.6.

Another interesting question concerns the role played by each of the components \({{\textsf{E}}}_0(t)\) and \({{\textsf{E}}}_1(t)\) of the energy, defined in (1.6)–(1.7), in the decay rate. One might argue that the lack of superstable trajectories suggested by our numerical simulations is, in a sense, artificial, and only due to the presence of the memory energy \({{\textsf{E}}}_1(t)\), while the “effective” mechanical energy \({{\textsf{E}}}_0(t)\) might still exhibit a superexponential decay. In fact, we believe that a stronger statement than Conjecture 2.6 holds, namely, that \({{\textsf{E}}}_0(t)\) itself cannot decay faster than an exponential. In this case, the numerical analysis is slightly more complicated, since the mechanical energy is no longer a decreasing function of time, which introduces some technical difficulties in the interpretation of the results.

As an example, we consider the numerical mechanical energy \({{\textsf{E}}}_{0,j}\) (i.e., \({{\textsf{E}}}_j\) without the integral term) for the kernel \(\mu _1(s)\) and initial data \((u_0,v_0)=(0.1,0.1)\). We take \(T = 15\), \(\lambda _j = (8,10,12,14,16,18,20)\) and all the other parameters as in the first experiment, that is,

$$\begin{aligned} {\varepsilon }= \frac{1}{6},\quad \texttt{TOL} = 10^{-8},\quad N_h = 2000. \end{aligned}$$

In Fig. 9 we can see the logarithmic plot of \({{\textsf{E}}}_{0,j}\). Although the behavior of the (logarithm of the) mechanical energy is not as regular as that of \({{\textsf{E}}}_j(t)\), there is a clear stabilization of the profile for every \(\lambda _j\). While a single example cannot be taken as ultimate evidence, this suggests that Conjecture 2.6 holds in the stronger form.