1 Introduction

Let us denote by \(\mathcal {P}\left( \mathbb {C}^{d}\right) \) the set of all polynomials with complex coefficients in the variable \(z=\left( z_{1},\dots ,z_{d}\right) \in \mathbb {C}^{d}\), and by \(\mathcal {P}_{m}\left( \mathbb {C}^{d}\right) \) the subspace of all homogeneous polynomials of degree m. Recall that a polynomial \(P\left( z \right) \) is homogeneous of degree \(\alpha \) if \(P\left( t z \right) =t^{\alpha }P\left( z \right) \) for all \(t>0\) and all \(z \in \mathbb {C}^{d}.\) In particular, the zero polynomial is homogeneous of every degree. Given a polynomial \(P\left( z\right) \), we denote by \(P^{*}\left( z\right) \) the polynomial obtained from \(P\left( z\right) \) by conjugating its coefficients, and by \(P\left( D\right) \) the linear differential operator obtained by replacing the variable \(z_{j}\) with the differential operator \(\frac{\partial }{\partial z_{j}}\).

It is well known that a polynomial \(P\left( z\right) \) of degree k can be written as a sum of homogeneous polynomials \(P_{j}\left( z\right) \) of degree j for \(j=0, \dots ,k,\) so

$$\begin{aligned} P\left( z \right) =P_{k}\left( z \right) +\cdots +P_{0}\left( z \right) , \end{aligned}$$

and we call the homogeneous polynomial \(P_{k}\left( z \right) \) the leading term. Often the alternative notation \(P\left( z \right) =P_{k}\left( z \right) - \cdots - P_{0}\left( z \right) \) will be more convenient, in connection with formulas regarding division of polynomials.

Fischer’s theorem states that if P is a polynomial with leading term \(P_k\), then the following decomposition holds: for every polynomial \(f\in \mathcal {P}\left( \mathbb {C}^{d}\right) \) there exist unique polynomials \(q\in \mathcal {P}\left( \mathbb {C}^{d}\right) \) and \(r\in \mathcal {P}\left( \mathbb {C}^{d}\right) \) such that

$$\begin{aligned} f= P\cdot q+r\text { and }P_k^{*}\left( D\right) r=0. \end{aligned}$$
(1)

To indicate the dependency of q on f and P we will often write \(q = T_P(f)\). H. S. Shapiro studied Fischer decompositions in a wider setting, cf. [17], going beyond the case of polynomials to more general function spaces, in particular, to the space \(E\left( \mathbb {C} ^{d}\right) \) of all entire functions \(f:\mathbb {C}^{d}\rightarrow \mathbb {C} \). It is useful to adopt a notion introduced in [17, p. 522]: suppose that E is a vector space of infinitely differentiable functions \( f:G\rightarrow \mathbb {C}\) (defined on an open subset G of \(\mathbb {C}^{d}\)) that is a module over \(\mathcal {P}\left( \mathbb {C} ^{d}\right) \).
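For illustration, here is a minimal numeric sketch of the decomposition (1) in \(d = 2\) (not from the paper; the dictionary representation, helper names, and the hand-computed \(q\) and \(r\) are ours): take \(P = z_1^2 + z_2^2\), which equals its leading term, so that \(P_k^{*}(D)\) is the Laplacian, and \(f = z_1^2\); then \(q = 1/2\) and \(r = (z_1^2 - z_2^2)/2\).

```python
# Polynomials in z1, z2 represented as dicts {(i, j): coefficient}.
def mul(p, q):
    out = {}
    for (i1, j1), a in p.items():
        for (i2, j2), b in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + a * b
    return out

def add(p, q):
    out = dict(p)
    for key, b in q.items():
        out[key] = out.get(key, 0) + b
    return {k: c for k, c in out.items() if c != 0}

def laplacian(p):
    """Apply (d/dz1)^2 + (d/dz2)^2."""
    out = {}
    for (i, j), c in p.items():
        if i >= 2:
            out[(i - 2, j)] = out.get((i - 2, j), 0) + i * (i - 1) * c
        if j >= 2:
            out[(i, j - 2)] = out.get((i, j - 2), 0) + j * (j - 1) * c
    return {k: c for k, c in out.items() if c != 0}

P = {(2, 0): 1, (0, 2): 1}        # P = z1^2 + z2^2, so P*(D) = Laplacian
f = {(2, 0): 1}                   # f = z1^2
q = {(0, 0): 0.5}                 # hand-computed quotient
r = {(2, 0): 0.5, (0, 2): -0.5}   # hand-computed remainder

assert add(mul(P, q), r) == f     # f = P*q + r
assert laplacian(r) == {}         # remainder is harmonic
```

Uniqueness is not visible in such a spot check, but the two defining conditions of (1) are.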

Definition 1

We say that a polynomial P and a differential operator \(Q\left( D\right) \) form a Fischer pair (P, Q) for the space E, if for each \(f\in E\) there exist unique elements \(q\in E\) and \(r\in E\) such that

$$\begin{aligned} f=P\cdot q+r\text { and }Q\left( D\right) r=0. \end{aligned}$$
(2)

We will speak of weak Fischer pairs when the decomposition \(f = P\cdot q+r\) is not assumed to be unique. But the expression “Fischer decomposition” will be used even in the absence of uniqueness.

Shapiro proved in [17, Theorem 1] that the following version of Fischer’s theorem is true when \(\mathcal {P}\left( \mathbb {C}^{d}\right) \) is replaced by \(E\left( \mathbb {C}^{d}\right) \): for every homogeneous polynomial P and every entire function f there exist unique entire functions q and r such that

$$\begin{aligned} f= P \cdot q+r\text { and }P^{*}\left( D\right) r=0. \end{aligned}$$

That is, P and \(P^*\) form a Fischer pair for the entire functions, whenever P is homogeneous. Shapiro conjectured that his theorem held even for non-homogeneous P and arbitrary entire functions, cf. [17, p. 517 (i)]. Below we obtain some partial results for an arbitrary polynomial P with leading term \(P_k\), showing that \((P, P^*_k)\) is a weak Fischer pair for entire functions of sufficiently low order.

The following related conjecture, for the special case where the polynomial under consideration is \( P_k - 1\), has been studied in the literature:

Conjecture: Let P be a nonconstant homogeneous polynomial and define

$$\begin{aligned} F_P \left( q\right) := P^{*}\left( D\right) \left[ \left( P-1\right) q\right] . \end{aligned}$$

Then \(F_{P}\) is a bijection on the set of all entire functions; equivalently, \(P-1\) and \(P^{*}\left( D\right) \) form a Fischer pair for \(E\left( \mathbb {C}^{d}\right) \).

We mention that the equivalence between the bijectivity of the Fischer operator \(F_{P}\) and the fact that \(P-1\) and \(P^{*}\left( D\right) \) form a Fischer pair is due to Meril and Struppa, cf. [11, Proposition 1].

In [9], D. Khavinson and H.S. Shapiro prove a special case of the preceding conjecture, restricting the class of entire functions under consideration and assuming uniqueness from the outset, cf. [9, Theorem 3] (we mention that the proof uses Fredholm theory).

The homogeneous polynomials P in [9, Theorem 3] satisfy a certain property called “amenability", defined as follows in [9, p. 464]: a homogeneous polynomial \(P_{k} \in \mathbb {C}[z_1, \dots , z_d]\) of degree \(k > 0\) is said to be amenable if for each \(j\in \{1,\dots , d\}\) there is a multi-index \(\alpha \), with \(|\alpha | = k -1\), such that \(D^\alpha P_k\) is a non-zero constant times \(z_j\) (note however that no polynomial of degree \(k = 1\) is amenable when \(d \ge 2\)). Khavinson and Shapiro observe that amenability holds in the important special case \(P=\left| x\right| ^{2}\), so \(P^{*}\left( D\right) =\Delta \). Furthermore, amenability entails the following bounds, cf. [9, Lemma 11], which represent a weaker assumption: if P is an amenable homogeneous polynomial and f is homogeneous of degree m, that is, \(f \in \mathcal {P}_{m}\left( \mathbb {C}^{d}\right) \), then there is a constant \(C > 0\), independent of m, such that

$$\begin{aligned} \Vert P f\Vert _a \ge C m^{1/2} \Vert P\Vert _a \Vert f\Vert _a, \end{aligned}$$
(3)

where \(\Vert \cdot \Vert _a\) is the norm defined by the apolar inner product (see the next section). Note that the term \( \Vert P\Vert _a\) can be absorbed into the constant C.

Satisfaction of the Khavinson–Shapiro bounds (3) appears to be a much more generic property than amenability. We explore this question in the specific instance where both the dimension and the degree are 2, so \(P\left( z_1,z_2 \right) =a z_1^{2}+ b z_1 z_2 + c z_2^{2}\) (with \(a, b, c\) not all 0). While in this case the only amenable polynomials are \( a z_1^{2} + c z_2^{2}\) when \(a, c \ne 0\), and \( b z_1 z_2\) when \(b \ne 0\), we prove that the Khavinson–Shapiro bounds are satisfied precisely when \(4 a c \ne b^2\), cf. Theorem 7 below.
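The amenability condition for \(d = k = 2\) can be checked by brute force over the multi-indices with \(|\alpha| = k-1\); a hedged sketch (the dict representation and the helpers `deriv` and `amenable` are ours, not from [9]):

```python
# Homogeneous polynomials in z1, z2 as dicts {(i, j): coefficient}.
def deriv(p, alpha):
    """Apply D^alpha = (d/dz1)^a1 (d/dz2)^a2 to p."""
    a1, a2 = alpha
    out = {}
    for (i, j), c in p.items():
        if i >= a1 and j >= a2:
            coef = c
            for t in range(a1):
                coef *= (i - t)
            for t in range(a2):
                coef *= (j - t)
            out[(i - a1, j - a2)] = out.get((i - a1, j - a2), 0) + coef
    return {k: v for k, v in out.items() if v != 0}

def amenable(p, k):
    """Brute-force amenability test for a degree-k form in two variables."""
    alphas = [(a, k - 1 - a) for a in range(k)]
    for zj in [(1, 0), (0, 1)]:
        # need some alpha, |alpha| = k - 1, with D^alpha p = (const != 0) * z_j
        if not any(set(deriv(p, al)) == {zj} for al in alphas):
            return False
    return True

assert amenable({(2, 0): 1, (0, 2): 1}, 2)                  # a z1^2 + c z2^2
assert amenable({(1, 1): 1}, 2)                             # b z1 z2
assert not amenable({(2, 0): 1, (1, 1): 1, (0, 2): 1}, 2)   # z1^2 + z1 z2 + z2^2
```

The last assertion matches the discussion below: the symmetric form \(z_1^2 + z_1 z_2 + z_2^2\) (here \(4ac \ne b^2\)) is not amenable, yet Theorem 7 will show it satisfies the bounds.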

Recalling Bombieri’s inequality

$$\begin{aligned} \Vert P f\Vert _a \ge \Vert P\Vert _a \Vert f\Vert _a, \end{aligned}$$

we see that satisfaction of the Khavinson–Shapiro bounds entails a quantitative strengthening of Bombieri’s bounds, as \(m \rightarrow \infty \).

Let \(P_k\) be a nonzero homogeneous polynomial of degree k. By Fischer’s Theorem, for each homogeneous polynomial \(f_{m}\) of degree m there exist unique polynomials \(T_{P_k}\left( f_{m}\right) \) and \(r_{m}\), with \(P_k^{*}\left( D\right) r_{m}=0\) and

$$\begin{aligned} f_{m}=P_k\cdot T_{P_k}\left( f_{m}\right) + r_{m}. \end{aligned}$$
(4)

This decomposition is orthogonal with respect to the apolar inner product, since by (11) below

$$\begin{aligned} \langle P_k \cdot T_{P_k}\left( f_{m}\right) ,r_{m}\rangle _a =\langle T_{P_k}\left( f_{m}\right) ,P_k^{*}\left( D\right) r_{m}\rangle _a =0. \end{aligned}$$

By the Pythagorean Theorem, and under the assumption that the Khavinson–Shapiro bounds (3) hold, we obtain

$$\begin{aligned} \left\| f_{m}\right\| _{a}^{2}=\left\| P_k\cdot T_{P_k}\left( f_{m}\right) \right\| _{a}^{2}+\left\| r_{m}\right\| _{a}^{2} \ge \left\| P_k \cdot T_{P_k }\left( f_{m}\right) \right\| _{a}^{2} \ge C^2 m \Vert T_{P_k }\left( f_{m}\right) \Vert _a^2. \end{aligned}$$

This shall be the type of assumption used in the theorem stated next. The main differences from the Khavinson–Shapiro result are that arbitrary polynomials \(P = P_{k}-P_{k - 1}- \cdots - P_{0}\) are used for division, instead of polynomials of the form \(P = P_{k}-1\), and that instead of bounds of the form \(O(m^{1/2})\) we use \(O(m^{\tau /2})\), where \(\tau \ge 0\).

Next we state the main result of the paper.

Theorem 2

Let \(P_{k}\) be a homogeneous polynomial of degree \(k > 0\) on \(\mathbb {C}^d\), and let us write \(T:=T_{P_{k}}\), where \(T_{P_{k}}\) is defined by (4). Assume that there exist \(C=C(P_{k})>0\) (independent of m) and \(\tau \in \{0, \dots , k\}\), such that for every \(m>0\) and every homogeneous polynomial \(f_{m}\) of degree m,  the following inequality holds:

$$\begin{aligned} \left\| Tf_{m}\right\| _{a}\le \frac{C}{m^{\tau /2}}\left\| f_{m}\right\| _{a} . \end{aligned}$$
(5)

If for \(0\le j<k\) the polynomials \(P_{j}\left( z\right) \) are homogeneous of degree j, and for some \(\beta <k\) and every j with \(\beta<j<k\) we have \(P_{j}=0\), then for every entire function \(f: \mathbb {C}^d\rightarrow \mathbb {C}\) of order \(\rho \), where \(\rho \) satisfies the inequality

$$\begin{aligned} \rho (k-\tau ) < 2(k-\beta ), \end{aligned}$$

there exist entire functions q and r of order \(\le \rho \) with

$$\begin{aligned} f=\left( P_{k}-P_{\beta }- \cdots -P_{0}\right) q+r\text { and }P_{k}^{*}\left( D\right) r=0. \end{aligned}$$

Throughout this paper, given the homogeneous polynomial \(P_k\) we will use the abbreviation \(T:=T_{P_{k}}\).

Note that no statement is made regarding the possible uniqueness of the entire functions q and r. However, we shall see that when \(k=1\), the conjecture is true and furthermore we do have uniqueness, even though the Khavinson–Shapiro bounds do not hold, cf. Theorem 8 below. When the dimension is \(d = 1\), it follows directly from polynomial interpolation that the result also holds, cf. Theorem 9: for every entire function \(f:\mathbb {C}\rightarrow \mathbb {C}\), there exist unique entire functions q and r such that

$$\begin{aligned} f=\left( P_{k}-P_{\beta }- \cdots -P_{0}\right) q+r\text { and }P_{k}^{*}\left( D\right) r=0. \end{aligned}$$

Thus, in the proof of Theorem 2 above, it can always be assumed that \(k, d \ge 2\).

Finally let us mention that conditions of the type (3) had already been used in the work of P. Ebenfelt and H.S. Shapiro on a generalized Cauchy–Kovalevskaya theorem in [4], and on the mixed Cauchy problem for differential equations in [5], see also [6, 7]. Let P be a polynomial of degree k,  written as a sum of homogeneous polynomials

$$\begin{aligned} P=P_{k}+P_{k-1}+\cdots +P_{0} \end{aligned}$$

with leading term \(P_{k}\), and let the floor function \(\lfloor x\rfloor \) be the largest integer \(\le x\). Theorem 3.1.1 in [5] (see also the remark on p. 259 in [5]) implies that for each entire function f, there exist entire functions q and r such that

$$\begin{aligned} f=P_{k}q+r\text { and }P^{*}\left( D\right) r=0, \end{aligned}$$
(6)

provided, first, that there are constants \(C>0\) and \(\tau \ge 0\) such that

$$\begin{aligned} \left\| P_{k}f_{m}\right\| _{a}\ge Cm^{\tau /2}\left\| f_{m}\right\| _{a}, \end{aligned}$$
(7)

and second, that \(P_{j}=0\) for all j with

$$\begin{aligned} \left\lfloor \frac{k+\tau }{2}\right\rfloor<j<k. \end{aligned}$$

Note that (6) can be seen as the dual problem to the one formulated in (1).

2 Basic properties of the apolar inner product

The variant of the Khavinson–Shapiro result obtained here also employs the apolar inner product (introduced in the nineteenth century within the theory of invariants), also known as Fischer’s inner product and, with different normalizations, as Bombieri’s inner product (more details can be found in [14]). For easy reference we include some known information about it.

Denote the natural numbers by \(\mathbb {N}_0\) (emphasizing the fact that they include zero) and recall the standard notation for multi-indices: given \(\alpha =\left( \alpha _{1},\dots ,\alpha _{d}\right) \in \mathbb {N}_{0}^{d} \), we write \(z^{\alpha }=z_{1}^{\alpha _{1}}\cdots z_{d}^{\alpha _{d}},\) \(\alpha !=\alpha _{1} ! \cdots \alpha _{d}!\), and \(\left| \alpha \right| =\alpha _{1}+\cdots +\alpha _{d}\).

Let P and Q be polynomials of degree N and M, respectively given by

$$\begin{aligned} P\left( z \right) =\sum _{\alpha \in \mathbb {N}_{0}^{d},\left| \alpha \right| \le N}c_{\alpha }z ^{\alpha }\text { and }Q\left( z\right) =\sum _{\alpha \in \mathbb {N}_{0}^{d},\left| \alpha \right| \le M}d_{\alpha }z^{\alpha }. \end{aligned}$$

The apolar inner product \(\langle \cdot ,\cdot \rangle _{a}\) on \(\mathcal {P}\left( \mathbb {C}^{d}\right) \) is defined by

$$\begin{aligned} \left\langle P,Q\right\rangle _{a}:=\left[ Q^*\left( D\right) P\right] \;(0)=\sum _{\alpha \in \mathbb {N}_{0}^{d}}\alpha !c_{\alpha }\overline{d_{\alpha }}, \end{aligned}$$
(8)

and the associated apolar norm, by \( \left\| f\right\| _{a}=\sqrt{\left\langle f,f\right\rangle _{a}}. \) Note that for the constant polynomial 1 one has

$$\begin{aligned} \left\langle P,1\right\rangle _{a}=P\left( 0\right) . \end{aligned}$$
(9)

A fundamental property of the apolar inner product is that the adjoint of the multiplication operator \(M_{Q}\left( g\right) =Qg\) is the differential operator associated with the polynomial \(Q^{*}.\)

For the reader’s convenience we include the short proof of the following known result.

Proposition 3

The following formulae hold for polynomials P, Q and f, g:

$$\begin{aligned} \left\langle P,Q\right\rangle _{a} = [Q^{*}\left( D\right) P ] \left( 0\right) = \left\langle Q^{*}\left( D\right) P,1\right\rangle _{a} \end{aligned}$$
(10)

and

$$\begin{aligned} \left\langle Q^{*}\left( D\right) f,g\right\rangle _{a}=\left\langle f,Q\cdot g\right\rangle _{a} . \end{aligned}$$
(11)

Proof

Regarding (10), the first equality is the definition and the second follows from (9). From (10) we conclude that

$$\begin{aligned} \left\langle Q^{*}\left( D\right) f,g\right\rangle _{a}&=\left\langle g^{*}\left( D\right) Q^{*}\left( D\right) f,1\right\rangle _{a}, \text{ and } \\ \left\langle f,Q\cdot g\right\rangle _{a}&=\left\langle Q^{*}\left( D\right) g^{*}\left( D\right) f,1\right\rangle _{a}. \end{aligned}$$

Since \(g^{*}\left( D\right) Q^{*}\left( D\right) =Q^{*}\left( D\right) g^{*}\left( D\right) \) equation (11) follows. \(\square \)
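The adjoint identity (11) can be tested numerically on polynomials with complex coefficients; a minimal sketch (the dict representation and helper names are ours):

```python
from math import factorial

# Polynomials in z1, z2 as dicts {(i, j): complex coefficient}.
def apolar(p, q):
    """<P, Q>_a = sum over alpha of alpha! * c_alpha * conj(d_alpha), cf. (8)."""
    return sum(factorial(i) * factorial(j) * c * complex(q[(i, j)]).conjugate()
               for (i, j), c in p.items() if (i, j) in q)

def mul(p, q):
    out = {}
    for (i1, j1), a in p.items():
        for (i2, j2), b in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + a * b
    return out

def deriv(p, beta):
    """Apply D^beta to p."""
    b1, b2 = beta
    out = {}
    for (i, j), c in p.items():
        if i >= b1 and j >= b2:
            coef = c
            for t in range(b1):
                coef *= (i - t)
            for t in range(b2):
                coef *= (j - t)
            out[(i - b1, j - b2)] = out.get((i - b1, j - b2), 0) + coef
    return out

def qstar_D(Q, f):
    """Apply Q^*(D): conjugate each coefficient of Q and replace z^beta by D^beta."""
    out = {}
    for beta, c in Q.items():
        for key, v in deriv(f, beta).items():
            out[key] = out.get(key, 0) + complex(c).conjugate() * v
    return out

Q = {(1, 0): 1, (0, 2): 2j}   # Q = z1 + 2i z2^2
f = {(2, 1): 1, (0, 3): 3}    # f = z1^2 z2 + 3 z2^3
g = {(1, 1): 1}               # g = z1 z2
assert abs(apolar(qstar_D(Q, f), g) - apolar(f, mul(Q, g))) < 1e-9   # identity (11)
```

Both sides evaluate to 2 in this example, consistent with (11).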

It is shown in [14, Theorem 38] that if \((P_k,Q)\) is a Fischer pair, where \(P_k\) is the principal part of P and Q is homogeneous, then (P, Q) is a Fischer pair. However, from the fact that (P, Q) is a Fischer pair we cannot conclude that \((P_k,Q)\) is a Fischer pair, as the following example from [11] shows (cf. also [14, Example 34]): take \(d = 2\), \(P(z_1, z_2) = z_1 - z_2^2\) and \(Q(z_1, z_2) = z_1\). Then it can be checked that P and Q form a Fischer pair, while \(P_k(z_1,z_2) = - z_2^2\) and Q do not, since they are homogeneous of different degrees, and by [14, Theorem 36] such a pair cannot be a Fischer pair.

3 Apolar norms of products of linear factors

In addition to \(\partial /\partial z_j\), we also use \(D_j\) and \(\partial _j\) to denote the j-th partial derivative. The results in this section will be used later, when comparing amenability to the Khavinson–Shapiro bounds. For \(a,b\in \mathbb {C}^{d}\) we denote by \(\left\langle a,b\right\rangle \) the standard inner product

$$\begin{aligned} \left\langle a,b\right\rangle =a_{1}\overline{b_{1}}+\cdots +a_{d} \overline{b_{d}} = \overline{\langle b, a\rangle }. \end{aligned}$$

Given \(b, z \in \mathbb {C}^{d}\), consider the linear polynomial

$$\begin{aligned} Q_b\left( z\right) :=\left\langle z, \overline{b}\right\rangle =b_{1}z_{1}+\cdots +b_{d}z_{d}, \end{aligned}$$

and write

$$\begin{aligned} L_{\overline{b}} = Q_b^{*}\left( D\right) =\overline{b_{1}}\partial _1+\cdots +\overline{b_{d}}\partial _{d}. \end{aligned}$$

It is easy to see that for any pair of differentiable functions \(f\left( z\right) ,g\left( z \right) \),

$$\begin{aligned} L_{\overline{b}}\left( fg\right) = {\displaystyle \sum _{j=1}^{d}} \overline{b_{j}}\partial _{j}\left( f\cdot g\right) = (L_{\overline{b}}f )\cdot g+f\cdot L_{\overline{b}}g. \end{aligned}$$

In the next result we assume that the nonzero vector c is orthogonal to the vectors \(a_{1},\dots ,a_{M}\) (which may be linearly dependent, or even repeated):

Proposition 4

Assume that \(a_{1},\dots ,a_{M}\in \mathbb {C}^{d}\), where \(a_{k}=\left( a_{k,1},\dots ,a_{k,d}\right) \), and set \(\sigma _{k}\left( z \right) =\left\langle z, \overline{a_{k}} \right\rangle \) for \(k=1,\dots ,M.\) Suppose there exists a vector \(c\in \mathbb {C}^{d}\setminus \{0\}\) such that for \(k=1,\dots ,M\),

$$\begin{aligned} {\displaystyle \sum _{j=1}^{d}} \overline{a_{k,j}}c_{j}=0. \end{aligned}$$

Let g be a univariate polynomial. Then

$$\begin{aligned} \left\| \sigma _{1} (z) \cdots \sigma _{M} (z) \cdot g\left( \left\langle z, \overline{c} \right\rangle \right) \right\| _{a}^{2}=\left\| \sigma _{1} (z) \cdots \sigma _{M} (z) \right\| _{a}^{2}\left\| g\left( \left\langle z, \overline{c} \right\rangle \right) \right\| _{a}^{2}. \end{aligned}$$

Taking \(g\left( t\right) =t^{m}\) we see that

$$\begin{aligned} \left\| \sigma _{1} (z) \cdots \sigma _{M} (z) \cdot \left\langle z, \overline{c} \right\rangle ^{m}\right\| _{a}^{2}=\left\| \sigma _{1} (z) \cdots \sigma _{M} (z) \right\| _{a}^{2}\left\| \left\langle z, \overline{c} \right\rangle ^{m}\right\| _{a}^{2}, \end{aligned}$$

so the product \(\sigma _{1} (z) \cdots \sigma _{M} (z) \) does not satisfy the Khavinson–Shapiro bounds.

Proof

If \(a =\left( a_{1},\dots ,a_{d}\right) \) and c are orthogonal, so \(\langle a, c \rangle = 0\), then

$$\begin{aligned} L_{\overline{a}}\left( g\left( \left\langle z, \overline{c} \right\rangle \right) \right) = {\displaystyle \sum _{j=1}^{d}} \overline{a_{j}} \partial _{j}\left( g\left( \left\langle z, \overline{c} \right\rangle \right) \right) =g^{\prime }\left( \left\langle z, \overline{c} \right\rangle \right) {\displaystyle \sum _{j=1}^{d}} \overline{a_{j}}c_{j}=0. \end{aligned}$$

It follows that

$$\begin{aligned} L_{\overline{a}}\left( f\cdot g\left( \left\langle z, \overline{c} \right\rangle \right) \right) =L_{\overline{a}}f\cdot g\left( \left\langle z, \overline{c} \right\rangle \right) +f\cdot L_{\overline{a}}\left( g\left( \left\langle z, \overline{c} \right\rangle \right) \right) =L_{\overline{a}}f\cdot g\left( \left\langle z, \overline{c} \right\rangle \right) . \end{aligned}$$

Since \(L_{\overline{a_{k}}}\left( g\left( \left\langle z, \overline{c} \right\rangle \right) \right) = 0\) for \(k=1,\dots ,M\), we have

$$\begin{aligned} L_{\overline{a_{1}}}\cdots L_{\overline{a_{M}}}\left( \sigma _{1} (z) \cdots \sigma _{M} (z) \cdot g\left( \left\langle z, \overline{c} \right\rangle \right) \right) =L_{\overline{a_{1}}}\cdots L_{\overline{a_{M}}}\left( \sigma _{1} (z) \cdots \sigma _{M} (z) \right) \cdot g\left( \left\langle z, \overline{c} \right\rangle \right) . \end{aligned}$$

Furthermore, by (11) we know that

$$\begin{aligned} \left\| \sigma _{1} (z) \cdots \sigma _{M} (z) \cdot g\left( \left\langle z, \overline{c} \right\rangle \right) \right\| _{a}^{2}&=\left\langle L_{\overline{a_{1}}}\cdots L_{\overline{a_{M}}}\left( \sigma _{1} (z) \cdots \sigma _{M} (z) \cdot g\left( \left\langle z, \overline{c} \right\rangle \right) \right) ,g\left( \left\langle z, \overline{c} \right\rangle \right) \right\rangle _{a}\\&=L_{\overline{a_{1}}}\cdots L_{\overline{a_{M}}}\left( \sigma _{1} (z) \cdots \sigma _{M} (z) \right) \left\| g\left( \left\langle z, \overline{c} \right\rangle \right) \right\| _{a}^{2}\\&=\left\| \sigma _{1} (z) \cdots \sigma _{M} (z) \right\| _{a}^{2}\left\| g\left( \left\langle z, \overline{c} \right\rangle \right) \right\| _{a}^{2}. \end{aligned}$$

\(\square \)

4 Observations on amenability and the bounds of Khavinson and Shapiro

Let \(P\in \mathbb {C}[z_1, \dots , z_d]\). We say that P is independent of \(z_j\) if \(\partial P/\partial z_j = 0\). If this holds for no j, we say that P depends on all the variables. Obviously, if \(P_k\) is amenable then it depends on all the variables. The converse is not true when \(d >1\): for \(k= d = 2\), just consider \(P_2 (z_1, z_2) = z_1^2 + z_1 z_2 + z_2^2\); the only possibilities for \(\alpha \) are (1, 0) and (0, 1), and clearly neither of them satisfies the amenability condition. This example also shows that a symmetric polynomial (i.e., one invariant under permutations of the variables) may fail to be amenable.

Next we explore some cases of homogeneous polynomials that satisfy, or fail to satisfy, the Khavinson–Shapiro bounds (3). Formula (12) below, where the sum is taken over all multi-indices with non-negative entries, appears in [17, p. 523] (an earlier and more general statement, applying to certain entire functions that include the polynomials, can be found in [13, Theorem 3]; in [18], formula (12) is called the Reznick identity). Note that when \(|\alpha | > k\) the corresponding terms are zero, while summing only over the multi-indices with \(|\alpha | = k\) yields Bombieri’s inequality. Refinements of Bombieri’s inequality can be obtained by estimating other terms in the sum

$$\begin{aligned} \Vert P_k f_m \Vert _a^2 = \sum _\alpha \frac{\Vert (\partial ^\alpha P_k^{*})(D) f_m \Vert _a^2}{\alpha !}, \end{aligned}$$
(12)

as we do next in some special cases. Suppose the analogous statement to the Khavinson–Shapiro bounds holds for some positive integer \(\tau \) not necessarily equal to one, i.e., for every homogeneous polynomial \(f_m\) of degree m, there is a constant \(C > 0\), independent of m, such that

$$\begin{aligned} \Vert P f_m\Vert _a \ge C m^{\tau /2} \Vert f_m\Vert _a. \end{aligned}$$
(13)

Then we call \(\tau \) a Khavinson–Shapiro exponent, and the largest such \(\tau \) the optimal Khavinson–Shapiro exponent.
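As a sanity check, the Newman–Shapiro identity (12) can be verified numerically in a small case; a hedged sketch (the sample P and f, the dict representation, and the helper names are ours):

```python
from math import factorial

def deriv(p, alpha):
    """Apply D^alpha to a polynomial dict {(i, j): coeff}."""
    a1, a2 = alpha
    out = {}
    for (i, j), c in p.items():
        if i >= a1 and j >= a2:
            coef = c
            for t in range(a1):
                coef *= (i - t)
            for t in range(a2):
                coef *= (j - t)
            out[(i - a1, j - a2)] = out.get((i - a1, j - a2), 0) + coef
    return {k: v for k, v in out.items() if v != 0}

def mul(p, q):
    out = {}
    for (i1, j1), a in p.items():
        for (i2, j2), b in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + a * b
    return out

def apolar_norm2(p):
    return sum(factorial(i) * factorial(j) * abs(c) ** 2 for (i, j), c in p.items())

def poly_D(r, f):
    """Apply the operator R(D): replace z^beta by D^beta in R."""
    out = {}
    for beta, c in r.items():
        for key, v in deriv(f, beta).items():
            out[key] = out.get(key, 0) + c * v
    return {k: v for k, v in out.items() if v != 0}

P = {(2, 0): 1, (1, 1): 1}   # P_2 = z1^2 + z1 z2; real coefficients, so P* = P
f = {(0, 2): 1}              # f_2 = z2^2
lhs = apolar_norm2(mul(P, f))
rhs = sum(apolar_norm2(poly_D(deriv(P, (a1, a2)), f)) / (factorial(a1) * factorial(a2))
          for a1 in range(3) for a2 in range(3 - a1))
assert abs(lhs - rhs) < 1e-9   # identity (12); both sides equal 10 here
```

The nonzero contributions come from \(\alpha = (1,0)\), \((2,0)\), and \((1,1)\), illustrating how terms beyond \(|\alpha| = k\) and \(\alpha = 0\) can refine Bombieri's inequality.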

Theorem 5

Let \(P_{k}: \mathbb {C}^d \rightarrow \mathbb {C}\) be a homogeneous polynomial of degree \(k > 0\). Then:

(1) When the dimension \(d = 1\), the optimal Khavinson–Shapiro exponent is always equal to k.

(2) When the degree \(k = 1\) and the dimension \(d > 1\), no homogeneous polynomial \(P_1\) satisfies the Khavinson–Shapiro bounds. For \(k > 1\), the optimal Khavinson–Shapiro exponent is bounded above by \(k-1\).

(3) When all the exponents of the variables are even (and in particular, so is the total degree k), the homogeneous polynomial \(P_k\) is amenable, and hence it satisfies the Khavinson–Shapiro bounds.

Proof

When \(d = 1\), writing \(P_k(z) = a z^k\) and \(f_m(z) = b z^m\) with \(a, b \ne 0\), we have

$$\begin{aligned} \Vert P_k f_m \Vert _a^2 = | a |^2 \frac{(k + m) !}{m!} \Vert f_m\Vert _a^2. \end{aligned}$$
(14)
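The computation (14) is easy to confirm numerically; a hedged sketch for sample values of \(a, b, k, m\) (our own choices), using \(\Vert c z^n \Vert_a^2 = |c|^2 n!\) in one variable:

```python
from math import factorial

# Sample data (chosen for illustration): P_k = a z^k, f_m = b z^m.
a, b, k, m = 2 + 1j, 3 - 2j, 3, 7
lhs = abs(a * b) ** 2 * factorial(k + m)   # ||P_k f_m||_a^2 = |ab|^2 (k+m)!
rhs = abs(a) ** 2 * (factorial(k + m) / factorial(m)) * (abs(b) ** 2 * factorial(m))
assert abs(lhs - rhs) < 1e-6 * abs(lhs)    # this is (14)
# (k+m)!/m! = (m+1)...(m+k) >= m^k, so the exponent tau = k is attained
assert factorial(k + m) // factorial(m) >= m ** k
```

Since \((k+m)!/m! = (m+1)\cdots(m+k)\) grows exactly like \(m^{k}\), this matches assertion (1).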

Since \((k+m)!/m!=(m+1)\cdots (m+k)\) grows exactly like \(m^{k}\) as \(m \rightarrow \infty \), assertion (1) follows. For (2), suppose that

$$\begin{aligned} P_1\left( z\right) = \sum _{1 \le i \le d}c_{i} z_{i}, \end{aligned}$$

where some but not all the coefficients \(c_i\) might be 0, and let \(m > 1\). Since \(P_1^{*}(D)\) maps the homogeneous polynomials of degree m into those of degree \(m - 1\), and the latter space has a smaller dimension (because \(d > 1\)), it follows that there is a homogeneous polynomial \(f_m \ne 0\) of degree m such that \(P_1^{*}(D) f_m = 0\). By the Newman–Shapiro identity (12), a.k.a. the Reznick identity, with \(|\alpha | = 1\),

$$\begin{aligned} \Vert P_1 f_m \Vert _a^2 = \Vert P_1 \Vert _a^2 \Vert f_m\Vert _a^2 + \Vert P_1^{*}(D) f_m \Vert _a^2 = \Vert f_m\Vert _a^2 \Vert P_1 \Vert _a^2. \end{aligned}$$

For \(k > 1\), the same argument shows that there is a homogeneous polynomial \(f_m \ne 0\) of degree m such that \(P_k^{*}(D) f_m = 0\), so the term in (12) corresponding to the multi-index \(\alpha = 0\) vanishes, and hence the optimal Khavinson–Shapiro exponent is bounded above by \(k-1\).

Next we check the assertion in (3). For each \(j\in \{1,\dots , d\}\) choose a monomial \(c_\alpha z^\alpha \) of \(P_k\) with maximal \(\alpha _j\), and let \(\alpha ^\prime \) be obtained from \(\alpha \) by replacing \(\alpha _j\) with \(\alpha _j - 1\), so \(|\alpha ^\prime | = k -1\). Then for some constant \(c \ne 0\) we have \(D^{\alpha ^\prime } c_\alpha z^\alpha = c z_j\). If \(c_\beta z^\beta \) is another monomial with \(\beta _j < \alpha _j\), then \(\beta _j \le \alpha _j -2\) (both exponents being even), so \(D^{\alpha ^\prime } c_\beta z^\beta = 0\); while if \(\beta _j = \alpha _j\) then for some \(i \ne j\), \(\beta _i < \alpha _i\), so again \(D^{\alpha ^\prime } c_\beta z^\beta = 0\). It follows that \(D^{\alpha ^\prime } P_k\) is a non-zero constant times \(z_j\). \(\square \)
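Part (3) can be checked by brute force on a sample form with all exponents even; a hedged sketch (the sample P and the helper are our own choices):

```python
def deriv(p, alpha):
    """Apply D^alpha to a polynomial dict {(i, j): coeff} in z1, z2."""
    a1, a2 = alpha
    out = {}
    for (i, j), c in p.items():
        if i >= a1 and j >= a2:
            coef = c
            for t in range(a1):
                coef *= (i - t)
            for t in range(a2):
                coef *= (j - t)
            out[(i - a1, j - a2)] = out.get((i - a1, j - a2), 0) + coef
    return {k: v for k, v in out.items() if v != 0}

P = {(4, 0): 1, (2, 2): 1, (0, 4): 1}   # z1^4 + z1^2 z2^2 + z2^4: all exponents even
k = 4
alphas = [(a, k - 1 - a) for a in range(k)]   # all multi-indices with |alpha| = k - 1
ok = all(any(set(deriv(P, al)) == {zj} for al in alphas)
         for zj in [(1, 0), (0, 1)])
assert ok   # P is amenable, as part (3) predicts
```

Here \(D^{(3,0)}P = 24 z_1\) and \(D^{(0,3)}P = 24 z_2\), exactly as in the proof.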

Note that when the remainder \(r = 0\), we have \(f_m = P_k Tf_m\), so by Beauzamy’s inequality (Lemma 13 below) we always have \(\tau \le k\) in (5).

Example 6

By Lemma 13 below, for every \(f_m\) we have

$$\begin{aligned} \Vert P_k f_m \Vert _a^2 \le C m^{k} \Vert f_m\Vert _a^2, \end{aligned}$$

so \(\tau \le k\), and by part (2) of the preceding theorem, if \(d > 1\) then necessarily \(\tau \le k - 1\). The condition

$$\begin{aligned} \left\| Tf_{m}\right\| _{a}\le \frac{C}{m^{\frac{\tau }{2}}}\left\| f_{m}\right\| _{a} \end{aligned}$$
(15)

used in Theorem 2 can be much weaker when applied to specific functions. In the extreme case \( Tf_{m} = 0\), obviously every positive \(\tau \) will work. For a less extreme example, take \(m, \ell \gg k\), and let \(P_{k} \in \mathbb {C}[z_1, z_2]\) be \(z_1^k\). Choosing \(f_m (z_1, z_2):= z_1^k z_2^{m-k} + m^\ell z_2^{m}\), it is clear that we can take \(\tau = \ell \). But as noted above, when the remainder \(r = 0\) we have \(f_m = P_k Tf_m\), so since \(\tau \) is required to work for all homogeneous polynomials, \(\tau \le k\) also in (15), and when \(d > 1\), \(\tau \le k - 1\).
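The norms in this example can be checked directly; a hedged sketch (the values of \(k, \ell, m\) are our own choices): the Fischer remainder is \(r = m^{\ell} z_2^{m}\), which is killed by \(\partial_1^{k}\), so \(Tf_m = z_2^{m-k}\) and the apolar norms reduce to factorials.

```python
from math import factorial

# P_k = z1^k, f_m = z1^k z2^(m-k) + m^l z2^m; T f_m = z2^(m-k).
k, l, m = 2, 5, 40
norm2_T = factorial(m - k)                                       # ||T f_m||_a^2
norm2_f = factorial(k) * factorial(m - k) + m ** (2 * l) * factorial(m)
# condition (15) with tau = l: ||T f_m||_a^2 <= m^(-l) ||f_m||_a^2
assert norm2_T <= m ** (-l) * norm2_f
```

Indeed \(m^{-2\ell}\Vert f_m\Vert_a^2 \ge m! \ge (m-k)!\), so even \(\tau = 2\ell\) works for this particular \(f_m\).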

Next we show that if \(k = d = 2\), so \(P\left( z_1,z_2 \right) =a z_1^{2}+ b z_1 z_2 + c z_2^{2}\), then P will satisfy the Khavinson–Shapiro bounds under the condition \(4 a c \ne b^2\), which is distinctly more general than amenability. It is easy to check that for \(k = d = 2\), the only amenable polynomials are \(Q\left( z_1,z_2 \right) = a z_1^{2} + c z_2^{2}\) when \(a, c \ne 0\) and \(R\left( z_1,z_2 \right) = b z_1 z_2\) when \(b \ne 0\).

Theorem 7

Let \(P\left( z_1,z_2 \right) =a z_1^{2}+ b z_1 z_2 + c z_2^{2}\) be a polynomial with complex coefficients abc, not all of them 0. Then the following are equivalent:

(1) P does not satisfy the Khavinson–Shapiro bounds.

(2) \(4 a c = b^2\).

(3) \(P\left( z_1, z_2\right) = \left( r z_1 + s z_2\right) ^{2}\), where the only condition on the complex coefficients r and s is that they cannot simultaneously be 0.

Proof

To obtain (1) \(\implies \) (2), we shall prove the equivalent contrapositive: if \(4 a c \ne b^2\), then P satisfies the Khavinson–Shapiro bounds.

Given \(P\left( z_1,z_2 \right) =a z_1^{2}+ b z_1 z_2 + c z_2^{2}\), we have

$$\begin{aligned} \partial _1 P^{*}\left( z_1,z_2 \right) =2\overline{a }z_1+\overline{b}z_2\text { and }\partial _2 P^{*}\left( z_1,z_2 \right) =\overline{b}z_1+2\overline{c}z_2. \end{aligned}$$

Let

$$\begin{aligned} f_m (z_1,z_2) = \sum _{0 \le i \le m} c_{i} z_1^i z_2^{m - i}. \end{aligned}$$

A computation (alternatively, use [17, Lemma 3]) shows that

$$\begin{aligned} m \Vert f_m\Vert _a^2 = \Vert \partial _1 f_m\Vert _a^2 +\Vert \partial _2 f_m\Vert _a^2. \end{aligned}$$
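This Euler-type identity for the apolar norm is easy to confirm numerically; a hedged sketch with a random homogeneous polynomial (the dict representation and helper names are ours):

```python
from math import factorial
import random

def apolar_norm2(p):
    return sum(factorial(i) * factorial(j) * abs(c) ** 2 for (i, j), c in p.items())

def d1(p):  # derivative in z1
    return {(i - 1, j): i * c for (i, j), c in p.items() if i >= 1}

def d2(p):  # derivative in z2
    return {(i, j - 1): j * c for (i, j), c in p.items() if j >= 1}

random.seed(1)
m = 6
f = {(i, m - i): complex(random.uniform(-1, 1), random.uniform(-1, 1))
     for i in range(m + 1)}
assert abs(m * apolar_norm2(f) - apolar_norm2(d1(f)) - apolar_norm2(d2(f))) < 1e-8
```

The identity holds monomial by monomial: \(i \cdot i!(m-i)! + (m-i)\cdot i!(m-i)! = m \cdot i!(m-i)!\).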

Let \(0 \le t \le 1\) satisfy

$$\begin{aligned} t m \Vert f_m\Vert _a^2 = \Vert \partial _1 f_m\Vert _a^2, \text{ so } (1 - t) m \Vert f_m\Vert _a^2 = \Vert \partial _2 f_m\Vert _a^2. \end{aligned}$$
(16)

It follows from (12) that

$$\begin{aligned} \left\| Pf_{m}\right\| _{a}^{2}\ge A_{m},\text { where }A_{m}:= {\displaystyle \sum _{j=1}^{2}} \left\| \left( \partial _j P^{*}\right) \left( D\right) f_{m}\right\| _{a}^{2}. \end{aligned}$$

Now

$$\begin{aligned} \left\| \left( 2\overline{a}\partial _1 +\overline{b}\partial _2\right) f_{m}\right\| _{a}^{2}&=4\left| a\right| ^{2}\left\| \partial _1 f_{m}\right\| _{a}^{2}+\left| b\right| ^{2}\left\| \partial _2 f_{m}\right\| _{a}^{2}\\&\quad +2\overline{a}b\left\langle \partial _1 f_{m} ,\partial _2 f_{m}\right\rangle _{a}+2a\overline{b }\left\langle \partial _2 f_{m},\partial _1 f_{m}\right\rangle _{a}, \end{aligned}$$

and the sum of the last two terms is

$$\begin{aligned} 4{\text {Re}}\left( \overline{a}b\left\langle \partial _1 f_{m},\partial _2 f_{m}\right\rangle _{a}\right) . \end{aligned}$$

The analogous formula holds for \(\left\| \left( \overline{b}\partial _1 +2\overline{c}\partial _2 \right) f_{m}\right\| _{a}^{2}.\) It follows that

$$\begin{aligned} A_{m}&=\left\| \left( 2\overline{a}\partial _1 +\overline{b}\partial _2 \right) f_{m}\right\| _{a}^{2}+\left\| \left( \overline{b}\partial _1 +2\overline{c}\partial _2 \right) f_{m}\right\| _{a}^{2}\\&=\left( 4\left| a\right| ^{2}+\left| b\right| ^{2}\right) \left\| \partial _1 f_{m}\right\| _{a} ^{2}+\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) \left\| \partial _2 f_{m}\right\| _{a} ^{2}\\&\quad +4{\text {Re}}\left( \left( \overline{a}b+\overline{b }c\right) \left\langle \partial _1 f_{m}, \partial _2 f_{m}\right\rangle _{a}\right) . \end{aligned}$$

Using \({\text {Re}}z\ge -\left| z\right| \) and the Cauchy-Schwarz inequality, we conclude that

$$\begin{aligned} A_{m}&\ge \left( 4\left| a\right| ^{2}+\left| b \right| ^{2}\right) \left\| \partial _1 f_{m} \right\| _{a}^{2} +\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) \left\| \partial _2 f_{m}\right\| _{a}^{2}\\&\quad -4\left| \overline{a}b+\overline{b }c\right| \left\| \partial _1 f_{m}\right\| _{a}\left\| \partial _2 f_{m}\right\| _a. \end{aligned}$$

It now follows from (16) that

$$\begin{aligned} A_{m}&\ge \left( 4\left| a\right| ^{2} +\left| b \right| ^{2}\right) mt\left\| f_{m}\right\| _{a}^{2} +\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) m\left( 1-t\right) \left\| f_{m}\right\| _{a}^{2}\\&\quad -4\left| \overline{a}b+\overline{b}c\right| m\sqrt{t}\sqrt{1-t}\left\| f_{m}\right\| _{a}^{2}. \end{aligned}$$

Thus \(A_{m}\ge m\left\| f_{m}\right\| _{a}^{2} f\left( t\right) \), where

$$\begin{aligned} f\left( t\right) =\left( 4\left| a\right| ^{2}+\left| b\right| ^{2}\right) t+\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) \left( 1-t\right) -4\left| \overline{a}b+\overline{b}c\right| \sqrt{t}\sqrt{1-t}. \end{aligned}$$

Letting \(C:= \min \{f(t): t\in [0,1]\}\), since \(A_{m}\ge m\left\| f_{m}\right\| _{a}^{2}f\left( t\right) \), for the Khavinson–Shapiro bounds to be satisfied it is enough that f be strictly positive on \(\left[ 0,1\right] .\)

Recall our assumption \(4ac\ne b^{2}.\) If \(b=0\), then \(ac\ne 0,\) and

$$\begin{aligned} f\left( t\right) =4\left| a\right| ^{2}t+4\left| c\right| ^{2}\left( 1-t\right) . \end{aligned}$$

Since f is affine and \(f\left( 0\right) >0\), \(f\left( 1\right) >0\), we conclude that \(C > 0\).

Assume next that \(b\ne 0\). Then

$$\begin{aligned} f\left( t\right)&=\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) \left( \left( 1-t\right) -\frac{4\left| \overline{a}b+\overline{b}c\right| }{\left| b\right| ^{2}+4\left| c\right| ^{2}}\sqrt{t}\sqrt{1-t}+\frac{4\left| a\right| ^{2}+\left| b\right| ^{2}}{\left| b\right| ^{2}+4\left| c\right| ^{2}}t\right) \\&=\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) \left( \left( \sqrt{1-t}-\frac{2\left| \overline{a}b+\overline{b}c\right| }{\left| b\right| ^{2}+4\left| c\right| ^{2}}\sqrt{t}\right) ^{2}+\left( \frac{4\left| a\right| ^{2}+\left| b\right| ^{2}}{\left| b\right| ^{2}+4\left| c\right| ^{2}}-\frac{4\left| \overline{a}b+\overline{b}c\right| ^{2}}{\left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) ^{2}}\right) t\right) . \end{aligned}$$

Note that \(f\left( 0\right) >0.\) Thus \(f\left( t\right) >0\) for all \(t\in \left[ 0,1\right] \), provided the numerator obtained when summing the last two fractions is strictly positive, that is,

$$\begin{aligned} B=\left( 4\left| a\right| ^{2}+\left| b\right| ^{2}\right) \left( \left| b\right| ^{2}+4\left| c\right| ^{2}\right) -4\left| \overline{a}b +\overline{b}c\right| ^{2} > 0. \end{aligned}$$

Now

$$\begin{aligned} \left| \overline{a}b+\overline{b}c\right| ^{2}=\left( \overline{a}b+\overline{b}c\right) \left( a\overline{b}+b\overline{c}\right) =\left| a\right| ^{2}\left| b\right| ^{2}+\overline{a} bb\overline{c}+\overline{b}c a\overline{b }+\left| b\right| ^{2}\left| c\right| ^{2}, \end{aligned}$$

so

$$\begin{aligned} B=\ 16\left| ac\right| ^{2}+\left| b\right| ^{4}- 8{\text {Re}}\left( \overline{a}bb\overline{c }\right) . \end{aligned}$$

Write \(z=b^{2}\) and \(w=\overline{a}\overline{c}\), with \(z=u+iv\) and \(w=x+i y.\) Then

$$\begin{aligned} B=16\left( x^{2}+y^{2}\right) +u^{2}+v^{2}- 8\left( u x-v y\right) =\left( 4x-u\right) ^{2}+\left( 4y +v\right) ^{2}. \end{aligned}$$

Hence \(B=0\) entails that \(y=-v/4\) and \(x=u/4,\) so

$$\begin{aligned} \overline{a}\overline{c}=w=x+iy= \frac{1}{4}\left( u-iv\right) =\frac{1}{4}\overline{z}=\frac{1}{4}\overline{b}^{2}. \end{aligned}$$

Thus \(B=0\) if and only if \(4ac=b^{2}.\) Since we assume that \(b^{2}\ne 4ac\), we see that \(A_{m}\ge C m\left\| f_{m}\right\| _{a}^{2}.\)

For \(b)\implies c)\), write \(P(z_1, z_2) = r^2 z_1^2 + b z_1 z_2 + s^2 z_2^2\), and assume that \(b^2 = 4 r^2\,s^2\). Replacing s by \(-s\) if necessary, we may assume that \(b = 2rs\), and then \(P(z_1, z_2) = (r z_1 + s z_2)^2\). Finally, for \(c)\implies a)\), given the nontrivial polynomial \(P(z_1, z_2) = (r z_1 + s z_2)^2\), choose the nonzero vector \(c = (s, -r)\) and let \(f_m(z_1, z_2):= (s z_1 - r z_2)^m\). Then the equality \( \Vert P f_m \Vert _a^2 = \Vert P \Vert _a^2 \Vert f_m\Vert _a^2 \) is a special case of Proposition 4. \(\square \)
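As an aside, the sum-of-squares computation for B in the proof above, as well as the vanishing criterion \(4ac=b^{2}\), can be verified numerically. The sketch below is not part of the argument; the sample values of a, b, c are arbitrary:

```python
# Numerical check (arbitrary sample data) of the identity
# B = (4|a|^2+|b|^2)(|b|^2+4|c|^2) - 4|conj(a)b + conj(b)c|^2
#   = (4x-u)^2 + (4y+v)^2,  where b^2 = u+iv and conj(a)*conj(c) = x+iy.
def B(a, b, c):
    return ((4 * abs(a)**2 + abs(b)**2) * (abs(b)**2 + 4 * abs(c)**2)
            - 4 * abs(a.conjugate() * b + b.conjugate() * c)**2)

def B_as_squares(a, b, c):
    z = b * b                           # z = u + iv
    w = a.conjugate() * c.conjugate()   # w = x + iy
    u, v, x, y = z.real, z.imag, w.real, w.imag
    return (4 * x - u)**2 + (4 * y + v)**2

for a, b, c in [(1 + 2j, 0.5 - 1j, -2 + 0.3j), (2 + 0j, 3j, -1 + 1j)]:
    assert abs(B(a, b, c) - B_as_squares(a, b, c)) < 1e-9

# B vanishes exactly when 4ac = b^2, e.g. a = 1, b = 2, c = 1:
assert abs(B(1 + 0j, 2 + 0j, 1 + 0j)) < 1e-12
```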

5 Special cases: when \(k= 1\) or \(d=1\) the conjecture is true

In view of the fact that the Khavinson–Shapiro bounds never hold when \(k=1\), this case requires separate treatment. It is noted next that the Khavinson–Shapiro bounds are not needed for linear polynomials.

Theorem 8

Let \(P_{1} (z_1, \dots , z_d)\) be a non-zero homogeneous polynomial of degree 1, and let \(P= P_1 - P_0\). Then the Fischer operator F defined by

$$\begin{aligned} F\left( q\right) := P_1^{*}\left( D\right) \left[ \left( P_1- P_0\right) q\right] \end{aligned}$$

is a bijection on the set of all entire functions in d variables, i.e., \(P = P_1- P_0\) and \(P_1^{*}\left( D\right) \) form a Fischer pair for this space.

Proof

We prove that \(P_1 - P_0\) and \(P_1^{*}\left( D\right) \) form a Fischer pair. By Shapiro’s theorem [17, Theorem 1], \(P_1\) and \(P_1^{*}\left( D\right) \) form a Fischer pair for the entire functions. Since \(P_1\left( z \right) \) is homogeneous of degree 1, for some \(b \ne 0\) we have

$$\begin{aligned} P_1\left( z\right) =\left\langle z,b\right\rangle . \end{aligned}$$

Since \(b \ne 0\), the linear function \(\left\langle \cdot ,b\right\rangle : \mathbb {C}^d \rightarrow \mathbb {C}\) is surjective, so there is a \(z_{0}\) with \(\left\langle z_{0}, b\right\rangle = P_0\), and hence \(P_1\left( z-z_{0}\right) =P_1\left( z\right) - P_0.\)

Let f be entire, and set \(F\left( z\right) =f\left( z + z_{0}\right) .\) By Shapiro’s theorem there are unique entire functions q and h such that \(P_1^{*}\left( D\right) h=0\) and

$$\begin{aligned} F\left( z\right) =P_1\left( z\right) q\left( z\right) +h\left( z\right) . \end{aligned}$$

Replace z with \(z - z_{0}.\) Then

$$\begin{aligned} f\left( z \right)&= F\left( z -z_{0}\right) =P_1\left( z -z_{0}\right) q\left( z -z_{0}\right) +h\left( z-z_{0}\right) \\&=\left( P_1\left( z \right) - P_0 \right) q\left( z -z_{0}\right) +h\left( z-z_{0}\right) . \end{aligned}$$

Clearly \(\widetilde{h}\left( z \right) : =h\left( z - z_{0}\right) \) satisfies \(P_1^{*}\left( D\right) \widetilde{h}=0\), since \(P_1^{*}\left( D\right) h=0.\) Uniqueness of \(\widetilde{h}\left( z \right) \) and \(\widetilde{q}\left( z \right) : =q\left( z - z_{0}\right) \) follows from the corresponding uniqueness statements for h and q. \(\square \)

For \(d=1\), the conjecture can be proven directly:

Theorem 9

Let \(P:\mathbb {C}\rightarrow \mathbb {C}\) be a non-zero polynomial of degree k, with homogeneous principal part \(P_{k}\). Then P and \(P_k^{*}\left( D\right) \) form a Fischer pair for \(E(\mathbb {C})\). Furthermore, if f has order \(\rho \), writing \(f = P q + r\) we find that q also has order \(\rho \) and r is a polynomial of degree at most \(k -1\).

Proof

Let P be a non-zero polynomial of degree k with complex zeros \(\alpha _{1},\dots ,\alpha _{k}\), listed according to their multiplicity. Given an arbitrary entire function f, define \(I\left( f\right) \) to be the unique polynomial of degree at most \(k-1\) interpolating f at the zeros of P (using the Lagrange interpolation polynomial if all the zeros are distinct, or, more generally, Hermite interpolation in the case of repeated roots, so that not only the values of f and I(f) coincide at the roots, but also an appropriate number of derivatives do). Then \(f-I\left( f\right) \) vanishes at the points \(\alpha _{1},\dots ,\alpha _{k}\) and we can write \(f-I\left( f\right) =Pq\) for some entire function q. This yields the Fischer decomposition, because trivially \(P_k^{*}\left( D\right) (I(f)) = 0 \). Uniqueness follows from the uniqueness of the interpolating polynomial, since given any decomposition \(f = Pq^\prime + r^\prime \), in order for \(P_k^{*}\left( D\right) (r^\prime ) = 0 \) to hold, \(r^\prime \) must be a polynomial of degree strictly smaller than k.

When f is entire of order \(\rho \), then both \(f-I\left( f\right) \) and q also have order \(\rho \). \(\square \)
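For polynomial data the construction in the proof is ordinary division with remainder: the remainder of dividing f by P has degree \(< k\), is annihilated by \(P_k^{*}\left( D\right) =\overline{c_{k}}\,d^{k}/dz^{k}\), and agrees with f at the zeros of P, which is exactly the interpolation description of I(f). A minimal sketch with an arbitrary sample f and P, assuming NumPy is available:

```python
import numpy as np

# Hypothetical example: P(z) = z^2 - 3z + 2 = (z - 1)(z - 2), so k = 2,
# and f(z) = 1 + 4z^3 - z^4 (coefficients in increasing-degree order).
P = np.polynomial.Polynomial([2.0, -3.0, 1.0])
f = np.polynomial.Polynomial([1.0, 0.0, 0.0, 4.0, -1.0])

q, r = divmod(f, P)                    # f = P*q + r with deg r < deg P = k
pts = np.linspace(-2.0, 2.0, 9)
assert np.allclose((P * q + r)(pts), f(pts))
assert r.degree() < P.degree()

# r interpolates f at the zeros of P, as in the proof:
for z0 in (1.0, 2.0):
    assert np.isclose(r(z0), f(z0))
```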

We recall next some additional results regarding the conjecture. It also holds when \(P_{k}^{*}\left( z\right) \) is of the form \(z_{1}^{k}\) for \(z=\left( z_{1},z^{\prime }\right) \in \mathbb {C}\times \mathbb {C}^{d-1}.\) In [11] Meril and Struppa have proven the following result:

Theorem 10

Let \(P\left( z\right) \) be a polynomial of degree k,  let \(Q_{k}\left( z\right) =z_{1}^{k}\), where \(z=\left( z_{1},z^{\prime }\right) \in \mathbb {C}\times \mathbb {C}^{d-1}\), let \(C\ne 0\) be a complex number, and let \(p_{0},\dots ,p_{k-1}\) be polynomials in the variable \(z^{\prime }\in \mathbb {C}^{d-1}\). Then the polynomial P and the differential operator \(Q_{k}\left( D\right) =\partial _{1}^{k}\) form a Fischer pair if and only if P is of the form

$$\begin{aligned} P\left( z\right) =Cz_{1}^{k}+p_{k-1}\left( z^{\prime }\right) z_{1} ^{k-1}+\cdots +p_{0}\left( z^{\prime }\right) . \end{aligned}$$

Finally, we mention the following conjecture:

Conjecture 11

(II) Let P be a polynomial. Then the Fischer operator \(F_{P}:E\left( \mathbb {C}^{d}\right) \rightarrow E\left( \mathbb {C}^{d}\right) \) defined by

$$\begin{aligned} F_{P}\left( q\right) =P^{*}\left( D\right) \left( Pq\right) \end{aligned}$$

is a bijection.

A. Meril and A. Yger have shown that \(F_{P}\) is injective when P is a polynomial of degree \(\le 2\), cf. [12]. In dimension \(d=2\), they have also proven that the Fischer operator \(F_{P}:E\left( \mathbb {C}^{2}\right) \rightarrow E\left( \mathbb {C}^{2}\right) \) is bijective for any polynomial of degree \(\le 2\). In [8] it is shown that conjecture II holds for the polynomial \(P\left( z\right) =1+z^{\alpha }\), where \(\alpha \in \mathbb {N} _{0}^{d}\) has only positive entries. In general, conjecture II is still open.

For more information about Fischer operators and their relationship to problems in analysis we refer to [10, 15] and the classical paper [17].

6 Proof of Theorem 2

Let us start with some preliminary bounds.

Lemma 12

Given a multi-index \(\alpha \in \mathbb {N}_{0}^{d}\), the estimate

$$\begin{aligned} \left\| f_{m}\right\| _{a}\le \left\| z^{\alpha }f_{m}\right\| _{a}\le C_{\alpha ,m}\left\| f_{m}\right\| _{a} \end{aligned}$$
(17)

holds for all homogeneous polynomials \(f_{m}\) of degree m, where

$$\begin{aligned} C_{\alpha ,m}=\sup _{\beta \in \mathbb {N}_{0}^{d},\left| \beta \right| =m}\sqrt{\frac{\left( \alpha +\beta \right) !}{\beta !}}, \end{aligned}$$

and this is the smallest constant such that (17) holds for all homogeneous polynomials of degree m.

Proof

We consider \(f_{m} (z) = z^{\beta }\) with \(\left| \beta \right| =m.\) Then

$$\begin{aligned} \left\| z^{\alpha }f_{m}(z)\right\| _{a}^{2}=\left\| z^{\alpha +\beta }\right\| _{a}^{2}=\left( \alpha +\beta \right) !=\frac{\left( \alpha +\beta \right) !}{\beta !}\left\| z^{\beta }\right\| _{a}^{2} =\frac{\left( \alpha +\beta \right) !}{\beta !}\left\| f_{m}\right\| _{a}^{2}. \end{aligned}$$

It follows that

$$\begin{aligned} C_{\alpha ,m}\ge \sup _{\beta \in \mathbb {N}_{0}^{d},\left| \beta \right| =m}\sqrt{\frac{\left( \alpha +\beta \right) !}{\beta !}}. \end{aligned}$$

Now we show that \(C_{\alpha ,m}\) is a suitable constant. Let us write \(f_{m} (z) = {\sum \nolimits _{\left| \beta \right| =m}} a_{\beta }z^{\beta }.\) Then \(z^{\alpha }f_{m}(z) = {\sum \nolimits _{\left| \beta \right| =m}} a_{\beta } z^{\alpha +\beta }\) and

$$\begin{aligned} \left\| f_{m}\right\| _{a}^2 \le \left\| z^{\alpha }f_{m}(z) \right\| _{a}^{2}&= {\displaystyle \sum _{\left| \beta \right| =m}} \left( \alpha +\beta \right) !\left| a_{\beta }\right| ^{2}= {\displaystyle \sum _{\left| \beta \right| =m}} \frac{\left( \alpha +\beta \right) !}{\beta !}\beta !\left| a_{\beta }\right| ^{2}\\&\le {\displaystyle \sum _{\left| \beta \right| =m}} C_{\alpha ,m}^2\beta !\left| a_{\beta }\right| ^{2}=C_{\alpha ,m} ^{2}\left\| f_{m}\right\| _{a}^{2}. \end{aligned}$$

\(\square \)
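Since the supremum defining \(C_{\alpha ,m}\) runs over a finite set, it is really a maximum and can be computed directly. The sketch below (with the arbitrary choice \(\alpha =(2,1)\), \(m=3\)) checks both inequalities of (17) on monomials, for which \(\left\| z^{\beta }\right\| _{a}=\sqrt{\beta !}\):

```python
from itertools import product
from math import factorial, prod, sqrt

def multi_factorial(beta):
    return prod(factorial(b) for b in beta)

def C(alpha, m):
    """Smallest constant in (17): max over |beta| = m of sqrt((alpha+beta)!/beta!)."""
    return max(sqrt(multi_factorial(tuple(a + b for a, b in zip(alpha, beta)))
                    / multi_factorial(beta))
               for beta in product(range(m + 1), repeat=len(alpha))
               if sum(beta) == m)

alpha, m = (2, 1), 3
for beta in product(range(m + 1), repeat=len(alpha)):
    if sum(beta) != m:
        continue
    norm_fm = sqrt(multi_factorial(beta))                 # ||z^beta||_a
    norm_prod = sqrt(multi_factorial(tuple(a + b for a, b in zip(alpha, beta))))
    assert norm_fm <= norm_prod <= C(alpha, m) * norm_fm + 1e-12
```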

With different normalizations, the following result is essentially due to B. Beauzamy, cf. [3, Formula (6)].

Lemma 13

If \(P\left( z \right) = {\sum \nolimits _{\left| \alpha \right| =k}} c_{\alpha }z^{\alpha }\) is a homogeneous polynomial of degree k, then

$$\begin{aligned} \left\| Pf_{m}\right\| _{a}\le \left\| f_{m}\right\| _{a}\left( 1+m\right) ^{\frac{k}{2}} {\displaystyle \sum _{\left| \alpha \right| =k}} \left| c_{\alpha }\right| \sqrt{\alpha !}. \end{aligned}$$

Proof

By the preceding lemma,

$$\begin{aligned} \left\| Pf_{m}\right\| _{a}\le {\displaystyle \sum _{\left| \alpha \right| =k}} \left| c_{\alpha }\right| \left\| z^{\alpha }f_{m}\right\| _{a}\le \left\| f_{m}\right\| _{a} {\displaystyle \sum _{\left| \alpha \right| =k}} \left| c_{\alpha }\right| \sup _{\beta \in \mathbb {N}_{0}^{d},\left| \beta \right| =m}\sqrt{\frac{\left( \alpha +\beta \right) !}{\beta !}}. \end{aligned}$$

Note next that

$$\begin{aligned} \frac{\left( \alpha +\beta \right) !}{\alpha !\beta !}&=\frac{\left( \alpha _{1}+\beta _{1}\right) !\cdots \left( \alpha _{d}+\beta _{d}\right) !}{\alpha _{1}!\cdots \alpha _{d}!\beta _{1}!\cdots \beta _{d}!}\\&=\frac{\left( \beta _{1}+1\right) \left( \beta _{1}+2\right) \cdots \left( \beta _{1}+\alpha _{1}\right) }{\alpha _{1}!} \cdots \frac{\left( \beta _{d}+1\right) \cdots \left( \beta _{d} +\alpha _{d}\right) }{\alpha _{d}!} \\&\le \left( 1+\beta _{1}\right) ^{\alpha _{1}}\cdots \left( 1+\beta _{d}\right) ^{\alpha _{d}}\le \left( 1+m\right) ^{\left| \alpha \right| }. \end{aligned}$$

\(\square \)
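Lemma 13 can be illustrated concretely. The sketch below represents polynomials as dictionaries from exponent multi-indices to coefficients, computes the apolar norm from \(\left\| z^{\beta }\right\| _{a}^{2}=\beta !\), and checks the bound for an arbitrary sample P (homogeneous of degree \(k=2\)) and \(f_{m}\) (homogeneous of degree \(m=3\)) in two variables:

```python
from math import factorial, prod, sqrt

def apolar_norm(poly):
    """poly: dict from exponent tuples to coefficients; norm via ||z^b||_a^2 = b!."""
    return sqrt(sum(prod(factorial(e) for e in exp) * abs(c)**2
                    for exp, c in poly.items()))

def multiply(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return out

P  = {(2, 0): 1.0, (1, 1): -2.0, (0, 2): 0.5}   # k = 2
fm = {(3, 0): 1.0, (1, 2): 3.0}                 # m = 3
k, m = 2, 3

# D_P = sum over alpha of |c_alpha| * sqrt(alpha!), as in the lemma:
D_P = sum(abs(c) * sqrt(prod(factorial(e) for e in exp)) for exp, c in P.items())
assert apolar_norm(multiply(P, fm)) <= apolar_norm(fm) * (1 + m)**(k / 2) * D_P
```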

We will use the following result, which appears in [14, Theorem 17]. More details regarding its proof are presented in [16, Theorem 7]. While in the latter reference the result is stated for even degrees 2k, the argument presented there works for general values of k.

Theorem 14

Let Q be a homogeneous polynomial of degree \(k > 0\), let P be a polynomial of degree k of the form

$$\begin{aligned} P=P_{k}-P_{k-1}- \cdots -P_{0},\ \end{aligned}$$
(18)

and assume that \((P_{k},Q)\) is a Fischer pair for \(\mathcal {P}\left( \mathbb {C}^{d}\right) \). Setting \(T:=T_{P_{k}}\), we have

$$\begin{aligned} T_{P}\left( f_{m}\right) =\sum _{j=-1}^{m}\sum _{s_{0}=0}^{k-1}\sum _{s_{1}=0}^{k-1} \dots \sum _{s_{j}=0}^{k-1}T(P_{s_{j}}T(\cdots P_{s_{1}} T(P_{s_{0}}T(f_{m}))\cdots )) \end{aligned}$$
(19)

for all homogeneous polynomials \(f_{m}\) of degree m, with the convention that the summand for \(j=-1\) is \(Tf_{m}\).

We have noted above that when the remainder \(r = 0\) we have \(f_m = P_k Tf_m\), and thus assuming the Khavinson–Shapiro bounds is equivalent to the condition

$$\begin{aligned} \left\| Tf_{m}\right\| _{a}^2\le \frac{C}{m^{\tau }}\left\| f_{m}\right\| _{a}^2 \end{aligned}$$
(20)

actually used in the proof of Theorem 2. However, it is conceivable that for a particular pair \((f, P)\) and all the homogeneous polynomials appearing in (19), bounds of type (20) might actually be strictly weaker than the Khavinson–Shapiro conditions. But we will not pursue these elaborations here.

To simplify expressions such as (19), parentheses will often be omitted. We will also use the following result of H.S. Shapiro [17, p. 519]:

Lemma 15

(H.S. Shapiro) Suppose that \(f_{k} (z)=\sum _{\left| \alpha \right| =k}c_{\alpha }z^{\alpha }\) is a homogeneous polynomial of degree k. Then for every complex vector \(z\in \mathbb {C}^{d}\) the following estimate holds:

$$\begin{aligned} \left| f_{k}\left( z\right) \right| ^{2}\le \frac{1}{k!}\left| z\right| ^{2k}\left\| f_{k}\right\| _{a}^{2}. \end{aligned}$$
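As an illustration of Lemma 15, the estimate can be checked numerically for a sample \(f_{k}\); the coefficients and evaluation points below are arbitrary:

```python
from math import factorial, prod, sqrt

# f_k homogeneous of degree k = 2 in d = 2 variables (arbitrary coefficients):
fk = {(2, 0): 1.0 + 1.0j, (1, 1): -2.0 + 0.0j, (0, 2): 0.5j}
k = 2
norm_sq = sum(prod(factorial(e) for e in exp) * abs(c)**2 for exp, c in fk.items())

for z in [(1.0 + 0.5j, -0.3j), (2.0 + 0j, 1.0 + 1.0j), (0.1 + 0j, -0.2 + 0.7j)]:
    val = sum(c * z[0]**exp[0] * z[1]**exp[1] for exp, c in fk.items())
    z_abs_sq = abs(z[0])**2 + abs(z[1])**2
    # |f_k(z)|^2 <= |z|^{2k} ||f_k||_a^2 / k!
    assert abs(val)**2 <= z_abs_sq**k * norm_sq / factorial(k) + 1e-12
```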

Let us recall some well known definitions and facts about entire functions (additional details and references can be found in [16], cf. also [1]). The order \(\rho _{\mathbb {C}^{d}}\left( f\right) \) of a continuous function \(f: \mathbb {C}^{d}\rightarrow \mathbb {C}\) is defined by setting

$$\begin{aligned} M_{\mathbb {C}^{d}}\left( f,r\right) :=\sup \left\{ \left| f\left( z\right) \right| :z\in \mathbb {C}^{d},\left| z\right| =r\right\} , \end{aligned}$$

and then

$$\begin{aligned} \rho _{\mathbb {C}^{d}}\left( f\right) :=\limsup _{r\rightarrow \infty }\frac{ \log \log M_{\mathbb {C}^{d}}\left( f,r\right) }{\log r}\in \left[ 0,\infty \right] . \end{aligned}$$

Given an entire function f, we write \(f=\sum _{m=0}^{\infty }f_{m}\), where the homogeneous polynomials \(f_{m}\) are those given by the Taylor expansion about 0, that is,

$$\begin{aligned} f_{m}\left( z \right) =\sum _{\left| \alpha \right| =m}\frac{1}{\alpha !} \ \partial ^{\alpha }f \left( 0\right) \; z^{\alpha }\text { for }m\in \mathbb {N}_{0}. \end{aligned}$$
(21)

It is well known that for \(\rho \ge 0\), we have \(\rho _{ \mathbb {C}^{d}}\left( f\right) \le \rho \) if and only if for every \( \varepsilon >0\), there exists an \(m_0 \ge 0\) such that for every \(m \ge m_0\) the following bounds hold:

$$\begin{aligned} \max _{\theta \in \mathbb {S}^{2d-1}}\left| f_{m}\left( \theta \right) \right| \le \frac{1}{m^{m/\left( \rho +\varepsilon \right) }}. \end{aligned}$$
(22)
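To illustrate the criterion, consider \(f(z)=e^{z_{1}}\), which has order \(\rho =1\): its homogeneous parts are \(f_{m}=z_{1}^{m}/m!\), whose maximum on the unit sphere is \(1/m!\). Taking logarithms, (22) with \(\varepsilon =1/2\) amounts to \(\frac{m}{\rho +\varepsilon }\log m\le \log m!\), which can be checked directly (the threshold \(m_{0}=12\) below was found by inspection for this \(\varepsilon \)):

```python
from math import lgamma, log

# Check (22) for f(z) = exp(z_1): max |f_m| on the sphere is 1/m!,
# so the criterion reads (m/(rho+eps)) * log m <= log(m!) for m >= m_0.
rho, eps, m0 = 1.0, 0.5, 12
for m in range(m0, 200):
    assert (m / (rho + eps)) * log(m) <= lgamma(m + 1)   # lgamma(m+1) = log(m!)
```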

We shall also use an old result due to V. Bargmann, cf. [2].

Theorem 16

Let P and Q be polynomials in d complex variables. Then

$$\begin{aligned} \left\langle P,Q\right\rangle _{a}=\frac{1}{\pi ^{d}}\int _{\mathbb {R}^{d}} \int _{\mathbb {R}^{d}}P\left( x+iy\right) \overline{Q\left( x+iy\right) }e^{-\left| x\right| ^{2}-\left| y\right| ^{2}}dxdy<\infty , \end{aligned}$$
(23)

where dxdy is Lebesgue measure on \(\mathbb {R}^{2d}\).
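In dimension \(d=1\), (23) reduces to \(\left\langle z^{m},z^{n}\right\rangle _{a}=m!\,\delta _{mn}\), which can be checked by numerical integration; the truncation of the plane to \([-6,6]^{2}\) and the grid step below are ad hoc choices, assuming NumPy:

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 601)   # truncated grid; the Gaussian tail is negligible
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
weight = np.exp(-X**2 - Y**2)
dx = x[1] - x[0]

def inner(m, n):
    """(1/pi) * integral of z^m conj(z)^n exp(-|z|^2) over the grid."""
    return (Z**m * np.conj(Z)**n * weight).sum().real * dx * dx / np.pi

assert abs(inner(3, 3) - 6.0) < 1e-6    # <z^3, z^3>_a = 3! = 6
assert abs(inner(3, 2)) < 1e-6          # orthogonality of distinct monomials
```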

Lemma 17

Let \(f_{m}\) be a homogeneous polynomial of degree m, and denote by \( \mathbb {S}^{2d-1}\) the unit sphere in \(\mathbb {R}^{2d}\). There is a dimensional constant \(C_d > 0\) such that, identifying \(\mathbb {C}^{d}\) with \(\mathbb {R}^{2d}\) as a measure space, we have

$$\begin{aligned} \left\| f_{m}\right\| _{a} \le C_d \sqrt{\left( m+d-1\right) !}\max _{\eta \in \mathbb {S}^{2d-1}}\left| f_{m}\left( \eta \right) \right| . \end{aligned}$$

Proof

Let \(f_{m}:\mathbb {C}^{d}\rightarrow \mathbb {C}\) be a homogeneous polynomial of degree m, and recall that for \(x > 0\), the Gamma function is defined as \(\Gamma (x):= \int _{0}^{\infty }e^{-t} \ t^{x - 1}dt\). Integrating in polar coordinates and using the change of variables \(t = r^2\) we get

$$\begin{aligned} \left\| f_{m}\right\| _{a}^{2} = \frac{1}{\pi ^{d}}\int _{\mathbb {C}^{d} }\left| f_{m}\left( z\right) \right| ^{2}e^{-\left| z\right| ^{2}}dz= \frac{1}{\pi ^{d}}\int _{0}^{\infty }e^{-r^{2}}r^{2m+2d-1}dr\int _{\mathbb {S}^{2d-1}}\left| f_{m}\left( \eta \right) \right| ^{2}d\eta \end{aligned}$$
$$\begin{aligned} \le C_d^{2} \ \Gamma (m + d) \max _{\eta \in \mathbb {S}^{2d-1}}\left| f_{m}\left( \eta \right) \right| ^{2}. \end{aligned}$$

\(\square \)

Next we present the proof of our main result, Theorem 2.

Proof

In view of Theorem 8 we may suppose that \(k\ge 2\).

Set \(P = P_{k}-P_{\beta }- \cdots -P_{0}\). Then \(T_{P}\left( f_{m}\right) \) is either the zero polynomial or a polynomial of degree \(<m\) (not necessarily homogeneous). Our strategy is to show that

$$\begin{aligned} g:= {\displaystyle \sum _{m=0}^{\infty }} T_{P}\left( f_{m}\right) \end{aligned}$$
(24)

defines an entire function \(g:\mathbb {C}^{d}\rightarrow \mathbb {C}\) of order bounded by \(\rho \), by writing \(g\left( z\right) = {\sum \nolimits _{M=0}^{\infty }} G_{M}\left( z\right) ,\) where each \(G_{M}\) is a homogeneous polynomial of degree M, and then applying the criterion presented in (22). As a reminder, we mention that when the representation of g as \(\sum _{M=0}^{\infty } G_{M}\) exists, it is unique.

It has been noted above that

$$\begin{aligned} T_{P}\left( f_{m}\right) =\sum _{j=-1}^{m}\sum _{s_{0}=0}^{k-1}\sum _{s_{1} =0}^{k-1}\cdots \sum _{s_{j}=0}^{k-1}TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m}, \end{aligned}$$
(25)

and clearly, \(TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m}\) is a homogeneous polynomial of degree

$$\begin{aligned}{}[s_{0}+\cdots +s_{j}+m-k\left( j+2\right) ]_+. \end{aligned}$$

If \(s_{0},\dots ,s_{n}\in \left\{ 0,\dots , \beta \right\} \) are given, where by definition \(\beta \le k - 1\), and if \(n>m,\) then \(TP_{s_{n}}\cdots TP_{s_{0}}Tf_{m}\) is zero by inspection of its degree:

$$\begin{aligned} m+s_{0}+\cdots +s_{n}-k\left( n+2\right)&\le m+\left( n+1\right) \left( k-1\right) -k\left( n+2\right) \\&=m-k-n-1<0, \end{aligned}$$

so the upper limit m in the first sum of (25) can be replaced by \(\infty .\) By hypothesis, \(P_{s}=0\) for all \(s\in \left\{ \beta +1,\dots ,k-1\right\} ,\) with the convention that this set is presented in increasing order, so it is empty when \(\beta = k - 1\). Hence, we can write

$$\begin{aligned} T_{P}\left( f_{m}\right) =\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta } \sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m}. \end{aligned}$$

In order to show that the sum in (24) defines an entire function it suffices to prove that

$$\begin{aligned} G=\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta } \cdots \sum _{s_{j}=0}^{\beta } {\displaystyle \sum _{m=0}^{\infty }} TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m} \end{aligned}$$

does so. Then the sum can be reordered and shown to be equal to g. Next we collect all summands having degree \(M\ge 0.\) The requirement

$$\begin{aligned} \deg TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m}=s_{0}+\cdots +s_{j}+m-k\left( j+2\right) =M \end{aligned}$$

means that \(m=M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) ,\) and therefore we consider the sum

$$\begin{aligned} G_{M} (z):=\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta } \cdots \sum _{s_{j}=0}^{\beta } TP_{s_{j}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) } (z). \end{aligned}$$
(26)

Next we show that \(G_{M}\) converges absolutely everywhere. Note that while \(G_M\) will turn out to be a homogeneous polynomial of degree M, contributions to it may come from infinitely many values of j, so a proof of convergence is required.

By Lemma 15, for any complex vector \(z\in \mathbb {C}^{d}\) and any homogeneous polynomial \(h_{M}\) of degree M the following estimate holds:

$$\begin{aligned} \left| h_{M}\left( z\right) \right| \le \frac{1}{\sqrt{M!} }\left| z\right| ^{M}\left\| h_{M}\right\| _{a}. \end{aligned}$$

Thus we have

$$\begin{aligned}{} & {} \left| TP_{s_{j}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\left( z\right) \right| \end{aligned}$$
(27)
$$\begin{aligned}{} & {} \le \frac{\left| z\right| ^{M}}{\sqrt{M!}}\left\| TP_{s_{j} }\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\right\| _{a}. \end{aligned}$$
(28)

Since

$$\begin{aligned} \left\| Tf_{m}\right\| _{a}\le \frac{C}{m^{\tau /2}}\left\| f_{m}\right\| _{a} \end{aligned}$$

and \(P_{s_{j}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0} +\cdots +s_{j}\right) }\) has degree \( M+k\), it follows that

$$\begin{aligned}{} & {} S_{j,M}:=\left\| TP_{s_{j}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\right\| _{a} \end{aligned}$$
(29)
$$\begin{aligned}{} & {} \le \frac{C}{\left( M+k\right) ^{\tau /2}}\left\| P_{s_{j} }\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\right\| _{a} \end{aligned}$$
(30)

(note that \(S_{j,M}\) depends also on \(\left( s_{0}+\cdots +s_{j}\right) \), but we omit this fact from the notation). Now

$$\begin{aligned} TP_{s_{j-1}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0} +\cdots +s_{j}\right) } \end{aligned}$$

has degree \( M+k-s_{j}, \) so using the abbreviation

$$\begin{aligned} D_{P}:= {\displaystyle \sum _{\left| \alpha \right| =k}} \left| c_{\alpha }\right| \sqrt{\alpha !}, \end{aligned}$$

and, to reduce the number of subindices, writing \(D_{s_{j}}:=D_{ P_{s_{j}}} \), from Lemma 13 we get

$$\begin{aligned} \left\| P_{s_{j}}f_{m}\right\| _{a}\le D_{s_{j}} \left\| f_{m}\right\| _{a} \left( m+1\right) ^{s_{j}/2}. \end{aligned}$$

Hence we see that

$$\begin{aligned} S_{j,M} \le CD_{s_{j}}\frac{\left( M+k-s_{j}+1\right) ^{s_{j}/2}}{\left( M+k\right) ^{\tau /2}}\left\| TP_{s_{j-1}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\right\| _{a}. \end{aligned}$$

Proceed inductively to obtain positive numbers \(A_{j},\dots , A_{0}\) and \(B_{j},\dots ,B_{-1}\) such that

$$\begin{aligned} S_{j,M} \le \frac{C^{j+1}D_{s_{j}}\cdots D_{s_{0}}A_{j}^{s_{j}/2}A_{j-1}^{s_{j-1} /2}\cdots A_{0}^{s_{0}/2}\left\| f_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\right\| _{a}}{B_{j}^{\tau /2}B_{j-1}^{\tau /2}\cdots B_{-1}^{\tau /2}}, \end{aligned}$$

where \(B_{j}=M+k\), \( B_{n-1}=B_{n}+k-s_{n} \) for \(n =j,j-1,\dots , 0\), and

$$\begin{aligned} A_{n}= B_{n}-s_{n}+1 = B_{n - 1}- k + 1 \text { for } n =j,j-1,\dots , 0. \end{aligned}$$

Thus \(B_{j-1} = B_{j}+k-s_{j} = M+2k-s_{j}\), and in general we have

$$\begin{aligned} B_{j-n} = M+\left( n +1\right) k-\left( s_{j}+s_{j-1}+\cdots +s_{j-n+1}\right) ; \end{aligned}$$

note that the largest term is the last one:

$$\begin{aligned} B_{-1} = M+\left( j+2\right) k-\left( s_{j}+s_{j-1}+\cdots +s_{0}\right) = m. \end{aligned}$$

Now \(A_{n} < B_{n - 1}\), so \(A_{n}^{s_n}\le B_{n - 1}^{s_{n}}\) for all \(s_n\), where \(0 \le s_{n}\le \beta \). It follows that

$$\begin{aligned} S_{j,M} \le C^{j+1}D_{s_{j}}\cdots D_{s_{0}} B_{j-1}^{(s_{j} -\tau )/2}\cdots B_{-1}^{(s_{0} -\tau ) /2}\left\| f_{M+k\left( j+2\right) - \sum _{n=0}^{j } s_{n} }\right\| _{a} \end{aligned}$$
$$\begin{aligned} \le C^{j+1}D_{s_{j}}\cdots D_{s_{0}} B_{-1}^{2^{-1} \sum _{n=0}^{j } (s_{n} -\tau )}\left\| f_{M+k\left( j+2\right) -\sum _{n=0}^{j } s_{n} }\right\| _{a}. \end{aligned}$$

By Lemma 17, there is a dimensional constant \(C_d\) such that

$$\begin{aligned} \left\| f_{m}\right\| _{a} \le C_d \ \sqrt{\left( m+d-1\right) !} \max _{\eta \in \mathbb {S}^{2d-1}}\left| f_{m}\left( \eta \right) \right| , \end{aligned}$$

so we have

$$\begin{aligned} \left\| f_{M+k\left( j+2\right) -\sum _{n=0}^{j } s_{n} }\right\| _{a} \le C_d \ \sqrt{\left( B_{-1} +d-1\right) !} \max _{\eta \in \mathbb {S}^{2d-1}}\left| f_{B_{-1}}\left( \eta \right) \right| . \end{aligned}$$

Note that for every M sufficiently large and all j, or for every j sufficiently large and all M,

$$\begin{aligned} \frac{\left( B_{-1} +d-1\right) ! }{M !}\le & {} \left( B_{-1} +d-1\right) ^{ B_{-1} + d - 1 - M}\\\le & {} \left( B_{-1}\right) ^{ B_{-1} + d - 1 - M} \left( 1 + \frac{d-1}{B_{-1}}\right) ^{ B_{-1} + d - 1 - M}\\\le & {} e^{d^2} \left( B_{-1}\right) ^{ B_{-1} + d - 1 - M}, \end{aligned}$$

so

$$\begin{aligned} \frac{ B_{-1}^{2^{-1} \sum _{n=0}^{j } (s_{n} -\tau )} \sqrt{\left( B_{-1} +d-1\right) !} }{\sqrt{M !}}\le & {} B_{-1}^{2^{-1} \sum _{n=0}^{j } (s_{n} -\tau )} e^{d^2} B_{-1}^{ 2^{-1} \left( M+\left( j+2\right) k- \sum _{n=0}^{j } s_{n} + d -1 - M \right) }\\= & {} e^{d^2} B_{-1}^{ 2^{-1} \left( k + d - 1 + \left( j+1\right) (k- \tau ) \right) }, \end{aligned}$$

from whence it follows that

$$\begin{aligned} \frac{S_{j,M} }{\sqrt{M!}} \le e^{d^2} C_d \ C^{j+1}D_{s_{j}}\cdots D_{s_{0}} B_{-1}^{2^{-1} (k + d -1 + (j + 1)(k -\tau ))} \max _{\eta \in \mathbb {S}^{2d-1}}\left| f_{B_{-1} }\left( \eta \right) \right| . \end{aligned}$$

Since f has order \(\rho \), the bound (22) entails that for every \(\varepsilon >0\) sufficiently small there exists a constant \(A_{\varepsilon }\) such that

$$\begin{aligned} \max _{\eta \in \mathbb {S}^{2d-1}}\left| f_{m}\left( \eta \right) \right| \le \frac{A_{\varepsilon }}{m^{\frac{m}{\rho +\varepsilon }}} \end{aligned}$$
(31)

for all natural numbers m. Using (31) and then replacing \(B_{ -1 }^{- \frac{B_{ -1 } }{\rho +\varepsilon }}\) with the larger quantity \(B_{ -1 }^{- \frac{ M + k + (j + 1)(k -\beta )}{\rho +\varepsilon }}\), we get

$$\begin{aligned}{} & {} \frac{S_{j,M}}{\sqrt{M!}} \le e^{d^2} C_d A_{\varepsilon }\ C^{j+1}D_{s_{j}}\cdots D_{s_{0}} B_{-1}^{\frac{(k + d - 1 + (j + 1)(k -\tau ))}{2}} B_{ -1 }^{- \frac{B_{ -1 } }{\rho +\varepsilon }} \end{aligned}$$
(32)
$$\begin{aligned}{} & {} \le e^{d^2} C_d A_{\varepsilon }\ C^{j+1}D_{s_{j}}\cdots D_{s_{0}} B_{-1}^{\frac{(j + 1)(k -\tau )}{2}}B_{ -1 }^{- \frac{(j + 1)(k -\beta )}{\rho +\varepsilon }} B_{-1}^{\frac{(k + d - 1)}{2}} B_{ -1 }^{- \frac{ M + k}{\rho +\varepsilon }}. \end{aligned}$$
(33)

From the hypothesis \(\rho <\frac{2(k-\beta )}{k-\tau } \), by selecting \(\varepsilon \) small enough we conclude that

$$\begin{aligned} \frac{k-\tau }{2} < \frac{k-\beta }{\rho + \varepsilon }. \end{aligned}$$

Thus, replacing \( B_{-1}\) in \(B_{-1}^{\frac{k -\tau }{2} - \frac{k -\beta }{\rho +\varepsilon }}\) by something smaller, namely

$$\begin{aligned} M + k + (j + 1)(k - \beta ), \end{aligned}$$

we get a larger quantity in (32), to wit,

$$\begin{aligned} \frac{S_{j,M}}{\sqrt{M!}}\le & {} e^{d^2} C_d A_{\varepsilon }\ C^{j+1}D_{s_{j}}\cdots D_{s_{0}} \left( \left( M + k + (j + 1)(k - \beta )\right) ^{\frac{k -\tau }{2} - \frac{k -\beta }{\rho +\varepsilon }} \right) ^{j + 1} \end{aligned}$$
(34)
$$\begin{aligned}{} & {} \times B_{-1}^{\frac{(k + d - 1)}{2}} B_{ -1 }^{- \frac{ M + k}{\rho +\varepsilon }}. \end{aligned}$$
(35)

We can make the same substitution in \(B_{-1}^{\frac{(k + d - 1)}{2}} B_{ -1 }^{- \frac{ M + k}{\rho +\varepsilon }} \) when \(\frac{(k + d - 1)}{2} \le \frac{ M + k}{\rho +\varepsilon }, \) and use \( B_{-1} \le M + (j + 2)k \) when \(\frac{(k + d - 1)}{2} > \frac{ M + k}{\rho +\varepsilon }. \) In either case, the \((j + 1)\)-th root of the corresponding quantity approaches 1 as \(j \rightarrow \infty \), so for fixed \(M \in \mathbb {N}_0\) and j sufficiently large we obtain

$$\begin{aligned} B_{-1}^{\frac{(k + d - 1)}{2}} B_{ -1 }^{- \frac{ M + k}{\rho +\varepsilon }} \le 2^{j + 1} \end{aligned}$$
(36)

for all choices of \(s_0, \dots , s_j\) between 0 and \(\beta \).

After these preliminary bounds, we prove that the sum in (26) is well defined and converges absolutely for every \(z\in \mathbb {C}^d{\setminus } \{0\}\) and every natural number M. So fix M and set \(z = r \eta \), where \(r > 0\) and \(\eta \in \mathbb {S}^{2d-1}\). Let us write

$$\begin{aligned} G_{M}^{\left( j\right) }\left( r \eta \right) := r^M \sum _{s_{0}=0}^{\beta } \sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }\left| TP_{s_{j}}\cdots TP_{s_{0} }Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\left( \eta \right) \right| , \end{aligned}$$

and \(\widetilde{D}:=D_{0}+\cdots +D_{\beta }.\) Note that we have

$$\begin{aligned} \sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }D_{s_{j}}\cdots D_{s_{0}}=\left( D_{0}+\cdots +D_{\beta }\right) ^{j+1}=\widetilde{D}^{j+1}. \end{aligned}$$
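The displayed identity is simply the expansion of the \((j+1)\)-st power of a sum; a quick check with arbitrary values \(D_{0},D_{1},D_{2}\):

```python
from itertools import product
from math import isclose, prod

D = [0.7, 1.3, 2.0]          # arbitrary D_0, ..., D_beta with beta = 2
for j in range(4):
    lhs = sum(prod(D[s] for s in choice)
              for choice in product(range(len(D)), repeat=j + 1))
    assert isclose(lhs, sum(D) ** (j + 1))
```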

Putting together (34)–(35) with the bound from (36), and using (27)–(28), we obtain for sufficiently large values of j,

$$\begin{aligned} G_{M}^{\left( j\right) }\left( r \eta \right)\le & {} r^M e^{d^2} C_d A_{\varepsilon }\cdot \left( (M + k + (j + 1)(k - \beta ))^{\frac{k - \tau }{2} - \frac{k - \beta }{\rho +\varepsilon }} 2 C \widetilde{D} \right) ^{j+1}\\\le & {} \left( (M + k + (j + 1)(k - \beta ))^{\frac{k - \tau }{2} - \frac{k - \beta }{\rho +\varepsilon }} 3 C \widetilde{D} \right) ^{j+1}. \end{aligned}$$

Hence, we can find a natural number \(j_{0}\) such that for all \(j\ge j_{0}\) the last inequality is satisfied and furthermore

$$\begin{aligned} (M + k + (j + 1)(k - \beta ))^{\frac{k - \tau }{2} - \frac{k - \beta }{\rho +\varepsilon }} 3 C \widetilde{D} \le \frac{1}{2}. \end{aligned}$$

It follows that \( {\sum \nolimits _{j=0}^{\infty }} G_{M}^{\left( j\right) }\) converges on \(\mathbb {C}^d\), so \(G_{M}\) is a well-defined homogeneous polynomial for each \(M\in \mathbb {N}_{0}.\)

Next we show that for sufficiently large values of M, the polynomial \(G_M\) on the unit sphere satisfies the bounds given in (22). Recall that

$$\begin{aligned} |G_{M}(\eta )| \le {\displaystyle \sum _{j=0}^{\infty }} G_{M}^{\left( j\right) } (\eta ). \end{aligned}$$

We want to estimate \(G_{M}^{\left( j\right) } (\eta )\) for every \(j\in \mathbb {N}_{0}\), under the assumption that M is “large”. It follows from our previous discussion that whenever \(\frac{(k + d - 1)}{2} < \frac{ M + k}{\rho +\varepsilon }, \) and M is sufficiently large, we have

$$\begin{aligned} G_{M}^{\left( j\right) } (\eta )\le & {} \left( \frac{\left( e^{d^2} C_d A_{\varepsilon } \right) ^{1/(j+1)} C\widetilde{D} }{\left( M+k+\left( k-\beta \right) \left( j+1\right) \right) ^{\frac{k-\beta }{\rho +\varepsilon }-\frac{k-\tau }{2}}}\right) ^{j+1}\\{} & {} \times \left( M+k+\left( k-\beta \right) \left( j+1\right) \right) ^{\frac{(k + d - 1)}{2} - \frac{ M + k}{\rho +\varepsilon }}. \end{aligned}$$

We may assume that \( e^{d^2} C_d A_{\varepsilon } \ge 1\) (otherwise we remove the term from the inequality). Then

$$\begin{aligned} G_{M}^{\left( j\right) } (\eta ) \le \left( \frac{\left( e^{d^2} C_d A_{\varepsilon } \right) C\widetilde{D} }{\left( M+k \right) ^{\frac{k-\beta }{\rho +\varepsilon }-\frac{k-\tau }{2}}}\right) ^{j+1} \left( M+k \right) ^{\frac{(k + d - 1)}{2} - \frac{ M + k}{\rho +\varepsilon }}. \end{aligned}$$

Thus we can select \(M_0\) so large that for all \(M\ge M_{0}\),

$$\begin{aligned} \frac{\left( e^{d^2} C_d A_{\varepsilon } \right) C\widetilde{D} }{\left( M+k \right) ^{\frac{k-\beta }{\rho +\varepsilon }-\frac{k-\tau }{2}}} \le \frac{1}{2} \end{aligned}$$

and additionally, so that for all \(M\ge M_{0}\) the last inequality below is satisfied:

$$\begin{aligned} |G_{M}(\eta )| \le {\displaystyle \sum _{j=0}^{\infty }} G_{M}^{\left( j\right) } (\eta ) \le 2 \left( M+k \right) ^{\frac{(k + d - 1)}{2} - \frac{ M + k}{\rho +\varepsilon }} \le M^{ - \frac{ M }{\rho +2 \varepsilon }}. \end{aligned}$$

\(\square \)

7 Fischer decompositions on certain Banach spaces of entire functions

As in [9], we use \(\Lambda \) to denote the set of all decreasing sequences \( \lambda =\left( \lambda _{m}\right) _{m\in \mathbb {N}_{0}}\) of positive numbers \(\lambda _{m} \le 1\) which converge to 0. Then \(B_{\lambda }\) is defined as the space of all entire functions f on \(\mathbb {C}^{d}\) such that the homogeneous expansion \(f=\sum _{m=0}^{\infty }f_{m}\) satisfies

$$\begin{aligned} \frac{\left\| f_{m}\right\| _{a}}{m^{m/2}\left( \lambda _{m}\right) ^{m}}\rightarrow 0. \end{aligned}$$
(37)

One can show that \(B_{\lambda }\) is a Banach space with respect to the norm

$$\begin{aligned} \left\| f\right\| _{\lambda }:=\sup _{m\in \mathbb {N}_{0}}\frac{ \left\| f_{m}\right\| _{a}}{m^{m/2}\lambda _{m}^{m}}. \end{aligned}$$
(38)

Actually, the assumption \(\lambda _m \le 1\) is not made in [9], but it will be convenient for us later on, so we include it into the definition; clearly replacing a decreasing sequence \( \lambda =\left( \lambda _{m}\right) _{m\in \mathbb {N}_{0}}\) with the decreasing sequence \( \lambda \wedge 1:=\left( \lambda _{m}\wedge 1 \right) _{m\in \mathbb {N}_{0}}\) leads to the same space \(B_\lambda \), with a comparable norm.
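For a polynomial f, which has only finitely many nonzero homogeneous parts and therefore lies in \(B_{\lambda }\) for every \(\lambda \in \Lambda \), the norm (38) is a finite maximum. A small sketch with hypothetical values of \(\left\| f_{m}\right\| _{a}\) and the ad hoc sequence \(\lambda _{m}=1/(m+1)\):

```python
from math import factorial, sqrt

lam = lambda m: 1.0 / (m + 1)                    # a sample sequence in Lambda
norms = {0: 2.0, 1: 0.5, 3: sqrt(factorial(3))}  # hypothetical ||f_m||_a values

# The norm (38); note that 0 ** 0.0 == 1.0 in Python, so m = 0 needs no special case.
weighted = {m: nm / (m ** (m / 2) * lam(m) ** m) for m, nm in norms.items()}
norm_lambda = max(weighted.values())
```

For this sample the supremum in (38) is attained at m = 3, the highest-degree part, since the weights \(m^{m/2}\lambda _{m}^{m}\) decay quickly.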

Motivated by an anonymous referee’s comments, we show that certain modifications of the preceding arguments allow us to partially deal with a question formulated in [9, Remark 5.2]: can the uniqueness assumption in [9, Theorem 3] be omitted? The answer is yes. We will assume that the sequence \(\lambda \) converges to 0 sufficiently fast, and more precisely, that condition (39) below is satisfied. This condition adapts hypothesis (4.10) of [9, Lemma 12] to the more general setting considered here, where \(\tau \) can be larger than 1 and \(\beta \) larger than 0.

Theorem 18

Let \(P_{k}\) be a homogeneous polynomial of degree \(k>0\) on \(\mathbb {C}^{d}\), and let us write \(T:=T_{P_{k}}\). Suppose that there exist a \(C>0\) and a \(\tau \in \{0,\dots ,k\}\) such that for every \(m>0\) and every homogeneous polynomial \(f_{m}\) of degree m,  the following inequality holds:

$$\begin{aligned} \left\| Tf_{m}\right\| _{a}\le \frac{C}{m^{\tau /2}}\left\| f_{m}\right\| _{a}. \end{aligned}$$

Assume that for \(0\le j<k\) the polynomials \(P_{j}\left( z\right) \) are homogeneous of degree j, and for some \(\beta <k\) and every j with \(\beta<j<k\) we have \(P_{j}=0\). Let \(\lambda \in \Lambda \) be such that

$$\begin{aligned} \lim _{m\rightarrow \infty }m^{\frac{(k-\tau )}{2}}\lambda _{m}^{(k-\beta )}=0. \end{aligned}$$
(39)

Then for every \(f\in B_{\lambda }\), there exist \(q\in B_{\lambda }\) and an entire function r such that

$$\begin{aligned} f=\left( P_{k}-P_{\beta }-\cdots -P_{0}\right) q+r\text { and }P_{k}^{*}\left( D\right) r=0. \end{aligned}$$

It is not clear to us whether \(\left( P_{k}-P_{\beta }-\cdots -P_{0}\right) q\), and hence r, must be in \(B_\lambda \). The question of whether \(r \in B_\lambda \) seems to be a delicate one: it is not necessarily true that the product of a function in \(B_\lambda \) with a polynomial must again be in \(B_\lambda \).
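Condition (39) is easy to test for concrete sequences. As an illustrative numerical sanity check (not part of the proof), consider the hypothetical power-law choice \(\lambda _m = m^{-\alpha }\) with illustrative parameters of our own choosing: then (39) holds precisely when \(\alpha (k-\beta ) > (k-\tau )/2\).

```python
# Sanity check of condition (39): m^((k - tau)/2) * lambda_m^(k - beta) -> 0.
# Hypothetical parameters: k = 3, tau = 1, beta = 0, and the power-law choice
# lambda_m = m^(-alpha); (39) then holds iff alpha*(k - beta) > (k - tau)/2.
k, tau, beta = 3, 1, 0
alpha = 0.5  # alpha*(k - beta) = 1.5 > (k - tau)/2 = 1.0, so (39) should hold
vals = [m ** ((k - tau) / 2) * m ** (-alpha * (k - beta))
        for m in (10, 100, 1000, 10000)]
# for these parameters the sequence equals m^(-1/2), which decreases toward 0
```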

Proof

We proceed as in the proof of Theorem 2: our strategy is to show that

$$\begin{aligned} g:={\displaystyle \sum _{m=0}^{\infty }}T_{P}\left( f_{m}\right) \end{aligned}$$
(40)

defines an entire function \(g:\mathbb {C}^{d}\rightarrow \mathbb {C}\) which belongs to \( B_{\lambda },\) by writing \(g\left( z\right) ={\sum \nolimits _{M=0}^{\infty }}G_{M}\left( z\right) \), where each \(G_{M}\) is a homogeneous polynomial of degree M, and then proving that \(\Vert g\Vert _\lambda < \infty \) by showing that

$$\begin{aligned} \frac{\left\| G_{M}\right\| _{a}}{M^{M/2}\left( \lambda _{M}\right) ^{M}}\rightarrow 0. \end{aligned}$$

It has been noted in the proof of Theorem 2 that

$$\begin{aligned} T_{P}\left( f_{m}\right) =\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m}. \end{aligned}$$

In order to show that the sum in (40) defines an entire function it suffices to prove that

$$\begin{aligned} G=\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }{\displaystyle \sum _{m=0}^{\infty }}TP_{s_{j}}\cdots TP_{s_{0}}Tf_{m} \end{aligned}$$

does so. Then the sum can be reordered and shown to be equal to g. As before (cf. (26)) we collect all summands having degree \(M\ge 0\) and we consider the sum

$$\begin{aligned} G_{M}(z):=\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }TP_{s_{j}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }(z). \end{aligned}$$
(41)

Next we show that \(G_{M}\) converges absolutely everywhere. As in the proof of Theorem 2, and with the same notation (cf. (29)), setting

$$\begin{aligned} S_{j,M}:=\left\| TP_{s_{j}}\cdots TP_{s_{0}}Tf_{M+k\left( j+2\right) -\left( s_{0}+\cdots +s_{j}\right) }\right\| _{a} \end{aligned}$$

we see that

$$\begin{aligned} \left| G_{M}\left( z\right) \right| \le \frac{\left| z\right| ^{M}}{\sqrt{M!}}\sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }S_{j,M}. \end{aligned}$$
(42)

Thus (41) converges absolutely to a homogeneous polynomial of degree M if

$$\begin{aligned} \sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }S_{j,M} \end{aligned}$$
(43)

converges. As in the proof of Theorem 2 (and with the same notation) we have

$$\begin{aligned} S_{j,M}\le C^{j+1}D_{s_{j}}\cdots D_{s_{0}} \ B_{-1}^{2^{-1} \sum _{n=0}^{j}(s_{n}-\tau )} \left\| f_{M+k\left( j+2\right) -\sum _{n=0}^{j}s_{n}}\right\| _{a}. \end{aligned}$$

Recall that

$$\begin{aligned} B_{-1}=M+k\left( j+2\right) -\sum _{n=0}^{j}s_{n}\ge M+k + \left( k-\beta \right) \left( j+1\right) . \end{aligned}$$
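This lower bound follows from \(s_{n}\le \beta \) for every n. A brute-force check over small illustrative parameters (our own choices, not from the text) confirms the inequality:

```python
from itertools import product

# Brute-force verification of the lower bound
#   B_{-1} = M + k(j+2) - (s_0 + ... + s_j) >= M + k + (k - beta)(j+1),
# which holds because each s_n <= beta. Parameters are illustrative.
k, beta, M = 3, 1, 5
ok = all(
    M + k * (j + 2) - sum(s) >= M + k + (k - beta) * (j + 1)
    for j in range(4)
    for s in product(range(beta + 1), repeat=j + 1)
)
```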

Let us fix \(M \ge 0\) and write \(\widetilde{D}:=D_{s_{0}}+\cdots +D_{s_{\beta }}\). We want to prove the convergence of (43). Now

$$\begin{aligned} S_{j,M}\le & {} C^{j+1} D_{s_{j}}\cdots D_{s_{0}} \ B_{-1}^{2^{-1}\sum _{n=0}^{j}(s_{n}-\tau )} \ \frac{ \left\| f_{B_{-1}}\right\| _{a}}{B_{-1}^{B_{-1}/2}\lambda _{B_{-1}}^{B_{-1}}} \ B_{-1}^{B_{-1}/2} \lambda _{B_{-1}}^{B_{-1}} \\= & {} C^{j+1} D_{s_{j}}\cdots D_{s_{0}} \ \frac{\left\| f_{B_{-1}}\right\| _{a}}{B_{-1}^{B_{-1}/2}\lambda _{B_{-1}}^{B_{-1}}} \ B_{-1}^{2^{-1} ( M+k + \left( k-\tau \right) \left( j+1\right) )} \lambda _{B_{-1}}^{B_{-1}}. \end{aligned}$$

Since \(\lambda _m \le 1\) for every m, and \(f \in B_\lambda \), it follows that

$$\begin{aligned} \sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta } S_{j,M} \le C^{j+1} \widetilde{D}^{j+1} \ \left\| f\right\| _{\lambda } \ B_{-1}^{2^{-1} ( M+k + \left( k-\tau \right) \left( j+1\right) )} \lambda _{B_{-1}}^{ M+k + \left( k-\beta \right) \left( j+1\right) }. \end{aligned}$$

Recalling the hypothesis \( \lim _{m\rightarrow \infty }m^{\frac{(k-\tau )}{2}}\lambda _{m}^{(k-\beta )}=0 \), cf. (39), we see that

$$\begin{aligned} \lim _{j\rightarrow \infty } B_{-1}^{2^{-1} \left( \frac{M+k}{j + 1} + \left( k-\tau \right) \right) } \lambda _{B_{-1}}^{\frac{M+k}{j + 1} + \left( k-\beta \right) } =0, \end{aligned}$$

since \(\lim _{j\rightarrow \infty } B_{-1}^{ \frac{1}{j + 1}} =1\) and \(\sup _{j} \lambda _{B_{-1}}^{\frac{1}{j + 1}} \le 1\). Choosing \(J\gg 1\) so that for every \(j \ge J\) we have

$$\begin{aligned} C \widetilde{D} \ B_{-1}^{2^{-1} \left( \frac{M+k}{j + 1} + \left( k-\tau \right) \right) } \lambda _{B_{-1}}^{\frac{M+k}{j + 1} + \left( k-\beta \right) } < 1/2, \end{aligned}$$

it becomes clear that the series (43) converges, and hence, that each \(G_{M}\) is a homogeneous polynomial of degree M. Once we know \(G_{M}\) is a polynomial, we can estimate its apolar norm using (41) and the triangle inequality:

$$\begin{aligned} \left\| G_{M}\right\| _{a} \le \sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }S_{j,M}. \end{aligned}$$
(44)

The next step consists in showing that

$$\begin{aligned} \frac{\left\| G_{M}\right\| _{a}}{M^{M/2}\left( \lambda _{M}\right) ^{M}}\rightarrow 0. \end{aligned}$$

Arguing as in the proof of Theorem 2 we obtain

$$\begin{aligned} S_{j,M}\le C^{j+1}D_{s_{j}}\cdots D_{s_{0}}\frac{B_{-1}^{2^{-1} \sum _{n=0}^{j}(s_{n}-\tau )}}{B_{j}^{\tau /2}}\left\| f_{M+k\left( j+2\right) -\sum _{n=0}^{j}s_{n}}\right\| _{a}, \end{aligned}$$

the only difference being the additional factor \(1/B_{j}^{\tau /2}\), which previously was estimated by 1. This factor becomes important here, since \(M\rightarrow \infty \) in this part of the argument, while previously M was fixed.

It follows that

$$\begin{aligned} \frac{S_{j,M}}{M^{M/2}\lambda _{M}^{M}}\le C^{j+1}D_{s_{j}}\cdots D_{s_{0}} \frac{B_{-1}^{2^{-1}\sum _{n=0}^{j}(s_{n}-\tau )}}{B_{j}^{\tau /2}}\frac{ \left\| f_{B_{-1}}\right\| _{a}}{M^{M/2}\lambda _{M}^{M}}\frac{ B_{-1}^{B_{-1}/2} \lambda _{B_{-1}}^{B_{-1}}}{B_{-1}^{B_{-1}/2}\lambda _{B_{-1}}^{B_{-1}}}. \end{aligned}$$

As before,

$$\begin{aligned} B_{-1}^{2^{-1}\sum _{n=0}^{j}(s_{n}-\tau )}B_{-1}^{B_{-1}/2}=B_{-1}^{M/2+k/2+\left( k-\tau \right) \left( j+1\right) /2}, \end{aligned}$$

so using \(B_{j} = M+k\) and

$$\begin{aligned} \frac{\left\| f_{B_{-1}}\right\| _{a}}{B_{-1}^{B_{-1}/2}\lambda _{B_{-1}}^{B_{-1}}} \le \left\| f\right\| _{\lambda }, \end{aligned}$$

we get

$$\begin{aligned} \frac{S_{j,M}}{M^{M/2}\lambda _{M}^{M}}\le C^{j+1}D_{s_{j}}\cdots D_{s_{0}} \frac{B_{-1}^{M/2+k/2+\left( k-\tau \right) \left( j+1\right) /2}}{\left( M+k\right) ^{\tau /2}M^{M/2}}\frac{\lambda _{B_{-1}}^{B_{-1}}}{\lambda _{M}^{M}} \left\| f\right\| _{\lambda }. \end{aligned}$$

For every \(M \ge 0\), since \(B_{-1}\ge M\) and all terms in \(\lambda \) are bounded by 1, we have \(\lambda _{B_{-1}}\le \lambda _{M}\) and

$$\begin{aligned} \frac{\lambda _{B_{-1}}^{B_{-1}}}{\lambda _{M}^{M}}=\left( \frac{\lambda _{B_{-1}}}{\lambda _{M}}\right) ^{M}\left( \lambda _{B_{-1}}\right) ^{k\left( j+2\right) -\sum _{n=0}^{j}s_{n}}\le \left( \lambda _{B_{-1}}\right) ^{k+\left( k-\beta \right) \left( j+1\right) }. \end{aligned}$$

Furthermore, if \(M\ge 1\) then

$$\begin{aligned} \left( \frac{B_{-1}}{M}\right) ^{M/2}\le \left( 1+\frac{k\left( j+2\right) }{M}\right) ^{M/2}\le e^{\frac{1}{2}k\left( j+2\right) }, \end{aligned}$$

so

$$\begin{aligned} \frac{S_{j,M}}{M^{M/2}\lambda _{M}^{M}}\le C^{j+1}D_{s_{j}}\cdots D_{s_{0}} \frac{B_{-1}^{k/2+\left( k-\tau \right) \left( j+1\right) /2}}{\left( M+k\right) ^{\tau /2}} \ e^{\frac{1}{2}k\left( j+2\right) }\left( \lambda _{B_{-1}}\right) ^{k+\left( k-\beta \right) \left( j+1\right) } \left\| f\right\| _{\lambda }. \end{aligned}$$
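The exponential bound used two displays above is the standard estimate \((1+x/M)^{M}\le e^{x}\) for \(x\ge 0\), applied with \(x=k(j+2)\). A quick numerical check with illustrative parameters:

```python
import math

# Check of the elementary estimate (1 + k(j+2)/M)^(M/2) <= exp(k(j+2)/2),
# an instance of (1 + x/M)^M <= e^x for x >= 0. Parameters are illustrative.
k = 3
ok = all(
    (1 + k * (j + 2) / M) ** (M / 2) <= math.exp(k * (j + 2) / 2)
    for M in (1, 2, 10, 100)
    for j in range(5)
)
```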

Next we observe that

$$\begin{aligned} \frac{B_{-1}^{\tau /2}}{\left( M+k\right) ^{\tau /2}}\le & {} \left( \frac{ M+k\left( j+2\right) }{M+k}\right) ^{\tau /2}=\left( 1+\frac{k\left( j+1\right) }{M+k}\right) ^{\tau /2} \\\le & {} \left( 1+\left( j+1\right) \right) ^{\tau /2}\le \left( 1+2^{j}\right) ^{\tau /2}\le 2^{\left( j+1\right) \tau /2}. \end{aligned}$$

Thus

$$\begin{aligned} \frac{S_{j,M}}{M^{M/2}\lambda _{M}^{M}}\le & {} C^{j+1}D_{s_{j}}\cdots D_{s_{0}}e^{\frac{1}{2} k \left( j+2\right) } \ 2^{\left( j+1\right) \tau /2} \ B_{-1}^{\left( k-\tau \right) /2+\left( k-\tau \right) \left( j+1\right) /2}\left( \lambda _{B_{-1}}\right) ^{k+\left( k-\beta \right) \left( j+1\right) } \left\| f\right\| _{\lambda } \\= & {} D_{s_{j}}\cdots D_{s_{0}}e^{\frac{1}{2}k}\left( Ce^{\frac{1}{2} k}2^{\tau /2}B_{-1}^{\left( k-\tau \right) /2}\left( \lambda _{B_{-1}}\right) ^{\left( k-\beta \right) }\right) ^{j+1}B_{-1}^{\left( k-\tau \right) /2}\left( \lambda _{B_{-1}}\right) ^{k} \left\| f\right\| _{\lambda }. \end{aligned}$$
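The estimate \(\left( 2+j\right) ^{\tau /2}\le \left( 1+2^{j}\right) ^{\tau /2}\le 2^{\left( j+1\right) \tau /2}\) used above reduces to the integer inequalities \(j+2\le 1+2^{j}\le 2^{j+1}\) for \(j\ge 0\), which can be verified directly:

```python
# Check of the integer inequalities behind the chain
#   (2 + j)^(tau/2) <= (1 + 2^j)^(tau/2) <= 2^((j+1) tau / 2),
# i.e. j + 2 <= 1 + 2^j <= 2^(j+1) for every j >= 0.
ok = all(j + 2 <= 1 + 2 ** j <= 2 ** (j + 1) for j in range(64))
```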

Writing again \(\widetilde{D}:=D_{s_{0}}+\cdots +D_{s_{\beta }}\), by (39), for all sufficiently large M we have

$$\begin{aligned} Ce^{\frac{1}{2}k}2^{\tau /2}\cdot B_{-1}^{\frac{(k-\tau )}{2}}\lambda _{B_{-1}}^{(k-\beta )}\le \frac{1}{2\widetilde{D}}, \end{aligned}$$

whence it follows that

$$\begin{aligned} \frac{S_{j,M}}{M^{M/2}\lambda _{M}^{M}}\le D_{s_{j}}\cdots D_{s_{0}}e^{ \frac{1}{2}k}\left( \frac{1}{2\widetilde{D}}\right) ^{j+1}B_{-1}^{\left( k-\tau \right) /2}\left( \lambda _{B_{-1}}\right) ^{k}\left\| f\right\| _{\lambda }. \end{aligned}$$

Therefore

$$\begin{aligned} \frac{\left\| G_{M}\right\| _{a}}{M^{M/2}\lambda _{M}^{M}}\le \sum _{j=-1}^{\infty }\sum _{s_{0}=0}^{\beta }\sum _{s_{1}=0}^{\beta }\cdots \sum _{s_{j}=0}^{\beta }\frac{S_{j,M}}{M^{M/2}\lambda _{M}^{M}}\le \left\| f\right\| _{\lambda }B_{-1}^{\left( k-\tau \right) /2}\left( \lambda _{B_{-1}}\right) ^{k}e^{\frac{1}{2}k}\sum _{j=-1}^{\infty } \frac{1}{2^{j+1}}. \end{aligned}$$
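The last factor is a convergent geometric series, \(\sum _{j=-1}^{\infty }2^{-\left( j+1\right) }=2\); a quick numerical confirmation via partial sums:

```python
# The tail factor sum_{j=-1}^infty 2^{-(j+1)} = 1 + 1/2 + 1/4 + ... = 2.
partial = sum(2.0 ** -(j + 1) for j in range(-1, 60))
# partial equals 2 - 2^{-60}, i.e. 2 up to floating-point precision
```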

Recalling (39), we see that

$$\begin{aligned} \lim _{M\rightarrow \infty } B_{-1}^{2^{-1} \left( k-\tau \right) } \lambda _{B_{-1}}^{k} =0, \end{aligned}$$

and thus

$$\begin{aligned} \frac{\left\| G_{M}\right\| _{a}}{M^{M/2}\lambda _{M}^{M}}\rightarrow 0. \end{aligned}$$

\(\square \)