research-article
Open Access

On the Zeros of Exponential Polynomials

Published: 12 August 2023


Abstract

We consider the problem of deciding the existence of real roots of real-valued exponential polynomials with algebraic coefficients. Such functions arise as solutions of linear differential equations with real algebraic coefficients. We focus on two problems: the Zero Problem, which asks whether an exponential polynomial has a real root, and the Infinite Zeros Problem, which asks whether such a function has infinitely many real roots. Our main result is that for differential equations of order at most 8 the Zero Problem is decidable, subject to Schanuel’s Conjecture, while the Infinite Zeros Problem is decidable unconditionally. We show moreover that a decision procedure for the Infinite Zeros Problem at order 9 would yield an algorithm for computing the Lagrange constant of any given real algebraic number to arbitrary precision, indicating that it will be very difficult to extend our decidability results to higher orders.


1 INTRODUCTION

This article concerns root-finding problems for exponential polynomials, i.e., functions that satisfy ordinary linear differential equations with constant coefficients. The first problem we consider is the Zero Problem. This has as input an interval \(I\subseteq \mathbb {R}_{\ge 0}\) with rational (or infinite) endpoints and an ordinary differential equation (1) \(\begin{gather} f^{(n)} + c_{n-1} f^{(n-1)} + \cdots + c_0 f = 0, \end{gather}\)

with the coefficients \(c_0, \ldots ,c_{n-1}\) and initial conditions \(f(0),\ldots ,f^{(n-1)}(0)\) being real algebraic numbers. Writing \(f:\mathbb {R}_{\ge 0} \rightarrow \mathbb {R}\) for the unique solution of the differential equation and initial conditions, the question is whether there exists \(t\in I\) such that \(f(t)=0\). Decidability of the Zero Problem is currently open. Indeed, decidability is open even for the version of the problem in which we only allow bounded intervals I, called the Bounded Zero Problem [2, Open Problem 17]. We also study the Infinite Zeros Problem, which asks whether f has infinitely many zeros over the nonnegative reals.

We recall some terminology in order to state our main contributions. Every solution of Equation (1) has the form \(f(t)=\sum _{j=1}^m P_j(t) e^{\lambda _j t}\), where \(\lambda _1,\ldots ,\lambda _m\) are the characteristic roots of the differential equation and \(P_1,\ldots ,P_m\) are polynomials that are determined by the initial conditions. We call f an exponential polynomial. The frequencies of f are the imaginary parts of the characteristic roots, and the polynomials \(P_j\) are called the coefficients of f. The order of f is the least number n such that f satisfies a differential equation of the form in Equation (1).

Our first main result establishes decidability of the Bounded Zero Problem subject to Schanuel’s Conjecture, a unifying conjecture in transcendental number theory that plays a key role in the study of the exponential function over both the real and complex numbers [21, 22]. We use Schanuel’s Conjecture to show that every exponential polynomial admits a factorization such that the zeros of each factor are simple and can be detected using finite-precision numerical computations.

A celebrated paper of Macintyre and Wilkie [18] obtains decidability of the first-order theory of the structure \(\mathfrak {R}_{\mathrm{exp}}=(\mathbb {R},0,1,\lt ,\,\cdot \,,+,\exp)\) assuming Schanuel’s Conjecture over \(\mathbb {R}\). The proof of [17, Theorem 3.1] refers to unpublished work of Macintyre and Wilkie that extends the above-mentioned result to obtain decidability when \(\mathfrak {R}_{\mathrm{exp}}\) is augmented with the restricted functions \({\sin }\!\upharpoonright _{[0,2\pi ]}\) and \({\cos }\!\upharpoonright _{[0,2\pi ]}\), this time assuming Schanuel’s Conjecture over \(\mathbb {C}\). This result immediately implies decidability of the Bounded Zero Problem. However, decidability of the latter problem is much simpler and, as we show in Section 3, can be established by a direct application of Schanuel’s Conjecture.

Our second main result, spanning Sections 4 and 5, shows that the Infinite Zeros Problem is decidable for the class of exponential polynomials of order at most 8. We show moreover that if such a function has finitely many zeros, then an upper bound on the magnitude of the largest zero can be computed. We thereby reduce the Zero Problem to the Bounded Zero Problem and obtain decidability of the former up to order 8, conditional on Schanuel’s Conjecture. Decidability of the Infinite Zeros Problem is unconditional and relies on various results in real algebraic geometry, transcendental number theory, and Diophantine approximation, such as the cell decomposition theorem for semi-algebraic sets, Baker’s theorem on linear forms in logarithms of algebraic numbers, and Kronecker’s simultaneous approximation theorem.

We complete the picture in Section 6 by showing that decidability of the Infinite Zeros Problem at order 9 would entail a major new effectiveness result in Diophantine approximation—namely computability of the Lagrange constants of all real algebraic numbers. As discussed below, currently essentially nothing is known about the Lagrange constants of algebraic numbers of degree 3 or higher, and there are several longstanding open problems connected to this question. Thus, the exhibited reduction represents a significant barrier to extending the positive decidability results in this article. An analogous hardness result can be proven for the Zero Problem [6, Chapter 6.4].

1.1 Related Work

To the best of our knowledge, the Zero Problem and Infinite Zeros Problem were first studied in [2]. That work established decidability of the Infinite Zeros Problem in the special case that the dominant characteristic roots are simple, are at least four in number, and have imaginary parts linearly independent over \(\mathbb {Q}\) (see [2, Theorem 15]).

The Zero Problem can be seen as a continuous analog of Skolem’s Problem for linear recurrence sequences, which asks whether a given linear recurrence sequence has a zero term [11]. Decidability of Skolem’s Problem is known for recurrences of order at most 4 but is open in general. Likewise, the Infinite Zeros Problem can be seen as a continuous analog of the problem of whether a given linear recurrence sequence has infinitely many zero terms. Decidability of the latter problem was established in [3].


2 PRELIMINARIES

2.1 Exponential Polynomials

Consider a homogeneous linear differential equation (2) \(\begin{gather} f^{(n)} + c_{n-1} f^{(n-1)} + \cdots + c_0 f = 0 \end{gather}\) of order n with complex coefficients \(c_0,\ldots ,c_{n-1}\). The characteristic polynomial of Equation (2) is (3) \(\begin{gather} \chi (x) := x^n + c_{n-1} x^{n-1} + \cdots + c_0. \end{gather}\) Given initial values of \(f(0),f^{\prime }(0),\ldots ,f^{(n-1)}(0)\), the uniquely defined solution f of Equation (2) can be expressed as an exponential polynomial \(f(t) = \sum _{j=1}^k P_j(t)e^{\lambda _jt}\), where \(\lambda _1,\ldots ,\lambda _k\) are the distinct roots of \(\chi\) and for all j, \(P_j\) is a polynomial of degree 1 less than the multiplicity of \(\lambda _j\) as a root of \(\chi\). In particular, if the roots of \(\chi\) are all simple, then f has constant coefficients. We call the polynomials \(P_j\) the coefficients of f and we call the numbers \(\lambda _j\) the exponents of f. We refer to the exponents with maximum real part as dominant.
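The passage from a differential equation with initial conditions to the closed-form exponential polynomial can be carried out with a computer algebra system; the following sympy sketch uses the illustrative instance \(f^{\prime \prime }+f=0\) with \(f(0)=0\), \(f^{\prime }(0)=1\), whose characteristic polynomial \(x^2+1\) has the simple roots \(\pm i\):

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')

# Illustrative instance of Equation (2): f'' + f = 0 with f(0) = 0, f'(0) = 1.
# The characteristic polynomial x^2 + 1 has simple roots +-i, so the solution
# is an exponential polynomial with constant coefficients.
ode = sp.Eq(f(t).diff(t, 2) + f(t), 0)
sol = sp.dsolve(ode, f(t), ics={f(0): 0, f(t).diff(t).subs(t, 0): 1})
# sol is Eq(f(t), sin(t)), i.e. f(t) = (e^{it} - e^{-it}) / 2i
```

Since the roots are simple, the coefficients of the resulting exponential polynomial are constants, as noted above.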

If the coefficients of the differential Equation (2) and the initial values of the derivatives of f are all real algebraic, the exponents and their corresponding coefficients come in complex-conjugate pairs, and f has the form (4) \(\begin{gather} f(t) = \sum _{j=1}^k e^{a_jt}\sum _{l=0}^{m_j-1}C_{j,l}t^l\cos (b_jt + \varphi _{j,l}), \end{gather}\) where the \(a_j,b_j,C_{j,l}\) are real algebraic and the \(\varphi _{j,l}\) are real numbers such that \(e^{i\varphi _{j,l}}\) is algebraic for all \(j,l\). Both the exponential-polynomial form of the solution and the form in Equation (4) can be computed from the differential equation and initial conditions. We refer the reader to [2, Theorem 7] for details. Unless explicitly specified otherwise, throughout this article exponential polynomials are assumed to be real-valued; however, complex-valued ones arise in Section 3.

2.2 Computational Algebraic Number Theory

We denote by \(\overline{\mathbb {Q}}\) the set of algebraic numbers. For computational purposes we assume a representation of algebraic numbers (such as that described in [7, Section 4.2.1]) that allows one to perform arithmetic effectively and to compute the roots of polynomials with algebraic coefficients.

Given an exponential polynomial \(f=\sum _{j=1}^m P_j(t)e^{\lambda _jt}\), consider the field \(K=\mathbb {Q}(\lambda _1,\ldots ,\lambda _m)\). Note that K is closed under complex conjugation. We can compute a primitive element of K, that is, an algebraic number \(\theta\) such that \(K=\mathbb {Q}(\theta)\), together with a representation of each \(\lambda _j\) as a polynomial in \(\theta\) with rational coefficients (see [7, Section 4.5]). From the representation of \(\lambda _1,\ldots ,\lambda _m\) as elements of \(\mathbb {Q}(\theta)\), it is straightforward to determine maximal \(\mathbb {Q}\)-linearly independent subsets of \(\lbrace \mathrm{Re}(\lambda _j) : 1 \le j \le m \rbrace\) and \(\lbrace \mathrm{Im}(\lambda _j) : 1 \le j \le m \rbrace\) (see [13, Section 1]).
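The exact computation works symbolically in \(\mathbb {Q}(\theta)\); as a purely numerical counterpart, one can search for integer relations among real numbers with the PSLQ algorithm. The following mpmath sketch is heuristic (finite precision, bounded coefficient size), not a substitute for the symbolic procedure, and the triples of numbers tested are illustrative:

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 30  # working precision (decimal digits)

# pslq searches for small integers u0, u1, u2, not all zero, with
# u0*1 + u1*sqrt(2) + u2*x ~ 0.  For the Q-linearly independent triple
# (1, sqrt(2), sqrt(3)) no relation exists, so the search returns None.
indep = pslq([mpf(1), sqrt(2), sqrt(3)], maxcoeff=10**6)   # None

# For (1, sqrt(2), sqrt(8)) we have sqrt(8) = 2*sqrt(2), and pslq
# recovers an integer relation witnessing the dependence.
rel = pslq([mpf(1), sqrt(2), sqrt(8)])
```

A returned relation certifies dependence up to the working precision; a `None` result is only heuristic evidence of independence, which is why the text relies on the exact representation in \(\mathbb {Q}(\theta)\).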

2.3 Transcendental Number Theory

We will need the following two classical results from transcendental number theory [14].

Theorem 2.1 (Gelfond-Schneider).

If \(a,b\in \overline{\mathbb {Q}}\) with \(a\ne 0,1\) and \(b\not\in \mathbb {Q}\), then \(a^b\) is transcendental.

Theorem 2.2 (Lindemann-Weierstrass).

If \(a_1,\ldots ,a_n\in \overline{\mathbb {Q}}\) are linearly independent over \(\mathbb {Q}\), then \(e^{a_1},\ldots ,e^{a_n}\) are algebraically independent over \(\mathbb {Q}\).

An easy consequence of the Lindemann-Weierstrass Theorem is that a non-zero algebraic root \(\alpha\) of an exponential polynomial \(f(t) = \sum _{j=1}^k P_j(t) e^{\lambda _j t}\) satisfies \(P_1(\alpha)=\cdots =P_k(\alpha)=0\).

The following lemma, proven in [2], is a consequence of Baker’s theorem on linear forms in logarithms of algebraic numbers.

Lemma 2.3 ([2, Lemma 13]).

Let \(a,b\in \mathbb {R}\cap \overline{\mathbb {Q}}\) be linearly independent over \(\mathbb {Q}\) and let \(i\varphi _1,i\varphi _2\) be logarithms of algebraic numbers, that is, \(e^{i\varphi _1},e^{i\varphi _2}\in \overline{\mathbb {Q}}\). There exist effective constants \(C,N,T\gt 0\) such that for all \(t\ge T\), at least one of \(1-\cos (at+\varphi _1) \gt C/t^N\) and \(1-\cos (bt+\varphi _2) \gt C/t^N\) holds.

Our results on the Bounded Zero Problem depend on Schanuel’s Conjecture, a unifying conjecture in transcendental number theory [14], whose statement generalizes many of the central results in the field (including Theorems 2.1 and 2.2). Recall that a transcendence basis of a field extension \(L/K\) is a subset \(S \subseteq L\) such that S is algebraically independent over K and L is algebraic over \(K(S)\). All transcendence bases of \(L/K\) have the same cardinality, which is called the transcendence degree of the extension.

Conjecture 2.4 (Schanuel’s Conjecture, see [14]).

Let \(a_1,\ldots ,a_n\) be complex numbers that are linearly independent over \(\mathbb {Q}\). Then the field \(\mathbb {Q}(a_1,\ldots ,a_n,e^{a_1},\ldots ,e^{a_n})\) has transcendence degree at least n over \(\mathbb {Q}\).

2.4 Diophantine Approximation

Another key tool is a version of Kronecker’s theorem in Diophantine approximation.

Theorem 2.5 (Kronecker, see [5, Chap. 7, Sec. 1.3, Prop. 1.7]).

Let \(\lambda _1,\ldots ,\lambda _m\) and \(x_1,\ldots ,x_m\) be real numbers. Suppose that for all integers \(u_1,\ldots ,u_m\) such that \(u_1\lambda _1+\cdots +u_m\lambda _m\in \mathbb {Z}\), we also have \(u_1x_1 + \cdots + u_mx_m\in \mathbb {Z}\). Then for all \(\varepsilon \gt 0\), there exist \(p\in \mathbb {Z}^m\) and \(n\in \mathbb {N}\) such that \(|n \lambda _j - x_j - p_j | \lt \varepsilon\) for all \(1\le j \le m\). In particular, if \(1,\lambda _1,\ldots ,\lambda _m\) are linearly independent over \(\mathbb {Z}\), then for all \(x\in \mathbb {R}^m\) and \(\varepsilon \gt 0\) there exist \(n\in \mathbb {N}\) and \(p\in \mathbb {Z}^m\) such that \(|n \lambda _j - x_j - p_j | \lt \varepsilon\).

A direct consequence is the following:

Proposition 2.6.

Let \(a_1,\ldots ,a_m \in \mathbb {R}\cap \overline{\mathbb {Q}}\) be linearly independent over \(\mathbb {Q}\) and let \(\varphi _1,\ldots ,\varphi _m\in \mathbb {R}\). Given \(x\in \mathbb {R}\), denote by \(x \bmod 2\pi\) the unique value \(x^{\prime }\in [0,2\pi)\) such that \(x-x^{\prime } \in 2\pi \mathbb {Z}\). Then the image of the mapping \(h : \mathbb {R}_{\ge 0} \rightarrow [0,2\pi)^m\) given by \(\begin{equation*} h(t) = ((a_1t + \varphi _1)\bmod 2\pi , \ldots , (a_mt + \varphi _m)\bmod 2\pi) \end{equation*}\) is dense in \([0, 2\pi)^m\). Moreover, the set \(\begin{equation*} \lbrace h(t)\, |\, (a_1t + \varphi _1) \bmod 2\pi = 0 \rbrace \end{equation*}\) is dense in \(\lbrace 0\rbrace \times [0,2\pi)^{m-1}\).

Proof.

For the first part of the claim, it is clear that \(1,a_1/2\pi ,\ldots ,a_m/2\pi\) are rationally linearly independent and hence, by Kronecker’s Theorem (Theorem 2.5), the set \(\lbrace h(t) \mid t\in \mathbb {N}\rbrace\) is dense in \([0,2\pi)^m\). It follows that \(\lbrace h(t) \mid t\in \mathbb {R}_{\ge 0}\rbrace\) is also dense in \([0,2\pi)^m\). For the second part, note that the first coordinate of \(h(t)\) is zero precisely when \(t = (2n\pi - \varphi _1)/a_1\) for some \(n\in \mathbb {Z}\). At such a time we have \(h(t)=(0,g(n))\), where \(\begin{align*} g(n) & \stackrel{\mathrm{def}}{=}\left\langle \left(n\frac{2\pi a_j}{a_1} + \frac{a_1\varphi _j-\varphi _1a_j}{a_1}\right) \bmod 2\pi : {2\le j \le m} \right\rangle \,. \end{align*}\) As above, the numbers \(1, a_2 / a_1, \ldots , a_m / a_1\) are linearly independent over \(\mathbb {Q}\), so applying Kronecker’s Theorem (Theorem 2.5), we have that \(\lbrace g(n) : n \in \mathbb {N}\rbrace\) is dense in \([0,2\pi)^{m-1}\).□
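Kronecker’s Theorem is not effective as stated, but for concrete data one can search for the promised n by brute force; in the following sketch the frequencies \(\sqrt {2},\sqrt {3}\), the targets, and the tolerance are all illustrative:

```python
import math

def kronecker_witness(lams, xs, eps, n_max=10**6):
    """Search for n with dist(n*lam_j - x_j, Z) < eps for every j, as promised
    by Kronecker's theorem when 1, lam_1, ..., lam_m are Q-linearly independent."""
    for n in range(1, n_max):
        # ((v + 0.5) % 1.0) - 0.5 is the signed distance from v to the nearest integer
        if all(abs((n * lam - x + 0.5) % 1.0 - 0.5) < eps
               for lam, x in zip(lams, xs)):
            return n
    return None

n = kronecker_witness([math.sqrt(2), math.sqrt(3)], [0.25, 0.75], 0.01)
```

Because the theorem gives no bound on n, the search has no a priori stopping point; this is precisely the non-effectiveness that the quantitative results below (Lemma 2.3 and Theorem 2.7) help to circumvent.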

We will also need the following quantitative version of the one-dimensional case of Kronecker’s Theorem.

Theorem 2.7 (Chebyshev, see [12, Theorem 440]).

If \(\alpha \in \mathbb {R}\setminus \mathbb {Q}\) and \(\beta \in \mathbb {R}\), then there are infinitely many pairs of integers \(l, k\) with \(k\gt 0\) such that \(k |k\alpha + l + \beta | \lt 2\).
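For concrete \(\alpha\) and \(\beta\), the pairs promised by Theorem 2.7 can be enumerated directly: for each k the best candidate l is the integer nearest to \(-(k\alpha +\beta)\). In this sketch the choices \(\alpha =\sqrt {2}\) and \(\beta =\pi\) are illustrative:

```python
import math

def good_pairs(alpha, beta, k_max):
    # Enumerate pairs (k, l), k > 0, with k * |k*alpha + l + beta| < 2,
    # taking l as the integer nearest to -(k*alpha + beta).
    pairs = []
    for k in range(1, k_max + 1):
        l = -round(k * alpha + beta)
        if k * abs(k * alpha + l + beta) < 2:
            pairs.append((k, l))
    return pairs

pairs = good_pairs(math.sqrt(2), math.pi, 10_000)
```

The theorem guarantees that enlarging `k_max` yields ever more such pairs, i.e., infinitely many k with \(\mathrm{dist}(k\alpha +\beta ,\mathbb {Z}) \lt 2/k\).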

2.5 Semi-algebraic Sets

A subset of \(\mathbb {R}^n\) is semi-algebraic if it is defined by a Boolean combination of constraints of the form \(P(x_1,\ldots ,x_n) \gt 0\), where P is a polynomial with real algebraic coefficients. A partial function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) is semi-algebraic if its graph is a semi-algebraic subset of \(\mathbb {R}^{n+1}\). The Tarski-Seidenberg theorem [4, Section 1] states that the semi-algebraic sets are closed under projection and are therefore precisely the first-order definable sets over the structure \((\mathbb {R},\lt ,+,\cdot ,0,1)\).

Let \((i_1,\ldots ,i_n)\) be a sequence of zeros and ones of length \(n\ge 1\). An \((i_1,\ldots ,i_n)\)-cell is a subset of \(\mathbb {R}^n\), defined by induction on n as follows:

(i)

A \((0)\)-cell is a singleton subset of \(\mathbb {R}\) and a \((1)\)-cell is an open interval \((a,b)\subseteq \mathbb {R}\).

(ii)

Let \(X\subseteq \mathbb {R}^n\) be an \((i_1,\ldots ,i_n)\)-cell and \(f:X\rightarrow \mathbb {R}\) a continuous semi-algebraic function. Then \(\lbrace (\boldsymbol {x},f(\boldsymbol {x}))\in \mathbb {R}^{n+1} :\boldsymbol {x}\in X\rbrace\) is an \((i_1,\ldots ,i_n,0)\)-cell, while \(\lbrace (\boldsymbol {x},y)\in \mathbb {R}^{n+1} : \boldsymbol {x}\in X \wedge y\lt f(\boldsymbol {x})\rbrace\) and \(\lbrace (\boldsymbol {x},y)\in \mathbb {R}^{n+1} : \boldsymbol {x}\in X \wedge y\gt f(\boldsymbol {x})\rbrace\) are both \((i_1,\ldots ,i_n,1)\)-cells.

(iii)

Let \(X\subseteq \mathbb {R}^n\) be an \((i_1,\ldots ,i_n)\)-cell and \(f,g:X\rightarrow \mathbb {R}\) continuous semi-algebraic functions such that \(f(\boldsymbol {x})\lt g(\boldsymbol {x})\) for all \(\boldsymbol {x}\in X\). Then \(\lbrace (\boldsymbol {x},y) \in \mathbb {R}^{n+1} : f(\boldsymbol {x})\lt y\lt g(\boldsymbol {x})\rbrace\) is an \((i_1,\ldots ,i_n,1)\)-cell.

A cell in \(\mathbb {R}^n\) is an \((i_1,\ldots ,i_n)\)-cell for some (unique) sequence \((i_1,\ldots ,i_n)\). The following is a fundamental result about semi-algebraic sets [1, 19]:

Theorem 2.8 (Cell decomposition theorem).

Given a semi-algebraic set \(E\subseteq \mathbb {R}^n\), one can compute a partition of E as a disjoint union of cells \(E=C_1\cup \cdots \cup C_m\).

We will need the following two simple results about the real exponential function.

Proposition 2.9.

There is a procedure that, given a semi-algebraic set \(D\subseteq \mathbb {R}^{n+1}\) and real algebraic numbers \(r_1, \ldots ,r_n\), decides whether the set \(S=\lbrace t\ge 0 : (t, e^{r_1t},\ldots ,e^{r_nt})\in D\rbrace\) is bounded, and moreover returns an integer T, such that if S is bounded, then \(S\subseteq [0, T]\), and if S is unbounded, then \((T,\infty)\subseteq S\).

Proof.

Consider a non-zero polynomial \(P \in \mathbb {R}[x_0,\ldots ,x_n]\) whose coefficients are real algebraic numbers. Then we can write \(P(t,e^{r_1t},\ldots ,e^{r_nt})\) in the form \(\begin{equation*} Q_1(t)e^{\beta _1 t} + \cdots + Q_{m}(t) e^{\beta _{m}t} \end{equation*}\) for non-zero univariate polynomials \(Q_1,\ldots ,Q_{m}\) with real algebraic coefficients and real algebraic numbers \(\beta _1\gt \cdots \gt \beta _{m}\). It is clear that for t sufficiently large, \(P(t,e^{r_1t},\ldots ,e^{r_nt})\) has the same sign as the leading term \(Q_1(t)\). The proposition easily follows.□
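The rewriting used in this proof, grouping \(P(t,e^{r_1t},\ldots ,e^{r_nt})\) by exponents and reading off the dominant term, is a mechanical computation; the following sympy sketch performs it for a hypothetical polynomial with exponents \(1\) and \(\sqrt {2}\) (the polynomial and exponents are illustrative, not from the text):

```python
import sympy as sp

t, x0, x1, x2 = sp.symbols('t x0 x1 x2')
r = [sp.Integer(1), sp.sqrt(2)]        # exponents of e^{r_1 t}, e^{r_2 t} (example)
P = x1 * x2 - x0**2 * x1 + 3           # hypothetical polynomial P(x0, x1, x2)

# Substituting x0 = t and x_i = e^{r_i t}, then grouping monomials by their
# total exponent, gives P = sum_i Q_i(t) e^{beta_i t} with beta_1 > beta_2 > ...
groups = {}
for monom, coeff in sp.Poly(P, x1, x2).terms():
    beta = sum(m * ri for m, ri in zip(monom, r))
    groups[beta] = groups.get(beta, 0) + coeff.subs(x0, t)

betas = sorted(groups, key=sp.N, reverse=True)
dominant_beta, dominant_Q = betas[0], sp.expand(groups[betas[0]])
# For large t, the sign of P(t, e^t, e^{sqrt(2) t}) is the sign of dominant_Q(t).
```

Here the grouping yields \(e^{(1+\sqrt {2})t} - t^2e^{t} + 3\), so the dominant coefficient is the constant 1 and the expression is eventually positive.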

Proposition 2.10.

Let \(g : D \rightarrow \mathbb {R}\) be a bounded semi-algebraic function with domain \(D \subseteq \mathbb {R}^n\). Let \(\boldsymbol {r}=(r_1,\ldots ,r_n)\) be a tuple of real algebraic numbers and \(T_1\) an integer such that \(e^{\boldsymbol {r}t} = (e^{r_1t},\ldots ,e^{r_nt}) \in D\) for all \(t\gt T_1\). Then the limit \(g^* = \lim _{t\rightarrow \infty } g(e^{\boldsymbol {r}t})\) exists and is an algebraic number, and there are effective constants \(T_2,\varepsilon \gt 0\) such that \(|g(e^{\boldsymbol {r}t})- g^*| \lt e^{-\varepsilon t}\) for all \(t\gt T_2\).

Proof.

Since g is semi-algebraic, there is a non-zero polynomial P with real algebraic coefficients such that \(P(\boldsymbol {x},g(\boldsymbol {x}))=0\) for all \(\boldsymbol {x}\in D\) (see [1, Proposition 2.86]). In particular, we have \(P(e^{\boldsymbol {r}t},g(e^{\boldsymbol {r}t}))=0\) for all \(t\gt T_1\). Gathering terms, we can rewrite this equation in the form \(\begin{equation*} Q_1(g(e^{\boldsymbol {r}t}))e^{\beta _1 t} + \cdots + Q_{m}(g(e^{\boldsymbol {r}t}))e^{\beta _{m} t} = 0 \end{equation*}\) for non-zero univariate polynomials \(Q_1,\ldots ,Q_{m}\) with real algebraic coefficients and real algebraic numbers \(\beta _1\gt \cdots \gt \beta _m\). Without loss of generality, assume that \(Q_1\) is monic.

If \(m=1,\) then for all \(t\gt T_1\) we have \(Q_1(g(e^{\boldsymbol {r}t}))=0\). Thus, \(g(e^{\boldsymbol {r}t})\) is equal to some root of \(Q_1\) for all \(t\gt T_1\). Then by Proposition 2.9 there exists \(T_2\) and a root \(g^*\) of \(Q_1\) such that \(g(e^{\boldsymbol {r}t})=g^*\) for all \(t\gt T_2\).

If \(m\gt 1\), since g is a bounded function, for all \(t\gt T_1\) we have (5) \(\begin{eqnarray} \big |Q_1(g(e^{\boldsymbol {r}t}))\big | = \left|Q_2(g(e^{\boldsymbol {r}t})) e^{(\beta _2-\beta _1)t} + \cdots + Q_m(g(e^{\boldsymbol {r}t})) e^{(\beta _m-\beta _1)t} \right| \le M e^{(\beta _2-\beta _1)t} \end{eqnarray}\) for some constant M. Let \(Q_1\) have degree d. Since \(Q_1\) is monic, Equation (5) implies that the distance of \(g(e^{\boldsymbol {r}t})\) to a root of \(Q_1\) is at most \((Me^{(\beta _2-\beta _1)t})^{1/d}\). Hence, there exists a root \(g^*\) of \(Q_1\) and effective constants \(\varepsilon ,T_2\gt 0\) such that \(|g(e^{\boldsymbol {r}t})-g^*| \lt e^{-\varepsilon t}\) for all \(t\gt T_2\).□


3 BOUNDED ZERO PROBLEM

In this section we show that the Bounded Zero Problem is decidable. Recall that in this problem we are given a real-valued exponential polynomial f and a bounded interval I, and we wish to determine whether \(f(t) = 0\) for some \(t\in I\). A natural approach is to apply numerical zero-finding techniques. The main obstacle here is to bound the required precision, e.g., consider how one might determine numerically whether a local minimum of f is a zero. To circumvent this obstacle, we first write f as a product of irreducible factors, working over a certain ring of Laurent polynomials, and study the zeros of the factors separately. This factorization allows us to discount repeated real zeros (assuming Schanuel’s Conjecture). Note that the factors of a given real-valued exponential polynomial may be complex-valued.

3.1 Zero Finding

In this section we describe a method for deciding whether an exponential polynomial \(f:\mathbb {C}\rightarrow \mathbb {C}\) has a zero in a given bounded open interval \((a,b)\) in the real line. The idea is to cover the interval \((a,b)\) by disks in the complex plane and determine the number of zeros of f in each disk using the argument principle in complex analysis. Such an approach underlies several root-finding algorithms for polynomials (see, e.g., [15]); however, in the absence of root separation bounds for exponential polynomials, we require additional assumptions on f in order to distinguish real from non-real roots. Establishing these conditions ultimately relies on Schanuel’s Conjecture.

Recall the following classical theorem of complex analysis:

Theorem 3.1 (Argument Principle).

Let \(f:\mathbb {C}\rightarrow \mathbb {C}\) be a holomorphic function and C a positively oriented simple closed contour in \(\mathbb {C}\) on which f has no zeros. Then (6) \(\begin{gather} \oint _C \frac{f^{\prime }(z)}{f(z)} dz = 2\pi i N, \end{gather}\) where N denotes the number of zeros of f in the region enclosed by C, where each zero is counted as many times as its multiplicity.

We briefly explain how to evaluate integrals of the form in Equation (6) in case f is an exponential polynomial and C is a circular contour that does not contain a root of f. We assume that f has algebraic coefficients and C has center \(p+qi\) and radius r for some \(p,q,r\in \mathbb {Q}\). Consider the parameterization \(\gamma :[0,1]\rightarrow \mathbb {C}\) of C given by \(\gamma (t)=p+qi+re^{2\pi i t}\). Then the integral in Equation (6) can be written in the form \(\int _0^1 g(t)dt\) for the function \(\begin{equation*} g(t) = \frac{f^{\prime }(\gamma (t))\gamma ^{\prime }(t)}{f(\gamma (t))}. \end{equation*}\)

Now it is straightforward to compute a constant \(L_1\gt 0\) such that \(f\circ \gamma\) is \(L_1\)-Lipschitz. By approximating the value of f at sufficiently many points on C, one can compute \(\varepsilon \gt 0\) such that \(|f(z)|\gt \varepsilon\) for all \(z\in C\). Using this lower bound, we can in turn compute a constant \(L_2\gt 0\) such that the function \(g(t)\) is \(L_2\)-Lipschitz. It follows that for all \(n\gt 0\), \(\begin{equation*} \left| \int _0^1 g(t)dt - \frac{1}{n} \sum _{k=1}^n g\left(\frac{k}{n}\right) \right| \le \frac{L_2}{n}. \end{equation*}\)

Thus, we can approximate the integral in Equation (6) to arbitrary precision and thereby determine N.
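The Riemann-sum evaluation of Equation (6) can be prototyped in a few lines; in the following sketch the choice \(f=\sin\) (an exponential polynomial, since \(\sin z = (e^{iz}-e^{-iz})/2i\)) and the particular contours are illustrative:

```python
import cmath
import math

def zero_count(f, df, center, radius, n=4000):
    # Riemann-sum approximation of (1 / 2*pi*i) * the contour integral of f'/f
    # over the circle gamma(t) = center + radius * e^{2*pi*i*t}, t in [0, 1].
    total = 0j
    for k in range(n):
        w = cmath.exp(2j * math.pi * k / n)
        z = center + radius * w
        dz = 2j * math.pi * radius * w          # gamma'(k/n)
        total += df(z) / f(z) * dz
    return round((total / (n * 2j * math.pi)).real)

N = zero_count(cmath.sin, cmath.cos, 0, 1)      # counts the simple zero at z = 0
```

On the unit circle about the origin this returns 1; enlarging the radius to 4 also captures the zeros at \(\pm \pi\) and returns 3.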

Proposition 3.2.

Let \(f:\mathbb {C}\rightarrow \mathbb {C}\) be an exponential polynomial with algebraic coefficients and exponents. Assume that all real zeros of f are simple and that complex zeros of f come in complex-conjugate pairs (i.e., for all \(z\in \mathbb {C}\), \(f(z)=0\) iff \(f(\overline{z}) =0\)). Then we can decide whether f has a zero in a given open interval \((a,b)\) with rational endpoints.

Proof.

We argue that the procedure in Figure 1 determines whether f has a zero in the interval \((a,b)\).


Fig. 1. Root-finding procedure for exponential polynomials.

If the procedure terminates in Line (4), then the output is correct by a direct application of Theorem 3.1. If the procedure terminates in Line (3), then the output is correct by the assumption that the real roots of f are simple and complex roots come in conjugate pairs; in conjunction with Theorem 3.1, this entails that \(N_j\) is odd just in case \(C_j\) contains a real root. It remains to argue that the procedure always terminates.

For a given value of k, Line (2) succeeds if f has no zero on any of the contours \(C_j\). Since f has only finitely many zeros in any bounded region of the complex plane, for k sufficiently large the body of the for loop will terminate. Moreover, for k sufficiently large, each contour \(C_j\) will enclose at most one real root and no complex roots and thus the whole procedure will halt. Thus, termination is assured if we evaluate the loop for all values of k in parallel in a fair way (e.g., divide the computation into phases such that in the mth phase one executes m steps of the loop body for \(k=1,\ldots ,m\)).□
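Under the assumptions of Proposition 3.2, the covering loop described in this proof can be sketched as follows. The sketch is illustrative: it assumes that no zero of f lies on a chosen contour (the fair parallel evaluation over all k that handles contour failures is omitted), and `zero_count` is the Riemann-sum evaluation of Equation (6) from above:

```python
import cmath
import math

def zero_count(f, df, center, radius, n=4000):
    # (1 / 2*pi*i) * the contour integral of f'/f over a circle, by Riemann sum.
    total = 0j
    for k in range(n):
        w = cmath.exp(2j * math.pi * k / n)
        z = center + radius * w
        total += df(z) / f(z) * 2j * math.pi * radius * w
    return round((total / (n * 2j * math.pi)).real)

def has_real_zero(f, df, a, b):
    # Cover (a, b) by k disks centred on the real axis.  An odd count forces a
    # real zero (complex zeros come in conjugate pairs); all counts zero
    # excludes one.  Otherwise refine k so that eventually each disk contains
    # at most one real zero and no complex zeros.
    k = 2
    while True:
        r = (b - a) / (2 * k)
        counts = [zero_count(f, df, a + (2 * j + 1) * r, r) for j in range(k)]
        if any(c % 2 == 1 for c in counts):
            return True
        if all(c == 0 for c in counts):
            return False
        k *= 2
```

For example, with \(f=\sin\) the interval \((3,4)\) contains the zero \(\pi\) and is accepted, while \((0.5,1)\) is rejected.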

3.2 Laurent Polynomials

Fix non-negative integers r and s, and consider a single variable X and tuples of variables \(\boldsymbol {Y}= \langle Y_1,\ldots ,Y_r \rangle\) and \(\boldsymbol {Z}= \langle Z_1,\ldots ,Z_s \rangle\). Consider the ring of Laurent polynomials \(\begin{equation*} \mathcal {R}:= \mathbb {C}[X,Y_1,Y_1^{-1},\ldots ,Y_r,Y_r^{-1},Z_1,Z_1^{-1},\ldots , Z_s,Z_s^{-1}], \end{equation*}\) which can be seen as a localization of the polynomial ring \(\begin{equation*} \mathcal {A}:=\mathbb {C}[X,Y_1,\ldots ,Y_r,Z_1,\ldots ,Z_s] \end{equation*}\) in the set of monomials in variables \(Y_1,\ldots ,Y_r\) and \(Z_1,\ldots ,Z_s\). The multiplicative units of \(\mathcal {R}\) are the non-zero monomials in variables \(Y_1,\ldots ,Y_r\) and \(Z_1,\ldots ,Z_s\). As a localization of the unique factorization domain \(\mathcal {A}\), the ring \(\mathcal {R}\) is itself a unique factorization domain [8, Theorem 10.3.7]. From the proof of this fact it moreover easily follows that \(\mathcal {R}\) inherits from \(\mathcal {A}\) the properties that a polynomial with algebraic coefficients factors as a product of irreducible polynomials that also have algebraic coefficients and that this factorization can be effectively computed [16].

We define an involution on \(\mathcal {R}\) (that is, a self-inverse ring automorphism) as follows. Given a polynomial \(\begin{equation*} P(X,\boldsymbol {Y},\boldsymbol {Z}) = \sum _{j=1}^n \alpha _j X^{u_j} {Y_1}^{v_{j,1}} \ldots {Y_r}^{v_{j,r}} {Z_1}^{w_{j,1}}\ldots {Z_s}^{w_{j,s}}, \end{equation*}\) where \(\alpha _1,\ldots ,\alpha _n \in \mathbb {C}\), define \(\begin{equation*} {P}^\star (X,\boldsymbol {Y},\boldsymbol {Z}) := \sum _{j=1}^n \overline{\alpha _j} X^{u_j} {Y_1}^{v_{j,1}} \ldots {Y_r}^{v_{j,r}} {Z_1}^{-w_{j,1}}\ldots {Z_s}^{-w_{j,s}} \, . \end{equation*}\) As we will see shortly, the mapping \((-)^\star\) on \(\mathcal {R}\) corresponds to a natural operation on exponential polynomials.
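The involution is straightforward to implement on a sparse representation; in the following sketch a Laurent polynomial is represented as a dictionary from exponent tuples to coefficients (the representation and the example monomial are illustrative):

```python
def star(P):
    """Apply the involution: P maps (u, v, w) -> coefficient, where u is the
    X-exponent, v is the tuple of Y-exponents, and w is the tuple of
    Z-exponents.  star conjugates each coefficient and negates the Z-exponents."""
    return {(u, v, tuple(-wi for wi in w)): coeff.conjugate()
            for (u, v, w), coeff in P.items()}

P = {(1, (1,), (1,)): 1j}        # the monomial i * X * Y_1 * Z_1 (example)
Pstar = star(P)                  # the monomial -i * X * Y_1 * Z_1^{-1}
```

Applying `star` twice returns the original polynomial, matching the fact that \((-)^\star\) is a self-inverse ring automorphism.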

Consider an exponential polynomial \(f:\mathbb {C}\rightarrow \mathbb {C}\) given by \(\begin{equation*} f(z):=\sum _{j=1}^n P_j(z)e^{\lambda _j z}, \end{equation*}\) where \(\lambda _1,\ldots ,\lambda _n \in \mathbb {C}\) are algebraic numbers and \(P_1,\ldots ,P_n \in \mathbb {C}[z]\) are univariate polynomials with algebraic coefficients. Let \(\lbrace a_1,\ldots ,a_r\rbrace\) be a basis of the \(\mathbb {Q}\)-vector space spanned by \(\lbrace \mathrm{Re}(\lambda _j) : 1 \le j \le n\rbrace\) and let \(\lbrace b_1,\ldots ,b_s\rbrace\) be a basis of the \(\mathbb {Q}\)-vector space spanned by \(\lbrace \mathrm{Im}(\lambda _j) : 1 \le j \le n\rbrace\). Without loss of generality we may assume that each exponent \(\lambda _j\) is an integer linear combination of \(a_1,\ldots ,a_r\) and \(ib_1,\ldots ,ib_s\). Then \(e^{\lambda _j z}\) is a product of positive and negative powers of \(e^{a_1z},\ldots ,e^{a_rz}\) and \(e^{ib_1z},\ldots ,e^{ib_sz}\). It follows that there is a Laurent polynomial \(P\in \mathcal {R}\) such that (7) \(\begin{gather} f(z) = P(z,e^{a_1z},\ldots ,e^{a_r z},e^{i b_1 z},\ldots , e^{i b_s z}). \end{gather}\)

Given the exponential polynomial f in Equation (7), we are led to define \(f^\star : \mathbb {C}\rightarrow \mathbb {C}\) by \(\begin{equation*} f^\star (z) := P^\star (z,e^{a_1z},\ldots ,e^{a_r z},e^{i b_1 z},\ldots , e^{i b_s z}). \end{equation*}\) Clearly \(f^\star\) is an exponential polynomial and, by definition of \(P^\star\), we have \(f^\star (z)=\overline{f(\overline{z})}\) for all \(z\in \mathbb {C}\).

Since the polynomial P in Equation (7) can be written as a product of irreducible factors in the ring of Laurent polynomials \(\mathcal {R}\), the exponential polynomial f can be written as a product of exponential polynomials of the form (8) \(\begin{gather} g(z) = Q(z,e^{a_1z},\ldots ,e^{a_r z},e^{i b_1 z},\ldots , e^{i b_s z}) \end{gather}\) with \(Q\in \mathcal {R}\) irreducible. We now classify such exponential polynomials into two types.

Let \(\lbrace a_1,\ldots ,a_r\rbrace\) and \(\lbrace b_1,\ldots ,b_s\rbrace\) be \(\mathbb {Q}\)-linearly independent sets of real algebraic numbers and consider the exponential polynomial g in Equation (8). We say that g is a Type-1 exponential polynomial if Q is irreducible and if Q and \(Q^\star\) are not associates in \(\mathcal {R}\) (i.e., Q is not the product of \(Q^\star\) with a monomial in \(Y_1,\ldots ,Y_r,Z_1,\ldots ,Z_s\)). We say that g is a Type-2 exponential polynomial if Q is irreducible and if Q and \({Q}^\star\) are associates in \(\mathcal {R}\).

Example 3.3.

A simple example of a Type-2 exponential polynomial is \(g(z)=1+e^{iz}\). Here \(g(z)=Q(e^{iz})\), where \(Q(Z)=1+Z\) is an irreducible polynomial that is associated with its conjugate \(Q^\star (Z)=1+{Z^{-1}}\) (since \(Q=ZQ^\star\)).
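The associate relation of Example 3.3 can be verified mechanically; a small sympy check:

```python
import sympy as sp

Z = sp.symbols('Z')
Q = 1 + Z            # Example 3.3: g(z) = 1 + e^{iz} corresponds to Q(Z) = 1 + Z
Qstar = 1 + 1 / Z    # star conjugates the (real) coefficients and inverts Z

# Q = Z * Q^star, and Z is a unit of the Laurent ring, so Q and Q^star are
# associates; hence g is Type-2.
check = sp.simplify(Q - Z * Qstar)
```

Here `check` reduces to 0, confirming \(Q = ZQ^\star\).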

3.3 Conditional Decidability

In this section we present a decision procedure for the Bounded Zero Problem.

Since the ring \(\mathcal {R}\) has factorization into irreducibles, an arbitrary exponential polynomial can be written as a product of Type-1 and Type-2 exponential polynomials. Moreover, as noted above, this factorization can be computed from f. Thus, it suffices to show how to decide the existence of zeros of Type-1 and Type-2 exponential polynomials. We will handle both cases using Schanuel’s Conjecture via the following proposition.

Proposition 3.4.

Given non-negative integers r and s, let \(\lbrace a_1,\ldots ,a_r\rbrace\) and \(\lbrace b_1,\ldots ,b_s\rbrace\) be \(\mathbb {Q}\)-linearly independent sets of real algebraic numbers. Furthermore, let \(P,Q\in \mathcal {R}\) be two polynomials that have algebraic coefficients and are coprime in \(\mathcal {R}\). Then the equations (9) \(\begin{eqnarray} P(t,e^{a_1 t},\ldots ,e^{a_r t},e^{ib_1 t},\ldots ,e^{i b_s t})&=& 0 \end{eqnarray}\) (10) \(\begin{eqnarray} Q(t,e^{a_1 t},\ldots ,e^{a_r t},e^{ib_1 t},\ldots ,e^{i b_s t})&=& 0 \end{eqnarray}\) have no common solution \(t\in \mathbb {R}\setminus \lbrace 0\rbrace\).

Proof.

Consider a solution \(t\ne 0\) of Equations (9) and (10). By passing to suitable associates, we may assume without loss of generality that P and Q lie in \(\mathcal {A}\), i.e., that all variables in P and Q appear with non-negative exponent. Moreover, since P and Q are coprime in \(\mathcal {R}\), their greatest common divisor R in \(\mathcal {A}\) is a monomial. In particular, \(\begin{equation*} R(t,e^{a_1 t},\ldots ,e^{a_r t},e^{ib_1 t},\ldots ,e^{i b_s t})\ne 0. \end{equation*}\) Thus, dividing P and Q by R, we may assume that P and Q are coprime in \(\mathcal {A}\) and that Equations (9) and (10) still hold.

Since coprime univariate polynomials cannot have a common root, we may assume without loss of generality that \(r+s\ge 1\). By Schanuel’s Conjecture applied to the numbers \(a_1t,\ldots ,a_rt,ib_1t,\ldots ,ib_st\) (which are linearly independent over \(\mathbb {Q}\) since \(t\ne 0\)), the field \(\begin{equation*} \mathbb {Q}(a_1t,\ldots ,a_rt,ib_1t,\ldots ,ib_st,e^{a_1t},\ldots ,e^{a_rt}, e^{ib_1t},\ldots ,e^{ib_st}) \end{equation*}\) has transcendence degree at least \(r+s\) over \(\mathbb {Q}\). Since \(a_1,\ldots ,a_r\) and \(b_1,\ldots ,b_s\) are algebraic over \(\mathbb {Q}\), writing \(\begin{equation*} S:= (t,e^{a_1t},\ldots ,e^{a_rt},e^{ib_1t},\ldots ,e^{ib_st}), \end{equation*}\) it follows that the field \(\mathbb {Q}(S)\) also has transcendence degree at least \(r+s\) over \(\mathbb {Q}\).

Equations (9) and (10) say that S is a common root of P and Q. Pick some variable \(\sigma \in \lbrace x,y_i,z_j : 1\le i \le r, 1 \le j\le s\rbrace\) that has positive degree in P. Then the entry of S corresponding to \(\sigma\) (where t corresponds to x, \(e^{a_it}\) to \(y_i\), and \(e^{ib_jt}\) to \(z_j\)) is algebraic over the remaining entries of S. We claim that these remaining entries of S are algebraically dependent and thus S comprises at most \(r+s-1\) algebraically independent elements, contradicting Schanuel’s Conjecture. The claim clearly holds if \(\sigma\) does not appear in Q (for then Q gives the desired algebraic relation). On the other hand, if \(\sigma\) has positive degree in Q, then, since P and Q are coprime in \(\mathcal {A}\), the multivariate resultant \(\mathrm{Res}_\sigma (P,Q)\) is a non-zero polynomial in which the variable \(\sigma\) does not appear and which vanishes at S (see, e.g., [9, Page 163]). Thus, the claim also holds in this case. We thus obtain a contradiction and conclude that Equations (9) and (10) have no non-zero solution \(t\in \mathbb {R}\).□
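The key property of the resultant used in the proof, namely that eliminating a variable from two coprime polynomials yields a non-zero polynomial that still vanishes at every common root, can be illustrated with a minimal sympy sketch (the polynomials below are arbitrary examples, not taken from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Two polynomials that are coprime in Q[x, y]; their unique common
# root is (x, y) = (1, 1).
P = x * y - 1
Q = x + y - 2

# The resultant with respect to y eliminates y: it is a non-zero
# polynomial in x alone that vanishes at the x-coordinate of every
# common root of P and Q (here x = 1).
R = sp.resultant(P, Q, y)
assert R != 0
assert sp.degree(R, y) <= 0   # y has been eliminated
assert R.subs(x, 1) == 0      # vanishes at the common root
```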

Theorem 3.5.

The Bounded Zero Problem is decidable assuming Schanuel’s Conjecture.

Proof.

Consider an exponential polynomial (11) \(\begin{gather} f(z)=P(z,e^{a_1z},\ldots ,e^{a_r z},e^{i b_1 z},\ldots , e^{i b_s z}), \end{gather}\) where \(\lbrace a_1,\ldots ,a_r\rbrace\) and \(\lbrace b_1,\ldots ,b_s\rbrace\) are \(\mathbb {Q}\)-linearly independent sets of real algebraic numbers, and \(P\in \mathcal {R}\) is irreducible. We show how to determine whether f has a zero in a bounded interval \(I\subseteq \mathbb {R}_{\ge 0}\) with rational endpoints.

If \(r=s=0,\) then \(f(z)\) is a polynomial with algebraic coefficients and deciding the existence of a root in I is straightforward.
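For instance, counting the real roots of a polynomial in a given interval can be done with Sturm sequences; a minimal sympy sketch (with an illustrative polynomial) is:

```python
import sympy as sp

t = sp.symbols('t')

# For r = s = 0 the function f is a polynomial with algebraic
# coefficients; existence of a root in a bounded interval is decidable,
# e.g. via Sturm sequences, which sympy's count_roots implements.
f = sp.Poly(t**3 - 2 * t - 5, t)
n_roots = f.count_roots(0, 3)   # number of real roots in [0, 3]
assert n_roots == 1
```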

If r and s are not both zero, then any root of \(f(z)\) must be transcendental by Theorem 2.2 (see [2, Theorem 8] for details). Thus, we may assume without loss of generality that I is an open interval. We continue by considering separately the cases of Type-1 and Type-2 exponential polynomials.

Case (i): f is a Type-1 exponential polynomial. By assumption, P and \(P^\star\) are irreducible and are not associates in \(\mathcal {R}\). Thus, they are coprime in \(\mathcal {R}\). We claim that the equation \(f(z)=0\) has no solution \(z\in \mathbb {R}\). Indeed, if \(z\in \mathbb {R}\) is a zero of f, then \(\begin{equation*} f^\star (z)=\overline{f(\overline{z})} = \overline{f(z)} = 0 \, ; \end{equation*}\) that is, z is a common zero of f and \(f^\star\). It follows that \(\begin{equation*} (z,e^{a_1z},\ldots ,e^{a_r z},e^{i b_1 z},\ldots ,e^{i b_s z}) \end{equation*}\) is a common zero of P and \(P^\star\), contradicting Proposition 3.4.

Case (ii): f is a Type-2 exponential polynomial. We wish to use the zero-finding procedure of Section 3.1 to determine whether f has a zero in the open interval I. To this end we must show that the complex zeros of f come in conjugate pairs and that all its real zeros are simple.

Considering complex zeros first, note that by the assumption on f we have \({P}^\star =UP\) for some unit \(U\in \mathcal {R}\). It follows that \(\begin{eqnarray*} f(z) = 0 &\Leftrightarrow & f^\star (z) = 0\qquad \mbox{(since $P^\star =UP$)}\\ & \Leftrightarrow & \overline{f(\overline{z})} = 0 \\ & \Leftrightarrow & f(\overline{z}) = 0. \end{eqnarray*}\) Thus, z is a zero of f iff \(\overline{z}\) is a zero of f.

We next show, assuming Schanuel’s Conjecture, that the zeros of f are all simple. We have \(\begin{gather*} f^{\prime }(z) = Q(z,e^{a_1 z},\ldots ,e^{a_r z}, e^{i b_1 z},\ldots ,e^{i b_s z}), \end{gather*}\) for some polynomial \(Q\in \mathcal {R}\). We claim that P and Q are coprime in \(\mathcal {R}\). Since P is irreducible, P and Q can only fail to be coprime if P divides Q.

If P has degree \(k\gt 0\) in X, then Q has degree \(k-1\) in X and thus P cannot divide Q. (Recall that all polynomials in \(\mathcal {R}\) have non-negative degree in the variable X.) On the other hand, if X does not appear in \(P,\) then we can write \(P=\sum _{\boldsymbol {u},\boldsymbol {v}} \alpha _{\boldsymbol {u},\boldsymbol {v}} \boldsymbol {Y}^{\boldsymbol {u}}\boldsymbol {Z}^{\boldsymbol {v}}\), where \(\alpha _{\boldsymbol {u},\boldsymbol {v}} \in \mathbb {C}\) for all \(\boldsymbol {u}\in \mathbb {Z}^r\) and \(\boldsymbol {v}\in \mathbb {Z}^s\). We then have \(Q=\sum _{\boldsymbol {u},\boldsymbol {v}} \alpha _{\boldsymbol {u},\boldsymbol {v}} \beta _{\boldsymbol {u},\boldsymbol {v}} \boldsymbol {Y}^{\boldsymbol {u}}\boldsymbol {Z}^{\boldsymbol {v}}\), where \(\begin{equation*} \beta _{\boldsymbol {u},\boldsymbol {v}}:=\sum _{j=1}^r a_j u_j + i \sum _{j=1}^s b_j v_j. \end{equation*}\) By the rational linear independence of \(\lbrace a_1,\ldots ,a_r\rbrace\) and \(\lbrace b_1,\ldots ,b_s\rbrace\), the numbers \(\beta _{\boldsymbol {u},\boldsymbol {v}}\) are pairwise distinct and non-zero. Since P is not a unit, it has at least two monomials. We conclude that P does not divide Q and hence P and Q are coprime. From Proposition 3.4 we conclude that the equations \(f^{\prime }(z)=f(z)=0\) have no solution \(z\in \mathbb {C}\).□


4 DECIDABILITY FOR ONE AND TWO INDEPENDENT FREQUENCIES

Define a frequency of an exponential polynomial to be the imaginary part of one of its exponents. In the following subsections we consider exponential polynomials with, respectively, one and two rationally linearly independent frequencies. In each case our goal is to determine whether a given exponential polynomial f has infinitely many real zeros and, if f has only finitely many zeros, to obtain an effective constant T such that all positive real zeros of f lie in the interval \([0,T]\).

4.1 One Independent Frequency

Theorem 4.1.

Let \(f(t) = \sum _{j=1}^k P_j(t) e^{\lambda _j t}\) be an exponential polynomial whose set of frequencies spans a \(\mathbb {Q}\)-vector space of dimension at most one. Then we can determine whether \(\lbrace t\in \mathbb {R}_{\ge 0}:f(t)=0\rbrace\) is bounded and, if so, we can compute an integer T such that \(\lbrace t\in \mathbb {R}_{\ge 0}:f(t)=0\rbrace \subseteq [0,T]\).

Proof.

Write \(\lambda _j=a_j+ib_j\), where \(a_j,b_j\) are real algebraic numbers for \(j=1,\ldots ,k\). By assumption, there is a single real algebraic number b such that each \(b_j\) is an integer multiple of b. For each integer n, both \(\cos (nbt)\) and \(\sin (nbt)\) can be written as polynomials in \(\sin (bt)\) and \(\cos (bt)\) with integer coefficients. It follows that \(\begin{equation*} f(t) = Q(t,e^{a_1t},\ldots ,e^{a_kt},\cos (bt),\sin (bt)), \end{equation*}\) for some polynomial Q with algebraic coefficients.
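The multiple-angle expansions invoked here are the classical Chebyshev-type identities; a quick sympy check for \(n=3\) (illustrative only):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# cos(n*x) and sin(n*x) expand into integer-coefficient polynomials in
# cos(x) and sin(x); for n = 3: cos(3x) = 4cos^3(x) - 3cos(x) and
# sin(3x) = 3sin(x) - 4sin^3(x).
lhs_cos = sp.expand_trig(sp.cos(3 * x))
assert sp.simplify(lhs_cos - (4 * sp.cos(x)**3 - 3 * sp.cos(x))) == 0

lhs_sin = sp.expand_trig(sp.sin(3 * x))
assert sp.simplify(lhs_sin - (3 * sp.sin(x) - 4 * sp.sin(x)**3)) == 0
```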

Write \(R := \lbrace t\ge 0 : \sin (bt) \ge 0\rbrace\) and \(R^{\prime } := \lbrace t\ge 0 : \sin (bt) \le 0\rbrace\). We show how to determine boundedness of \(\lbrace t \in R : f(t) = 0\rbrace\). The procedure to decide boundedness of \(\lbrace t \in R^{\prime } : f(t) = 0\rbrace\) can be obtained with minor modifications.

Consider the semi-algebraic set \(\begin{equation*} E:= \left\lbrace (\boldsymbol {u},x) \in \mathbb {R}^{k+1}\times [-1,1] : Q\left(\boldsymbol u,x,\sqrt {1-x^2}\right)=0 \right\rbrace \end{equation*}\) and note that for \(t \in R\) we have \(f(t)=0\) if and only if \((t,e^{\boldsymbol a t},\cos (bt)) \in E\), where \(e^{\boldsymbol a t}\) denotes \((e^{a_1t},\ldots ,e^{a_kt})\). Let \(E=C_1\cup \cdots \cup C_m\) be a cell decomposition of E, and define \(\begin{equation*} Z_j = \lbrace t \in R : (t,e^{\boldsymbol a t},\cos (bt)) \in C_j\rbrace , \quad j=1,\ldots ,m. \end{equation*}\) Then \(\lbrace t \in R : f(t)=0\rbrace = Z_1 \cup \cdots \cup Z_m\).

Fix \(j\in \lbrace 1,\ldots ,m\rbrace\). We show how to decide whether \(Z_j\) is bounded and, in case \(Z_j\) is bounded, we show how to compute an upper bound on \(Z_j\). This suffices to prove the theorem. To this end, write \(D_j \subseteq \mathbb {R}^{k+1}\) for the projection of \(C_j \subseteq \mathbb {R}^{k+2}\) on the first \(k+1\) coordinates. We consider two cases.

The first case is that \(\lbrace t : (t,e^{\boldsymbol a t}) \in D_j\rbrace\) is bounded. By Proposition 2.9, we can compute an upper bound T of this set. But then T is an effective upper bound on the set \(Z_j\).

The second case is that \(\lbrace t : (t,e^{\boldsymbol a t}) \in D_j\rbrace\) is unbounded. We claim that \(Z_j\) is also unbounded. Indeed, observe that by Proposition 2.9, the set \(\lbrace t : (t,e^{\boldsymbol a t}) \in D_j\rbrace\) contains an unbounded interval \((T,\infty)\). Note also that, by definition of a cell, there exists a continuous semi-algebraic function \(\xi\) with domain \(D_j\) such that \((\boldsymbol u,\xi (\boldsymbol u)) \in C_j\) for all \(\boldsymbol u \in D_j\). Then for all \(t\in R\), \(\begin{eqnarray*} f(t)=0 & \Longleftarrow & (t,e^{\boldsymbol a t},\cos (bt)) \in C_j\\ & \Longleftarrow & (t,e^{\boldsymbol a t}) \in D_j \wedge \xi (t,e^{\boldsymbol a t})=\cos (bt) \\ & \Longleftarrow & {t \in (T,\infty)} \wedge \xi (t,e^{\boldsymbol a t})=\cos (bt). \end{eqnarray*}\) Now \(\xi (t,e^{\boldsymbol at})\) is a continuous function with domain \((T,\infty)\) that takes values in \([-1,1]\). Furthermore, \(R \cap (T,\infty)\) contains infinitely many intervals over which \(\cos (bt)\) runs from \(+1\) to \(-1\). Thus, there are infinitely many \(t\in R\) such that \(\xi (t,e^{\boldsymbol at})=\cos (bt)\) and hence f has infinitely many zeros in \(Z_j\).□
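The intermediate-value argument at the end of this proof is easy to visualize numerically: a bounded continuous function must meet \(\cos (bt)\) on every interval where the cosine sweeps from \(+1\) to \(-1\). The sketch below uses a hypothetical stand-in for \(\xi (t,e^{\boldsymbol at})\) and an illustrative frequency b, and finds one crossing per sweep by bisection.

```python
import math

# Hypothetical stand-ins: xi plays the role of the continuous
# semi-algebraic function xi(t, e^{a t}) with values in [-1, 1],
# and b > 0 is an irrational frequency.
b = math.sqrt(2)

def xi(t):
    return 0.9 * math.sin(1.0 / t)   # tends to 0 as t -> infinity

def eta(t):
    return xi(t) - math.cos(b * t)

# On each interval where cos(b t) runs from +1 down to -1, eta changes
# sign, so bisection finds a root: eta has infinitely many zeros.
roots = []
for n in range(1, 6):
    lo, hi = 2 * n * math.pi / b, (2 * n + 1) * math.pi / b
    assert eta(lo) < 0 < eta(hi)
    for _ in range(60):              # plain bisection
        mid = (lo + hi) / 2
        if eta(mid) < 0:
            lo = mid
        else:
            hi = mid
    roots.append((lo + hi) / 2)

assert all(abs(eta(r)) < 1e-9 for r in roots)
```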

4.2 Two Independent Frequencies

In this section we consider exponential polynomials whose frequencies span a two-dimensional vector space over \(\mathbb {Q}\). We show that the Infinite Zeros Problem is decidable under the additional assumption that the coefficient polynomials are all constants. (The results in Section 6 show that dropping this additional assumption would entail significant new results about Diophantine approximation of algebraic numbers.) Our strategy is to reduce the Infinite Zeros Problem to the question of whether a linear flow on a two-dimensional torus hits a certain shrinking target set infinitely often.

Theorem 4.2.

Let \(f(t) = \sum _{j=1}^k P_j e^{\lambda _j t}\) be an exponential polynomial whose coefficient polynomials \(P_1,\ldots ,P_k\) are constant and whose frequencies span a two-dimensional vector space over \(\mathbb {Q}\). Then we can decide whether \(\lbrace t\in \mathbb {R}_{\ge 0}:f(t)=0\rbrace\) is bounded and, if so, we can compute an integer T such that \(\lbrace t\in \mathbb {R}_{\ge 0}:f(t)=0\rbrace \subseteq [0,T]\).

Proof.

Write \(a_j=\mathrm{Re}(\lambda _j)\) for \(j=1,\ldots ,k\). Let \(b_1,b_2\) be real algebraic numbers, linearly independent over \(\mathbb {Q}\), such that \(\mathrm{Im}(\lambda _j)\) is an integer linear combination of \(b_1\) and \(b_2\) for \(j=1,\ldots ,k\). For each \(n\in \mathbb {Z}\), \(\sin (nb_1t)\) and \(\cos (nb_1t)\) can be written as polynomials in \(\sin (b_1t)\) and \(\cos (b_1t)\) with integer coefficients, and similarly for \(b_2\). It follows that we can write f in the form \(\begin{equation*} f(t) = Q(e^{a_1 t},\ldots , e^{a_kt},\cos (b_1t),\cos (b_2t), \sin (b_1t),\sin (b_2t)) \end{equation*}\) for some polynomial Q with real algebraic coefficients that is computable from f.

Write \(R:= \lbrace t\ge 0:\sin (b_1t)\ge 0 \wedge \sin (b_2t)\ge 0 \rbrace\). We show how to decide boundedness of \(\lbrace t\in R :f(t)=0\rbrace\). The cases for the other three sign conditions on \(\sin (b_1t)\) and \(\sin (b_2t)\) follow mutatis mutandis.

Define a semi-algebraic set \(\begin{equation*} E := \lbrace (\boldsymbol {u},x,y) \in \mathbb {R}^k \times [-1,1]^2: Q (\boldsymbol {u},x,y,\sqrt {1-x^2},\sqrt {1-y^2})=0\rbrace . \end{equation*}\) Then for \(t\in R\) we have \(f(t)=0\) if and only if \((e^{\boldsymbol {a}t},\cos (b_1t),\cos (b_2t)) \in E\), where \(\boldsymbol {a}= (a_1,\ldots ,a_k)\). Now consider a cell decomposition \(E=C_1 \cup \cdots \cup C_m\), and define (12) \(\begin{gather} Z_j := \lbrace t\in R : (e^{\boldsymbol {a}t},\cos (b_1t),\cos (b_2t)) \in C_j \rbrace , \qquad j=1,\ldots ,m. \end{gather}\) Then \(\lbrace t \in R : f(t)=0\rbrace = Z_1 \cup \cdots \cup Z_m\). We now analyze the boundedness of each component \(Z_j\).

Fix \(j\in \lbrace 1,\ldots ,m\rbrace\). We show how to decide boundedness of \(Z_j\). To this end, write \(D_j \subseteq \mathbb {R}^{k}\) for the projection of the corresponding cell \(C_j \subseteq \mathbb {R}^{k+2}\) on the first k coordinates.

First, suppose that \(\lbrace t \in \mathbb {R} : e^{\boldsymbol {a}t} \in D_j \rbrace\) is bounded. Then, by Proposition 2.9, we can compute an upper bound T of this set, entailing that \(Z_j \subseteq [0,T]\). On the other hand, suppose that \(\lbrace t\in \mathbb {R}: e^{\boldsymbol {a}t} \in D_j \rbrace\) is unbounded. Then, by Proposition 2.9, this set contains an unbounded interval \((T,\infty)\) for some \(T\in \mathbb {N}\). Write \(I=[-1,1]\) and define functions \(g_1,g_2,h_1,h_2 : D_j \rightarrow I\) by (13) \(\begin{align} g_1(\boldsymbol {u}) & = \inf \lbrace x\in I : \exists y \, (\boldsymbol {u},x,y) \in C_j\rbrace \qquad & g_2(\boldsymbol {u}) & = \inf \lbrace y\in I : \exists x \, (\boldsymbol {u},x,y) \in C_j\rbrace \end{align}\) (14) \(\begin{align} h_1(\boldsymbol {u}) & = \sup \lbrace x\in I: \exists y \, (\boldsymbol {u},x,y) \in C_j\rbrace \qquad & h_2(\boldsymbol {u}) & = \sup \lbrace y\in I : \exists x \, (\boldsymbol {u},x,y) \in C_j\rbrace . \end{align}\) These functions are all semi-algebraic; hence, by Proposition 2.10, the limits \(g_i^* := \lim _{t\rightarrow \infty } g_i(e^{\boldsymbol {a}t})\) and \(h_i^* := \lim _{t\rightarrow \infty } h_i(e^{\boldsymbol {a}t})\) exist for \(i=1,2\) and are algebraic numbers. Clearly we have \(g_1^*\le h_1^*\) and \(g_2^*\le h_2^*\). We now consider three cases according to the strictness of these inequalities.

Case I: Suppose that \(g_1^*= h_1^*\) and \(g_2^*=h_2^*\). We show that \(Z_j\) is bounded and that we can compute \(T_2\) such that \(Z_j\subseteq [0,T_2]\).

By Proposition 2.10, there exist \(T_1,\varepsilon \gt 0\) such that for all \(t\gt T_1\) and \(i=1,2\), (15) \(\begin{gather} |g_i(e^{\boldsymbol {a}t}) - g_i^*| \lt e^{-\varepsilon t} \mbox{ and } |h_i(e^{\boldsymbol {a}t}) - h_i^*| \lt e^{-\varepsilon t}. \end{gather}\)

Then for \(t \in R\) such that \(t\gt T_1\) we have (16) \(\begin{eqnarray} t\in Z_j & \Longleftrightarrow & \left(e^{\boldsymbol {a}t},\cos (b_1t),\cos (b_2t)\right) \in C_j \;\;{\text{(by Equation }\href{#eq12}{\text{(12)}})}\\ & \Longrightarrow & g_1(e^{\boldsymbol {a}t}) \le \cos (b_1t) \le h_1(e^{\boldsymbol {a}t}) \,\mbox{ and }\, g_2(e^{\boldsymbol {a}t}) \le \cos (b_2t) \le h_2(e^{\boldsymbol {a}t}) \;\;{\text{(by Equations }\href{#eq13}{\text{(13)}}\text{ and }\href{#eq14}{\text{(14)}})}\\ &\Longrightarrow & \left| \cos (b_1t) - g_1^* \right| \lt e^{-\varepsilon t} \mbox{ and } \left| \cos (b_2t) - g_2^* \right| \lt e^{-\varepsilon t} \;\;{\text{(by Equation }\href{#eq15}{\text{(15)}})}. \end{eqnarray}\)

Write \(g_1^* = \cos (\varphi _1)\) and \(g_2^*=\cos (\varphi _2)\) for some \(\varphi _1,\varphi _2\in [0,\pi ]\). Since \(|\cos (\varphi _1+x)-\cos (\varphi _1)| \ge |x|^3/3\) for all x sufficiently small (by a Taylor expansion), the inequality in Equation (16) implies that for some \(k_1,k_2 \in \mathbb {Z}\), (17) \(\begin{gather} |b_1t - \varphi _1 - 2k_1\pi | \lt 3e^{-\varepsilon t/3} \,\mbox{ and }\, |b_2t - \varphi _2 - 2k_2\pi | \lt 3e^{-\varepsilon t/3}. \end{gather}\) Combining the upper bounds in Equation (17) with the polynomial lower bounds from Lemma 2.3 (i.e., that for some N and all t sufficiently large, either \(|b_1t-\varphi _1-2k_1\pi | \gt 1/t^N\) or \(|b_2t-\varphi _2-2k_2\pi | \gt 1/t^N\)), we obtain an effective bound \(T_2\) for which \(t\in Z_j\) implies \(t\lt T_2\).
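The effectivity of \(T_2\) comes down to the fact that an exponentially decaying upper bound must eventually cross below a polynomially decaying lower bound, and the crossover point is computable. A numerical sketch with illustrative values of \(\varepsilon\) and N (not taken from the paper):

```python
import math

# Illustrative values; the shapes 3*exp(-eps*t/3) and 1/t^N match the
# bounds in (17) and Lemma 2.3.
eps, N = 0.1, 5

def upper(t):
    return 3 * math.exp(-eps * t / 3)

def lower(t):
    return 1 / t**N

# Find the first integer T2 where the exponential bound drops below
# the polynomial one.
T2 = 1
while upper(T2) >= lower(T2):
    T2 += 1

# For t > 3N/eps the ratio upper/lower is strictly decreasing, so once
# it drops below 1 it stays below: every zero must satisfy t < T2.
assert T2 > 3 * N / eps
assert all(upper(T2 + 10 * k) < lower(T2 + 10 * k) for k in range(50))
```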

Case II: Next, suppose that \(g_1^*\lt h_1^*\). In this case we show that \(Z_j\) is unbounded. The geometric intuition is as follows: imagine a particle in the plane whose position at time t is \((\cos (b_1t),\cos (b_2t))\), together with a “moving target” whose extent at time t is \(\Gamma _t:=\lbrace (x,y) : (e^{\boldsymbol {a}t},x,y) \in C_j \rbrace\). Intuitively, such a particle must reach \(\Gamma _t\) for arbitrarily large values of t since its orbit is dense in \([-1,1]^2\) and \(\Gamma _t\) “converges” to a subset of \([-1,1]^2\) that has positive dimension.

Proceeding formally, first notice that \(C_j\) cannot be a \((\ldots ,0,1)\)-cell or a \((\ldots ,0,0)\)-cell, for then we would have \(g_1(\boldsymbol {u}) = h_1(\boldsymbol {u})\) for all \(\boldsymbol {u}\in D_j\) and hence \(g_1^*=h_1^*\). Thus, \(C_j\) must either be a \((\ldots ,1,0)\)-cell or a \((\ldots ,1,1)\)-cell. In either case, \(C_j\) includes a cell of the form \(\lbrace (\boldsymbol {u},x,\xi (\boldsymbol {u},x)) : \boldsymbol {u}\in D_j, g_1(\boldsymbol {u}) \lt x \lt h_1(\boldsymbol {u}) \rbrace\) for some continuous semi-algebraic function \(\xi\).

Let \(c,d\) be real algebraic numbers such that \(g_1^*\lt c\lt d\lt h_1^*\). Write \(c=\cos (\psi ^{\prime })\) and \(d=\cos (\psi)\) for \(0\le \psi \lt \psi ^{\prime } \le \pi\). By Proposition 2.10, the limits \(\lim _{t\rightarrow \infty } \xi (e^{\boldsymbol {a}t},c)\) and \(\lim _{t\rightarrow \infty } \xi (e^{\boldsymbol {a}t},d)\) exist and are algebraic numbers in the interval \([-1,1]\). Let \(\theta ,\theta ^{\prime } \in [0,\pi ]\) be such that \(\cos (\theta)=\lim _{t\rightarrow \infty } \xi (e^{\boldsymbol {a}t},d)\) and \(\cos (\theta ^{\prime }) = \lim _{t\rightarrow \infty } \xi (e^{\boldsymbol {a}t},c)\).

Since \(\cos (\theta)\), \(\cos (\theta ^{\prime })\), \(\cos (\psi)\), and \(\cos (\psi ^{\prime })\) are algebraic, \(e^{i(\theta ^{\prime }-\theta)}\) and \(e^{i(\psi ^{\prime }-\psi)}\) are also algebraic, and hence by the Gelfond-Schneider theorem (Theorem 2.1), the quotient \(\frac{\theta ^{\prime }-\theta }{\psi ^{\prime }-\psi }\) is either rational or transcendental. In particular, we know that it is not equal to \(\frac{b_2}{b_1}\), which is algebraic and irrational. Let us suppose that \(\frac{\theta ^{\prime }-\theta }{\psi ^{\prime }-\psi } \gt \frac{b_2}{b_1}\) (the converse case is almost identical). Then there exists \(\theta ^{\prime \prime }\) with \(\theta \lt \theta ^{\prime \prime }\lt \theta ^{\prime }\), such that (18) \(\begin{gather} \theta \lt \theta ^{\prime \prime } + \frac{b_2}{b_1}(\psi ^{\prime }-\psi) \lt \theta ^{\prime }. \end{gather}\)

Since \(2\pi ,b_1,b_2\) are linearly independent over \(\mathbb {Q}\), it follows from Proposition 2.6 that \(\lbrace (b_1t,b_2t)\bmod 2\pi : t \in \mathbb {R}_{\ge 0} \rbrace\) is dense in \([0,2\pi)^2\). Thus, there is an increasing sequence \(t_1\lt t_2\lt \cdots\), with \(b_1t_n \equiv \psi \bmod 2\pi\) for all n, such that \(b_2t_n \bmod 2\pi\) converges to \(\theta ^{\prime \prime }\). Then, defining \(s_1\lt s_2\lt \cdots\) by \(s_n = t_n + \frac{\psi ^{\prime }-\psi }{b_1}\), we have \(b_1s_n \equiv \psi ^{\prime } \bmod 2\pi\) for all n and, by Equation (18), \(\begin{equation*} \lim _{n \rightarrow \infty } b_2s_n = \lim _{n\rightarrow \infty } b_2t_n +\frac{b_2}{b_1}(\psi ^{\prime }-\psi) = \theta ^{\prime \prime } + \frac{b_2}{b_1}(\psi ^{\prime }-\psi) \lt \theta ^{\prime } \quad (\bmod \; 2\pi). \end{equation*}\)
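The density fact underlying the construction of the sequence \(t_n\) can be checked numerically: along \(t_n = (\psi + 2\pi n)/b_1\) (so that \(b_1t_n \equiv \psi \bmod 2\pi\)), the values \(b_2t_n \bmod 2\pi\) come arbitrarily close to any prescribed target when \(b_2/b_1\) is irrational. All numerical values below are illustrative.

```python
import math

# Illustrative frequencies with b2/b1 irrational, and an arbitrary
# phase psi and target angle.
b1, b2 = 1.0, math.sqrt(2)
psi, target = 0.3, 2.0

def circ_dist(u, v):
    """Distance between two angles on the circle R / 2*pi*Z."""
    d = abs(u - v) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# Minimum distance of b2*t_n mod 2*pi to the target over many n:
# equidistribution makes this as small as we like for n large enough.
best = min(circ_dist(b2 * (psi + 2 * math.pi * n) / b1, target)
           for n in range(1, 20000))
assert best < 1e-2
```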

Let \(\eta (t)=\xi (e^{\boldsymbol {a}t},\cos (b_1t)) - \cos (b_2t)\). Then for \(t\in R\) such that \(g_1(e^{\boldsymbol {a}t})\lt \cos (b_1t)\lt h_1(e^{\boldsymbol {a}t})\), \(\begin{eqnarray*} \eta (t)=0&\Longrightarrow & \cos (b_2t)=\xi (e^{\boldsymbol {a}t},\cos (b_1t))\\ &\Longrightarrow & (e^{\boldsymbol {a}t},\cos (b_1t),\cos (b_2t)) \in C_j\\ &\Longrightarrow & t\in Z_j \;\;{\text{(by Equation }\href{#eq12}{\text{(12)}})}. \end{eqnarray*}\) Now \(\lim _{n\rightarrow \infty }\eta (t_n)=\cos (\theta)-\cos (\theta ^{\prime \prime })\gt 0\) and \(\lim _{n\rightarrow \infty } \eta (s_n)\lt \cos (\theta ^{\prime })-\cos (\theta ^{\prime })=0\). Moreover, for n sufficiently large we have \([t_n,s_n]\subseteq R\). It follows that \(\eta (t)\) has a zero in every interval \([t_n,s_n]\) for n large enough. We conclude that \(Z_j\) is unbounded.

Case III: Finally, the case \(g_2^* \lt h_2^*\) is symmetric to Case II.□


5 DECIDABILITY UP TO ORDER EIGHT

We now shift our attention to the low-order case. The main results of this section establish decidability of the Infinite Zeros Problem and conditional decidability of the Zero Problem for exponential polynomials of order at most 8.

5.1 Known Decidable Cases

First, we recall some simple criteria on the dominant terms of an exponential polynomial that make it easy to decide whether it has infinitely many zeros.

Lemma 5.1.

Let f be an exponential polynomial that has a single dominant term that is moreover associated to a real exponent. Then f has finitely many nonnegative real zeros, all lying in an interval \([0,T]\) for some effectively computable constant T.

Proof.

Suppose that f has dominant term \(At^de^{rt}\) for real algebraic numbers \(A\ne 0\) and r and a nonnegative integer d. Then we can write \(\frac{f(t)}{e^{rt}t^d} = A + g(t)\), where \(|g(t)| = O(1/t)\). Moreover, the constants in the asymptotic notation are effective and hence we can compute the desired threshold T.□
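As a concrete illustration of this argument (the instance below is invented for illustration, not taken from the paper), consider \(f(t) = 3e^{2t} - 10e^{t}\cos (5t)\), whose single dominant term \(3e^{2t}\) has a real exponent. Dividing by \(e^{2t}\) gives \(3 + g(t)\) with \(|g(t)| \le 10e^{-t}\), so all zeros lie in \([0,T]\) once \(10e^{-T} \lt 3\).

```python
import math

# Illustrative instance of Lemma 5.1: f(t) = 3e^{2t} - 10 e^t cos(5t).
# f(t)/e^{2t} = 3 + g(t) with |g(t)| <= 10*e^{-t}, so every zero
# satisfies 10*e^{-t} >= 3, i.e. t <= ln(10/3).
def f(t):
    return 3 * math.exp(2 * t) - 10 * math.exp(t) * math.cos(5 * t)

T = math.ceil(math.log(10 / 3))   # effective threshold

# Sanity check: f is strictly positive on a dense grid beyond T.
for k in range(2000):
    t = T + k * 0.05
    assert f(t) > 0
```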

Lemma 5.2 ([2, Theorem 11]).

An exponential polynomial that has no dominant term associated to a real exponent has infinitely many zeros.

Theorem 5.3 ([2, Theorem 15]).

Consider an exponential polynomial whose dominant exponents all have constant coefficients, are at least four in number, and have imaginary parts linearly independent over \(\mathbb {Q}\). Then the existence of infinitely many zeros is decidable. Moreover, if there are finitely many zeros, then there is an effective threshold T such that all zeros are in \([0, T]\).

5.2 Two and Three Rationally Linearly Independent Frequencies

In this section we consider certain sub-cases of the Infinite Zeros Problem for exponential polynomials of order at most 8 that are not covered by the results of Section 4. Namely, we consider exponential polynomials with two rationally linearly independent frequencies and non-constant coefficients, and exponential polynomials with three rationally linearly independent frequencies.

Lemmas 5.4 and 5.5, below, concern exponential polynomials in which the dominant terms involve linear coefficient polynomials.

Lemma 5.4.

Let \(A, B, C, D, E, a, b, r_1\) be real algebraic numbers such that \(a,b \gt 0 \gt r_1\) and \(A,B \ne 0\). Let also \(\varphi _1,\varphi _2,\varphi _3 \in \mathbb {R}\) be such that \(e^{i\varphi _1},e^{i\varphi _2},e^{i\varphi _3}\) are algebraic. Define the exponential polynomial f by \(\begin{equation*} f(t) = t(A\cos (at+\varphi _1) + B) + (C\cos (at + \varphi _2) + D) + E e^{r_1t}\cos (bt+\varphi _3). \end{equation*}\) Then it is decidable whether f has infinitely many zeros. Moreover, if f has only finitely many zeros, then there exists an effective constant T such that all zeros lie in \([0,T]\).

Proof.

We can assume without loss of generality that \(a/b\not\in \mathbb {Q}\), since otherwise f would have at most one rationally linearly independent frequency and the lemma would follow from Theorem 4.1.

Considering the dominant term \(t(A\cos (at+\varphi _1) + B)\), it is clear that if \(|A|\gt |B|,\) then f changes sign infinitely often, whereas if \(|B|\gt |A|\), then for t large enough, \(f(t)\) has the same sign as B. Thus, we can assume \(|A|=|B|\). Dividing f by B, and replacing \(\varphi _1\) by \(\varphi _1+\pi\) if necessary, we can moreover assume that f has the form (19) \(\begin{gather} f(t) = \underbrace{t(1-\cos (at+\varphi _1))}_{\alpha (t)} + \underbrace{(C\cos (at+\varphi _2)+D)}_{\beta (t)} + \underbrace{e^{r_1t}E\cos (bt+\varphi _3)}_{\gamma (t)}. \end{gather}\)

We now focus on the critical times \(t_j\stackrel{\mathrm{def}}{=}\frac{2j\pi -\varphi _1}{a}\), \(j\in \mathbb {N}\), at which the dominant term \(\alpha (t)\) vanishes. Define \(F \stackrel{\mathrm{def}}{=}C\cos (\varphi _2-\varphi _1)+D\) and notice that \(\beta (t_j)=F\) for all \(j \in \mathbb {N}\).

Case (i). Suppose that \(F\le 0\). By the linear independence of \(a,b\) and Proposition 2.6, we have \(\gamma (t_j)\lt 0\) for infinitely many j. Thus, \(f(t_j) = F+\gamma (t_j)\) is negative for infinitely many j. But, considering the dominant term \(\alpha (t)\), it is also clear that \(\lim \sup _{t\rightarrow \infty }f(t)\gt 0\). We conclude that f has infinitely many zeros.

Case (ii). Suppose that \(F \gt 0\). We claim that f is ultimately positive. Since \(\lim \sup _{t\rightarrow \infty } f(t)\gt 0\), it suffices to show that for t sufficiently large, if \(f^{\prime }(t)=0,\) then \(f(t)\gt 0\).

Since \(\beta\) is uniformly continuous, there exists \(\delta \gt 0\) such that for all \(t\in \mathbb {R}_{\ge 0}\) and \(j\in \mathbb {N}\) such that \(|t-t_j|\lt \delta\) we have \(\beta (t)\gt F/2\). Furthermore, since \(f(t) \ge \beta (t)-e^{r_1t}|E|\), we have that \(f(t)\gt 0\) for all sufficiently large \(t\in \mathbb {R}_{\ge 0}\) and \(j\in \mathbb {N}\) such that \(|t-t_j|\lt \delta\).

Let t be such that \(f^{\prime }(t)=0\) and choose \(j \in \mathbb {N}\) such that \(|at+\varphi _1-j\pi | \le \frac{\pi }{2}\). If j is odd, then \(\cos (at+\varphi _1) \le 0\) and hence \(f(t) \ge t+\beta (t)+\gamma (t)\) is positive for t sufficiently large. Now suppose that j is even—say \(j=2k\) for some k. From the equation \(f^{\prime }(t)=0\) we have (20) \(\begin{gather} |at\sin (at+\varphi _1)| = |\cos (at+\varphi _1) - 1 + \beta ^{\prime }(t)+\gamma ^{\prime }(t) | \le G \end{gather}\) for some positive constant G. It follows that \(\begin{align*} |t-t_k| & \, =\, \frac{1}{a} \left|at+\varphi _1-2k\pi \right| && \lbrace \mbox{defn. of $t_k$, $j=2k$} \rbrace \\ & \,\le \, \frac{2}{a} |\sin (at+\varphi _1-2k\pi)| && \lbrace \mbox{since $|x|\le 2|\sin (x)|$ for all $x\in [-\frac{\pi }{2},\frac{\pi }{2}]$} \rbrace \\ & \, \le \, \frac{2G}{a^2t} && \lbrace \text{by Equation }\href{#eq20}{\text{(20)}} \rbrace . \end{align*}\) We conclude that for t sufficiently large, \(|t-t_k|\lt \delta\) and hence, as observed above, \(f(t)\gt 0\).□
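The elementary bound \(|x|\le 2|\sin (x)|\) on \([-\frac{\pi }{2},\frac{\pi }{2}]\) used in this derivation follows from \(\sin (x) \ge \frac{2}{\pi }x \ge \frac{x}{2}\) on \([0,\frac{\pi }{2}]\); a grid check (illustrative only):

```python
import math

# Check |x| <= 2|sin(x)| on a grid over [-pi/2, pi/2]; the small
# tolerance absorbs floating-point rounding at x = 0.
for k in range(-500, 501):
    x = k * (math.pi / 2) / 500
    assert abs(x) <= 2 * abs(math.sin(x)) + 1e-12
```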

Lemma 5.5.

Let \(\begin{equation*} f(t) = t(A_1 + A_2\cos (at + \varphi)) + A\cos (at + \varphi _1) + B\cos (bt + \varphi _2) + C, \end{equation*}\) where \(a,b,A,B,C,A_1,A_2\ne 0\) are real algebraic and \(\varphi ,\varphi _1,\varphi _2\) are real with \(a, b\) positive and \(e^{i\varphi },e^{i\varphi _1},e^{i\varphi _2}\) algebraic. It is decidable whether f has infinitely many zeros in \([0, \infty)\), and if not, then a threshold T can be computed such that all zeros of f lie in \([0,T]\).

Proof.

We can assume without loss of generality that \(a/b\not\in \mathbb {Q}\), since otherwise the claim follows from Theorem 4.1. If \(|A_1| \lt |A_2|\), then f has infinitely many zeros (consider large t such that \(\cos (at+\varphi)=\pm 1\)), whereas if \(|A_1| \gt |A_2|\), then the sign of f will eventually match that of \(A_1\), so f has only finitely many zeros. We are left with the case that \(|A_1| = |A_2|\). Dividing f through by \(A_1\) and replacing \(\varphi\) by \(\varphi +\pi\) if necessary, we can write the exponential polynomial as \(\begin{equation*} f(t) = \underbrace{t(1 - \cos (at + \varphi))}_{\alpha (t)} + \underbrace{A\cos (at + \varphi _1) + B\cos (bt + \varphi _2) + C}_{\beta (t)}. \end{equation*}\) The analysis centers around the behavior of f around the sequence of critical points \(t_j \stackrel{\mathrm{def}}{=}\frac{2j\pi - \varphi }{a}\), \(j\in \mathbb {N}\), at which the dominant term \(\alpha (t)\) is zero. We distinguish three cases, based on comparing \(A\cos (\varphi _1-\varphi) + C\) with \(|B|\).

Case (i). Suppose first that \(A\cos (\varphi _1-\varphi) + C \lt |B|\). We claim that f has infinitely many zeros. Writing \(\varepsilon = |B| - A\cos (\varphi _1-\varphi) - C \gt 0\), we have \(\begin{align*} f(t_j) & = A\cos (\varphi _1-\varphi) + C + B\cos (bt_j + \varphi _2) \\ & = |B| - \varepsilon + B\cos (bt_j + \varphi _2). \end{align*}\) By the linear independence of \(a, b\) and Proposition 2.6, we have that for infinitely many j, \(|B| + B\cos (bt_j+\varphi _2) \in [0, \varepsilon /2]\), say, and hence \(f(t_j) \lt -\varepsilon /2 \lt 0\). Since also \(\lim \sup _{t\rightarrow \infty } f(t)\gt 0\), we conclude that f has infinitely many zeros, as claimed.

Case (ii). Second, suppose that \(A\cos (\varphi _1-\varphi)+C \gt |B|\). We argue that f has finitely many zeros. To start, we note that for all \(j\in \mathbb {N}\), \(\begin{eqnarray*} \beta (t_j)&=& A\cos (\varphi _1-\varphi)+C+B\cos (bt_j+\varphi _2) \\ &\ge & A\cos (\varphi _1-\varphi)+C-|B| \\ & \gt & 0. \end{eqnarray*}\) Since moreover \(\beta\) is uniformly continuous, there exists \(\delta \gt 0\) such that \(\beta (t)\gt 0\) for all \(t\in \mathbb {R}_{\ge 0}\) and \(j\in \mathbb {N}\) such that \(|t-t_j|\lt \delta\). In particular, since \(f(t)\ge \beta (t)\) for all t, we have \(f(t) \gt 0\) for all t and j such that \(|t-t_j|\lt \delta\).

Clearly \(\lim \sup _{t\rightarrow \infty } f(t) \gt 0\). Hence, to show that f is ultimately positive, it suffices to show that \(f(t)\gt 0\) for all sufficiently large t such that \(f^{\prime }(t)=0\). To this end, let t be such that \(f^{\prime }(t)=0\) and choose \(j \in \mathbb {N}\) such that \(|at+\varphi -j\pi | \le \frac{\pi }{2}\). If j is odd, then \(\cos (at+\varphi) \le 0\) and hence \(f(t)\ge t+\beta (t)\) is positive for t sufficiently large. Now suppose that j is even—say \(j=2k\) for some k. From the equation \(f^{\prime }(t)=0\) we get that (21) \(\begin{gather} |at\sin (at+\varphi)| = |\cos (at+\varphi)-1+\beta ^{\prime }(t)| \le D \end{gather}\) for some positive constant D. It follows that \(\begin{align*} |t-t_k| & \, =\, \frac{1}{a} \left|at+\varphi -2k\pi \right| && \lbrace \text{defn. of $t_k$, $j=2k$} \rbrace \\ & \,\le \, \frac{2}{a} | \sin (at+\varphi -2k\pi)| && \lbrace \text{since $|x|\le 2|\sin (x)|$ for all $x\in [-\frac{\pi }{2},\frac{\pi }{2}]$} \rbrace \\ & \, \le \, \frac{2D}{a^2t} && \lbrace \text{by Equation }\href{#eq21}{\text{(21)}} \rbrace . \end{align*}\) Thus, for t sufficiently large, \(|t-t_k|\lt \delta\) and hence, as observed above, \(f(t) \gt 0\).

Case (iii). Finally, suppose \(A\cos (\varphi _1-\varphi) + C = |B|\). We claim that f has finitely many zeros if and only if \(\varphi _1-\varphi =k\pi\) for some \(k\in \mathbb {Z}\).

Suppose first \(\varphi _1 - \varphi = k\pi\) for some \(k\in \mathbb {Z}\). Then \(|B|=A\cos (\varphi _1-\varphi) + C=A(-1)^k + C\), so \(\begin{align*} f(t) & = t(1-\cos (at + \varphi)) + A\cos (at+\varphi +k\pi) + B\cos (bt+\varphi _2) +C \\ & = (t - A(-1)^k)(1-\cos (at + \varphi)) + (|B|+B\cos (bt+\varphi _2)). \end{align*}\) For \(t\ge |A|\), \(f(t) = 0\) is equivalent to \(\begin{equation*} \cos (at + \varphi) = 1 \text{ and } \cos (bt + \varphi _2) = -\mbox{sign}(B), \end{equation*}\) which entails that \(e^{iat}, e^{ibt}\) are both algebraic. Then by the Gelfond-Schneider theorem (Theorem 2.1), either \(t=0\) or \(a/b\in \mathbb {Q}\), which is a contradiction. Therefore, \(f(t)\) has no zeros \(t\ge |A|\).

Finally, suppose that \(\varphi _1-\varphi \ne k\pi\) for all \(k\in \mathbb {Z}\) and \(A\cos (\varphi _1-\varphi) + C = |B|\). We claim that f has infinitely many zeros on \([0, \infty)\). Without loss of generality, by replacing \(\varphi _2\) with \(\varphi _2+\pi\) if necessary, assume that \(B \lt 0\). Note also that \(\sin (\varphi _1-\varphi)\ne 0\).

By Theorem 2.7, there exist infinitely many \(k\in \mathbb {N}\), each with a corresponding \(l\in \mathbb {N}\), satisfying (22) \(\begin{align} \left| \frac{b}{a}k - l + \frac{\varphi _2-\varphi \frac{b}{a}}{2\pi }\right| \lt \frac{2}{k}. \end{align}\) For each such k, we consider \(\begin{equation*} s_k \stackrel{\mathrm{def}}{=}\frac{1}{a} \left(2k\pi -\varphi +\frac{\delta }{k}\right), \end{equation*}\) where \(\delta\) is a real constant chosen such that \(\mathrm{sign}(\delta A \sin (\varphi _1-\varphi)) = 1\) and \(|\delta |\) is sufficiently small (to be specified later). We will show that \(f(s_k)\le 0\) for all sufficiently large k. Since also \(\lim \sup _{t\rightarrow \infty } f(t)\gt 0\), this suffices to show that f has infinitely many zeros. To this end, we will separately bound the terms \(s_k(1-\cos (as_k+\varphi))\), \(A\cos (as_k+\varphi _1)\), and \(B\cos (bs_k + \varphi _2)\) for all k large enough, and thereby bound \(f(s_k)\) from above by zero.

First, we consider the dominant term of \(f(s_k)\): \(\begin{align} s_k(1-\cos (as_k+\varphi)) & \, = \, \frac{2k\pi - \varphi + \frac{\delta }{k}}{a}\left(1-\cos \left(\frac{\delta }{k}\right)\right) && \lbrace \text{definition of}\; s_k \rbrace \end{align}\) \(\begin{align} & \,\le \, \frac{2k\pi - \varphi + \frac{\delta }{k}}{a}\frac{\delta ^2}{2k^2} && \lbrace s_k \gt 0 \text{ and } 1-\cos (x) \le x^2/2 \rbrace \end{align}\) \(\begin{align} & \, = \, \frac{\pi \delta ^2}{a}\frac{1}{k} - \frac{\varphi \delta ^2}{2a}\frac{1}{k^2} + \frac{\delta ^3}{2a}\frac{1}{k^3} && \lbrace \text{rearranging} \rbrace \end{align}\) (23) \(\begin{align} & \, \le \, \frac{2\pi \delta ^2}{a} \frac{1}{k} \, && \lbrace \text{for large enough}\; k \rbrace . \end{align}\)

Next we bound the term \(A\cos (as_k+\varphi _1)\). Here note that for a differentiable function \(g:\mathbb {R}\rightarrow \mathbb {R}\) and \(x_0\in \mathbb {R}\) such that \(g^{\prime }(x_0)\ne 0\), we have \(g(x_0+\varepsilon) \le g(x_0) + \varepsilon \frac{g^{\prime }(x_0)}{2}\) for all \(\varepsilon\) with \(\mathrm{sign}(\varepsilon g^{\prime }(x_0)) = -1\) and \(|\varepsilon |\) sufficiently small. Applying this observation to \(g(x)=A\cos x\) at \(x_0=\varphi _1-\varphi\), for \(|\delta |\) sufficiently small we have \(\begin{align} A\cos (as_k + \varphi _1) \, = \, & A\cos \left(\varphi _1 - \varphi + \frac{\delta }{k}\right) && \lbrace \mbox{definition of $s_k$} \rbrace \end{align}\) (24) \(\begin{align} \,\le \, & A \cos (\varphi _1 - \varphi) - \frac{\delta }{k} \frac{A\sin (\varphi _1-\varphi)}{2} \, && \lbrace \mbox{$\mathrm{sign}(\delta A \sin (\varphi _1-\varphi)) = 1$} \rbrace . \end{align}\)

Finally, we use Equation (22) to bound \(B\cos (bs_k+\varphi _2)\): \(\begin{align*} & \cos (bs_k + \varphi _2) \\ = & \,\cos \left(\frac{b}{a}\left(2k\pi - \varphi + \frac{\delta }{k}\right) + \varphi _2\right) & \lbrace \text{definition of $s_k$} \rbrace \\ = & \,\cos \left(2\pi \left(\frac{b}{a}k + \frac{\varphi _2-\frac{b}{a}\varphi }{2\pi }\right) + \frac{b\delta }{ak}\right) & \lbrace \text{rearranging} \rbrace \\ = & \,\cos \left(2\pi (l + \varepsilon) + \frac{b\delta }{ak}\right) & \lbrace \text{by Equation (\href{#eq22}{22}), for some $l\in \mathbb {N}$ and $\varepsilon \in \mathbb {R}$ with $|\varepsilon |\lt 2/k$} \rbrace \\ \ge & \,1 - \frac{1}{2}\left|2\pi \varepsilon + \frac{b\delta }{ak}\right|^2 & \lbrace \text{by the inequality $1-\cos (x) \le x^2 / 2$ for all $x\in \mathbb {R}$} \rbrace \\ \ge & \,1 - \frac{1}{2k^2}\left(4\pi + \frac{b|\delta |}{a}\right)^2. & \lbrace \text{by the triangle inequality and $|\varepsilon | \lt 2/k$}\rbrace \end{align*}\) Hence, noting that \(B\lt 0\), we have (25) \(\begin{align} B\cos (bs_k+\varphi _2) \le B\left(1 - \frac{1}{2k^2}\left(4\pi + \frac{b|\delta |}{a}\right)^2\right). \end{align}\)

Combining the above three bounds, we have \(\begin{align*} & f(s_k) = s_k(1-\cos (as_k+\varphi)) + A\cos (as_k + \varphi _1) + B\cos (bs_k + \varphi _2) + C \\ & \lbrace {\text{by Equations }{\text(\href{#eq23}{23})}}, {\text(\href{#eq24}{24})}, \text{ and } {\text(\href{#eq25}{25})} \rbrace \\ \le & \, \frac{2\pi \delta ^2}{ak} + A \cos (\varphi _1-\varphi) - \frac{\delta }{2k} A \sin (\varphi _1-\varphi) + B\left(1 - \frac{1}{2k^2}\left(4\pi + \frac{b|\delta |}{a}\right)^2\right) + C \\ & \lbrace \text{by $A\cos (\varphi _1-\varphi)+C = |B|$ and $B \lt 0$} \rbrace \\ = &\, \frac{1}{k}\left(\frac{2\pi \delta ^2}{a} - \frac{\delta A \sin (\varphi _1 - \varphi)}{2} \right) - \frac{1}{k^2}\frac{B}{2}\left(4\pi + \frac{b|\delta |}{a}\right)^2. \end{align*}\) Since \(\mathrm{sign}(\delta A \sin (\varphi _1-\varphi))=1\), for \(|\delta |\) small enough we have \(\frac{2\pi \delta ^2}{a} - \frac{\delta A \sin (\varphi _1 - \varphi)}{2} \lt 0\). Since the \(1/k^2\) term shrinks faster than the negative \(1/k\) term, for all large enough k we will have \(f(s_k) \lt 0\). Therefore, f is negative infinitely often and hence has infinitely many zeros, as claimed.□
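For a concrete feel for this construction, the following numerical sketch instantiates case (iii) with illustrative parameters of our own choosing that satisfy its hypotheses (a = 1, b = √2, φ = 0, φ₁ = π/3, φ₂ = 0, A = 1, B = −3/2, C = 1, δ = 1/20; none of these values come from the text) and evaluates f at the perturbed sample points s_k. At generic k, f(s_k) is positive, but at the k singled out by the Diophantine condition (here, Pell numbers such as 985 and 2378), f(s_k) dips below zero.

```python
import math

# Illustrative case-(iii) instance: A*cos(phi1 - phi) + C = |B|, B < 0,
# phi1 - phi not a multiple of pi, and a/b irrational.
a, b = 1.0, math.sqrt(2)
phi, phi1, phi2 = 0.0, math.pi / 3, 0.0
A, B, C = 1.0, -1.5, 1.0   # A*cos(pi/3) + C = 1.5 = |B|
delta = 0.05               # sign(delta * A * sin(phi1 - phi)) = 1

def f(t):
    return (t * (1 - math.cos(a * t + phi)) + A * math.cos(a * t + phi1)
            + B * math.cos(b * t + phi2) + C)

# Sample points s_k = (2*k*pi - phi + delta/k) / a.
vals = [f((2 * k * math.pi - phi + delta / k) / a) for k in range(1, 2501)]
print(min(vals))  # negative: f changes sign infinitely often
```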

Lemma 5.6.

Let \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(B\cos (bt + \varphi _2) + C\cos (ct + \varphi _3) + D) + e^{r_2t}E, \end{equation*}\) where \(a,b,c,B,C,D,E,r_1,r_2\) are real algebraic; \(e^{i\varphi _1},e^{i\varphi _2},e^{i\varphi _3}\) are algebraic; at least one of \(D,E\) is equal to zero; \(a,b,c\gt 0\); \(B,C\ne 0;\) and \(r_2 \lt r_1 \lt 0\). It is decidable whether f has infinitely many zeros on \([0, \infty)\), and if not, then a threshold T can be computed such that if \(f(t) = 0\), then \(t \le T\).

Proof.

Since all the coefficient polynomials occurring in f are constant, if \(\lbrace a,b,c\rbrace\) is not a linearly independent set over \(\mathbb {Q},\) then the claim follows from Theorem 4.2. Thus, we may assume that \(a,b,c\) are linearly independent over the rationals.

Next, we argue that if \(D\lt |C|+|B|\), then f has infinitely many zeros. Indeed, by Proposition 2.6, the linear independence of \(a,b,c\) entails that the trajectory \((at+\varphi _1, bt+\varphi _2,ct + \varphi _3)\bmod 2\pi\) is dense in \([0,2\pi)^3\), and moreover the restriction of this trajectory to \(at + \varphi _1 \bmod 2\pi = 0\) is dense in \(\lbrace 0\rbrace \times [0, 2\pi)^2\). Thus, \(\lbrace B\cos (bt + \varphi _2) + C\cos (ct + \varphi _3) + D : t\ge 0\mbox{ and }at + \varphi _1 \bmod 2\pi = 0 \rbrace\) is dense in \([D-|C|-|B|, D+|C|+|B|]\). If \(D\lt |C|+|B|\), then because \(r_2 \lt r_1 \lt 0\), it will happen infinitely often that \(\begin{align*} & 1 - \cos (at + \varphi _1) = 0, \\ & e^{r_1t}(B\cos (bt + \varphi _2) + C\cos (ct + \varphi _3) + D) \lt e^{r_1t}\frac{D-|C|-|B|}{2} \lt -|E|e^{r_2t}, \end{align*}\) and hence \(f(t) \lt 0\). On the other hand, f is also positive infinitely often, so f must have infinitely many zeros.

Suppose now that \(D\ge |B|+|C|\gt 0\). By the premise of the lemma, \(E=0\), so f has the form \(\begin{equation*} f(t) = 1-\cos (at + \varphi _1) + e^{r_1t}(B\cos (bt+\varphi _2) + C\cos (ct+\varphi _3) + D). \end{equation*}\) We argue that f has no zeros, except possibly at \(t=0\). Indeed, we have \(B\cos (bt + \varphi _2) + |B|\ge 0\) and \(C\cos (ct + \varphi _3) + |C|\ge 0\) for all t, so \(f(t)\ge 0\) for all t. Then \(f(t) = 0\) if and only if \(D = |B| + |C|\) and simultaneously \(\begin{align*} & \cos (at + \varphi _1) = 1, \\ & \cos (bt + \varphi _2) = -\mbox{sign}(B), \\ & \cos (ct + \varphi _3) = -\mbox{sign}(C), \end{align*}\) which entails that \(e^{iat},e^{ibt},e^{ict}\) are all algebraic. Therefore, by the Gelfond-Schneider theorem (Theorem 2.1), either \(t=0\) or \(a,b,c\) are rational multiples of one another. The latter possibility contradicts the linear independence of \(a,b,c\), so we conclude that f has at most one zero.□
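The density argument in the proof above can be illustrated numerically. The sketch below uses illustrative frequencies 1, √2, √3 and arbitrary phases and coefficients (our own choices, not values from the lemma): sampling the bracketed trigonometric sum at the zeros of the dominant term \(1-\cos (at+\varphi _1)\), its infimum approaches the endpoint \(D-|B|-|C|\), as the density claim predicts.

```python
import math

# Hypothetical instance with a, b, c linearly independent over Q.
a, b, c = 1.0, math.sqrt(2), math.sqrt(3)
phi1, phi2, phi3 = 0.3, 0.7, 1.1
B, C, D = 1.0, -0.5, 0.2

# Zeros of 1 - cos(a*t + phi1) occur at t_k = (2*k*pi - phi1) / a.
samples = []
for k in range(1, 20001):
    t = (2 * k * math.pi - phi1) / a
    samples.append(B * math.cos(b * t + phi2) + C * math.cos(c * t + phi3) + D)

# By density, the infimum of the samples approaches D - |B| - |C| = -1.3.
print(min(samples))
```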

Lemma 5.7.

Let \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(Bt\cos (bt + \varphi _2) + C\cos (bt + \varphi _3) + D) + e^{r_2t}E, \end{equation*}\) where \(a,b,B,C,D,E,r_1,r_2\) are real algebraic; \(e^{i\varphi _1},e^{i\varphi _2},e^{i\varphi _3}\) are algebraic; at least one of \(D,E\) is equal to zero; \(a,b\gt 0\); \(B\ne 0;\) and \(r_2 \lt r_1 \lt 0\). It is decidable whether f has infinitely many zeros on \([0, \infty)\), and if not, then a threshold T can be computed such that if \(f(t) = 0\), then \(t \le T\).

Proof.

If \(a/b\in \mathbb {Q}\), then the claim follows from Theorem 4.1, so assume that \(a,b\) are linearly independent over the rationals. Then by Proposition 2.6, it happens infinitely often that \(1-\cos (at+\varphi _1) = 0\) and \(Bt\cos (bt + \varphi _2) \lt -|B|t/2\), say. For large enough such t, we will have \(\begin{equation*} e^{r_1t}(Bt\cos (bt + \varphi _2) + C\cos (bt + \varphi _3) + D) + e^{r_2t}E \lt 0, \end{equation*}\) and hence \(f(t) \lt 0\), so f has infinitely many zeros.□

Lemma 5.8.

Let \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(B\cos (bt + \varphi _2) + C) + e^{r_2t}f_2(t), \end{equation*}\) where \(a,b,B,C,r_1,r_2\) are real algebraic; \(e^{i\varphi _1},e^{i\varphi _2}\) are algebraic; \(a,b\gt 0\); \(B\ne 0\); \(r_2 \lt r_1 \lt 0;\) and \(f_2\) is an exponential polynomial whose dominant exponents are purely imaginary. Suppose also that the order of f is at most 8. It is decidable whether f has infinitely many zeros on \([0, \infty)\), and if not, then a threshold T can be computed such that if \(f(t) = 0\), then \(t \le T\).

Proof.

Suppose first \(a/b\in \mathbb {Q}\). Because of the bound on the order of f, \(f_2\) has order at most 3, so it cannot simultaneously have complex exponents and non-constant coefficients. Thus, either f has at most one linearly independent frequency or f has constant coefficients and at most two linearly independent frequencies. Either way, the result follows from Theorems 4.1 and 4.2.

Therefore, assume now that \(a, b\) are linearly independent over \(\mathbb {Q}\). By Proposition 2.6, the trajectory \((at+\varphi _1 \bmod 2\pi ,bt + \varphi _2\bmod 2\pi)\) is dense in \([0,2\pi)^2\), and moreover the restriction of this trajectory to \(at + \varphi _1 \bmod 2\pi = 0\) is dense in \(\lbrace 0\rbrace \times [0, 2\pi)\).

If \(|C| \lt |B|\), then we argue that f is infinitely often negative, and hence has infinitely many zeros. Indeed, \(|C| \lt |B|\) entails the existence of a non-empty interval \(I\subseteq [0,2\pi)\) such that \(\begin{equation*} bt + \varphi _2\bmod 2\pi \in I \text{ implies } B\cos (bt+\varphi _2) + C \lt 0. \end{equation*}\) What is more, we can in fact find \(\varepsilon \gt 0\) and a subinterval \(I^{\prime } \subseteq I\) such that \(\begin{equation*} bt + \varphi _2\bmod 2\pi \in I^{\prime } \text{ implies } B\cos (bt+\varphi _2)+C \lt -\varepsilon . \end{equation*}\) Thus, by density, \(1 - \cos (at + \varphi _1)=0\) and \(B\cos (bt+\varphi _2) + C \lt -\varepsilon\) will infinitely often hold simultaneously. Then just take t large enough to ensure, say, \(|e^{r_2t}f_2(t)| \lt \varepsilon /2\) at these infinitely many points, and the claim follows.

If \(|C| \gt |B|\), then clearly for all large enough t, we have \(\begin{equation*} \mbox{sign}(e^{r_1t}(B\cos (bt+\varphi _2)+C) + e^{r_2t}f_2(t)) = \mbox{sign}(C). \end{equation*}\) If \(C\lt 0\), then f has infinitely many zeros (consider t such that \(\cos (at+\varphi _1)=1\)), while if \(C\gt 0\), then f is ultimately positive.

Thus, suppose now \(|B|=|C|\). Replacing \(\varphi _2\) by \(\varphi _2+\pi\) if necessary, we can write the function as \(\begin{equation*} f(t) = 1-\cos (at + \varphi _1) + Ce^{r_1t}(1-\cos (bt+\varphi _2)) + e^{r_2t}f_2(t). \end{equation*}\) As \(a,b\) are linearly independent, for all t large enough, \(1-\cos (at+\varphi _1)\) and \(1-\cos (bt+\varphi _2)\) cannot simultaneously be “too small.” More precisely, by Lemma 2.3, there exist effective constants \(E, T, N\gt 0\) such that for all \(t\ge T\), we have \(\begin{align*} 1-\cos (at+\varphi _1) \gt E / t^N \mbox{ or } 1-\cos (bt+\varphi _2) \gt E / t^N. \end{align*}\) Now, if \(C\lt 0\), it is easy to show that f has infinitely many zeros. Indeed, consider the times t where the dominant term \(1-\cos (at + \varphi _1)\) vanishes; at such t the first disjunct fails, so the second must hold. For all large enough such t, since \(1/t^N\) shrinks more slowly than \(e^{(r_2-r_1)t}\) and \(|f_2(t)|\) is bounded above by a constant (because its dominant exponents are purely imaginary), we will have \(\begin{align*} f(t) & = e^{r_1t}\left(C(1-\cos (bt + \varphi _2)) + e^{(r_2-r_1)t}f_2(t)\right) \\ & \lt e^{r_1t} \left(ECt^{-N} + e^{(r_2-r_1)t}f_2(t)\right) \\ & \le e^{r_1t}\frac{1}{2}ECt^{-N} \\ & \lt 0, \end{align*}\) so f has infinitely many zeros. Similarly, if \(C\gt 0\), we can show that f is ultimately positive. Indeed, for all t large enough, depending on which disjunct of the dichotomy holds at t, we have \(\begin{align*} f(t) & \ge e^{r_1t}\left(C(1-\cos (bt+\varphi _2)) + e^{(r_2-r_1)t}f_2(t)\right) \\ & \gt e^{r_1t}\left(CE t^{-N} + e^{(r_2-r_1)t}f_2(t)\right) \\ & \ge e^{r_1t}\frac{CE}{2}t^{-N} \\ & \gt 0 \end{align*}\)

or

\(\begin{align*} f(t) & \ge 1-\cos (at+\varphi _1) + e^{r_2t}f_2(t) \\ & \gt E t^{-N} + e^{r_2t}f_2(t) \\ & \gt \frac{E}{2}t^{-N} \\ & \gt 0. \end{align*}\) Therefore, f has only finitely many zeros, all occurring up to an effective bound.□
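The Lemma 2.3-style dichotomy used above can be checked numerically in a special case (an illustrative sketch with a = 1, φ₁ = 0, b = √2, φ₂ = 0, which are not values taken from the lemma): at the zeros t_k = 2πk of 1 − cos t, the companion term 1 − cos(√2·t_k) stays bounded below by a constant times 1/t_k², so the scaled quantity k²(1 − cos(2πk√2)) never approaches zero.

```python
import math

# At the zeros t_k = 2*pi*k of 1 - cos(t), check that the companion term
# 1 - cos(sqrt(2)*t_k) is bounded below by const / t_k^2, i.e. that
# k^2 * (1 - cos(2*pi*k*sqrt(2))) stays away from 0.
vals = [k * k * (1 - math.cos(2 * math.pi * k * math.sqrt(2)))
        for k in range(1, 10001)]
print(min(vals))  # about 1.86, attained at k = 1
```

This reflects the fact that √2 is badly approximable; for a Liouville-like frequency ratio the infimum would instead decay, which is exactly the obstruction exploited in Section 6.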

5.3 Main Result

We now draw together the lemmas in the previous two subsections to prove the main result of this section.

Theorem 5.9.

It is decidable whether a given exponential polynomial f of order at most 8 has infinitely many nonnegative real zeros. Moreover, there exists an effective threshold T computable from the description of f, such that if f has only finitely many zeros, then they are all contained in \([0,T]\).

Proof.

Suppose we are given an exponential polynomial f of order at most 8. By dividing f through by \(e^{rt}\) if necessary, where r is the real part of the dominant exponents of f, we may assume without loss of generality that \(r = 0\) and that all non-dominant exponents have strictly negative real part.

Throughout this proof, we will use the following notational conventions. We denote by \(r_1,r_2,\ldots\) the real parts of the non-dominant exponents of f. These are real algebraic numbers such that \(0 \gt r_1 \gt r_2 \gt \cdots\). The lowercase letters \(a,b,c\) will denote frequencies of f, always real algebraic and strictly positive. Uppercase letters \(A,B,C,\ldots\) will denote real algebraic numbers (of arbitrary sign), while \(\varphi ,\varphi _1,\varphi _2,\varphi _3\) will denote real numbers such that \(e^{i\varphi },e^{i\varphi _1},e^{i\varphi _2},e^{i\varphi _3}\) are algebraic. For an exponential polynomial \(f(t)=\sum _{j=1}^m P_j(t)e^{\lambda _j t}\), we say that the exponent \(\lambda _j\) has multiplicity \(\mathrm{mult}(\lambda _j):=\mathrm{deg}(P_j)+1\), so that the order of f equals the sum of the multiplicities of its exponents.
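The bookkeeping used repeatedly in the case analysis below is just the identity order(f) = Σⱼ (deg Pⱼ + 1). A minimal sketch of this counting (the dictionary representation is our own illustration, not a structure from the paper):

```python
# An exponential polynomial sum_j P_j(t) * e^{lambda_j t} is represented
# here by a map {lambda_j: deg(P_j)}.
def mult(deg):
    # multiplicity of an exponent = degree of its coefficient polynomial + 1
    return deg + 1

def order(exponents):
    return sum(mult(d) for d in exponents.values())

# Case I: dominant exponents 0, +-ai, +-bi, +-ci, all simple (7 in total),
# leaving room for exactly one extra simple real exponent within order 8.
case_I = {0: 0, 1j: 0, -1j: 0, 2j: 0, -2j: 0, 3j: 0, -3j: 0, -1: 0}
print(order(case_I))  # 8
```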

We will now perform a case analysis on the number of dominant exponents. Throughout, we rely on the equivalent general forms of real-valued exponential polynomials, outlined in Section 2.1. By Lemma 5.2, it is sufficient to confine our attention to exponential polynomials with an odd number of dominant exponents. By Lemma 5.1 above, the claim is already proven for exponential polynomials with a single dominant term, corresponding to a real exponent.

Case I. Suppose first that f has seven dominant exponents: namely 0 and \(\pm ai, \pm bi, \pm ci\) for non-zero real algebraic numbers \(a,b,c\). The dominant complex exponents must all have multiplicity one since otherwise the order of f would exceed 8. If \(\mathrm{mult}(0) \gt 1\), then 0 has strictly greater multiplicity than the other dominant exponents and the claim follows from Lemma 5.1. Thus, we can assume all dominant exponents have multiplicity one.

The sum of the multiplicities of the dominant exponents is 7, so f can have at most one other, necessarily real, exponent without exceeding order 8, and this exponent must necessarily have multiplicity one.

We now consider the dimension of the rational vector space spanned by \(\lbrace a,b,c\rbrace\). If the dimension is at most 2, then the claim follows from Theorems 4.1 and 4.2, whereas if the dimension is 3, then we are done by Lemma 5.3. This completes Case I.

Case II. Next, suppose f has five dominant exponents: 0 and \(\pm ai, \pm bi\) for non-zero real algebraic numbers \(a,b\). The bound on the order of f guarantees that \((\mathrm{mult}(\pm ai),\mathrm{mult}(\pm bi)) \in \lbrace (1,2), (2,1), (1,1)\rbrace\).

Assume first that \((\mathrm{mult}(\pm ai),\mathrm{mult}(\pm bi)) = (2, 1)\). (The case \((1,2)\) is symmetric.) If \(\mathrm{mult}(0)\ne 2\), then the claim follows from Lemma 5.2, since then f has a single dominant term that corresponds to a non-real exponent. Thus, assume that \(\mathrm{mult}(0)=2\), so f is given by \(\begin{equation*} f(t) = t(A_1 + A_2\cos (at + \varphi)) + A\cos (at + \varphi _1) + B\cos (bt + \varphi _2) + C. \end{equation*}\) The claim is proven for exponential polynomials of this form in Lemma 5.5.

We can now assume that the complex dominant exponents all have multiplicity one: \(\mathrm{mult}(\pm ai) = \mathrm{mult}(\pm bi) = 1\). If \(\mathrm{mult}(0) \gt 1\), then we are done by Lemma 5.1, so assume \(\mathrm{mult}(0)=1\). If \(a/b\not\in \mathbb {Q}\), the claim follows from Lemma 5.3, so assume \(a/b\in \mathbb {Q}\). If f has no complex exponents other than \(\pm ai\) and \(\pm bi\), then the \(\mathbb {Q}\)-span of the frequencies of f has dimension one and the claim follows by Theorem 4.1. On the other hand, if f has another conjugate pair of complex exponents, say \(r_1\pm ci\) with \(r_1 \lt 0\), then by the order bound of 8, f has constant coefficients and the \(\mathbb {Q}\)-span of the frequencies has dimension at most two, so the claim follows by Theorems 4.1 and 4.2.

Case III. Finally, we can assume that f has three dominant exponents: the real exponent 0 and a complex pair \(\pm ai\). By Lemmas 5.1 and 5.2, the claim is proven when \(\mathrm{mult}(0)\ne \mathrm{mult}(\pm ai)\). Taking into account the order bound on f, we have \(\mathrm{mult}(0)=\mathrm{mult}(\pm ai)\le 2\). If \(\mathrm{mult}(0)=\mathrm{mult}(\pm ai)=2\), then f has the form \(\begin{equation*} f(t) = t(A\cos (a t+\varphi _1) + B) + (C\cos (a t + \varphi _2) + D) + f_1(t), \end{equation*}\) where \(f_1(t)\) is an exponential polynomial of order at most 2 whose exponents have strictly negative real parts. If the exponents of \(f_1\) are both real, then the claim follows by Theorem 4.1; otherwise, the claim follows from Lemma 5.4.

Suppose now \(\mathrm{mult}(0)=\mathrm{mult}(\pm ai)=1\), so f has the form \(\begin{equation*} f(t) = A_1 + A_2\cos (at + \varphi _1) + f_1(t), \end{equation*}\) where \(f_1\) is an exponential polynomial of order at most 5 whose exponents all have strictly negative real part, so that \(f_1(t)\) tends to 0 exponentially quickly as t grows to infinity. Notice that if \(|A_1| \gt |A_2|\), then f cannot have infinitely many zeros. Indeed, for t sufficiently large we will have \(|f_1(t)| \lt |A_1| - |A_2|\), and the sign of f will match that of \(A_1\). On the other hand, if \(|A_2| \gt |A_1|\), then f will change sign infinitely often. It remains to consider the case that \(|A_1| = |A_2|\). Moreover, dividing through by \(A_1\) and, if necessary, replacing \(\varphi _1\) by \(\varphi _1 + \pi\), we can assume without loss of generality that \(A_1 = 1\) and \(A_2 = -1\), so that \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + f_1(t). \end{equation*}\) Since \(f_1(t)\) converges to zero as t grows to infinity, it is clear that \(\lim \sup _{t\rightarrow \infty } f(t)\gt 0\). Thus, f has infinitely many zeros if also \(\lim \inf _{t\rightarrow \infty } f(t)\lt 0\).

From here onward, we proceed by performing a case split on the dominant exponents of \(f_1\), that is, the exponents of f of the second-greatest real part. (Note in passing that if \(f_1\) is identically zero, then f has infinitely many zeros, occurring at t such that \(\cos (at + \varphi _1) = 1\).) Let the dominant exponents of \(f_1\) have real part \(r_1 \lt 0\).

Case III.1. Suppose first that the exponents of f of the second-largest real part are five in number, that is, a real exponent \(r_1\) and two pairs of complex exponents \(r_1\pm bi\), \(r_1\pm ci\). By the order bound on f, \(\mathrm{mult}(r_1 \pm bi) = \mathrm{mult}(r_1 \pm ci) = \mathrm{mult}(r_1) = 1\), so that f has the form \(\begin{equation*} f(t) = 1-\cos (at+\varphi _1) + e^{r_1t}(B\cos (bt + \varphi _2) + C\cos (ct + \varphi _3) + D). \end{equation*}\) Then the claim follows from Lemma 5.6.

Case III.2. Next, suppose that the exponents of f of the second-largest real part are four in number, that is, two pairs of complex exponents \(r_1\pm bi\), \(r_1\pm ci\). Then f is of the form \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(B\cos (bt + \varphi _2) + C\cos (ct + \varphi _3)) + e^{r_2t}D. \end{equation*}\) The claim follows from Lemma 5.6.

Case III.3. Next, suppose that the exponents of f of the second-largest real part are three in number, a real \(r_1\) and a complex pair \(r_1\pm bi\) with \(\mathrm{mult}(r_1\pm bi) \ge 2\). By the order bound on f, we must have \(\mathrm{mult}(r_1\pm bi) = 2\) and \(\mathrm{mult}(r_1) = 1\), so that f has the form \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(Bt\cos (bt + \varphi _2) + C\cos (bt + \varphi _3) + D). \end{equation*}\) In this case, the claim follows from Lemma 5.7.

Case III.4. Next, suppose that the exponents of f of the second-largest real part are two in number, a complex pair \(r_1\pm bi\) with \(\mathrm{mult}(r_1\pm bi)\ge 2\). By the order bound on f, \(\mathrm{mult}(r_1\pm bi) = 2\) and f can have at most one other exponent, so that \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(Bt\cos (bt + \varphi _2) + C\cos (bt + \varphi _3)) + De^{r_2t}. \end{equation*}\) The claim follows from Lemma 5.7.

Case III.5. Next, suppose that the exponents of f of the second-largest real part are three in number, a simple complex pair \(r_1 \pm bi\) and a (possibly repeated) real exponent \(r_1\).

If \(\mathrm{mult}(r_1) \gt 1\), then by Lemma 5.1, \(f_1\) is ultimately positive or ultimately negative, depending on the sign of the leading term of the polynomial associated with \(e^{r_1t}\). In the former case f is also ultimately positive, while in the latter case f has infinitely many zeros since f is negative for arbitrarily large values of t (consider large t for which \(1-\cos (at+\varphi _1) = 0\)).

Therefore, assume \(\mathrm{mult}(r_1) = \mathrm{mult}(r_1\pm bi)=1\), so that \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}(B\cos (bt + \varphi _2) + C) + f_2(t), \end{equation*}\) where \(f_2\) is an exponential polynomial of order at most 2 whose dominant exponents have real part \(r_2\) such that \(r_2 \lt r_1\). The claim follows from Lemma 5.8.

Case III.6. Next, suppose that the exponents of f of the second-largest real part are a simple complex pair \(r_1\pm bi\), so that \(\begin{equation*} f(t) = 1 - \cos (at + \varphi _1) + e^{r_1t}B\cos (bt + \varphi _2) + f_2(t), \end{equation*}\) where \(f_2\) is an exponential polynomial of order at most 3 whose dominant exponents have real part strictly smaller than \(r_1\). Then the result follows from Lemma 5.8.

Case III.7. Finally, if the exponents of f of the second-largest real part comprise a single real exponent \(r_1\), then by Lemma 5.1, \(f_1\) is ultimately positive or ultimately negative, depending on the sign of the leading coefficient of the polynomial associated with \(e^{r_1t}\). In the former case, f is also ultimately positive, while in the latter, f has infinitely many zeros.

This completes the proof.□


6 DIOPHANTINE HARDNESS AT ORDER NINE

In this section we show that decidability of the Infinite Zeros Problem at order 9 would entail significant new effectiveness results in Diophantine approximation, thereby identifying a formidable mathematical obstacle to extending the positive results in Sections 4 and 5. Analogous “mathematical hardness” results can be proven for the Zero Problem (see [6, Chapter 6.4]).

Given a real number \(\alpha\), its Lagrange constant (sometimes called the homogeneous Diophantine approximation constant) is defined by \(\begin{align*} L_{\infty }(\alpha) & := \inf \left\lbrace c \gt 0 : \left|\alpha -\frac{n}{m} \right| \lt \frac{c}{m^2} \mbox{ for infinitely many $m,n\in \mathbb {Z}$}\right\rbrace . \end{align*}\) The Lagrange constant measures how well \(\alpha\) can be approximated by rational numbers and is closely related to the simple continued fraction expansion of \(\alpha\). Specifically, it is shown in [20, pp. 22–23] that if \(\alpha\) is given as a simple continued fraction by the sequence of partial quotients \((a_j)_{j=1}^\infty\), so that \(\begin{equation*} \alpha = a_1 + \frac{1}{a_2 + \frac{1}{\ddots }}, \end{equation*}\) then \(L_\infty (\alpha) \gt 0\) if and only if the sequence \((a_j)_{j=1}^\infty\) is bounded, and more generally we have \(K(\alpha) \le L_\infty (\alpha)^{-1} \le K(\alpha)+2\), where \(K(\alpha)=\lim \sup _{j\rightarrow \infty } a_j\).
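As a concrete illustration of this relation (a numerical sketch of our own, not from the paper): the partial quotients of √2 are 1, 2, 2, 2, …, so K(√2) = 2 and the inequalities above confine the Lagrange constant to [1/4, 1/2]; the classical value L∞(√2) = 1/(2√2) ≈ 0.354 is consistent with these bounds.

```python
import math

def sqrt_partial_quotients(N, count):
    """Partial quotients of sqrt(N) (N a non-square integer), computed
    exactly via the standard integer recurrence for quadratic surds."""
    a0 = math.isqrt(N)
    m, d, a = 0, 1, a0
    quotients = [a]
    for _ in range(count - 1):
        m = d * a - m
        d = (N - m * m) // d
        a = (a0 + m) // d
        quotients.append(a)
    return quotients

qs = sqrt_partial_quotients(2, 20)
print(qs)             # [1, 2, 2, ..., 2]
K = max(qs[1:])       # limsup of the (eventually periodic) quotients
print(1 / (K + 2), 1 / K)  # bounds on the Lagrange constant: 0.25 0.5
```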

Rational numbers have finite simple continued fraction expansions. It is furthermore well known that the simple continued fraction expansion of every real algebraic number of degree 2 is periodic and therefore has bounded partial quotients (and thus also a positive Lagrange constant). However, nothing is known for real algebraic numbers of degree 3 or more. Guy [10] asks: “Is there an algebraic number of degree greater than two whose simple continued fraction expansion has unbounded partial quotients? Does every such number have unbounded partial quotients?” Equivalently, one can ask whether there exists an algebraic number of degree greater than 2 that has a strictly positive Lagrange constant.

Recall that a real number x is computable if there is an algorithm that, given any rational \(\varepsilon \gt 0\) as input, returns a rational q such that \(|q-x|\lt \varepsilon\). The main result of this section, Theorem 6.4, shows that the existence of a decision procedure for the Infinite Zeros Problem entails the computability of \(L_\infty (\alpha)\) for all real algebraic numbers \(\alpha\). (In fact, we would even obtain a single algorithm that takes as input a real algebraic number \(\alpha\) and a rational \(\varepsilon \gt 0\) and outputs a rational approximation of \(L_\infty (\alpha)\) to within \(\varepsilon\).) Now one possibility is that the Lagrange constant \(L_{\infty }(\alpha)\) of every real algebraic number \(\alpha\) of degree greater than 2 is zero, and hence trivially computable. However, the significance of Theorem 6.4 is that in order to prove the decidability of the Infinite Zeros Problem, one would have to establish, one way or another, the computability of \(L_{\infty }(\alpha)\) for every real algebraic number \(\alpha\).

Let \(\alpha\) be a positive real algebraic number and c a positive rational number. Consider the following functions: (26) \(\begin{eqnarray} f_1(t)&:=& e^t(1-\cos t)+\underbrace{t(1-\cos (\alpha t)) - c\sin (\alpha t)}_{=:g_1(t)} \end{eqnarray}\) (27) \(\begin{eqnarray} f_2(t)&:=& e^t(1-\cos t)+\underbrace{t(1-\cos (\alpha t)) + c\sin (\alpha t)}_{=:g_2(t)} \end{eqnarray}\) (28) \(\begin{eqnarray} f(t)&:=& \min (f_1(t),f_2(t)). \end{eqnarray}\) In this section we prove the following result.

Lemma 6.1.

(1)

If \(L_\infty (\alpha)\lt \frac{c}{2\pi ^2}\), then f has infinitely many zeros.

(2)

If \(L_\infty (\alpha)\gt \frac{c}{2\pi ^2}\), then \(f(t)=\Omega (\frac{1}{t})\) and so f has finitely many zeros.

We break down the proof of Lemma 6.1 into several propositions.

Proposition 6.2.

Fix \(\varepsilon \gt 0\) and define \(\kappa _{c,\varepsilon }:= \frac{c^2\varepsilon (1+\varepsilon)}{(1-\varepsilon)\pi }\). Given \(k\in \mathbb {N}\), let \(2\pi l\) be the closest integer multiple of \(2\pi\) to \(2k\pi \alpha\) and write \(\delta := 2k \pi \alpha - 2\pi l\). Then the following hold for all k sufficiently large:

(1)

If \(|\pi k \delta | \lt c(1-\varepsilon),\) then \(f(2\pi k)\lt 0\).

(2)

If \(|\pi k \delta | \gt \frac{c(1+\varepsilon)}{(1-\varepsilon)}\), then \(f(2\pi k) \gt \kappa _{c,\varepsilon } \cdot \frac{1}{k}\).

Proof.

In general, we have \(\begin{eqnarray*} f(2\pi k) &=& 2\pi k (1-\cos (2\pi k\alpha)) - c |\sin (2\pi k \alpha)|\\ &=& 2\pi k (1-\cos (\delta)) - c | \sin (\delta)|. \end{eqnarray*}\) We proceed to examine the sign of \(2\pi k (1-\cos (\delta)) - c | \sin (\delta)|\).

To show Item 1 we use the following estimates: \(1-\cos x \le \frac{1}{2}x^2\) for \(|x|\le \pi\) and \((1-\varepsilon)|x| \le |\sin x|\) for \(|x| \le \sqrt {2\varepsilon }\). For k sufficiently large, the assumption \(|\pi k \delta | \lt c(1-\varepsilon)\) implies that \(|\delta |\le \sqrt {2\varepsilon }\) and hence \(\begin{eqnarray*} 2\pi k (1-\cos (\delta)) - c | \sin (\delta)| &\le & 2\pi k \textstyle \frac{1}{2} \delta ^2 - c(1-\varepsilon) |\delta | \\ &=& |\delta | (\pi k |\delta | - c(1-\varepsilon)) \\ &\lt & 0. \end{eqnarray*}\)

To show Item 2, suppose that \(|\pi k \delta | \gt \frac{c(1+\varepsilon)}{(1-\varepsilon)}\). We will use the estimates \(|\sin x|\le |x|\) for \(|x|\le \pi\) and \((1-\varepsilon) \frac{1}{2} x^2 \le 1-\cos x\) for \(|x|\le \sqrt {2\varepsilon }\). We consider two cases. If \(|\delta |\le \sqrt {2\varepsilon }\), then \(\begin{eqnarray*} 2\pi k(1-\cos (\delta)) - c|\sin (\delta)| &\ge & 2\pi k \textstyle \frac{1}{2} (1-\varepsilon) |\delta |^2 - c |\delta | \\ &=& |\delta |(\pi k (1-\varepsilon) |\delta | - c) \\ &\gt & \textstyle \frac{c(1+\varepsilon)}{(1-\varepsilon)\pi k} (\pi k (1-\varepsilon) |\delta | - c) \\ &\gt & \textstyle \frac{c(1+\varepsilon)}{(1-\varepsilon) \pi k} c\varepsilon \\ &=& \textstyle \kappa _{c,\varepsilon } \cdot \frac{1}{k}. \end{eqnarray*}\) On the other hand, if \(|\delta |\gt \sqrt {2\varepsilon }\), then \(2\pi k(1-\cos (\delta)) - c|\sin (\delta)| \ge 2 \pi k (1-\varepsilon)\varepsilon - c | \sin (\delta)|\), which is clearly greater than \(\kappa _{c,\varepsilon } \cdot \frac{1}{k}\) for k sufficiently large.□
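The elementary estimates invoked in the proof can be sanity-checked numerically for one illustrative value of ε (the grid and the choice ε = 0.1 are our own, purely for illustration):

```python
import math

eps = 0.1  # one illustrative value of epsilon
bound = math.sqrt(2 * eps)
xs = [bound * i / 100 for i in range(1, 101)]  # grid on (0, sqrt(2*eps)]

ok = all(
    1 - math.cos(x) <= x * x / 2                   # used in Item 1
    and (1 - eps) * x <= math.sin(x)               # used in Item 1
    and math.sin(x) <= x                           # used in Item 2
    and (1 - eps) * x * x / 2 <= 1 - math.cos(x)   # used in Item 2
    for x in xs
)
print(ok)  # True
```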

For all \(k\in \mathbb {N}\) define the interval \(I_{k} = \lbrace t\in \mathbb {R}_{\ge 0} : |t-2k\pi | \le \frac{1}{k^3} \rbrace\). The following proposition shows that for t sufficiently large, \(f(t)\) is positive outside \(\bigcup _k I_{k}\): so zeros of f have to be “sufficiently close” to integer multiples of \(2\pi\).

Proposition 6.3.

For \(t\not\in \bigcup _k I_{k}\) sufficiently large we have \(f(t) \gt e^{t/2}\).

Proof.

Given \(t \in \mathbb {R}_{\ge 0}\), choose \(k\in \mathbb {N}\) so as to minimize \(|t-2k\pi |\). Suppose that \(t \not\in I_{k}\). Then \(\frac{1}{k^3}\lt |t-2k\pi | \le \pi\). Since \(1-\cos x\) is nondecreasing in \(|x|\) on \([-\pi ,\pi ]\), and using the estimate \(1-\cos x \ge \frac{1}{4} x^2\), valid for \(|x|\le 1\), we have \(\begin{eqnarray*} f(t) & \ge & e^t(1-\cos t) - c\\ & = & e^t(1-\cos (t-2k\pi)) - c\\ & \ge & e^t\left(1-\cos \left(\textstyle \frac{1}{k^3}\right)\right) - c \\ & \ge & e^t \textstyle \frac{1}{4k^6} - c\\ & \ge & e^t \textstyle \frac{1}{4t^6} - c. \end{eqnarray*}\) But this lower bound is greater than \(e^{t/2}\) for t sufficiently large.□

Proof of Lemma 6.1

We first prove Item 1. Suppose that \(L_\infty (\alpha) \lt \frac{c}{2\pi ^2}\). Then there exists \(\varepsilon \gt 0\) such that \(L_\infty (\alpha)\lt \frac{c(1-\varepsilon)}{2\pi ^2}\). This means that there exist infinitely many positive integers \(k,l\) such that \(|\alpha -\frac{l}{k}| \lt \frac{c(1-\varepsilon)}{2\pi ^2k^2}\) and hence \(|2\pi k \alpha -2\pi l| \lt \frac{c(1-\varepsilon)}{\pi k}\). Now, applying Item 1 of Proposition 6.2, we deduce that \(f(2\pi k)\) is negative for infinitely many \(k\in \mathbb {N}\). In view of Proposition 6.3, we conclude that f has infinitely many zeros.

Toward proving Item 2, suppose that \(L_\infty (\alpha) \gt \frac{c}{2\pi ^2}\). Then there exists \(\varepsilon \gt 0\) such that \(L_\infty (\alpha) \gt \frac{c(1+\varepsilon)}{(1-\varepsilon)2\pi ^2}\). This means that for all k sufficiently large and all integers l, \(|\alpha -\frac{l}{k}| \gt \frac{c(1+\varepsilon)}{(1-\varepsilon)2\pi ^2k^2}\) and hence \(|2\pi k \alpha -2\pi l| \gt \frac{c(1+\varepsilon)}{(1-\varepsilon)\pi k}\). Thus, Item 2 of Proposition 6.2 applies.

We want to show that \(f(t)=\Omega (1/t)\). In view of Proposition 6.3, it suffices to show that f is bounded below by a positive-constant multiple of \(\frac{1}{t}\) for all \(t\in I_{\varepsilon ,k}\) with k sufficiently large. To this end we consider the functions \(g_1,g_2\) (see Equations (26) and (27)). By Item 2 of Proposition 6.2 we have \(g_1(2k\pi) \gt \kappa _{c,\varepsilon } \cdot \frac{1}{k}\) for k sufficiently large. Moreover, a direct calculation of the derivative of \(g_1\) shows that for k sufficiently large we have \(|g_1^{\prime }(t)| \le t\alpha +1+c \le 7\alpha k\) for all \(t\in I_{\varepsilon ,k}\). Since \(|t-2k\pi | \le \frac{1}{k^3}\) on \(I_{\varepsilon ,k}\), the mean value theorem yields, for all \(t\in I_{\varepsilon ,k}\), \(\begin{equation*} g_1(t) \ge \textstyle \kappa _{c,\varepsilon }\cdot \frac{1}{k} - 7 \alpha k \cdot \frac{1}{k^3} = \Omega (1/k) = \Omega (1/t) . \end{equation*}\) Exactly the same reasoning applies to \(g_2\), and since \(f\ge \min (g_1,g_2)\), we obtain the desired lower bound for f.□
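To make the Diophantine mechanism concrete, here is an illustrative computation (the choice \(\alpha = \sqrt {2}\) is an assumption made only for the example) of the quantity \(k\,|k\alpha - l|\) along the continued-fraction convergents \(l/k\) of \(\alpha\). It approaches \(\frac{1}{2\sqrt {2}} \approx 0.354\), the kind of limiting approximation constant that \(L_\infty (\alpha)\) compares against the threshold \(\frac{c}{2\pi ^2}\) in Lemma 6.1:

```python
import math

# Illustration (assumed example, alpha = sqrt(2)): along the
# continued-fraction convergents l/k of sqrt(2) = [1; 2, 2, 2, ...],
# the quantity k * |k * alpha - l| settles near 1/(2 * sqrt(2)).
alpha = math.sqrt(2)

l, k = 1, 1            # first convergent 1/1
l_prev, k_prev = 1, 0
vals = []
for _ in range(15):
    # standard convergent recurrence, all partial quotients equal to 2
    l, l_prev = 2 * l + l_prev, l
    k, k_prev = 2 * k + k_prev, k
    vals.append(k * abs(k * alpha - l))

# vals[-1] is close to 1 / (2 * sqrt(2)) ~ 0.35355
```

For such an \(\alpha\), whether f in the construction has infinitely many zeros thus hinges on how this limiting constant compares with \(\frac{c}{2\pi ^2}\).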

The main result of the section immediately follows from Lemma 6.1.

Theorem 6.4.

Fix a positive real algebraic number \(\alpha\). If the Infinite Zeros Problem is decidable, then \(L_\infty (\alpha)\) can be computed to within arbitrary precision.

Remark 6.5.

Let \(g(t)=e^tf(t)\), where f is the function defined in Equation (28). Then by Lemma 6.1, if \(L_\infty (\alpha) \lt \frac{c}{2\pi ^2}\), then g has infinitely many zeros, whereas if \(L_\infty (\alpha) \gt \frac{c}{2\pi ^2}\), then g diverges to infinity as \(t\rightarrow \infty\) (since \(f(t)=\Omega (1/t)\)). Thus, the ability to decide whether an exponential polynomial of order 9 diverges to infinity in absolute value would also allow one to compute the Lagrange constants of real algebraic numbers to arbitrary precision.

REFERENCES

[1] Saugata Basu, Richard Pollack, and Marie-Françoise Roy. 2006. Algorithms in Real Algebraic Geometry (Algorithms and Computation in Mathematics). Springer-Verlag.
[2] Paul C. Bell, Jean-Charles Delvenne, Raphaël M. Jungers, and Vincent D. Blondel. 2010. The continuous Skolem-Pisot problem. Theoretical Computer Science 411, 40–42 (2010), 3625–3634.
[3] Jean Berstel and Maurice Mignotte. 1976. Deux propriétés décidables des suites récurrentes linéaires. Bulletin de la Société Mathématique de France 104 (1976), 175–184.
[4] Edward Bierstone and Pierre D. Milman. 1988. Semianalytic and subanalytic sets. Publications Mathématiques de l’Institut des Hautes Études Scientifiques 67, 1 (1988), 5–42.
[5] N. Bourbaki. 1966. Elements of Mathematics: General Topology (Part 2). Addison-Wesley.
[6] V. Chonev. 2015. Reachability Problems for Linear Dynamical Systems. Ph.D. Dissertation. University of Oxford.
[7] Henri Cohen. 1993. A Course in Computational Algebraic Number Theory. Springer-Verlag.
[8] Paul M. Cohn. 2002. Basic Algebra: Groups, Rings and Fields. Springer.
[9] David A. Cox, John Little, and Donal O’Shea. 2007. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer.
[10] Richard Guy. 2004. Unsolved Problems in Number Theory (3rd ed.). Springer.
[11] Vesa Halava, Tero Harju, Mika Hirvensalo, and Juhani Karhumäki. 2005. Skolem’s problem – on the border between decidability and undecidability. TUCS Technical Report 683 (2005).
[12] Godfrey H. Hardy and Edward M. Wright. 2008. An Introduction to the Theory of Numbers. Oxford University Press.
[13] Bettina Just. 1989. Integer relations among algebraic numbers. In Mathematical Foundations of Computer Science (MFCS ’89), Vol. 379. Springer, 314–320.
[14] Serge Lang. 1966. Introduction to Transcendental Numbers. Addison-Wesley, Reading, MA.
[15] D. H. Lehmer. 1961. A machine method for solving polynomial equations. J. ACM 8, 2 (1961), 151–162.
[16] Arjen K. Lenstra. 1987. Factoring multivariate polynomials over algebraic number fields. SIAM J. Comput. 16, 3 (1987), 591–598.
[17] Angus Macintyre. 2016. Turing meets Schanuel. Annals of Pure and Applied Logic 167, 10 (2016), 901–938.
[18] Angus Macintyre and Alex J. Wilkie. 1996. On the decidability of the real exponential field. In Kreiseliana: About and Around Georg Kreisel, Odifreddi (Ed.). A K Peters, 441–467.
[19] David Marker. 2002. Model Theory: An Introduction. Springer.
[20] Wolfgang Schmidt. 1980. Diophantine Approximation. Lecture Notes in Mathematics, Vol. 785. Springer.
[21] Boris Zilber. 2002. Exponential sums equations and the Schanuel conjecture. Journal of the London Mathematical Society 65, 1 (2002), 27–44.
[22] Boris Zilber. 2005. Pseudo-exponentiation on algebraically closed fields of characteristic zero. Annals of Pure and Applied Logic 132, 1 (2005), 67–95.

Published in

Journal of the ACM, Volume 70, Issue 4 (August 2023), 213 pages
ISSN: 0004-5411; EISSN: 1557-735X; DOI: 10.1145/3615982

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 21 May 2022
• Revised: 7 April 2023
• Accepted: 12 May 2023
• Online AM: 6 June 2023
• Published: 12 August 2023
