1 Introduction

Discrete multiple orthogonal polynomials, \(\tau \)-functions, generalized hypergeometric series, Pearson equations, Laguerre–Freud equations, and Toda type equations are intriguing mathematical objects that have attracted considerable attention due to their deep connections with various areas of mathematics and physics. These intertwined concepts play a fundamental role in understanding the underlying structures and dynamics of discrete systems, integrable models, and special functions. In this paper, we delve into the rich mathematical theory of discrete multiple orthogonal polynomials, explore their relationship with \(\tau \)-functions and generalized hypergeometric series, and investigate their applications in Laguerre–Freud equations and Toda type equations.

Orthogonal polynomials [18, 26, 46] are a fascinating and powerful tool in Applied Mathematics and Theoretical Physics, with a wide range of applications in various fields. They possess remarkable properties and have been extensively studied for their applications in approximation theory, numerical analysis, and many other areas. Among the vast family of orthogonal polynomials, two special classes that have attracted significant attention are multiple orthogonal polynomials and discrete orthogonal polynomials.

Back in 1895, Pearson [62], in the context of fitting curves to real data, introduced his famous family of frequency curves by means of the differential equation \(\frac{f'(x)}{f (x)} = \frac{p_1(x)}{p_2(x)}\), where f is the probability density and \(p_i\) is a polynomial in x of degree at most i, \(i = 1, 2\). Since then, a vast bibliography has been developed regarding the properties of Pearson distributions. The connection between Pearson equations and orthogonal polynomials lies in the fact that the solutions to Pearson equations often coincide with or are closely related to certain families of orthogonal polynomials. They are characterized by a weight function, which determines the orthogonality properties of the associated orthogonal polynomials. These equations have been extensively studied due to their close relationship with special functions, such as the classical orthogonal polynomials (e.g., Legendre, Hermite, Laguerre) and their generalizations. In this work, we will use the Gauss–Borel approach to orthogonal polynomials, for a review paper see [53]. In previous papers, we have explored the application of this technique to the study of discrete hypergeometric orthogonal polynomials, Toda equations, \(\tau \)-functions, and Laguerre–Freud equations, see [35,36,37, 55].

Multiple orthogonal polynomials [5, 46, 57, 61] arise when multiple weight functions are involved in the orthogonality conditions. Unlike classical orthogonal polynomials, where a single weight function is used, multiple orthogonal polynomials exhibit a more intricate structure due to the interplay of multiple weight functions. These polynomials have been studied extensively and find applications in areas such as simultaneous approximation, random matrix theory, and statistical mechanics. For the appearance and use of the Pearson equation in multiple orthogonality, see [21, 29], see also [22]. On the other hand, discrete orthogonal polynomials [46, 60] emerge when the orthogonality conditions are defined on a discrete set of points. These polynomials find applications in various discrete systems, such as combinatorial optimization, signal processing, and coding theory. Discrete orthogonal polynomials provide a valuable tool for analyzing and solving problems in discrete settings, where the underlying structure is often characterized by a finite or countable set of points. Discrete Pearson equations play an important role in the classification of classical discrete orthogonal polynomials [60]. When a discrete Pearson equation is fulfilled by the weight, we are dealing with semiclassical discrete orthogonal polynomials, see [30, 32,33,34]. Multiple discrete orthogonal polynomials were discussed in [17].

Integrable discrete equations [45] have emerged as a fascinating and important field of study in Mathematical Physics, providing insights into the behavior of discrete dynamical systems with remarkable properties. These equations possess a rich algebraic and geometric structure, allowing for the existence of soliton solutions, conservation laws, and integrability properties. The concept of integrability in discrete equations goes beyond mere solvability and extends to the existence of an abundance of conserved quantities and symmetries [4]. This integrability property enables the development of powerful mathematical methods for studying and understanding their behavior. One of the key features of integrable discrete equations is their connection to the theory of orthogonal polynomials. These polynomials play a central role in the construction of discrete equations with special properties. The link between discrete equations and orthogonal polynomials provides a deep understanding of their solutions, recursion relations, and symmetry properties.

The \(\tau \)-function [44] is a key concept in the theory of integrable systems and serves as a bridge between the spectral theory of linear operators and the dynamics of soliton equations. It has profound connections with algebraic geometry, representation theory, and special functions. Semiclassical orthogonal polynomials, along with their corresponding functionals supported on complex curves, have been extensively studied in relation to isomonodromic tau functions and matrix models by Bertola, Eynard, and Harnad [20]. In the context of discrete multiple orthogonal polynomials, the \(\tau \)-function captures the underlying integrable structure and provides insights into the dynamics and symmetries of the corresponding discrete systems. Its study enables us to uncover deep connections between different mathematical objects and uncover hidden patterns.

Generalized hypergeometric series, another important topic in this research, constitute a class of special functions that arise in diverse mathematical contexts [6, 24]. They possess remarkable properties and have been extensively studied due to their connections with combinatorics, number theory, and mathematical physics. Generalized hypergeometric series play a key role in expressing solutions of differential equations, evaluating integrals, and understanding the behavior of various mathematical models. They have a pivotal role in the Askey scheme that gathers together different important families of orthogonal polynomials within a chain of limits [47]. See [19] for an extension of the Askey scheme to multiple orthogonality. Recently, we have completed a part of the multiple Askey scheme by finding hypergeometric expressions for the Hahn type I multiple orthogonal polynomials and their descendants in the multiple Askey scheme [23] (except for the multiple Hermite case).

Laguerre–Freud equations are a family of differential equations that emerge in the study of orthogonal polynomials associated with exponential weights. For orthogonal polynomials, these Laguerre–Freud equations represent nonlinear relations for the recursion coefficients in the three-term recurrence relation. In other words, we are dealing with a set of nonlinear equations for the coefficients \(\{\beta _n,\gamma _n\}\) of the three-term recursion relations \(zP_n(z) = P_{n+1}(z) + \beta _nP_n(z) + \gamma _nP_{n-1}(z)\) satisfied by the orthogonal polynomial sequence. These Laguerre–Freud equations take the form \(\gamma _{n+1} = \mathcal {G}(n, \gamma _n, \gamma _{n-1}, \dots , \beta _n, \beta _{n-1},\dots )\) and \(\beta _{n+1} = \mathcal {B}(n,\gamma _{n+1},\gamma _n,\dots ,\beta _n,\beta _{n-1},\dots )\). Magnus [49,50,51,52], referring to [42, 48], named these types of relations Laguerre–Freud relations. Several papers discuss Laguerre–Freud relations for the generalized Charlier, generalized Meixner, and type I generalized Hahn cases [27, 30, 38,39,40, 64].

Laguerre–Freud equations are also connected to Painlevé equations. The Painlevé equations are a set of nonlinear ordinary differential equations that were first introduced by the French mathematician Paul Painlevé in the early 20th century. These equations have attracted significant attention due to their integrability properties and the rich mathematical structures associated with their solutions. The connection between orthogonal polynomials and Painlevé equations lies in the fact that certain families of orthogonal polynomials can be related to solutions of specific Painlevé equations. This relationship provides deep insights into the solvability and analytic properties of the Painlevé equations and allows for the construction of explicit solutions using the theory of orthogonal polynomials. For an account of the connection between these fields of study, see [27, 28, 65].

Moreover, building upon the mentioned studies [35,36,37, 55], this paper investigates the relationship between discrete multiple orthogonal polynomials, \(\tau \)-functions, generalized hypergeometric series, and specific equations such as Laguerre–Freud equations and Toda type equations.

Toda type equations, on the other hand, are nonlinear partial differential-difference equations with wide-ranging applications in mathematical physics and integrable systems [44, 45]. Exploring the connections between these equations and the aforementioned mathematical objects sheds light on the interplay between discrete systems, special functions, and integrable models.

In this paper, we aim to provide a comprehensive overview of the theory of discrete multiple orthogonal polynomials, \(\tau \)-functions, generalized hypergeometric series, Laguerre–Freud equations, and Toda type equations. We will delve into the mathematical properties, explicit representations, and recurrence relations of discrete multiple orthogonal polynomials. Furthermore, we will explore the relationship between these polynomials, \(\tau \)-functions, and generalized hypergeometric series. Finally, we will investigate the applications of these concepts to Laguerre–Freud equations and Toda type equations, including both continuous and totally discrete types, showcasing their relevance in the field of mathematical physics and integrable systems.

The layout of the paper is as follows. We continue this section with a basic introduction to discrete multiple orthogonal polynomials and their associated tau functions. In the second section, we delve into Pearson equations for multiple orthogonal polynomials, generalized hypergeometric series, and the corresponding hypergeometric discrete multiple orthogonal polynomials. We discuss their properties, including the Laguerre–Freud matrix that models the shift in the spectral variable, as well as contiguity relations and their consequences.

In the third section, we present two of the main findings discussed in this paper. In Theorem 3.9, we present nonlinear partial differential equations of the Toda type, specifically of the third order, for which the hypergeometric tau functions provide solutions. Additionally, in Theorem 3.20, we provide a completely discrete system, extending the completely discrete Nijhoff–Capel Toda equation, and its solutions in terms of the hypergeometric tau functions.

Finally, in the fourth section, we present explicit Laguerre–Freud equations for the first time for the multiple versions of the generalized Charlier and generalized Meixner II orthogonal polynomials.

1.1 Discrete multiple orthogonality on the step line with two weights

Let us introduce the following vectors of monomials in the independent variable x

$$\begin{aligned} X(x)&{:}{=}\begin{pmatrix} 1\\ x\\ x^2\\ \vdots \end{pmatrix},&X^{(1)}(x)&{:}{=}\begin{pmatrix} 1\\ 0\\ x\\ 0\\ x^2\\ \vdots \end{pmatrix},&X^{(2)}(x)&{:}{=}\begin{pmatrix} 0\\ 1\\ 0\\ x\\ 0\\ \vdots \end{pmatrix}. \end{aligned}$$

Let us also consider a pair of weights \(w^{(1)},w^{(2)}:\mathbb {N}_0\rightarrow \mathbb {C}\) defined on the homogeneous lattice \(\mathbb {N}_0\). The corresponding moment matrix is given by

$$\begin{aligned} {\mathscr {M}}&{:}{=}\sum ^{\infty }_{k=0}X (k)\big (X^{(1)} (k)w^{(1)}(k)+X^{(2)} (k)w^{(2)}(k)\big )^{{\mathsf {\scriptscriptstyle T}}}. \end{aligned}$$

We also consider the matrices

$$\begin{aligned} I^{(1)}&{:}{=}{\text {diag}}(1,0,1,0,\dots ),&I^{(2)}&{:}{=}{\text {diag}}(0,1,0,1,\dots ). \end{aligned}$$

and the shift matrix

$$\begin{aligned} \Lambda {:}{=}\begin{pmatrix} 0&1&0&\cdots \\ 0&0&1&\ddots \\ 0&0&0&\ddots \\ \vdots &\ddots &\ddots &\ddots \end{pmatrix}, \end{aligned}$$

and \(\Lambda ^{(a)}{:}{=}\Lambda ^2 I^{(a)}\), \(a\in \{1,2\}\). The following equations are satisfied

$$\begin{aligned} \Lambda X(x)&=xX(x),&\Lambda ^{(1)}X^{(1)}(x)&=xX^{(1)}(x),&\Lambda ^{(2)}X^{(2)}(x)&=xX^{(2)}(x),&\\ \Lambda ^{(1)}X^{(2)}(x)&=0,&\Lambda ^{(2)}X^{(1)}(x)&=0. \end{aligned}$$

The moment matrix satisfies

$$\begin{aligned} \Lambda {\mathscr {M}}= {\mathscr {M}}(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2, \end{aligned}$$

so that we say that the moment matrix is a bi-Hankel matrix.
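To make the objects above concrete, here is a minimal numerical sketch (an illustration, not part of the construction itself) that builds finite truncations of \(X\), \(X^{(1)}\), \(X^{(2)}\), \(\Lambda \), \(I^{(a)}\), \(\Lambda ^{(a)}\) and of the moment matrix \({\mathscr {M}}\), and checks the eigenvalue relations and the bi-Hankel property on the truncation; it assumes the step-line interleaving of \(X^{(1)},X^{(2)}\) displayed above and, as a concrete choice, the pair of Charlier-type weights \(w^{(a)}(k)=(\eta ^{(a)})^k/k!\).

```python
import numpy as np

K = 8        # truncation size of the semi-infinite matrices
KSUM = 120   # cutoff for the sums over the lattice N_0
eta = (1.0, 2.0)   # assumed Charlier-type weights w^(a)(k) = eta_a^k / k!

def weights(a, kmax):
    w = [1.0]
    for k in range(1, kmax + 1):
        w.append(w[-1] * eta[a - 1] / k)
    return w

def X(x):   # X(x) = (1, x, x^2, ...)^T
    return np.array([float(x) ** n for n in range(K)])

def X1(x):  # step-line interleaving: powers of x in the even entries
    v = np.zeros(K)
    v[0::2] = [float(x) ** m for m in range((K + 1) // 2)]
    return v

def X2(x):  # powers of x in the odd entries
    v = np.zeros(K)
    v[1::2] = [float(x) ** m for m in range(K // 2)]
    return v

Lam = np.eye(K, k=1)                                   # shift matrix Lambda
I1 = np.diag([1.0 - n % 2 for n in range(K)])          # I^(1) = diag(1,0,1,0,...)
I2 = np.eye(K) - I1                                    # I^(2) = diag(0,1,0,1,...)
Lam1, Lam2 = Lam @ Lam @ I1, Lam @ Lam @ I2            # Lambda^(a) = Lambda^2 I^(a)

x = 3.0   # eigenvalue relations (rows lost to the truncation are skipped)
assert np.allclose((Lam @ X(x))[:-1], x * X(x)[:-1])
assert np.allclose((Lam1 @ X1(x))[:-2], x * X1(x)[:-2])
assert np.allclose((Lam2 @ X2(x))[:-2], x * X2(x)[:-2])
assert np.allclose(Lam1 @ X2(x), 0) and np.allclose(Lam2 @ X1(x), 0)

# moment matrix M = sum_k X(k) (w^(1)(k) X^(1)(k) + w^(2)(k) X^(2)(k))^T
w1, w2 = weights(1, KSUM), weights(2, KSUM)
M = np.zeros((K, K))
for k in range(KSUM + 1):
    M += np.outer(X(k), w1[k] * X1(k) + w2[k] * X2(k))

# bi-Hankel property Lambda M = M (Lambda^T)^2, away from the truncated edges
lhs = (Lam @ M)[:K - 1, :K - 2]
rhs = (M @ Lam.T @ Lam.T)[:K - 1, :K - 2]
print("bi-Hankel relative residual:", np.max(np.abs(lhs - rhs)) / np.max(np.abs(M)))
```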

The Gauss-Borel or LU factorization of the moment matrix is

$$\begin{aligned} {\mathscr {M}}=S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}} \end{aligned}$$
(1)

where S and \(\tilde{S}\) are lower unitriangular matrices and H is a diagonal matrix.

We define the truncated matrix \({\mathscr {M}}^{[n]}\) as the n-th leading principal submatrix of the moment matrix \( {\mathscr {M}}\), i.e., built up with its first n rows and columns. Then, we introduce the so-called \(\tau \)-function as

$$\begin{aligned} \tau _n=\det {\mathscr {M}}^{[n]}. \end{aligned}$$
(2)

Factorization (1) exists whenever \(\tau _n\ne 0\) for \(n\in \mathbb {N}_0\).
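The following sketch (illustrative only; the Charlier-type pair \(w^{(a)}(k)=(\eta ^{(a)})^k/k!\) is an assumed running example) performs the Gauss–Borel factorization of a truncated moment matrix with a plain Doolittle LU without pivoting and checks that the pivots reproduce the ratios \(\tau _{n+1}/\tau _n\) of the \(\tau \)-functions (2).

```python
import numpy as np

K, KSUM, eta = 8, 120, (1.0, 2.0)

def weights(a):
    w = [1.0]
    for k in range(1, KSUM + 1):
        w.append(w[-1] * eta[a - 1] / k)
    return w

def moment_matrix(K):
    # columns alternate moments of w^(1) and w^(2) (step-line interleaving)
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            a, r = (1 if m % 2 == 0 else 2), m // 2
            M[n, m] = sum(k ** (n + r) * w[a][k] for k in range(KSUM + 1))
    return M

def lu_nopivot(A):
    """Doolittle LU without pivoting: A = L U, L lower unitriangular."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

M = moment_matrix(K)
Lfac, U = lu_nopivot(M)          # M = Lfac U with Lfac = S^{-1} and U = H S~^{-T}
S = np.linalg.inv(Lfac)          # lower unitriangular
H = np.diag(U).copy()            # diagonal of H
St = np.linalg.inv(np.diag(1.0 / H) @ U).T   # S~, lower unitriangular

tau = [1.0] + [np.linalg.det(M[:n, :n]) for n in range(1, K + 1)]   # tau_0 := 1

print("S  lower unitriangular:", np.allclose(np.tril(S), S), np.allclose(np.diag(S), 1.0))
print("S~ lower unitriangular:", np.allclose(np.tril(St), St), np.allclose(np.diag(St), 1.0))
print("max relative gap between pivots H_n and tau_{n+1}/tau_n:",
      max(abs(H[n] - tau[n + 1] / tau[n]) / abs(H[n]) for n in range(K)))
```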

Matrices S and \(\tilde{S}\) expand as power series in \(\Lambda ^{\mathsf {\scriptscriptstyle T}}\) with diagonal matrix coefficients

$$\begin{aligned} \begin{aligned} S&=I+\Lambda ^{\mathsf {\scriptscriptstyle T}}S^{[1]}+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2 S^{[2]}+\cdots ,&S^{-1}&=I+\Lambda ^{\mathsf {\scriptscriptstyle T}}S^{[-1]}+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2 S^{[-2]}+\cdots , \end{aligned} \\ \begin{aligned} \tilde{S}&=I+\Lambda ^{\mathsf {\scriptscriptstyle T}}\tilde{S}^{[1]}+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2 \tilde{S}^{[2]}+\cdots ,&\tilde{S}^{-1}&=I+\Lambda ^{\mathsf {\scriptscriptstyle T}}\tilde{S}^{[-1]}+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2 \tilde{S}^{[-2]}+\cdots . \end{aligned} \end{aligned}$$

In particular, the following equations hold

$$\begin{aligned} S^{[-1]}&=-S^{[1]},&S^{[-2]}&=-S^{[2]}+({\mathfrak {a}}_{-}S^{[1]})S^{[1]}, \end{aligned}$$

and matrices \(\tilde{S}^{[-1]}\) and \(\tilde{S}^{[-2]}\) fulfill analogous relations.

In what follows we will use the lowering and raising shift operators, acting on diagonal matrices, given by

$$\begin{aligned} {\mathfrak {a}}_{-}{\text {diag}}{(m_0,m_1,\dots )}&={\text {diag}}{(m_1,m_2,\dots )}, \\ {\mathfrak {a}}_{+}{\text {diag}}{(m_0,m_1,\dots )}&={\text {diag}}{(0,m_0,\dots )}. \end{aligned}$$

Any diagonal matrix D fulfills the following algebraic properties:

$$\begin{aligned} \Lambda D&=( {\mathfrak {a}}_{-}D)\Lambda ,&\ D\Lambda&=\Lambda {\mathfrak {a}}_{+}D,&D\Lambda ^{\mathsf {\scriptscriptstyle T}}&=\Lambda ^{\mathsf {\scriptscriptstyle T}}{\mathfrak {a}}_{-}D,&\Lambda ^{\mathsf {\scriptscriptstyle T}}D&= {\mathfrak {a}}_{+}D\Lambda ^{\mathsf {\scriptscriptstyle T}}. \end{aligned}$$
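A tiny sketch of these commutation rules on a finite truncation (illustrative only):

```python
import numpy as np

K = 6
d = np.arange(1.0, K + 1)     # diagonal entries of D
D = np.diag(d)
Lam = np.eye(K, k=1)

def a_minus(v):               # lowering shift: drop the first entry
    return np.append(v[1:], 0.0)

def a_plus(v):                # raising shift: prepend a zero
    return np.append(0.0, v[:-1])

assert np.allclose(Lam @ D, np.diag(a_minus(d)) @ Lam)       # Lambda D = (a_- D) Lambda
assert np.allclose(D @ Lam, Lam @ np.diag(a_plus(d)))        # D Lambda = Lambda (a_+ D)
assert np.allclose(D @ Lam.T, Lam.T @ np.diag(a_minus(d)))   # D Lambda^T = Lambda^T (a_- D)
assert np.allclose(Lam.T @ D, np.diag(a_plus(d)) @ Lam.T)    # Lambda^T D = (a_+ D) Lambda^T
print("commutation rules verified on a", K, "x", K, "truncation")
```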

Multiple orthogonal polynomials of type II and type I on the step line are given as entries of certain vectors of polynomials. The type II multiple orthogonal polynomials \(B_n\) are the entries of the vector B, while the type I multiple orthogonal polynomials \(A_n^{(a)}\), \(a\in \{1,2\}\), are the entries of the vectors \(A^{(a)}\), where

$$\begin{aligned} B&{:}{=}SX,&A^{(a)}&{:}{=}H^{-1}\tilde{S}X^{(a)},&a&\in \{1,2\}. \end{aligned}$$

The degrees of these polynomials, for perfect systems of weights, are easily seen to be

$$\begin{aligned} \deg B_{n}&=n ,&\deg A^{(a)}_{n}&=\bigg \lceil \frac{n+2-a}{2} \bigg \rceil -1,&a&\in \{1,2\}, \end{aligned}$$

here \(\lceil x\rceil \) is the smallest integer that is greater than or equal to x. The following multiple orthogonality relations are satisfied

$$\begin{aligned} \sum _{a=1}^2 \sum _{k=0}^\infty A^{(a)}_{n} (k) w^{(a)} (k)k^m&=0,&m&\in \left\{ 0 ,\dots , \deg B_{n-1}\right\} , \\ \sum _{k=0}^\infty B_{n} (k) w^{(a)}(k) k^m&=0,&m&\in \left\{ 0 ,\dots , \deg A^{(a)}_{n-1}\right\} ,&a&\in \{1,2\}. \end{aligned}$$

According to [5, Proposition 7] the type II polynomials can be expressed as the following quotient of determinants

(3)

Hence, for the coefficients \(p^j_n\) of the type II multiple orthogonal polynomials

$$\begin{aligned} B_n=x^n+p^1_nx^{n-1}+\dots +p^{n-1}_n x+p_n^n \end{aligned}$$

we get determinantal expressions. For that aim, let us consider the associated \(\tau \)-functions \(\tau ^j_n\), \(j\in \{1,\dots ,n-1\}\), as the determinants of the matrix \({\mathscr {M}}^{[n,j]}\), \(j\in \{1,\dots ,n-1\}\), obtained from \({\mathscr {M}}^{[n]}\) by removing the \((n-j)\)-th row and adding as last row the row . Then, by expanding the determinant in the RHS of (3) by the last column we get

$$\begin{aligned} p^j_n=(-1)^j\frac{\tau ^j_n}{\tau _n}. \end{aligned}$$
(4)

Similarly, following [5], we have

(5)

The Hessenberg matrix \(T{:}{=}S\Lambda S^{-1}\) and its transpose \(T^{{\mathsf {\scriptscriptstyle T}}} =H^{-1}\tilde{S}\Lambda ^2 \tilde{S}^{-1}H\) give the four-term recurrence relations satisfied by these multiple orthogonal polynomials. Indeed, the following recurrence relations hold

$$\begin{aligned} TB&=zB,&T^{\mathsf {\scriptscriptstyle T}}A^{(a)}&=zA^{(a)},&a&\in \{1,2\}. \end{aligned}$$
(6)

Notice that T is a tetradiagonal Hessenberg matrix, i.e.,

$$\begin{aligned} T=(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\gamma +\Lambda ^{\mathsf {\scriptscriptstyle T}}\beta +\alpha +\Lambda , \end{aligned}$$
(7)

where \(\alpha \), \(\beta \), and \(\gamma \) are the following diagonal matrices

$$\begin{aligned} \alpha&={\text {diag}}{(\alpha _0,\alpha _1,\dots )},&\beta&={\text {diag}}{(\beta _1,\beta _2,\dots )},&\gamma&={\text {diag}}{(\gamma _2,\gamma _3,\dots )}. \end{aligned}$$

The diagonal matrices \(\alpha \), \(\beta \), and \(\gamma \) are related to the matrices S, \(\tilde{S}\) and H by the following equations:

$$\begin{aligned} \left\{ \begin{aligned} \alpha&=S^{[-1]}+{\mathfrak {a}}_{+}S^{[1]}={\mathfrak {a}}_{+}S^{[1]}-S^{[1]}={\mathfrak {a}}_{+}^2\tilde{S}^{[2]}-\tilde{S}^{[2]}+\tilde{S}^{[1]}({\mathfrak {a}}_{-}\tilde{S}^{[1]}-{\mathfrak {a}}_{+}\tilde{S}^{[1]}),\\ \beta&=H^{-1}{\mathfrak {a}}_{-}H({\mathfrak {a}}_{+}\tilde{S}^{[1]}+{\mathfrak {a}}_{-}\tilde{S}^{[-1]})=H^{-1}{\mathfrak {a}}_{-}H({\mathfrak {a}}_{+}\tilde{S}^{[1]}-{\mathfrak {a}}_{-}\tilde{S}^{[1]})={\mathfrak {a}}_{+}S^{[2]}-S^{[2]}-S^{[1]}{\mathfrak {a}}_{-}\alpha ,\\ \gamma&=H^{-1}{\mathfrak {a}}^2_{-}H. \end{aligned}\right. \end{aligned}$$
(8)

We introduce the following matrices \(T^{(1)}\) and \(T^{(2)}\)

$$\begin{aligned} T^{(a)}&{:}{=}H^{-1}\tilde{S}\Lambda ^{(a)}\tilde{S}^{-1}H=T^{\mathsf {\scriptscriptstyle T}}M^{(a)},&M^{(a)}&{:}{=}H^{-1}\tilde{S}I^{(a)}\tilde{S}^{-1}H,&a&\in \{1,2\}. \end{aligned}$$

These matrices satisfy

$$\begin{aligned} T^{(a)} A^{(b)}&=\delta _{a,b} zA^{(b)},&a,b\in \{1,2\}. \end{aligned}$$

as well as \(T^{\mathsf {\scriptscriptstyle T}}=T^{(1)}+T^{(2)}\). Hence, the matrices \(T^{(a)}\) have all their superdiagonals equal to zero except, possibly, the first and the second.

The Pascal matrices \(L=(L_{n,m})\) have entries given by

$$\begin{aligned} L_{n,m}&={\left\{ \begin{array}{ll} \left( {\begin{array}{c}n\\ m\end{array}}\right) ,&{} n \ge m, \\ 0,&{} n<m, \end{array}\right. } \end{aligned}$$

while its inverse \(L^{-1}=({\tilde{L}}_{n,m})\) has entries

$$\begin{aligned} {\tilde{L}}_{n,m}={\left\{ \begin{array}{ll} {}(-1)^{n+m} \left( {\begin{array}{c}n\\ m\end{array}}\right) ,&{} n \ge m, \\ 0&{}n<m.\end{array}\right. } \end{aligned}$$

These matrices act on the monomial vectors as follows

$$\begin{aligned} X(z+1)&=LX(z),&X(z-1)&=L^{-1}X(z). \end{aligned}$$

and can be expanded as

$$\begin{aligned} L^{\pm 1}&=I\pm \Lambda ^{{\mathsf {\scriptscriptstyle T}}}D+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2D^{[2]}+\cdots \end{aligned}$$

with \(D{:}{=}{\text {diag}}(1,2,3,\dots )\) and \(D^{[n]}{:}{=}\frac{1}{n!}{\text {diag}}(n^{(n)},(n+1)^{(n)},\dots )\), where \(x^{(n)}=x(x-1)\cdots (x-n+1)\) is the falling factorial; that is, the n-th subdiagonal of \(L^{\pm 1}\) carries the binomial coefficients \((\pm 1)^{n}\left( {\begin{array}{c}m+n\\ n\end{array}}\right) \), \(m\in \mathbb {N}_0\). In terms of Pascal matrices we define the dressed Pascal matrices

$$\begin{aligned} \Pi&{:}{=}SLS^{-1},&\Pi ^{-1}&{:}{=}SL^{-1}S^{-1}. \end{aligned}$$

For type II multiple orthogonal polynomials we find the following properties:

$$\begin{aligned} B(z+1)&=\Pi B(z),&B(z-1)&=\Pi ^{-1}B(z). \end{aligned}$$
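A short sketch of the Pascal matrices and their dressed versions (same assumed Charlier-type pair), checking the action \(X(z+1)=LX(z)\), the binomial content of the subdiagonals of L, and the shift property \(B(z+1)=\Pi B(z)\):

```python
import numpy as np
from math import comb

K, KSUM, eta = 8, 120, (1.0, 2.0)

L = np.array([[comb(n, m) if n >= m else 0 for m in range(K)] for n in range(K)], float)
Linv = np.array([[(-1) ** (n + m) * comb(n, m) if n >= m else 0 for m in range(K)]
                 for n in range(K)], float)
assert np.allclose(L @ Linv, np.eye(K))

z = 0.3   # action on the monomial vector: (L X(z))_n = (z + 1)^n
assert np.allclose(L @ np.array([z ** n for n in range(K)]),
                   np.array([(z + 1) ** n for n in range(K)]))

# the n-th subdiagonal of L carries binomial coefficients C(m+n, n) = (m+n)^(n)/n!
for n in range(1, K):
    assert np.allclose(np.diag(L, k=-n), [comb(m + n, n) for m in range(K - n)])

def weights(a):
    w = [1.0]
    for k in range(1, KSUM + 1):
        w.append(w[-1] * eta[a - 1] / k)
    return w

def moment_matrix(K):
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            a, r = (1 if m % 2 == 0 else 2), m // 2
            M[n, m] = sum(k ** (n + r) * w[a][k] for k in range(KSUM + 1))
    return M

def lu_nopivot(A):
    n = A.shape[0]
    Lf, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            Lf[i, j] = U[i, j] / U[j, j]
            U[i, :] -= Lf[i, j] * U[j, :]
    return Lf, U

Sinv, _ = lu_nopivot(moment_matrix(K))
S = np.linalg.inv(Sinv)
Pi = S @ L @ Sinv                # dressed Pascal matrix Pi = S L S^{-1}

B = lambda zz: np.array([sum(S[n, m] * zz ** m for m in range(n + 1)) for n in range(K)])
print("relative residual of B(z+1) = Pi B(z):",
      np.max(np.abs(Pi @ B(z) - B(z + 1))) / np.max(np.abs(B(z + 1))))
```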

We can proceed similarly for the type I multiple orthogonal polynomials. We introduce the matrices \(L^{\pm (a)}\), \(a\in \{1,2\}\), with nonzero entries

$$\begin{aligned} L^{\pm (a)}_{2n+a-1,2m+a-1}&=(\pm 1)^{n+m}\left( {\begin{array}{c}n\\ m\end{array}}\right) ,&a&\in \{1,2\},&n,m&\in \mathbb {N}_0,&n&\ge m, \end{aligned}$$

and zero entries for any other pair of subindices. We will also use \(L^{(a)}=L^{+(a)}\). These matrices satisfy

$$\begin{aligned} L^{(a)} L^{-(a)}&=I^{(a)},&a&\in \{1,2\}. \end{aligned}$$

The partial Pascal matrices can be expanded in even powers of \(\Lambda ^{\mathsf {\scriptscriptstyle T}}\), so that every other diagonal vanishes, i.e.,

$$\begin{aligned} L^{\pm (a)}=I^{(a)}\pm (\Lambda ^{\mathsf {\scriptscriptstyle T}})^2D^{(a),[1]}+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^4D^{(a),[2]}+\cdots , \end{aligned}$$

where we have

$$\begin{aligned} D^{(1),[n]}&=\frac{1}{n!}{\text {diag}}(n^{(n)},0,(n+1)^{(n)},0,\dots ),&D^{(2),[n]}&=\frac{1}{n!}{\text {diag}}(0,n^{(n)},0,(n+1)^{(n)},\dots ). \end{aligned}$$

Analogously, dressed Pascal matrices are defined by

$$\begin{aligned} \Pi ^{\pm (a)}{:}{=}H^{-1}\tilde{S}L^{\pm (a)}\tilde{S}^{-1}H, \end{aligned}$$

and the following relations are satisfied

$$\begin{aligned} A^{(a)}(z\pm 1)&=\Pi ^{\pm (a)}A^{(a)}(z). \end{aligned}$$

2 Discrete Multiple Orthogonal Polynomials and Pearson equations

2.1 Pearson equation

Let us consider weights \(w^{(1)},w^{(2)}\) subject to the following discrete Pearson equations

$$\begin{aligned} \theta (k+1) w^{(a)}(k+1)&=\sigma ^{(a)}(k)w^{(a)}(k),&a&\in \{1,2\}. \end{aligned}$$
(9)

where \(\theta \), \(\sigma ^{(1)}\) and \(\sigma ^{(2)}\) are polynomials, and \(\theta (0)=0\). An important result regarding this type of weights is:

Theorem 2.1

Let us assume that the weights \(w^{(1)},w^{(2)}\) solve the Pearson equations (9). Then, the corresponding moment matrix \({\mathscr {M}}\) satisfies the following matrix equation

$$\begin{aligned} \theta (\Lambda ) {\mathscr {M}}=L {\mathscr {M}}\left( L^{(1)}\sigma ^{(1)}(\Lambda ^{(1)})+L^{(2)}\sigma ^{(2)}(\Lambda ^{(2)})\right) ^{\mathsf {\scriptscriptstyle T}}. \end{aligned}$$
(10)

Proof

For the moment matrix \( {\mathscr {M}}\) we find

$$\begin{aligned} \theta (\Lambda ) {\mathscr {M}}&=\sum ^{\infty }_{k=1} \theta (k)X(k)\big (w^{(1)}(k)X^{(1)}(k)+w^{(2)}(k)X^{(2)}(k)\big )^{\mathsf {\scriptscriptstyle T}}\\&=\sum ^{\infty }_{k=0} \theta (k+1)X(k+1)\big (w^{(1)}(k+1)X^{(1)}(k+1)+w^{(2)}(k+1)X^{(2)}(k+1)\big )^{\mathsf {\scriptscriptstyle T}}\\&=\sum ^{\infty }_{k=0} X(k+1)\big (\sigma ^{(1)}(k)w^{(1)}(k)X^{(1)}(k+1)+\sigma ^{(2)}(k)w^{(2)}(k)X^{(2)}(k+1)\big )^{\mathsf {\scriptscriptstyle T}}\\&=\sum ^{\infty }_{k=0} LX(k)\big (\sigma ^{(1)}(k)w^{(1)}(k)X^{(1)}(k)^{\mathsf {\scriptscriptstyle T}}{L^{(1)}}^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}(k)w^{(2)}(k)X^{(2)}(k)^{\mathsf {\scriptscriptstyle T}}{L^{(2)}}^{\mathsf {\scriptscriptstyle T}}\big )\\&=\sum ^{\infty }_{k=0} LX(k)\big (X^{(1)}(k)^{\mathsf {\scriptscriptstyle T}}w^{(1)}(k)+X^{(2)}(k)^{\mathsf {\scriptscriptstyle T}}w^{(2)}(k)\big )\big (\sigma ^{(1)}({\Lambda ^{(1)}}^{\mathsf {\scriptscriptstyle T}}){L^{(1)}}^{\mathsf {\scriptscriptstyle T}}\\&\quad +\sigma ^{(2)}({\Lambda ^{(2)}}^{\mathsf {\scriptscriptstyle T}}){L^{(2)}}^{\mathsf {\scriptscriptstyle T}}\big )\\&=L {\mathscr {M}}[L^{(1)}\sigma ^{(1)}(\Lambda ^{(1)})+L^{(2)}\sigma ^{(2)}(\Lambda ^{(2)})]^{\mathsf {\scriptscriptstyle T}}. \end{aligned}$$

\(\square \)
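As a concrete illustration of Theorem 2.1, the sketch below checks Eq. (10) on a truncation for the assumed multiple Charlier pair \(w^{(a)}(k)=(\eta ^{(a)})^k/k!\), for which \(\theta (z)=z\) and \(\sigma ^{(a)}(z)=\eta ^{(a)}\) are constant, so the right-hand side only involves the Pascal matrix L and the partial Pascal matrices \(L^{(a)}\) introduced in Sect. 1.1:

```python
import numpy as np
from math import comb

K, KSUM, eta = 8, 120, (1.0, 2.0)

def weights(a):
    w = [1.0]
    for k in range(1, KSUM + 1):
        w.append(w[-1] * eta[a - 1] / k)
    return w

def moment_matrix(K):
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            a, r = (1 if m % 2 == 0 else 2), m // 2
            M[n, m] = sum(k ** (n + r) * w[a][k] for k in range(KSUM + 1))
    return M

# Pascal matrix L and partial Pascal matrices L^(1), L^(2)
L = np.array([[comb(n, m) if n >= m else 0 for m in range(K)] for n in range(K)], float)
L1, L2 = np.zeros((K, K)), np.zeros((K, K))
for n in range(K):
    for m in range(n + 1):
        if 2 * n < K:
            L1[2 * n, 2 * m] = comb(n, m)
        if 2 * n + 1 < K:
            L2[2 * n + 1, 2 * m + 1] = comb(n, m)

M = moment_matrix(K)
Lam = np.eye(K, k=1)

# for theta(k) = k and sigma^(a)(k) = eta_a, Eq. (10) reads
#     Lambda M = L M (eta_1 L^(1) + eta_2 L^(2))^T
lhs = (Lam @ M)[:K - 1]
rhs = (L @ M @ (eta[0] * L1 + eta[1] * L2).T)[:K - 1]
print("relative residual of the matrix Pearson equation (10):",
      np.max(np.abs(lhs - rhs)) / np.max(np.abs(lhs)))
```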

For the tetradiagonal Hessenberg recursion matrix T, this matrix property implies the following:

Proposition 2.2

Let us assume that the weights \(w^{(1)},w^{(2)}\) solve the Pearson equations (9). Then, for the recursion matrix T in (7), the following relation holds

$$\begin{aligned} \Pi ^{-1}\theta (T)=\sigma ^{(1)}\big ({T^{(1)}}^{\mathsf {\scriptscriptstyle T}}\big ){\Pi ^{(1)}}^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}\big ({T^{(2)}}^{\mathsf {\scriptscriptstyle T}}\big ){\Pi ^{(2)}}^{\mathsf {\scriptscriptstyle T}}.\end{aligned}$$
(11)

Proof

Using Eq. (10) and the Gauss–Borel factorization we get

$$\begin{aligned} \theta (\Lambda )S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}=LS^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}\big (\sigma ^{(1)}({\Lambda ^{(1)}}^{\mathsf {\scriptscriptstyle T}}){L^{(1)}}^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}({\Lambda ^{(2)}}^{\mathsf {\scriptscriptstyle T}}){L^{(2)}}^{\mathsf {\scriptscriptstyle T}}\big ), \end{aligned}$$

grouping S on one side and \(\tilde{S} \) on the other side of the equation:

$$\begin{aligned} S\theta (\Lambda )S^{-1}=SLS^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}(\sigma ^{(1)}({\Lambda ^{(1)}}^{\mathsf {\scriptscriptstyle T}}){L^{(1)}}^{\mathsf {\scriptscriptstyle T}}+ \sigma ^{(2)}( {\Lambda ^{(2)}}^{\mathsf {\scriptscriptstyle T}}){L^{(2)}}^{\mathsf {\scriptscriptstyle T}})\tilde{S}^{{\mathsf {\scriptscriptstyle T}}}H^{-1}, \end{aligned}$$

multiplying by \(\Pi ^{-1}=SL^{-1}S^{-1}\) on the left and inserting the identity \(\tilde{S}^{\mathsf {\scriptscriptstyle T}}H^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}=I\) between each \(\sigma ^{(a)}({\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}})\) and \({L^{(a)}}^{\mathsf {\scriptscriptstyle T}}\), it can be written as

$$\begin{aligned} \Pi ^{-1}\theta (T)=H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}\Big (\sigma ^{(1)}({\Lambda ^{(1)}}^{\mathsf {\scriptscriptstyle T}})(\tilde{S}^{\mathsf {\scriptscriptstyle T}}H^{-1})(H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}){L^{(1)}}^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}({\Lambda ^{(2)}}^{\mathsf {\scriptscriptstyle T}})(\tilde{S}^{\mathsf {\scriptscriptstyle T}}H^{-1})(H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}){L^{(2)}}^{\mathsf {\scriptscriptstyle T}}\Big )\tilde{S}^{\mathsf {\scriptscriptstyle T}}H^{-1}, \end{aligned}$$

and finally we get Eq. (11). \(\square \)

2.2 Hypergeometric discrete multiple orthogonal polynomials

Let us write the polynomials \(\theta \) and \(\sigma ^{(a)}\), \(a\in \{1,2\}\), in Eq. (9) as follows

$$\begin{aligned} \theta (z)&=z(z+c_1-1)\cdots (z+c_N-1),&\sigma ^{(a)}(z)&=\eta ^{(a)}\big (z+b^{(a)}_1\big )\cdots \big (z+b^{(a)}_{M^{(a)}}\big ),&M^{(a)},N&\in \mathbb {N}_0,&a&\in \{1,2\}. \end{aligned}$$
(12)

In the subsequent discussion we require generalized hypergeometric series, see [6, 63],

$$\begin{aligned} {}_{p}F_{q}\left[ \begin{matrix} a_1,\dots ,a_p\\ c_1,\dots ,c_q \end{matrix};\eta \right] {:}{=}\sum _{k=0}^{\infty }\frac{(a_1)_k\cdots (a_p)_k}{(c_1)_k\cdots (c_q)_k}\frac{\eta ^k}{k!}, \end{aligned}$$

where we use the Pochhammer symbol \((b)_k{:}{=}b(b+1)\cdots (b+k-1)\), \(k\in \mathbb {N}\), and \((b)_0{:}{=}1\).

The Pearson equations (9) determine the weights \(w^{(1)},w^{(2)}\) uniquely, up to a multiplicative constant, to be

$$\begin{aligned} w^{(a)}(k)&=\dfrac{\big (b^{(a)}_1\big )_k\cdots \Big (b^{(a)}_{M^{(a)}}\Big )_k}{(c_1)_k\cdots (c_N)_k}\dfrac{{(\eta ^{(a)})}^k}{k!},&a&\in \{1,2\}. \end{aligned}$$
(13)
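A quick sketch checking that a weight of the form (13) solves the discrete Pearson equation (9); the parameter values below are arbitrary illustrative choices (one numerator parameter \(b^{(a)}_1\) and one denominator parameter \(c_1\)):

```python
from math import prod, factorial

b = {1: 0.7, 2: 1.3}        # b_1^(a), assumed values
c1 = 1.9                    # c_1, assumed value
eta = {1: 0.5, 2: 0.8}      # eta^(a), assumed values

def poch(x, k):             # Pochhammer symbol (x)_k
    return prod(x + j for j in range(k))

def w(a, k):                # weight (13) with M^(a) = N = 1
    return poch(b[a], k) / poch(c1, k) * eta[a] ** k / factorial(k)

theta = lambda z: z * (z + c1 - 1)          # theta as in (12)
sigma = lambda a, z: eta[a] * (z + b[a])    # sigma^(a) as in (12)

for a in (1, 2):
    for k in range(25):
        lhs, rhs = theta(k + 1) * w(a, k + 1), sigma(a, k) * w(a, k)
        assert abs(lhs - rhs) <= 1e-12 * abs(rhs)
print("Pearson equation (9) verified for k = 0,...,24 and a = 1, 2")
```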

In what follows we will use logarithmic derivatives

$$\begin{aligned} \vartheta ^{(a)}&{:}{=}\eta ^{(a)}\frac{\partial }{\partial \eta ^{(a)}},&a&\in \{1,2\}, \end{aligned}$$

as well as

$$\begin{aligned} \vartheta&{:}{=}\vartheta ^{(1)}+\vartheta ^{(2)},&{\tilde{\vartheta }}&{:}{=}\vartheta ^{(1)}-\vartheta ^{(2)}. \end{aligned}$$

We introduce \( t^{(a)}{:}{=}\log {\eta ^{(a)}}\), \(a\in \{1,2\}\) and

$$\begin{aligned} t&{:}{=}\frac{t^{(1)}+t^{(2)}}{2},&{\tilde{t}}&{:}{=}\frac{t^{(1)}-t^{(2)}}{2}, \end{aligned}$$

so that \(\eta ^{(1)}={\text {e}}^{t+{\tilde{t}}}\) and \(\eta ^{(2)}={\text {e}}^{t-{\tilde{t}}}\).

Lemma 2.3

For the differential operators \(\vartheta \) and \({\tilde{\vartheta }}\) we find

$$\begin{aligned} \vartheta&=\frac{\partial }{\partial t},&\tilde{\vartheta }=\frac{\partial }{\partial \tilde{t}}. \end{aligned}$$

Proof

It follows from

$$\begin{aligned} \frac{\partial }{\partial t}&=\frac{\partial }{\partial t^{(1)}}+\frac{\partial }{\partial t^{(2)}}=\vartheta ,&\frac{\partial }{\partial \tilde{t}}&=\frac{\partial }{\partial t^{(1)}}-\frac{\partial }{\partial t^{(2)}}=\tilde{\vartheta }. \end{aligned}$$

\(\square \)

An open question is whether the system of weights \(\{w^{(1)},w^{(2)}\}\) is perfect. That is the case if we are dealing with AT systems, see [46]. Following [17, Examples 2.1 and 2.2] we can give two families satisfying this condition:

Lemma 2.4

(Perfect & AT systems) The system of weights \(\{w^{(1)},w^{(2)}\}\) with:

  1. (i)
    $$\begin{aligned} w^{(a)}(k)&= \dfrac{\big (b_1\big )_k\cdots \big (b_M\big )_k}{(c_1)_k\cdots (c_N)_k} \dfrac{(\eta ^{(a)})^k}{k!},&a&\in \{1,2\}, \end{aligned}$$

    where \(\eta ^{(a)},b_i,c_j>0\), and

  2. (ii)
    $$\begin{aligned} w^{(a)}(k)&=\dfrac{\big (b_1\big )_k\cdots \big (b_{M-1}\big )_k(b^{(a)})_k}{(c_1)_k\cdots (c_N)_k}\dfrac{\eta ^k}{k!},&a&\in \{1,2\}, \end{aligned}$$

    with \(\eta ,b_i,c_j,b^{(a)}>0\),

are AT systems and, consequently, perfect systems.

Proof

  1. (i)

    This fits into [17, Example 2.1].

  2. (ii)

    This fits into [17, Example 2.2].

\(\square \)

Proposition 2.5

In terms of generalized hypergeometric functions, the moment matrix can be written as a block matrix

figure c

where the moments are expressed in terms of generalized hypergeometric series as follows

(15)

Proof

It follows from the definition of the moment matrix \({\mathscr {M}}\). Equation (15), determining the moments in terms of two families of generalized hypergeometric series, follows from Eq. (14) and [55].\(\square \)

Given a function \(f=f(\eta )\) we consider the covector \(\delta _n(f)\). For two functions \(f_1(\eta ^{(1)})\) and \(f_2(\eta ^{(2)})\) we consider the double Wronskian of the covectors \(\delta _n(f_1)\) and \(\delta _n(f_2)\) given by

figure d

the double Wronskian of the covectors \(\delta _{n+1}(f_1)\) and \(\delta _n(f_2)\) given by

figure e

We refer to these objects as double \(\delta \)-Wronskians. Then, the \(\tau \)-function (2) can be written as the double \(\delta \)-Wronskian of two generalized hypergeometric series as follows

Double or 2-component Wronskians were first considered, within the context of integrable systems, by Freeman, Gilson and Nimmo in [41], see also [43].

Lemma 2.6

For the associated \(\tau \)-function we find

$$\begin{aligned} \tau ^1_n=\vartheta \tau _n. \end{aligned}$$

Proof

It follows from the multi-linearity of the determinant and Equation (14), where the even columns depend only on \(\eta ^{(1)}\) and the odd columns only on \({\eta ^{(2)}}\). \(\square \)

Proposition 2.7

The following expressions in terms of the \(\tau \) functions hold true

$$\begin{aligned} H_n&=\frac{\tau _{n+1}}{\tau _n},&p^1_n=-\vartheta \log \tau _n. \end{aligned}$$
(16)

Proof

It can be proven [5]

$$\begin{aligned} H_n&=\frac{\tau _{n+1}}{\tau _n},&p^1_n=-\frac{ \tau ^1_n}{\tau _n}. \end{aligned}$$
(17)

Then, using the previous Lemma 2.6 we get the result. \(\square \)
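A numerical sketch of Proposition 2.7 for the assumed Charlier-type pair: \(\vartheta \) is realized by scaling both \(\eta ^{(a)}\) by \({\text {e}}^{s}\) and differentiating in s (here by a central finite difference), while \(H_n\) and \(p^1_n\) are read off from the Gauss–Borel factorization:

```python
import numpy as np

K, KSUM = 7, 120
eta0 = np.array([1.0, 2.0])

def moment_matrix(eta):
    def weights(a):
        w = [1.0]
        for k in range(1, KSUM + 1):
            w.append(w[-1] * eta[a - 1] / k)
        return w
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            a, r = (1 if m % 2 == 0 else 2), m // 2
            M[n, m] = sum(k ** (n + r) * w[a][k] for k in range(KSUM + 1))
    return M

def taus(eta):
    M = moment_matrix(eta)
    return np.array([1.0] + [np.linalg.det(M[:n, :n]) for n in range(1, K + 1)])

def lu_nopivot(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

h = 1e-5   # vartheta f = d/ds f(e^s eta^(1), e^s eta^(2)) at s = 0
dlogtau = (np.log(taus(eta0 * np.exp(h))) - np.log(taus(eta0 * np.exp(-h)))) / (2 * h)

Sinv, U = lu_nopivot(moment_matrix(eta0))
S = np.linalg.inv(Sinv)
H = np.diag(U)
tau = taus(eta0)

# Eq. (16): H_n = tau_{n+1}/tau_n and p^1_n = S_{n,n-1} = -vartheta log tau_n
print("max relative gap H_n vs tau_{n+1}/tau_n:",
      np.max(np.abs(H - tau[1:] / tau[:-1]) / np.abs(H)))
print("max gap p^1_n vs -vartheta log tau_n   :",
      max(abs(S[n, n - 1] + dlogtau[n]) for n in range(1, K)))
```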

Proposition 2.8

The moment matrix satisfies

$$\begin{aligned} \vartheta ^{(a)}{\mathscr {M}}&={\mathscr {M}}{\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}},&a&\in \{1,2\} . \end{aligned}$$

Proof

It follows from Eq. (15). \(\square \)

Remark 2.9

The previous results apply to more general situations than those provided by the Pearson equation and the generalized hypergeometric series. We only need to assume that \(\vartheta w^{(a)}(k)=kw^{(a)}(k)\). Using the double \(\delta \)-Wronskian, we can construct a tau function as follows:

$$\begin{aligned} \tau _n&={\mathscr {W}}_n[\rho ^{(1)}_0,\rho ^{(2)}_0],&\rho ^{(a)}_0&=\sum _{k=0}^\infty w^{(a)}(k),&a&\in \{1,2\}. \end{aligned}$$

Then, Lemma 2.6 and Propositions 2.7 and 2.8 hold true.

2.3 Laguerre–Freud matrix

Theorem 2.10

(Laguerre–Freud matrix) Let us assume that the weights fulfill the discrete Pearson equations (9), where \(\theta \), \(\sigma ^{(1)}\) and \(\sigma ^{(2)}\) are polynomials with \(\deg {\theta }=N\) and \(\max {(\deg {\sigma ^{(1)}},\deg {\sigma ^{(2)}})}=M\). Then, the Laguerre–Freud matrix given by

$$\begin{aligned} \Psi =\Pi ^{-1}\theta (T)=\sigma ^{(1)}({T^{(1)}}^{\mathsf {\scriptscriptstyle T}}){\Pi ^{(1)}}^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}({T^{(2)}}^{\mathsf {\scriptscriptstyle T}}){\Pi ^{(2)}}^{\mathsf {\scriptscriptstyle T}}\end{aligned}$$
(18)

is a banded matrix with lower bandwidth 2M and upper bandwidth N, as follows:

$$\begin{aligned} \Psi =(\Lambda ^{\mathsf {\scriptscriptstyle T}})^{2M}\psi ^{(-2M)}+\cdots +\psi ^{(N)}\Lambda ^N. \end{aligned}$$

Moreover, the following connection formulas are fulfilled:

$$\begin{aligned} \theta (z)B(z-1)&=\Psi B(z), \end{aligned}$$
(19)
$$\begin{aligned} (A^{(a)}(z+1))^{\mathsf {\scriptscriptstyle T}}\sigma ^{(a)}(z)&=(A^{(a)}(z))^{\mathsf {\scriptscriptstyle T}}\Psi ,&a&\in \{1,2\}. \end{aligned}$$
(20)

Proof

If \(\deg {\theta }=N\), then \(\theta (T)=\Lambda ^N+\cdots \) has at most N nonzero superdiagonals and, since \(\Pi ^{-1}\) is lower unitriangular, so does \(\Psi =\Pi ^{-1}\theta (T)\). Analogously, since each \({T^{(a)}}^{\mathsf {\scriptscriptstyle T}}\) has at most two nonzero subdiagonals and each \({\Pi ^{(a)}}^{\mathsf {\scriptscriptstyle T}}\) is upper triangular, the right-hand side of (18) has at most 2M nonzero subdiagonals. Hence, \(\Psi \) is a banded matrix with the stated bandwidths.

For the type II polynomials B(z) we have

$$\begin{aligned} \Psi B(z)=\Pi ^{-1} \theta (T)B(z)=\Pi ^{-1}\theta (z)B(z)=\theta (z)B(z-1) \end{aligned}$$

and for the type I polynomials \(A^{(a)}\) we find

$$\begin{aligned} \Psi ^{\mathsf {\scriptscriptstyle T}}A^{(a)}(z)&=\left( \Pi ^{(1)}\sigma ^{(1)}(T^{(1)})+\Pi ^{(2)}\sigma ^{(2)}(T^{(2)})\right) A^{(a)}(z)\\&=\Pi ^{(a)}\sigma ^{(a)}(z)A^{(a)}(z)=\sigma ^{(a)}(z)A^{(a)}(z+1). \end{aligned}$$

\(\square \)

Lemma 2.11

If the matrix M is such that \(MX(z)=0\) then \(M=0\).

Proof

From \(MX(z)=0\) we conclude that \(M \frac{1}{n!}X^{(n)}(0)=0\), where \(X^{(n)}\) denotes the n-th derivative of X, but \(\frac{1}{n!}X^{(n)}(0)=e_n\), where \(e_n\) is the vector with all entries zero except for a unit at the n-th entry. Therefore, we conclude that the n-th column of M has all its entries equal to zero. Consequently, \(M=0\). \(\square \)

Proposition 2.12

(Compatibility I) For recursion and Freud–Laguerre matrices the following compatibility relation is fulfilled

$$\begin{aligned}{}[\Psi ,T]&=\Psi . \end{aligned}$$
(21)

Proof

Using (19) and (6) we find

$$\begin{aligned} T\Psi B(z)&=T\theta (z)B(z-1)=\theta (z)(z-1)B(z-1), \\ \Psi (T-I)B(z)&=\Psi (z-1)B(z)=(z-1)\theta (z) B(z-1). \end{aligned}$$

Therefore, \(([\Psi ,T]-\Psi )B(z)=0\). Consequently, we get \(([\Psi ,T]-\Psi )S X(z)=0\) and, according to Lemma 2.11, we deduce that \(([\Psi ,T]-\Psi )S =0\); as S is a unitriangular matrix, hence invertible, we deduce that \([\Psi ,T]-\Psi \) is the zero matrix. \(\square \)
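A numerical sketch of the Laguerre–Freud matrix for the assumed multiple Charlier pair (so that \(\theta (z)=z\) and \(\sigma ^{(a)}=\eta ^{(a)}\), i.e. N=1 and M=0 in Theorem 2.10): it checks that \(\Psi =\Pi ^{-1}\theta (T)\) is upper bidiagonal, the connection formula (19), and the compatibility (21), all on the interior of the truncation:

```python
import numpy as np
from math import comb

K, KSUM, eta = 9, 150, (1.0, 2.0)

def weights(a):
    w = [1.0]
    for k in range(1, KSUM + 1):
        w.append(w[-1] * eta[a - 1] / k)
    return w

def moment_matrix(K):
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            a, r = (1 if m % 2 == 0 else 2), m // 2
            M[n, m] = sum(k ** (n + r) * w[a][k] for k in range(KSUM + 1))
    return M

def lu_nopivot(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

Sinv, _ = lu_nopivot(moment_matrix(K))
S = np.linalg.inv(Sinv)
Lam = np.eye(K, k=1)
Lpas_inv = np.array([[(-1) ** (n + m) * comb(n, m) if n >= m else 0 for m in range(K)]
                     for n in range(K)], float)

T = S @ Lam @ Sinv                  # recursion matrix
Psi = S @ Lpas_inv @ Sinv @ T       # Psi = Pi^{-1} theta(T) with theta(z) = z

scale = np.max(np.abs(Psi[:K - 2]))
print("lower triangular part of Psi (should vanish, 2M = 0):",
      np.max(np.abs(np.tril(Psi, k=-1)[:K - 2])) / scale)
print("part above the first superdiagonal (should vanish, N = 1):",
      np.max(np.abs(np.triu(Psi, k=2)[:K - 2])) / scale)

# connection formula (19): theta(z) B(z-1) = Psi B(z)
z = 2.3
B = lambda zz: np.array([sum(S[n, m] * zz ** m for m in range(n + 1)) for n in range(K)])
res = np.abs(z * B(z - 1) - Psi @ B(z))[:K - 3]
print("connection formula (19) relative residual:",
      np.max(res) / np.max(np.abs(z * B(z - 1)[:K - 3])))

# compatibility (21): [Psi, T] = Psi
C = Psi @ T - T @ Psi - Psi
print("compatibility (21) relative residual:", np.max(np.abs(C[:K - 3, :K - 3])) / scale)
```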

2.4 Contiguous hypergeometric relations

For generalized hypergeometric series we have the following contiguous relations

Let us denote by \( {}_i\Theta ^{(a)}\) the shift \(b^{(a)}_i\rightarrow b^{(a)}_i+1\) and by \(\Theta _j\) the shift \(c_j\rightarrow c_j-1\). These contiguity relations translate into the moment matrix as follows:

Theorem 2.13

(Contiguous relations for the moment matrix) The following equations for the moment matrix hold:

$$\begin{aligned} {\mathscr {M}}{\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}}+b^{(a)}_i{\mathscr {M}}&=b^{(a)}_i {}_i\Theta ^{(a)}{\mathscr {M}},&i&\in \{1,\dots ,M^{(a)}\},&a&\in \{1,2\}, \end{aligned}$$
(22)
$$\begin{aligned} {\mathscr {M}}{\Lambda ^2}^{{\mathsf {\scriptscriptstyle T}}}+(c_j-1){\mathscr {M}}&=(c_j-1)\Theta _j{\mathscr {M}} ,&j&\in \{1,\dots ,N\} \end{aligned}$$
(23)

Proof

To prove Eq. (22) we notice that, according to (14), we can write the following equations for the moment matrix entries

$$\begin{aligned} \big (\vartheta ^{(1)}+b_i^{(1)}\big ){\mathscr {M}}_{n,2m}&=b_i^{(1)}\, {}_i\Theta ^{(1)}{\mathscr {M}}_{n,2m}, \\ \big (\vartheta ^{(1)}+b_i^{(1)}\big ){\mathscr {M}}_{n,2m+1}&=b_i^{(1)} {\mathscr {M}}_{n,2m+1}=b_i^{(1)}\, {}_i\Theta ^{(1)}{\mathscr {M}}_{n,2m+1},\\ \big (\vartheta ^{(2)}+b_i^{(2)}\big ){\mathscr {M}}_{n,2m}&=b_i ^{(2)}{\mathscr {M}}_{n,2m}=b_i^{(2)}\, {}_i\Theta ^{(2)}{\mathscr {M}}_{n,2m},\\ \big (\vartheta ^{(2)}+b_i^{(2)}\big ){\mathscr {M}}_{n,2m+1}&=b_i^{(2)}\, {}_i\Theta ^{(2)}{\mathscr {M}}_{n,2m+1}. \end{aligned}$$

with \(n,m\in \mathbb {N}_0\). Let us prove Eq. (23) by noticing that the entries of the moment matrix satisfy

$$\begin{aligned}&(\vartheta +c_j-1){\mathscr {M}}_{n,2m}=(c_j-1)\Theta _j{\mathscr {M}}_{n,2m},\\&(\vartheta +c_j-1){\mathscr {M}}_{n,2m+1}=(c_j-1)\Theta _j{\mathscr {M}}_{n,2m+1}, \end{aligned}$$

so that

$$\begin{aligned} {\mathscr {M}}({\Lambda ^{(1)}}^{\mathsf {\scriptscriptstyle T}}+{\Lambda ^{(2)}}^{\mathsf {\scriptscriptstyle T}})+(c_j-1){\mathscr {M}}&=(c_j-1)\Theta _j{\mathscr {M}}. \end{aligned}$$

\(\square \)

2.5 Connection matrices

Definition 2.14

Connection matrices are defined as

$$\begin{aligned} _i\omega ^{(a)}&{:}{=}(_i\Theta ^{(a)}H)^{-1}( _i\Theta ^{(a)}\tilde{S})(\Lambda ^{(a)}+b_i^{(a)})\tilde{S}^{-1}H,&i&\in \{1,\dots ,M^{(a)}\},&a&\in \{1,2\}, \end{aligned}$$
(24)
$$\begin{aligned} _i\Omega ^{(a)}&{:}{=}S( _i\Theta ^{(a)}S)^{-1},&i&\in \{1,\dots ,M^{(a)}\},&a&\in \{1,2\}, \end{aligned}$$
(25)
$$\begin{aligned} \omega _j&{:}{=}(\Theta _jH)^{-1}(\Theta _j\tilde{S})(\Lambda ^2+c_j-1)\tilde{S}^{-1}H,&j&\in \{1,\dots ,N\} ,\end{aligned}$$
(26)
$$\begin{aligned} \Omega _j&{:}{=}S(\Theta _jS)^{-1},&j&\in \{1,\dots ,N\}. \end{aligned}$$
(27)

Lemma 2.15

The following relations between connection matrices are fulfilled

$$\begin{aligned} _i{\omega ^{(a)}}^{\mathsf {\scriptscriptstyle T}}&=b_i^{(a)} {}_i\Omega ^{(a)},&i&\in \{1,\dots ,M^{(a)}\},&a&\in \{1,2\}, \end{aligned}$$
(28)
$$\begin{aligned} {\omega _j}^{\mathsf {\scriptscriptstyle T}}&=(c_j-1)\Omega _j,&j&\in \{1,\dots ,N\}. \end{aligned}$$
(29)

Proof

Let us prove Eq. (28). From Eq. (22) and the Gauss–Borel factorization (1) of the moment matrix \({\mathscr {M}}\) we find

$$\begin{aligned} S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}{\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}}+b_i^{(a)}S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}&= b_i^{(a)}({}_i\Theta ^{(a)}S)^{-1}({}_i\Theta ^{(a)}H)({}_i\Theta ^{(a)}\tilde{S})^{-{\mathsf {\scriptscriptstyle T}}},\\ i&\in \{1,\dots ,M^{(a)}\}, a \in \{1,2\}, \end{aligned}$$

which, after some rearranging, leads to

$$\begin{aligned}&H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}({\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}}+b_i^{(a)})({}_i\Theta ^{(a)}\tilde{S})^{\mathsf {\scriptscriptstyle T}}({}_i\Theta ^{(a)}H)^{-1}&=b_i^{(a)}S({}_i\Theta ^{(a)}S)^{-1},&i&\in \{1,\dots ,M^{(a)}\},\\&\quad a \in \{1,2\}. \end{aligned}$$

Analogously, Eq. (29) can be proved using (23) and (1). \(\square \)

Proposition 2.16

Connection matrices have a banded triangular structure with three non-null diagonals. On the one hand, for \(i\in \{1,\dots ,M^{(a)}\}\), \(a\in \{1,2\}\), we have

$$\begin{aligned} {}_i\omega ^{(a)}&=b^{(a)}_iI+\big ({}_i\omega ^{(a)}\big )^{[1]}\Lambda +\big ({}_i\omega ^{(a)}\big )^{[2]}\Lambda ^2, \end{aligned}$$
(30)
$$\begin{aligned} {}_i\Omega ^{(a)}&={{}\Lambda ^{\mathsf {\scriptscriptstyle T}}}^2\frac{1}{b^{(a)}_i} \big ({}_i\omega ^{(a)}\big )^{[2]}+\Lambda ^{\mathsf {\scriptscriptstyle T}}\frac{1}{b^{(a)}_i} \big ({}_i\omega ^{(a)}\big )^{[1]}+I, \end{aligned}$$
(31)

with

$$\begin{aligned} \big ({}_i\omega ^{(a)}\big )^{[1]}&{:}{=}b_i^{(a)}(S^{[1]}-{}_i\Theta ^{(a)}S^{[1]})=({}_i\Theta ^{(a)}H)^{-1}({\mathfrak {a}}_{-}H)\big ({\mathfrak {a}}_{+}({}_i\Theta ^{(a)}\tilde{S}^{[1]})I^{(\pi (a))}-I^{(a)}({\mathfrak {a}}_{-}\tilde{S}^{[1]})\big ),\\ \big ({}_i\omega ^{(a)}\big )^{[2]}&{:}{=}({}_i\Theta ^{(a)}H)^{-1}I^{(a)}{\mathfrak {a}}_{-}^2H, \end{aligned}$$

where \(\pi \in S_2\) is the permutation \(\pi (1)=2\) and \(\pi (2)=1\). On the other hand, for \(j\in \{1,\dots ,N\}\), we find

$$\begin{aligned} \omega _j&=(c_j-1)I+ \omega _j^{[1]}\Lambda + \omega _j^{[2]}\Lambda ^2, \end{aligned}$$
(32)
$$\begin{aligned} \Omega _j&=I+\Lambda ^{\mathsf {\scriptscriptstyle T}}\frac{1}{c_j-1}\omega _j^{[1]}+(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2 \frac{1}{c_j-1} \omega _j^{[2]}, \end{aligned}$$
(33)

with

$$\begin{aligned} \omega _j^{[1]}&{:}{=}(c_j-1)(S^{[1]}-\Theta _jS^{[1]})={\mathfrak {a}}_{-}H(\Theta _jH)^{-1}\big ({\mathfrak {a}}_{+}(\Theta _j\tilde{S}^{[1]})-{\mathfrak {a}}_{-}\tilde{S}^{[1]}\big ) ,\\ \omega _j^{[2]}&{:}{=}(\Theta _jH)^{-1}{\mathfrak {a}}_{-}^2H. \end{aligned}$$

Proof

The above relations are found using Eqs. (28), (24) and (25). Let us show only the cases \({}_i\omega ^{(a)}\) and \({}_i\Omega ^{(a)}\), as the other cases are proven similarly. Departing from Eqs. (24) and (25) we see that

$$\begin{aligned} {}_i\omega ^{(a)}&= ({}_i\Theta ^{(a)}H)^{-1}{}_i\Theta ^{(a)}(I+\Lambda ^{\mathsf {\scriptscriptstyle T}}\tilde{S}^{[1]} +(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\tilde{S}^{[2]}+\cdots )(I^{(a)}\Lambda ^2+b_i^{(a)})(I+\Lambda ^{\mathsf {\scriptscriptstyle T}}\tilde{S}^{[-1]}\\&\quad +(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\tilde{S}^{[-2]}+\cdots )H ,\\ {}_i\Omega ^{(a)}&=(I+\Lambda ^{\mathsf {\scriptscriptstyle T}}S^{[1]}+\cdots ){}_i\Theta ^{(a)}(I+\Lambda ^{\mathsf {\scriptscriptstyle T}}S^{[-1]}+\cdots ), \end{aligned}$$

and using (28) we get the result. \(\square \)

Theorem 2.17

Vectors of polynomials \(A^{(a)}\), \(a\in \{1,2\}\), and B fulfill the following connection formulas

$$\begin{aligned} _i\omega ^{(a)}A^{(a)}(z)&=(z+b_i^{(a)})\big (_i\Theta ^{(a)}A^{(a)}(z)\big ), \end{aligned}$$
(34)
$$\begin{aligned} _i \Omega ^{(a)} {}_i\Theta ^{(a)}B(z)&=B(z), \end{aligned}$$
(35)
$$\begin{aligned} \omega _jA^{(a)}(z)&=(z+c_j-1)\big (\Theta _j A^{(a)}(z)\big ), \end{aligned}$$
(36)
$$\begin{aligned} \Omega _j\Theta _jB(z)&=B(z), \end{aligned}$$
(37)

with \(i\in \{1,\dots ,M^{(a)}\}\), \(a\in \{1,2\}\), and \(j\in \{1,\dots ,N\}\).

Proof

Equations (34), (35), (36) and (37) can be proved through the action of the connection matrices on the vectors \(A^{(a)}\) and B. Let us check (34):

$$\begin{aligned} {}_i\omega ^{(a)}A^{(a)}(z)&=({}_i\Theta ^{(a)}H)^{-1}({}_i\Theta ^{(a)}\tilde{S}) (\Lambda ^{(a)}+b_i^{(a)})X^{(a)}(z)\\&=({}_i\Theta ^{(a)}H)^{-1}({}_i\Theta ^{(a)}\tilde{S}) \big (z+b_i^{(a)}\big )X^{(a)}(z), \end{aligned}$$

and immediately (34) is proved. \(\square \)

3 Multiple Toda systems and hypergeometric \(\tau \)-functions

3.1 Continuous case

Let us now explore how the hypergeometric discrete multiple orthogonal polynomials lead to solutions of multiple Toda type systems, given in terms of \(\tau \)-functions that are double Wronskians of generalized hypergeometric series.

Let us start by introducing the strictly lower triangular matrices

$$\begin{aligned} \phi ^{(a)}&{:}{=}(\vartheta ^{(a)}S)S^{-1},&\tilde{\phi }^{(a)}&{:}{=}(\vartheta ^{(a)}\tilde{S})\tilde{S}^{-1},&a&\in \{1,2\}. \end{aligned}$$

For them we have the following two lemmas

Lemma 3.1

The following relations are fulfilled

$$\begin{aligned} \vartheta ^{(a)}B&=\phi ^{(a)}B,&a&\in \{1,2\}. \end{aligned}$$
(38)

Proof

From \(B=SX\) we get

$$\begin{aligned} \vartheta ^{(a)} B=(\vartheta ^{(a)}S)X=(\vartheta ^{(a)}S)S^{-1}SX=(\vartheta ^{(a)}S)S^{-1}B. \end{aligned}$$

\(\square \)

Lemma 3.2

The following relations are fulfilled

$$\begin{aligned} \vartheta ^{(a)}(HA^{(a)})&=\tilde{\phi }^{(a)}HA^{(a)},&a&\in \{1,2\}. \end{aligned}$$
(39)

Proof

We have that

$$\begin{aligned} \vartheta ^{(a)}(HA^{(a)})=(\vartheta ^{(a)}\tilde{S})X^{(a)}=(\vartheta ^{(a)}\tilde{S})\tilde{S}^{-1}\tilde{S}X^{(a)}=\tilde{\phi }^{(a)}HA^{(a)}. \end{aligned}$$

\(\square \)

Proposition 3.3

The following equations are fulfilled

$$\begin{aligned} \vartheta ^{(a)}H-\phi ^{(a)}H- H{{}{\tilde{\phi }}^{(a)}}^{\mathsf {\scriptscriptstyle T}}&={T^{(a)}}^{\mathsf {\scriptscriptstyle T}}H,&a&\in \{1,2\}. \end{aligned}$$
(40)

Proof

Recall that \({\mathscr {M}}=S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}\) so that

$$\begin{aligned} \vartheta ^{(a)}(S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}})&=-S^{-1}(\vartheta ^{(a)}S)S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}} +S^{-1}(\vartheta ^{(a)}H)\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}-S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}(\vartheta ^{(a)}\tilde{S})^{{\mathsf {\scriptscriptstyle T}}}\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}\\&=S^{-1}H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}{\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}}\end{aligned}$$

so

$$\begin{aligned} \vartheta ^{(a)}H-(\vartheta ^{(a)}S)S^{-1}H-H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}}(\vartheta ^{(a)}\tilde{S})^{\mathsf {\scriptscriptstyle T}}=H\tilde{S}^{-{\mathsf {\scriptscriptstyle T}}} {\Lambda ^{(a)}}^{\mathsf {\scriptscriptstyle T}}\tilde{S}^{\mathsf {\scriptscriptstyle T}}H^{-1}H={T^{(a)}}^{\mathsf {\scriptscriptstyle T}}H. \end{aligned}$$

\(\square \)

We will denote:

$$\begin{aligned} \phi&{:}{=}\phi ^{(1)}+\phi ^{(2)},&\tilde{\phi }&{:}{=}\tilde{\phi }^{(1)}+\tilde{\phi }^{(2)}. \end{aligned}$$

Proposition 3.4

The following equations hold

$$\begin{aligned} (\vartheta H)H^{-1}&=\alpha , \end{aligned}$$
(41a)
$$\begin{aligned} -\phi&=(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\gamma +\Lambda ^{\mathsf {\scriptscriptstyle T}}\beta , \end{aligned}$$
(41b)
$$\begin{aligned} -{\tilde{\phi }}&=\Lambda ^{\mathsf {\scriptscriptstyle T}}({\mathfrak {a}}_{-} H) H^{-1}. \end{aligned}$$
(41c)

Proof

From (40) we obtain

$$\begin{aligned} \vartheta H-\phi H-H\tilde{\phi }^{\mathsf {\scriptscriptstyle T}}=TH=(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\gamma H+\Lambda ^{\mathsf {\scriptscriptstyle T}}\beta H+\alpha H+{\mathfrak {a}}_{-}H\Lambda \end{aligned}$$

Diagonal by diagonal we get the equations. \(\square \)

Proposition 3.5

The following relations are fulfilled

$$\begin{aligned} \vartheta S^{[n]}&=-(S^{[n-2]}{\mathfrak {a}}_{-}^{n-2}\gamma +S^{[n-1]}{\mathfrak {a}}_{-}^{n-1}\beta ), \end{aligned}$$
(42a)
$$\begin{aligned} \vartheta S^{[1]}&=-\beta ,\end{aligned}$$
(42b)
$$\begin{aligned} \vartheta \tilde{S}^{[n]}&=-{\mathfrak {a}}_{-}^nH{\mathfrak {a}}_{-}^{n-1}H^{-1}\tilde{S}^{[n-1]},\end{aligned}$$
(42c)
$$\begin{aligned} \vartheta \tilde{S}^{[1]}&=-H^{-1}{\mathfrak {a}}_{-}H. \end{aligned}$$
(42d)

Proof

Relations (42a) and (42b) can be obtained from (41b) if we consider the equation in the form \(\vartheta S=-\big ((\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\gamma +\Lambda ^{\mathsf {\scriptscriptstyle T}}\beta \big )S\) and equate by diagonals. Analogously, (42d) and (42c) can be obtained from (41c). \(\square \)

Proposition 3.6

For the step-line recursion coefficients the following expressions hold

$$\begin{aligned} \left\{ \begin{aligned} \alpha = (\vartheta H )H^{-1}&= {\mathfrak {a}}_{+}S^{[1]}-S^{[1]},\\ \beta =-\vartheta S^{[1]}&= {\mathfrak {a}}_{+}S^{[2]}-S^{[2]}-S^{[1]}{\mathfrak {a}}_{-}\big ((\vartheta H )H^{-1}\big ), \\ \gamma =-\vartheta S^{[2]}+(\vartheta {\mathfrak {a}}_{-}S^{[1]})S^{[1]}&=H^{-1}{\mathfrak {a}}^2_{-}H, \end{aligned}\right. \end{aligned}$$
(43)

Proof

It is proven using Eq. (8) and Proposition 3.3, just split (41b) by diagonals. \(\square \)

Corollary 3.7

In terms of tau functions the entries in recursion Hessenberg matrix can be expressed as follows

$$\begin{aligned} \alpha _n&=\vartheta \log \frac{\tau _{n+1}}{\tau _n},&\beta _n&=\vartheta ^2\log \tau _n,&\gamma _n&=\frac{\tau _{n+3}\tau _n}{\tau _{n+2}\tau _{n+1}}. \end{aligned}$$

Proof

It follows from Eqs. (43) and (16). \(\square \)
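A numerical sketch of Corollary 3.7 for the assumed Charlier-type pair, comparing entries of the recursion matrix (addressed by matrix position, \(T_{n,n}\), \(T_{n+1,n}\), \(T_{n+2,n}\)) with the \(\tau \)-function expressions; \(\vartheta \) and \(\vartheta ^2\) are approximated by central finite differences in the common scaling \(\eta ^{(a)}\mapsto {\text {e}}^{s}\eta ^{(a)}\), so agreement is up to the finite-difference accuracy:

```python
import numpy as np

K, KSUM = 7, 120
eta0 = np.array([1.0, 2.0])

def moment_matrix(eta):
    def weights(a):
        w = [1.0]
        for k in range(1, KSUM + 1):
            w.append(w[-1] * eta[a - 1] / k)
        return w
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((K, K))
    for n in range(K):
        for m in range(K):
            a, r = (1 if m % 2 == 0 else 2), m // 2
            M[n, m] = sum(k ** (n + r) * w[a][k] for k in range(KSUM + 1))
    return M

def log_taus(s):
    M = moment_matrix(eta0 * np.exp(s))
    return np.array([0.0] + [np.log(np.linalg.det(M[:n, :n])) for n in range(1, K + 1)])

def lu_nopivot(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

h = 1e-3
lt0, ltp, ltm = log_taus(0.0), log_taus(h), log_taus(-h)
d1 = (ltp - ltm) / (2 * h)             # vartheta log tau_n
d2 = (ltp - 2 * lt0 + ltm) / h ** 2    # vartheta^2 log tau_n
tau = np.exp(lt0)

Sinv, _ = lu_nopivot(moment_matrix(eta0))
S = np.linalg.inv(Sinv)
T = S @ np.eye(K, k=1) @ Sinv

n = 2   # an interior index
print("T[n,n]   vs vartheta log(tau_{n+1}/tau_n)        :", T[n, n], d1[n + 1] - d1[n])
print("T[n+1,n] vs vartheta^2 log tau_{n+1}             :", T[n + 1, n], d2[n + 1])
print("T[n+2,n] vs tau_{n+3} tau_n/(tau_{n+2} tau_{n+1}):",
      T[n + 2, n], tau[n + 3] * tau[n] / (tau[n + 2] * tau[n + 1]))
```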

Proposition 3.8

(Multiple Toda equations) The functions \(q_n{:}{=}\log H_n\) and \(f_n{:}{=}S^{[1]}_n=p^1_{n+1}\) solve the system

$$\begin{aligned} \left\{ \begin{aligned} \vartheta q_n&=f_{n-1}-f_n,\\ \vartheta ^2 f_n-(2f_n-f_{n+1}-f_{n-1})\vartheta f_n&={\text {e}}^{q_{n+1}-q_{n-1}}-{\text {e}}^{q_{n+2}-q_n}. \end{aligned}\right. \end{aligned}$$
(44)

Proof

Using Eq. (43) we write

$$\begin{aligned}&\vartheta S^{[1]}= -{\mathfrak {a}}_{+}S^{[2]}+S^{[2]}+S^{[1]}(\vartheta {\mathfrak {a}}_{-}H )({\mathfrak {a}}_{-}H)^{-1},\\&\vartheta S^{[2]}=(\vartheta {\mathfrak {a}}_{-}S^{[1]})S^{[1]}-H^{-1}{\mathfrak {a}}^2_{-}H. \end{aligned}$$

Acting with \(\vartheta \) on the first equation an using then the second equation we get

$$\begin{aligned}&(\vartheta {\mathfrak {a}}_{-}S^{[1]})S^{[1]}-(\vartheta S^{[1]}){\mathfrak {a}}_{+}S^{[1]}+{\mathfrak {a}}_{+}H^{-1}{\mathfrak {a}}_{-}H-H^{-1}{\mathfrak {a}}^2_{-}H\\&\quad =\vartheta ^2 S^{[1]}-\vartheta \left( S^{[1]}\big (S^{[1]}-{\mathfrak {a}}_-S^{[1]}\big )\right) \end{aligned}$$

Then, the equation reads

$$\begin{aligned}&f_{n-1} \vartheta f_n-f_n\vartheta f_{n+1}+{\text {e}}^{q_{n+2}-q_n}-{\text {e}}^{q_{n+1}-q_{n-1}}\\&\quad =-\vartheta ^2 f_n+(f_n-f_{n+1})\vartheta f_n+f_n(\vartheta f_n-\vartheta f_{n+1}) \end{aligned}$$

which after some simplification leads to Eq. (44). \(\square \)

Theorem 3.9

The \(\tau \)-function satisfies the following nonlinear third order differential–difference equation

$$\begin{aligned}&\vartheta ^3\tau _{n+1} - \left( \frac{\vartheta \tau _{n+2}}{\tau _{n+2}}+\frac{\vartheta \tau _{n+1}}{\tau _{n+1}}+\frac{\vartheta \tau _{n}}{\tau _{n}}\right) \vartheta ^2\tau _{n+1}+\frac{(\vartheta \tau _{n+1})^2}{\tau _{n+1}} \left( \frac{\vartheta \tau _{n+2}}{\tau _{n+2}}+\frac{\vartheta \tau _{n}}{\tau _{n}}\right) \nonumber \\&\quad =\frac{\tau _{n+3}\tau _{n}}{\tau _{n+2}}- \frac{\tau _{n+2}\tau _{n-1}}{\tau _n}. \end{aligned}$$
(45)

Proof

Notice that the first equation \(\vartheta q_n=f_{n-1}-f_n\), when the \(\tau \) function is used, becomes an identity, indeed

$$\begin{aligned} \vartheta q_n&=\vartheta \log \tau _{n+1}-\vartheta \log \tau _n,&f_{n-1}-f_n&=p^1_n-p^1_{n+1}=-\vartheta \log \tau _n+\vartheta \log \tau _{n+1}. \end{aligned}$$

Then, the second equation reads

$$\begin{aligned} -\vartheta ^3 \log \tau _{n+1}-\vartheta (2\log \tau _{n+1}-\log \tau _{n+2}-\log \tau _{n})\vartheta ^2 \log \tau _{n+1}&=\frac{\tau _{n+2}\tau _{n-1}}{\tau _{n+1}\tau _n} -\frac{\tau _{n+3}\tau _{n}}{\tau _{n+2}\tau _{n+1}}, \end{aligned}$$

that expanding the differentiations leads to

$$\begin{aligned}&\frac{2(\vartheta \tau _{n+1})^3}{\tau _{n+1}^3}-3\frac{(\vartheta \tau _{n+1})\vartheta ^2\tau _{n+1}}{\tau _{n+1}^2}+\frac{\vartheta ^3\tau _{n+1}}{\tau _{n+1}}\\&\quad +\left( -\frac{(\vartheta \tau _{n+1})^2}{\tau _{n+1}^2} + \frac{\vartheta ^2\tau _{n+1}}{\tau _{n+1}} \right) \left( 2\frac{\vartheta \tau _{n+1}}{\tau _{n+1}}-\frac{\vartheta \tau _{n+2}}{\tau _{n+2}}-\frac{\vartheta \tau _{n}}{\tau _{n}}\right) =\frac{\tau _{n+3}\tau _{n}}{\tau _{n+2}\tau _{n+1}}-\frac{\tau _{n+2}\tau _{n-1}}{\tau _{n+1}\tau _n}, \end{aligned}$$

and after some simplification we get

$$\begin{aligned}&-\frac{(\vartheta \tau _{n+1})\vartheta ^2\tau _{n+1}}{\tau _{n+1}^2}+\frac{\vartheta ^3\tau _{n+1}}{\tau _{n+1}} +\left( -\frac{(\vartheta \tau _{n+1})^2}{\tau _{n+1}^2}+ \frac{\vartheta ^2\tau _{n+1}}{\tau _{n+1}} \right) \left( -\frac{\vartheta \tau _{n+2}}{\tau _{n+2}}-\frac{\vartheta \tau _{n}}{\tau _{n}}\right) \\&\quad =\frac{\tau _{n+3}\tau _{n}}{\tau _{n+2}\tau _{n+1}}-\frac{\tau _{n+2}\tau _{n-1}}{\tau _{n+1}\tau _n}, \end{aligned}$$

from where we immediately find Eq. (45). \(\square \)
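A numerical sketch checking Eq. (45) for the assumed Charlier-type pair \(w^{(a)}(k)=(\eta ^{(a)})^k/k!\): \(\tau _m\) is the determinant of the truncated moment matrix and the \(\vartheta \)-derivatives are approximated by central finite differences in the common scaling \(\eta ^{(a)}\mapsto {\text {e}}^{s}\eta ^{(a)}\), so the printed residual is limited only by the finite-difference accuracy:

```python
import numpy as np

KSUM, n = 120, 2          # tau_{n-1}, ..., tau_{n+3} are needed below
eta0 = np.array([1.0, 2.0])

def tau(m, s=0.0):
    """tau_m = det of the m-th leading principal moment submatrix, with eta^(a) -> e^s eta^(a)."""
    eta = eta0 * np.exp(s)
    def weights(a):
        w = [1.0]
        for k in range(1, KSUM + 1):
            w.append(w[-1] * eta[a - 1] / k)
        return w
    w = {1: weights(1), 2: weights(2)}
    M = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            a, r = (1 if j % 2 == 0 else 2), j // 2
            M[i, j] = sum(k ** (i + r) * w[a][k] for k in range(KSUM + 1))
    return np.linalg.det(M) if m > 0 else 1.0

h = 1e-3   # central finite differences for vartheta, vartheta^2, vartheta^3 of tau_{n+1}
f = [tau(n + 1, s) for s in (-2 * h, -h, 0.0, h, 2 * h)]
d1 = (f[3] - f[1]) / (2 * h)
d2 = (f[3] - 2 * f[2] + f[1]) / h ** 2
d3 = (f[4] - 2 * f[3] + 2 * f[1] - f[0]) / (2 * h ** 3)
d1_up = (tau(n + 2, h) - tau(n + 2, -h)) / (2 * h)   # vartheta tau_{n+2}
d1_dn = (tau(n, h) - tau(n, -h)) / (2 * h)           # vartheta tau_n
t = {m: tau(m) for m in (n - 1, n, n + 1, n + 2, n + 3)}

lhs = (d3
       - (d1_up / t[n + 2] + d1 / t[n + 1] + d1_dn / t[n]) * d2
       + d1 ** 2 / t[n + 1] * (d1_up / t[n + 2] + d1_dn / t[n]))
rhs = t[n + 3] * t[n] / t[n + 2] - t[n + 2] * t[n - 1] / t[n]
print("Eq. (45): lhs =", lhs, " rhs =", rhs, " relative residual =", abs(lhs - rhs) / abs(rhs))
```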

Remark 3.10

For the discrete hypergeometric orthogonal polynomials the standard Toda equation, see for example [55, Equation (168)], can be written in terms of the \(\tau \)-function as the following nonlinear second order differential–difference equation

$$\begin{aligned} \vartheta ^2\tau _{n+1}-\frac{(\vartheta \tau _{n+1})^2}{\tau _{n+1}}=\frac{\tau _{n+2}\tau _{n}}{\tau _{n+1}}. \end{aligned}$$

We clearly see that (45) is an extension to the multiple orthogonal realm of the standard Toda equation.

Next, we write these multiple Toda equations as the following Toda type system for the recursion coefficients \(\alpha \), \(\beta \) and \(\gamma \):

Proposition 3.11

For multiple orthogonality with two weights we have the following Toda type system:

$$\begin{aligned} \vartheta \alpha&=\beta -{\mathfrak {a}}_{+}\beta , \end{aligned}$$
(46)
$$\begin{aligned} \vartheta \beta&=\gamma -{\mathfrak {a}}_{+}\gamma +\beta ({\mathfrak {a}}_{-}\alpha -\alpha ), \end{aligned}$$
(47)
$$\begin{aligned} \vartheta \gamma&=\gamma ({\mathfrak {a}}_{-}^2\alpha -\alpha ). \end{aligned}$$
(48)

Proof

Equation (46) is immediately obtained from (41a). From (41b), analyzing diagonal by diagonal we can obtain two different equations:

$$\begin{aligned} \vartheta S^{[1]}&=-\beta , \\ \gamma&=-\vartheta S^{[2]}+(\vartheta {\mathfrak {a}}_{-}S^{[1]})S^{[1]}, \end{aligned}$$

using the equations above and the expressions for \(\alpha \), \(\beta \) and \(\gamma \) in terms of \(S^{[2]}\) and \(S^{[1]}\) we obtain (46), (47) and (48). \(\square \)

Proposition 3.12

(Lax pair) The matrices T and \(\phi \) are a Lax pair, i.e. the following Lax equation is satisfied

$$\begin{aligned} \vartheta T=[\phi , T]=[T_{+},T], \end{aligned}$$
(49)

where

$$\begin{aligned} T_{-}&=(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\gamma +\Lambda ^{\mathsf {\scriptscriptstyle T}}\beta , \\ T_{+}&=T-T_{-}=\alpha +\Lambda . \end{aligned}$$

Proof

We have \(T=S\Lambda S^{-1}\) so that

$$\begin{aligned} \vartheta ^{(a)}T=(\vartheta ^{(a)}S)\Lambda S^{-1}-S\Lambda S^{-1}(\vartheta ^{(a)}S)S^{-1}=\underbrace{(\vartheta ^{(a)}S)S^{-1}}_{\phi ^{(a)}}\underbrace{S\Lambda S^{-1}}_{T}-\underbrace{S\Lambda S^{-1}}_{T}\underbrace{(\vartheta ^{(a)}S)S^{-1}}_{\phi ^{(a)}} \end{aligned}$$

and we get Eq. (49). The other form can be proved with (41b) and:

$$\begin{aligned} {[}-T_{-},T]=[T_{+}-T,T]=[T_{+},T].\end{aligned}$$

\(\square \)

Remark 3.13

As mentioned in Remark 2.9, the previous results are applicable to more general situations beyond generalized hypergeometric series. The only requirement is that \(\vartheta w^{(a)}(k)=kw^{(a)}(k)\). Once again, we utilize the tau function defined as \(\tau _n={\mathscr {W}}_n[\rho ^{(1)}_0,\rho ^{(2)}_0]\), where \(\rho ^{(a)}_0=\sum _{k=0}^\infty w^{(a)}(k)\).

Coming back to the hypergeometric situation, there is another compatibility equation for the Laguerre–Freud matrix; when we carry out calculations for concrete cases of multiple orthogonal polynomials we will refer to it as Compatibility II.

Proposition 3.14

The following compatibility conditions hold for recursion and Laguerre-Freud matrices

$$\begin{aligned} \vartheta \Psi&=[\phi ,\Psi ]=[-T_{-},\Psi ]=[\Psi ,T_{-}], \end{aligned}$$
(50)
$$\begin{aligned} \vartheta \Psi ^{\mathsf {\scriptscriptstyle T}}&=\Psi ^{\mathsf {\scriptscriptstyle T}}+[\mu ,\Psi ^{\mathsf {\scriptscriptstyle T}}]=\Psi ^{\mathsf {\scriptscriptstyle T}}+[-T_{+}^{\mathsf {\scriptscriptstyle T}},\Psi ^{\mathsf {\scriptscriptstyle T}}], \end{aligned}$$
(51)

with lower triangular matrices \(\mu {:}{=}\mu ^{(1)}+\mu ^{(2)}\) and \(\mu ^{(a)}{:}{=}(\vartheta ^{(a)}H^{-1})H+H^{-1}\tilde{\phi }^{(a)}H\).

Proof

We have that

$$\begin{aligned} {} \theta (z)B(z-1)&=\Psi B(z),\\ \vartheta B(z)&=\phi B(z). \end{aligned}$$

Therefore,

$$\begin{aligned} \vartheta (\theta (z)B(z-1))=\vartheta (\Psi B(z))=(\vartheta \Psi )B(z)+\Psi \vartheta B(z), \end{aligned}$$

then

$$\begin{aligned} \vartheta (\theta (z)B(z-1))=((\vartheta \Psi )+\Psi \phi )B(z). \end{aligned}$$

We also have

$$\begin{aligned} \vartheta (\theta (z)B(z-1))=\theta (z)\phi B(z-1)=\phi \Psi B(z). \end{aligned}$$

Equating the above relations, grouping terms, and using Lemma 2.11 we find that

$$\begin{aligned} \vartheta \Psi =[\phi ,\Psi ]. \end{aligned}$$

To prove (51) we first need to show that \(\vartheta ^{(1)}A^{(1)}=\mu ^{(1)}A^{(1)}\), \(\vartheta ^{(2)}A^{(1)}=\mu ^{(2)}A^{(1)}\), \(\vartheta ^{(1)}A^{(2)}=\mu ^{(1)}A^{(2)}\) and \(\vartheta ^{(2)}A^{(2)}=\mu ^{(2)}A^{(2)}\). We will prove the equations for \(A^{(1)}\); the proof for \(A^{(2)}\) is analogous:

$$\begin{aligned} \vartheta ^{(2)}A^{(1)}= & {} \vartheta ^{(2)}(H^{-1}\tilde{S})X^{(1)}=((\vartheta ^{(2)}H^{-1}) \tilde{S}+H^{-1}(\vartheta ^{(2)}\tilde{S}))X^{(1)}\\= & {} (\vartheta ^{(2)}H^{-1})HH^{-1}\tilde{S}X^{(1)}\\{} & {} +H^{-1}(\vartheta ^{(2)}\tilde{S})(H^{-1}\tilde{S})^{-1}(H^{-1}\tilde{S})X^{(1)}\\= & {} ((\vartheta ^{(2)}H^{-1})H+H^{-1}(\vartheta ^{(2)}\tilde{S})\tilde{S}^{-1}H)A^{(1)}. \end{aligned}$$

Hence, we get

$$\begin{aligned} \vartheta ^{(2)}A^{(1)}=\underbrace{((\vartheta ^{(2)}H^{-1})H+H^{-1}\tilde{\phi }^{(2)}H)}_{\mu ^{(2)}}A^{(1)}, \end{aligned}$$

and

$$\begin{aligned} \vartheta ^{(1)}A^{(1)}= & {} \vartheta ^{(1)}(H^{-1}\tilde{S})X^{(1)} =((\vartheta ^{(1)}H^{-1})\tilde{S}+H^{-1}(\vartheta ^{(1)}\tilde{S}))X^{(1)}\\= & {} (\vartheta ^{(1)}H^{-1})HH^{-1}\tilde{S}X^{(1)}\\{} & {} +H^{-1}(\vartheta ^{(1)}\tilde{S})(H^{-1}\tilde{S})^{-1}(H^{-1}\tilde{S})X^{(1)}\\= & {} ((\vartheta ^{(1)}H^{-1})H+H^{-1}(\vartheta ^{(1)}\tilde{S})\tilde{S}^{-1}H)A^{(1)}. \end{aligned}$$

Consequently,

$$\begin{aligned} \vartheta ^{(1)}A^{(1)}=\underbrace{((\vartheta ^{(1)}H^{-1})H+H^{-1}\tilde{\phi }^{(1)}H)}_{\mu ^{(1)}}A^{(1)}, \end{aligned}$$

Adding the last two equations we get \(\vartheta A^{(1)}=\mu A^{(1)}\) (and the same equation holds for \(A^{(2)}\)). The Toda–Pearson compatibility must hold:

$$\begin{aligned} \sigma ^{(1)}(z)A^{(1)}(z+1)&=\Psi ^{\mathsf {\scriptscriptstyle T}}A^{(1)}(z),\\ \vartheta A^{(1)}(z)&=\mu A^{(1)}(z), \end{aligned}$$

On the one hand

$$\begin{aligned} \vartheta (\sigma ^{(1)}(z)A^{(1)}(z+1))&=\vartheta (\Psi ^{\mathsf {\scriptscriptstyle T}}A^{(1)}(z))=\vartheta (\Psi ^{\mathsf {\scriptscriptstyle T}})A^{(1)}(z)+\Psi ^{\mathsf {\scriptscriptstyle T}}\vartheta A^{(1)}(z)\\&=\vartheta (\Psi ^{\mathsf {\scriptscriptstyle T}})A^{(1)}(z)+\Psi ^{\mathsf {\scriptscriptstyle T}}\mu A^{(1)}(z). \end{aligned}$$

On the other hand

$$\begin{aligned} \vartheta (\sigma ^{(1)}(z)A^{(1)}(z+1))=\sigma ^{(1)}(z)A^{(1)}(z+1)+\sigma ^{(1)}(z)\mu A^{(1)}(z+1)=(I+\mu )\Psi ^{\mathsf {\scriptscriptstyle T}}A^{(1)}(z). \end{aligned}$$

Consequently, equating both:

$$\begin{aligned} \vartheta (\Psi ^{\mathsf {\scriptscriptstyle T}})A^{(1)}(z)+\Psi ^{\mathsf {\scriptscriptstyle T}}\mu A^{(1)}(z)=(I+\mu )\Psi ^{\mathsf {\scriptscriptstyle T}}A^{(1)}(z), \end{aligned}$$

and using Lemma 3.21 we finally get

$$\begin{aligned} \vartheta (\Psi ^{\mathsf {\scriptscriptstyle T}})=\Psi ^{\mathsf {\scriptscriptstyle T}}+[\mu ,\Psi ^{\mathsf {\scriptscriptstyle T}}]. \end{aligned}$$

\(\square \)

3.2 Multiple Nijhoff–Capel discrete Toda equations

We will now study the compatibility conditions between shifts in the three families of hypergeometric parameters:

$$\begin{aligned} \lbrace b_i^{(1)}\rbrace ^{M^{(1)}}_{i=1}, \lbrace b_i^{(2)}\rbrace ^{M^{(2)}}_{i=1}, \lbrace c_j\rbrace ^N_{j=1}, \end{aligned}$$

for functions \(u_n\). We will use the following notation:

  1. (i)

    \({\hat{u}}\) denotes a shift in one and only one parameter in the first family of parameters, i.e., \(b^{(1)}_r\rightarrow b^{(1)}_r+1\) for only one given \(r\in \{1,\dots ,M^{(1)}\}\). The corresponding tridiagonal connection matrix is denoted by \(\Omega ^{(r)}\), and we define \({\hat{d}}{:}{=}b^{(1)}_r\).

  2. (ii)

    \({\check{u}}\) denotes a shift in one and only one parameter in the second family of parameters, i.e., \(b^{(2)}_q\rightarrow b^{(2)}_q+1\) for only one given \(q\in \{1,\dots ,M^{(2)}\}\). The corresponding tridiagonal connection matrix is denoted by \(\Omega ^{(q)}\), and we define \({\check{d}}{:}{=}b^{(2)}_q\).

  3. (iii)

    \({\tilde{u}}\) denotes a shift in one and only one parameter in the third family of parameters, i.e., \(c_s\rightarrow c_s-1\) for only one given \(s\in \{1,\dots ,N\}\). The corresponding tridiagonal connection matrix is denoted by \(\Omega ^{(s)}\), and we define \({\tilde{d}}{:}{=}c_s-1\).

  4. (iv)

    \({\bar{u}}\) denotes the shift \(n\rightarrow n+2\).
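
With this notation, composite decorations denote composed shifts: for instance, \(\tilde{\hat{u}}\) means that both shifts \(b^{(1)}_r\rightarrow b^{(1)}_r+1\) and \(c_s\rightarrow c_s-1\) have been applied. Since these shifts commute, \(\tilde{\hat{u}}=\hat{\tilde{u}}\), and both orderings appear interchangeably below.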

Lemma 3.15

Connection matrices fulfill the following compatibility conditions:

$$\begin{aligned} \Omega ^{(s)}\tilde{\Omega }^{(r)}=\Omega ^{(r)}\hat{\Omega }^{(s)}, \end{aligned}$$
(52a)
$$\begin{aligned} \Omega ^{(s)}\tilde{\Omega }^{(q)}=\Omega ^{(q)}\check{\Omega }^{(s)}, \end{aligned}$$
(52b)
$$\begin{aligned} \Omega ^{(r)}\hat{\Omega }^{(q)}=\Omega ^{(q)}\check{\Omega }^{(r)}. \end{aligned}$$
(52c)

Proof

On the one hand, the connection formulas can be written as follows

$$\begin{aligned} \Omega ^{(r)}\hat{B}&=B,&\Omega ^{(s)}\tilde{B}=B, \end{aligned}$$

so that \(\tilde{\Omega }^{(r)}\tilde{\hat{B}}=\tilde{B} \) and, consequently, \(\Omega ^{(s)}\tilde{\Omega }^{(r)}\tilde{\hat{B}}=\Omega ^{(s)}\tilde{B}=B\). On the other hand, we have \(\hat{\Omega }^{(s)}\hat{\tilde{B}}=\hat{B}\) so that \(\Omega ^{(r)}\hat{\Omega }^{(s)}\hat{\tilde{B}}=\Omega ^{(r)}\hat{B}=B\). Hence, the compatibility \(\tilde{\hat{B}}=\hat{\tilde{B}}\) leads to (52a). Relations (52b) and (52c) are proven analogously. \(\square \)

From this point we will use the following notation:

$$\begin{aligned} H_{2n}&{=}{:}u,&H_{2n+1}&{=}{:}v,&S_{2n}^{[1]}&{=}{:}f,&S_{2n+1}^{[1]}&{=}{:}g,&S_{2n}^{[2]}&{=}{:}F,&S_{2n+1}^{[2]}&{=}{:}G. \end{aligned}$$

Theorem 3.16

(Multiple Nijhoff–Capel Toda systems) The four functions \(u\), \(v\), \(f\) and \(g\) satisfy the following 3D systems of four nonlinear difference equations:

  1. (i)

    For the shifts \(\hat{}\), \( \tilde{}\) and \(\bar{}\):

    $$\begin{aligned} g(\tilde{f}-\hat{f})+\tilde{g}(\tilde{\hat{f}}-\tilde{f})+\hat{g}(\hat{f}-\hat{\tilde{f}})&= \frac{1}{\tilde{d}}\left( \frac{\hat{\bar{u}}}{\hat{\tilde{u}}}-\frac{\bar{u}}{\tilde{u}}\right) + \frac{1}{\hat{d}}\left( \frac{\bar{u}}{\hat{u}}-\frac{\tilde{\bar{u}}}{\tilde{\hat{u}}}\right) , \end{aligned}$$
    (53a)
    $$\begin{aligned} \bar{f}(\hat{g}-\tilde{g})+\tilde{\bar{f}}(\tilde{g}-\tilde{\hat{g}})+\hat{\bar{f}}(\hat{\tilde{g}}-\hat{g})&= \frac{1}{\tilde{d}}\left( \frac{\bar{v}}{\tilde{v}}-\frac{\hat{\bar{v}}}{\hat{\tilde{v}}}\right) ,\end{aligned}$$
    (53b)
    $$\begin{aligned} \frac{1}{\tilde{d}}\frac{\hat{\bar{u}}}{\hat{\tilde{u}}}(\hat{\bar{f}}-\bar{f})+\frac{1}{\hat{d}}\frac{\tilde{\bar{u}}}{\tilde{\hat{u}}}(\bar{f}-\tilde{\bar{f}})+\frac{1}{\tilde{d}}\frac{\bar{v}}{\tilde{v}}(\tilde{f}-\tilde{\hat{f}})&=0,\end{aligned}$$
    (53c)
    $$\begin{aligned} \frac{1}{\hat{d}}\frac{\bar{\bar{u}}}{\hat{\bar{u}}}(\hat{g} -\hat{\tilde{g}})+\frac{1}{\tilde{d}} \frac{\bar{\bar{u}}}{\tilde{\bar{u}}}(\tilde{\hat{g}}-\tilde{g})+\frac{1}{\tilde{d}} \frac{\hat{\bar{v}}}{\hat{\tilde{v}}}(\bar{g}-\hat{\bar{g}})&=0. \end{aligned}$$
    (53d)
  2. (ii)

    For the shifts \(\check{}\), \( \tilde{}\) and \(\bar{}\):

    $$\begin{aligned} g(\tilde{f}-\check{f})+\tilde{g}(\check{\tilde{f}}-\tilde{f}) +\check{g}(\check{f}-\check{\tilde{f}})&= \frac{1}{\tilde{d}}\left( \frac{\check{\bar{u}}}{\check{\tilde{u}}} -\frac{\bar{u}}{\tilde{u}}\right) \end{aligned}$$
    (54a)
    $$\begin{aligned} \bar{f}(\tilde{g}-\check{g})+\tilde{\bar{f}}(\tilde{\check{g}}-\tilde{g}) +\check{\bar{f}}(\check{g}-\check{\tilde{g}})&= \frac{1}{\tilde{d}} \left( \frac{\check{\bar{v}}}{\check{\tilde{v}}}-\frac{\bar{v}}{\tilde{v}}\right) +\frac{1}{\check{d}}\left( \frac{\bar{v}}{\check{v}}-\frac{\tilde{\bar{v}}}{\check{\tilde{v}}}\right) ,\end{aligned}$$
    (54b)
    $$\begin{aligned} \frac{1}{\tilde{d}}\frac{\check{\bar{u}}}{\check{\tilde{u}}}(\bar{f}-\check{\bar{f}})+\frac{1}{\check{d}}\frac{\bar{v}}{\check{v}}(\check{f}-\check{\tilde{f}})+\frac{1}{\tilde{d}}\frac{\bar{v}}{\tilde{v}}(\tilde{\check{f}}-\tilde{f})&=0,\end{aligned}$$
    (54c)
    $$\begin{aligned} \frac{1}{\tilde{d}}\frac{\bar{\bar{u}}}{\tilde{\bar{u}}}(\tilde{g}-\tilde{\check{g}}) +\frac{1}{\check{d}}\frac{\tilde{\bar{v}}}{\tilde{\check{v}}}(\bar{g}-\tilde{\bar{g}}) +\frac{1}{\tilde{d}}\frac{\check{\bar{v}}}{\check{\tilde{v}}}(\check{\bar{g}}-\bar{g})&=0. \end{aligned}$$
    (54d)
  3. (iii)

    For the shifts \(\hat{}\), \( \check{}\) and \(\bar{}\):

    $$\begin{aligned} \frac{1}{{\hat{d}}}\frac{\check{\bar{u}}}{\check{\hat{u}}}(\bar{f}-\check{\bar{f}})+ \frac{1}{{\check{d}}}\frac{\bar{v}}{\check{v}}(\check{f}-\check{\hat{f}})&=0, \end{aligned}$$
    (55a)
    $$\begin{aligned} \frac{1}{{\hat{d}}}\frac{\bar{\bar{u}}}{\hat{\bar{u}}}(\hat{\check{g}}-\hat{g})+ \frac{1}{{\check{d}}}\frac{\hat{\bar{v}}}{\check{\hat{v}}}(\hat{\bar{g}}-\bar{g})&= 0, \end{aligned}$$
    (55b)
    $$\begin{aligned} g(\check{f}-\hat{f})+\check{g}(\check{\hat{f}}-\check{f})+\hat{g}(\hat{f}-\check{\hat{f}})&=\frac{1}{\hat{d}}\left( \frac{\bar{u}}{\hat{u}}-\frac{\check{\bar{u}}}{\hat{\check{u}}}\right) , \end{aligned}$$
    (55c)
    $$\begin{aligned} \bar{f}(\check{g}-\hat{g})+\check{\bar{f}}(\check{\hat{g}}-{\check{g}})+\hat{\bar{f}}(\hat{g}-\check{\hat{g}})&= \frac{1}{\check{d}}\left( \frac{\hat{\bar{v}}}{\check{\hat{v}}}-\frac{\bar{v}}{\check{v}}\right) . \end{aligned}$$
    (55d)

Proof

  1. (i)

    From (52a) we can find the following two non-trivial equations for the matrices \(S^{[1]}\) and H, coming from the second and third subdiagonals:

    $$\begin{aligned}{} & {} \frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2\hat{H})\hat{\tilde{H}}^{-1}+({\mathfrak {a}}_{-}S^{[1]} -{\mathfrak {a}}_{-}\hat{S}^{[1]})(\hat{S}^{[1]}-\hat{\tilde{S}}^{[1]}) +\frac{1}{\hat{d}}I^{(1)}\hat{H}^{-1}{\mathfrak {a}}_{-}^2H\nonumber \\{} & {} \quad =\frac{1}{\hat{d}}\tilde{\hat{H}}^{-1}({\mathfrak {a}}_{-}^2 \tilde{H})I^{(1)}+({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-}\hat{S}^{[1]}) (\tilde{S}^{[1]}-\tilde{\hat{S}}^{[1]})+\frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2H)\tilde{H}^{-1}\qquad \end{aligned}$$
    (56)

    and

    $$\begin{aligned}{} & {} \frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2\hat{H})\hat{\tilde{H}}^{-1}({\mathfrak {a}}_{-}^2S^{[1]} -{\mathfrak {a}}_{-}^2\hat{S}^{[1]})+\frac{1}{\hat{d}}I^{(2)} ({\mathfrak {a}}_{-}^3H)({\mathfrak {a}}_{-}\hat{H}^{-1}) (\hat{S}^{[1]}-\tilde{\hat{S}}^{[1]})\nonumber \\{} & {} \quad =\frac{1}{\hat{d}}I^{(1)}({\mathfrak {a}}_{-}^2\tilde{H}) \tilde{\hat{H}}^{-1}+\frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^3H) ({\mathfrak {a}}_{-}\tilde{H}^{-1})(\tilde{S}^{[1]}-\tilde{\hat{S}}^{[1]}). \end{aligned}$$
    (57)

    From (56), equating the terms that multiply \(I^{(1)}\) we obtain (53a), and from the terms that multiply \(I^{(2)}\) we obtain (53b). From (57), the terms in \(I^{(1)}\) give (53c) and the terms in \(I^{(2)}\) give (53d).

  2. (ii)

    From (52b), we find two non-trivial equations for the matrices \(S^{[1]}\) and H, coming from the second and third subdiagonals:

    $$\begin{aligned}{} & {} \frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2\check{H})\check{\tilde{H}}^{-1} +({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-} \check{S}^{[1]})(\check{S}^{[1]}-\check{\tilde{S}}^{[1]}) +\frac{1}{\check{d}}I^{(2)}\check{H}^{-1}{\mathfrak {a}}_{-}^2H\nonumber \\{} & {} \quad =\frac{1}{\check{d}}I^{(2)}\tilde{\check{H}}^{-1}{\mathfrak {a}}_{-}^2\tilde{H}+({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-}\tilde{S}^{[1]})(\tilde{S}^{[1]}-\tilde{\check{S}}^{[1]})+\frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2H)\tilde{H}^{-1} \end{aligned}$$
    (58)

    and

    $$\begin{aligned}{} & {} \frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2\check{H})\check{\tilde{H}}^{-1}({\mathfrak {a}}_{-}^2S^{[1]} -{\mathfrak {a}}_{-}^2\check{S}^{[1]})+\frac{1}{\check{d}}I^{(1)} ({\mathfrak {a}}_{-}\check{H}^{-1})({\mathfrak {a}}_{-}^3H)(\check{S}^{[1]} -\check{\tilde{S}}^{[1]})\nonumber \\{} & {} \quad =\frac{1}{\check{d}}I^{(2)}\tilde{\check{H}}^{-1}{\mathfrak {a}}_{-}^2\tilde{H}+\frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^3H)({\mathfrak {a}}_{-}\tilde{H}^{-1})(\tilde{S}^{[1]}-\tilde{\check{S}}^{[1]}). \end{aligned}$$
    (59)

    From (58), equating the terms that multiply \(I^{(1)}\) we obtain (54a), and from the terms that multiply \(I^{(2)}\) we obtain (54b). From Eq. (59), the terms in \(I^{(1)}\) give (54c) and the terms in \(I^{(2)}\) give (54d).

  3. (iii)

    From (52c), the third subdiagonal gives

    $$\begin{aligned}{} & {} \frac{1}{\tilde{d}}({\mathfrak {a}}_{-}^2\tilde{H}^{-1})I^{(1)}({\mathfrak {a}}_{-}^3H) (\tilde{S}^{[1]}-\tilde{\hat{S}}^{[1]})+\frac{1}{\hat{d}}I^{(1)} \tilde{\hat{H}}^{-1}({\mathfrak {a}}_{-}^2\tilde{H})({\mathfrak {a}}_{-}^2S^{[1]} -{\mathfrak {a}}_{-}^2\tilde{S}^{[1]})\nonumber \\{} & {} \quad =\frac{1}{\hat{d}}I^{(2)}({\mathfrak {a}}_{-}\hat{H}^{-1})({\mathfrak {a}}_{-}^3H)(\hat{S}^{[1]}-\hat{\tilde{S}}^{[1]})+\frac{1}{\tilde{d}}I^{(2)}\hat{\tilde{H}}^{-1}{\mathfrak {a}}_{-}^2\hat{H} \end{aligned}$$
    (60)

    and from the terms in \(I^{(1)}\) we obtain (55a), while from the terms in \(I^{(2)}\) we obtain (55b).

    From the second subdiagonal we get the following equation:

    $$\begin{aligned}{} & {} \frac{1}{ \tilde{d}}I^{(2)}\tilde{H}^{-1}{\mathfrak {a}}_{-}^2H+({\mathfrak {a}}_{-}S^{[1]})\tilde{S}^{[1]}-({\mathfrak {a}}_{-}\tilde{S}^{[1]})(\tilde{S}^{[1]}-\tilde{\hat{S}}^{[1]})+\frac{1}{\hat{d}}I^{(1)}\tilde{\hat{H}}^{-1}{\mathfrak {a}}_{-}^2\tilde{H}\nonumber \\{} & {} \quad =\frac{1}{\hat{d}}I^{(1)}\hat{H}^{-1}{\mathfrak {a}}_{-}^2H+({\mathfrak {a}}_{-}S^{[1]})\hat{S}^{[1]}-({\mathfrak {a}}_{-}\hat{S}^{[1]})(\hat{S}^{[1]}-\hat{\tilde{S}}^{[1]})+\frac{1}{\tilde{d}}I^{(2)}\hat{\tilde{H}}^{-1}{\mathfrak {a}}_{-}^2\hat{H},\nonumber \\ \end{aligned}$$
    (61)

    from the terms in \(I^{(1)}\) we obtain (55c) and from the terms in \(I^{(2)}\) we deduce (55d).

\(\square \)

Remark 3.17

Each of the systems (53), (54) and (55) is a nonlinear system of four discrete partial difference equations in three variables for four functions. The first two systems, (53) and (54), are equivalent under the interchanges \(\hat{}\longleftrightarrow \check{}\), \(u\longleftrightarrow v\) and \(f\longleftrightarrow g\).

Remark 3.18

Notice that the systems in Theorem 3.16 can be considered for the AT systems of weights described in Lemma 2.4. Therefore, we are sure that the corresponding multiple orthogonal polynomials do exist. For other cases, one needs to prove perfectness, or that the system is AT, which is still an open issue.

Remark 3.19

In [55], we discovered the Nijhoff–Capel Toda equation (Equation (128) therein) for standard non-multiple hypergeometric orthogonal polynomials. It can be expressed as:

$$\begin{aligned} \frac{{\bar{u}}}{{\tilde{u}}}-\frac{\hat{{\bar{u}}}}{\hat{{\tilde{u}}}}=\frac{{\bar{u}}}{{\hat{u}}}-\frac{\tilde{{\bar{u}}}}{\tilde{{\hat{u}}}} \end{aligned}$$

This equation is obtained by considering Equation (53a) with \(v=g=0\). Consequently, we refer to the systems (53), (54) and (55) as multiple Nijhoff–Capel Toda systems. The mentioned equation is the fifth equation among the canonical types for consistency equations corresponding to an octahedral sub-lattice, as described in [4] and [45, Equation (3.107c) and §3.9].

Theorem 3.20

The multiple Nijhoff–Capel system (53) reads

$$\begin{aligned}&\frac{\tau ^1_{2n+1}}{\tau _{2n+1}}\left( \frac{{\tilde{\tau }}^1_{2n}}{{\tilde{\tau }}_{2n}}-\frac{\hat{\tau }^1_{2n}}{\hat{\tau }_{2n}}\right) +\frac{{\tilde{\tau }}^1_{2n+1}}{\tilde{{\tau }}_{2n+1}}\left( \frac{\tilde{\hat{\tau }}^1_{2n}}{\tilde{ \hat{\tau }}_{2n}}-\frac{{\tilde{\tau }}^1_{2n}}{{\tilde{\tau }}_{2n}}\right) +\frac{\hat{\tau }^1_{2n+1}}{\hat{{\tau }}_{2n+1}}\left( \frac{ {\hat{\tau }}^1_{2n}}{{ \hat{\tau }}_{2n}}-\frac{\hat{{\tilde{\tau }}}^1_{2n}}{\hat{{\tilde{\tau }}}_{2n}}\right) \\&\quad = \frac{1}{\tilde{d}}\left( \frac{\hat{\bar{\tau }}_{2n+1}}{\hat{\bar{\tau }}_{2n}}\frac{\hat{\tilde{\tau }}_{2n}}{\hat{\tilde{\tau }}_{2n+1}}- \frac{ {\bar{\tau }}_{2n+1}}{ {\bar{\tau }}_{2n}} \frac{{\tilde{\tau }}_{2n}}{{\tilde{\tau }}_{2n+1}}\right) + \frac{1}{\hat{d}}\left( \frac{ {\bar{\tau }}_{2n+1}}{ {\bar{\tau }}_{2n}}\frac{\hat{{\tau }}_{2n}}{\hat{{\tau }}_{2n+1}}- \frac{ \tilde{\bar{\tau }}_{2n+1}}{ \tilde{\bar{\tau }}_{2n}} \frac{\tilde{\hat{\tau }}_{2n}}{\tilde{\hat{\tau }}_{2n+1}}\right) ,\\&\qquad \frac{{\bar{\tau }}^1_{2n}}{{\bar{\tau }}_{2n}}\left( \frac{{\tilde{\tau }}^1_{2n+1}}{{\tilde{\tau }}_{2n+1}}+\frac{\hat{\tau }^1_{2n+1}}{\hat{\tau }_{2n+1}}\right) +\frac{\tilde{\bar{\tau }}^1_{2n}}{\tilde{{\bar{\tau }}}_{2n}}\left( \frac{\tilde{\hat{\tau }}^1_{2n+1}}{\tilde{ \hat{\tau }}_{2n+1}}-\frac{{\tilde{\tau }}^1_{2n+1}}{{\tilde{\tau }}_{2n+1}}\right) +\frac{\hat{\bar{\tau }}^1_{2n}}{\hat{{\bar{\tau }}}_{2n}}\left( \frac{ {\hat{\tau }}^1_{2n+1}}{{ \hat{\tau }}_{2n+1}}-\frac{\hat{{\tilde{\tau }}}^1_{2n+1}}{\hat{{\tilde{\tau }}}_{2n+1}}\right) \\&\quad = \frac{1}{\tilde{d}}\left( \frac{\hat{\bar{\bar{\tau }}}_{2n}}{\hat{\bar{\tau }}_{2n+1}}\frac{\hat{\tilde{\tau }}_{2n+1}}{\hat{\tilde{\bar{\tau }}}_{2n}}- \frac{ {\bar{\bar{\tau }}}_{2n}}{ {\bar{\tau }}_{2n+1}} \frac{{\tilde{\tau }}_{2n+1}}{\tilde{\bar{\tau }}_{2n}}\right) ,\\&\qquad \frac{1}{\tilde{d}} \frac{\hat{\bar{\tau }}_{2n+1}}{\hat{\bar{\tau }}_{2n}}\frac{\hat{\tilde{\tau }}_{2n}}{\hat{\tilde{\tau }}_{2n+1}} \left( \frac{ {\hat{\bar{\tau }}}^1_{2n}}{{ \hat{\bar{\tau }}}_{2n}}-\frac{{\bar{\tau }}^1_{2n}}{{\bar{\tau }}_{2n}}\right) +\frac{1}{\hat{d}} \frac{ \tilde{\bar{\tau }}_{2n+1}}{ \tilde{\bar{\tau }}_{2n}} \frac{\tilde{\hat{\tau }}_{2n}}{\tilde{\hat{\tau }}_{2n+1}} \left( \frac{{\bar{\tau }}^1_{2n}}{{\bar{\tau }}_{2n}}-\frac{ {\tilde{\bar{\tau }}}^1_{2n}}{{ \tilde{\bar{\tau }}}_{2n}}\right) \\&\qquad +\frac{1}{\tilde{d}} \frac{ {\bar{\bar{\tau }}}_{2n}}{ {\bar{\tau }}_{2n+1}} \frac{{\tilde{\tau }}_{2n+1}}{\tilde{\bar{\tau }}_{2n}}\left( \frac{{\tilde{\tau }}^1_{2n}}{{\tilde{\tau }}_{2n}}-\frac{\tilde{\hat{\tau }}^1_{2n}}{\tilde{ \hat{\tau }}_{2n}}\right) =0,\\&\qquad \frac{1}{\hat{d}}\frac{ {\bar{\bar{\tau }}}_{2n+1}}{ {\bar{\bar{\tau }}}_{2n}}\frac{\hat{\bar{\tau }}_{2n}}{\hat{{\bar{\tau }}}_{2n+1}}\left( \frac{ {\hat{\tau }}^1_{2n+1}}{{ \hat{\tau }}_{2n+1}}-\frac{\hat{{\tilde{\tau }}}^1_{2n+1}}{\hat{{\tilde{\tau }}}_{2n+1}}\right) +\frac{1}{\tilde{d}}\frac{ {\bar{\bar{\tau }}}_{2n+1}}{ {\bar{\bar{\tau }}}_{2n}}\frac{\tilde{\bar{\tau }}_{2n}}{\tilde{{\bar{\tau }}}_{2n+1}}\left( \frac{\tilde{\hat{\tau }}^1_{2n+1}}{\tilde{ \hat{\tau }}_{2n+1}}-\frac{{\tilde{\tau }}^1_{2n+1}}{{\tilde{\tau }}_{2n+1}}\right) \\&\qquad +\frac{1}{\tilde{d}}\frac{ \tilde{\bar{\tau }}_{2n+1}}{ \tilde{\bar{\tau }}_{2n}} \frac{\tilde{\hat{\tau }}_{2n}}{\tilde{\hat{\tau }}_{2n+1}}\left( \frac{ { \bar{\tau }}^1_{2n+1}}{{ \bar{\tau }}_{2n+1}}-\frac{\hat{ \bar{\tau }}^1_{2n+1}}{\hat{ \bar{\tau }}_{2n+1}}\right) =0. 
\end{aligned}$$

Notice that we have large families of solutions to this system of equations in terms of double Wronskians of generalized hypergeometric functions.

Proof

Use the \(\tau \)-function parametrization

$$\begin{aligned} u&=\frac{\tau _{2n+1}}{\tau _{2n}},&v&=\frac{\tau _{2n+2}}{\tau _{2n+1}},&f&=\frac{\tau ^1_{2n}}{\tau _{2n}},&g&=\frac{\tau ^1_{2n+1}}{\tau _{2n+1}}. \end{aligned}$$
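
For instance, a quick check of how the parametrization enters: the first term on the left-hand side of (53a) and the ratio \(\bar{u}/\tilde{u}\) on its right-hand side become

$$\begin{aligned} g\left( \tilde{f}-\hat{f}\right) =\frac{\tau ^1_{2n+1}}{\tau _{2n+1}}\left( \frac{{\tilde{\tau }}^1_{2n}}{{\tilde{\tau }}_{2n}}-\frac{\hat{\tau }^1_{2n}}{\hat{\tau }_{2n}}\right) ,\qquad \frac{\bar{u}}{\tilde{u}}=\frac{{\bar{\tau }}_{2n+1}}{{\bar{\tau }}_{2n}}\frac{{\tilde{\tau }}_{2n}}{{\tilde{\tau }}_{2n+1}}, \end{aligned}$$

and all the remaining terms of (53) translate in the same way, yielding the equations displayed in the statement.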

\(\square \)

Lemma 3.21

If a matrix M is such that \(MX^{(1)}(z)=MX^{(2)}(z)=0\) then \(M=0\).

Proof

We have that

$$\begin{aligned} M\frac{1}{n!}\frac{{\text {d}}^n X^{(1)}(z)}{{\text {d}}z^n}\Big |_{z=0}=&0,&M\frac{1}{n!}\frac{{\text {d}}^n X^{(2)}(z)}{{\text {d}}z^n}\Big |_{z=0}=&0. \end{aligned}$$

Observe that \(\frac{1}{n!}\frac{{\text {d}}^n X^{(1)}(z)}{{\text {d}}z^n}\big |_{z=0}\) is the vector with all its entries equal to zero except for a 1 in the 2n-th entry. Similarly, \(\frac{1}{n!}\frac{{\text {d}}^n X^{(2)}(z)}{{\text {d}}z^n}\big |_{z=0}\) is the vector with all its entries equal to zero except for a 1 in the \((2n+1)\)-th entry. Therefore, every column of M has all its entries equal to zero. \(\square \)

Lemma 3.22

The following equations are satisfied:

$$\begin{aligned} \hat{T}^{\mathsf {\scriptscriptstyle T}}\omega ^{(r)}&=\omega ^{(r)}T^{\mathsf {\scriptscriptstyle T}}\end{aligned}$$
(62)
$$\begin{aligned} \check{T}^{\mathsf {\scriptscriptstyle T}}\omega ^{(q)}&=\omega ^{(q)}T^{\mathsf {\scriptscriptstyle T}}\end{aligned}$$
(63)
$$\begin{aligned} \tilde{T}^{\mathsf {\scriptscriptstyle T}}\omega ^{(s)}&=\omega ^{(s)}T^{\mathsf {\scriptscriptstyle T}}\end{aligned}$$
(64)


Proof

From the eigenvalue relation for \(T^{\mathsf {\scriptscriptstyle T}}\) and \(A^{(a)}\), namely \({A^{(a)}}^{\mathsf {\scriptscriptstyle T}}T =z{A^{(a)}}^{\mathsf {\scriptscriptstyle T}}\), and from (34), which in this case reads \(\hat{A}^{(a)}(z)=\frac{\omega ^{(r)}}{z+\hat{d}}A^{(a)}(z)\), it can be seen that

$$\begin{aligned} {{}{{\hat{A}}}^{(a)}}^{\mathsf {\scriptscriptstyle T}}\hat{T}=z{{}{{\hat{A}}}^{(a)}}^{\mathsf {\scriptscriptstyle T}}, \end{aligned}$$

so that

$$\begin{aligned} \hat{T}^{\mathsf {\scriptscriptstyle T}}\frac{\omega ^{(r)}}{z+\hat{d}}A^{(a)}(z)=z\frac{\omega ^{(r)}}{z+\hat{d}}A^{(a)}(z)=\frac{\omega ^{(r)}}{z+\hat{d}}T^{\mathsf {\scriptscriptstyle T}}A^{(a)}. \end{aligned}$$

Hence, if we denote \(M{:}{=}(\hat{T}^{\mathsf {\scriptscriptstyle T}}\omega ^{(r)}-\omega ^{(r)}T^{\mathsf {\scriptscriptstyle T}})H^{-1}{\tilde{S}}\) we find \(MX^{(a)}=0\), so that, according to Lemma 3.21, \(M=0\); since \(H^{-1}{\tilde{S}}\) is a lower triangular invertible matrix, we conclude that

$$\begin{aligned} \hat{T}^{\mathsf {\scriptscriptstyle T}}\omega ^{(r)}=\omega ^{(r)}T^{\mathsf {\scriptscriptstyle T}}. \end{aligned}$$

Equations (63) and (64) are proven analogously. \(\square \)

Theorem 3.23

The six functions \(u\), \(v\), \(f\), \(g\), \(F\) and \(G\) fulfill the following systems of six 2D nonlinear difference equations:

  1. (i)

    For the shifts \( \tilde{}\) and \(\bar{}\):

    $$\begin{aligned}&\tilde{d}(G-\bar{F}+\tilde{\bar{F}}-\tilde{G}+\tilde{\bar{f}}(\bar{g}-\bar{\tilde{g}})+\tilde{g}(\tilde{\bar{f}}-\bar{f}))=\frac{\bar{v}}{\tilde{v}}-\frac{\bar{\bar{u}}}{\tilde{\bar{u}}}, \end{aligned}$$
    (65a)
    $$\begin{aligned}&\tilde{d}(F-G+\tilde{G}-\tilde{F}+\tilde{g}(\bar{f}-\tilde{\bar{f}})+\tilde{f}(\tilde{g}-g))=\frac{\bar{u}}{\tilde{u}}-\frac{\bar{v}}{\tilde{v}},\end{aligned}$$
    (65b)
    $$\begin{aligned}&\frac{\bar{\bar{u}}}{\tilde{\bar{u}}}(\tilde{g}-\tilde{\bar{f}})+\tilde{d}(\bar{g}-\tilde{\bar{g}}) (\tilde{G}-\tilde{\bar{F}}-\tilde{\bar{f}}(\tilde{\bar{f}}-\tilde{\bar{g}})) +\tilde{d}\frac{\tilde{\bar{\bar{u}}}}{\tilde{\bar{u}}} =\tilde{d}\frac{\bar{\bar{u}}}{\bar{u}}+\frac{\bar{\bar{u}}}{\tilde{\bar{u}}}(\bar{g}-\bar{\bar{f}})\nonumber \\&\quad +\tilde{d}(\bar{f}-\tilde{\bar{f}})(\bar{F}-\bar{G}-\bar{g}(\bar{g}-\bar{\bar{f}})),\end{aligned}$$
    (65c)
    $$\begin{aligned}&\frac{\bar{v}}{\tilde{v}}(\tilde{f}-\tilde{g})+\tilde{d}(\bar{f}-\tilde{\bar{f}})(\tilde{F}-\tilde{G} -\tilde{g}(\tilde{g}-\tilde{\bar{f}}))+\tilde{d}\frac{\tilde{\bar{v}}}{\tilde{v}} =\tilde{d}\frac{\bar{v}}{v}+\frac{\bar{v}}{\tilde{v}}(\bar{f}-\bar{g})\nonumber \\&\quad +\tilde{d}(g-\tilde{g})(G-\bar{F}-\bar{f}(\bar{f}-\bar{g})), \end{aligned}$$
    (65d)
    $$\begin{aligned}&\frac{\bar{\bar{v}}}{\tilde{\bar{v}}}(\tilde{G}-\tilde{\bar{F}}-\tilde{\bar{f}}(\tilde{\bar{f}} -\tilde{\bar{g}}))+\tilde{d}\frac{\tilde{\bar{\bar{u}}}}{\tilde{\bar{u}}}(\bar{f}-\tilde{\bar{f}}) =\tilde{d}\frac{\bar{\bar{v}}}{\bar{v}}(\bar{f}-\tilde{\bar{f}}) +\frac{\bar{\bar{u}}}{\tilde{\bar{u}}}(\bar{G}-\bar{\bar{F}}-\bar{\bar{f}}(\bar{\bar{f}}-\bar{\bar{g}})), \end{aligned}$$
    (65e)
    $$\begin{aligned}&\frac{\bar{\bar{u}}}{\tilde{\bar{u}}}(\tilde{F}-\tilde{G}-\tilde{g}(\tilde{g}-\tilde{\bar{f}}))+\tilde{d}\frac{\tilde{\bar{v}}}{\tilde{v}}(\bar{g}-\tilde{\bar{g}})=\tilde{d}\frac{\bar{\bar{u}}}{\bar{u}}(g-\tilde{g})+\frac{\bar{v}}{\tilde{v}}(\bar{F}-\bar{G}-\bar{g}(\bar{g}-\bar{\bar{f}})). \end{aligned}$$
    (65f)
  2. (ii)

    For the shifts \(\check{}\) and \(\bar{}\):

    $$\begin{aligned}&\frac{\bar{v}}{\check{v}}=\check{d}(G-\bar{F}-\check{G}+\check{\bar{F}} +\check{\bar{f}}(\bar{g}-\check{\bar{g}})+\check{g}(\check{\bar{f}}-\bar{f})), \end{aligned}$$
    (66a)
    $$\begin{aligned}&\check{d}(F-G-\check{F}+\check{G}+\check{g}(\bar{f}-\check{\bar{f}}) +\check{f}(\check{g}-g)=-\frac{\bar{v}}{\check{v}}, \end{aligned}$$
    (66b)
    $$\begin{aligned}&\frac{\check{\bar{\bar{u}}}}{\check{\bar{u}}}+(\bar{g}-\check{\bar{g}})(\check{G} -\check{\bar{F}}-\check{\bar{f}}(\check{\bar{f}}-\check{\bar{g}})) =\frac{\bar{\bar{u}}}{\bar{u}}+(\bar{f}-\check{\bar{f}})(\bar{F}-\bar{G}-\bar{g}(\bar{g} -\bar{\bar{f}})), \end{aligned}$$
    (66c)
    $$\begin{aligned}&\frac{\bar{v}}{\check{v}}(\check{f}-\check{g})+\check{d}(\bar{f} -\check{\bar{f}})(\check{F}-\check{G}-\check{g}(\check{g}-\check{\bar{f}})) +\check{d}\frac{\check{\bar{v}}}{\check{v}}\nonumber \\&\quad =\check{d}\frac{\bar{v}}{v}+\check{d}(g-\check{g})(G-\bar{F} -\bar{f}(\bar{f}-\bar{g}))+\frac{\bar{v}}{\check{v}}(\bar{f}-\bar{g}), \end{aligned}$$
    (66d)
    $$\begin{aligned}&\frac{\bar{\bar{v}}}{\check{\bar{v}}}(\check{G}-\check{\bar{F}}-\check{\bar{f}}(\check{\bar{f}} -\check{\bar{g}}))+\check{d}\frac{\check{\bar{\bar{u}}}}{\check{\bar{u}}}(\bar{\bar{f}} -\check{\bar{\bar{f}}})=\check{d}\frac{\bar{\bar{v}}}{\bar{v}}(\bar{f}-\check{\bar{f}}), \end{aligned}$$
    (66e)
    $$\begin{aligned}&\check{d}\frac{\check{\bar{v}}}{\check{v}}(\bar{g}-\check{\bar{g}}) =\check{d}\frac{\bar{\bar{u}}}{\bar{u}}(g-\check{g})+\frac{\bar{v}}{\check{v}} (\bar{F}-\bar{G}-\bar{g}(\bar{g}-\bar{\bar{f}})) .\end{aligned}$$
    (66f)
  3. (iii)

    For the shifts \( \hat{}\) and \(\bar{}\):

    $$\begin{aligned}&\hat{d}(G-\bar{F}-\hat{G}+\hat{\bar{F}}+\hat{g}(\hat{\bar{f}}-\bar{f}) +\hat{\bar{f}}(\bar{g}-\hat{\bar{g}}))=-\frac{\bar{\bar{u}}}{\hat{\bar{u}}}, \end{aligned}$$
    (67a)
    $$\begin{aligned}&\hat{d}(F-G-\hat{\bar{F}}+\hat{G}+\hat{f}(\hat{g}-g)+\hat{g}(\bar{f}-\hat{\bar{f}}) =\frac{\bar{u}}{\hat{u}},\end{aligned}$$
    (67b)
    $$\begin{aligned}&\hat{d}\left( \frac{\hat{\bar{\bar{u}}}}{\hat{\bar{u}}}+(\bar{g}-\hat{\bar{g}}) (\hat{G}-\hat{\bar{F}}-\hat{\bar{f}}(\hat{\bar{f}}-\hat{\bar{g}}))\right) +\frac{\bar{\bar{u}}}{\hat{\bar{u}}}(\hat{g}-\hat{\bar{f}})\nonumber \\&\quad =\frac{\bar{\bar{u}}}{\hat{\bar{u}}}(\bar{g}-\bar{\bar{f}}) +\hat{d}\left( \frac{\bar{\bar{u}}}{\bar{u}}+(\bar{f}-\hat{\bar{f}}) (\bar{F}-\bar{G}-\bar{g}(\bar{g}-\bar{\bar{f}}))\right) ,\end{aligned}$$
    (67c)
    $$\begin{aligned}&\frac{\hat{\bar{v}}}{\hat{v}}+(\bar{f}-\hat{\bar{f}})(\hat{F}-\hat{G} -\hat{g}(\hat{g}-\hat{\bar{f}}))=\frac{\bar{v}}{v}+(g-\hat{g})(G-\bar{F}-\bar{f}(\bar{f}-\bar{g})),\end{aligned}$$
    (67d)
    $$\begin{aligned}&\hat{d}\frac{\hat{\bar{u}}}{\hat{u}}(\bar{f}-\hat{\bar{f}})=\hat{d} \frac{\bar{v}}{v}(f-\hat{f})+\frac{\bar{u}}{\hat{u}}(G-\bar{F}-\bar{f}(\bar{f}-\bar{g})), \end{aligned}$$
    (67e)
    $$\begin{aligned}&\frac{\bar{v}}{\hat{v}}(\hat{F}-\hat{G}-\hat{g}(\hat{g}-\hat{\bar{f}}))+\hat{d} \frac{\hat{\bar{v}}}{\hat{v}}(\bar{g}-\hat{\bar{g}})=\hat{d}\frac{\bar{\bar{u}}}{\bar{u}}(g-\hat{g}). \end{aligned}$$
    (67f)

Proof

From (64) we obtain three non-null diagonals. From the first diagonal we get

$$\begin{aligned}&\tilde{d}({\mathfrak {a}}_{+}S^{[2]}-S^{[2]}+\tilde{S}^{[2]} -{\mathfrak {a}}_{+}\tilde{S}^{[2]}+\tilde{S}^{[1]}({\mathfrak {a}}_{-}S^{[1]} -{\mathfrak {a}}_{-}\tilde{S}^{[1]})+{\mathfrak {a}}_{+} \tilde{S}^{[1]}(\tilde{S}^{[1]}-S^{[1]}))\\&\quad =\frac{{\mathfrak {a}}_{-}H}{{\mathfrak {a}}_{+}\tilde{H}}-\frac{{\mathfrak {a}}_{-}^2H}{\tilde{H}}, \end{aligned}$$

if we take the \(I^{(1)}\) part we obtain (65a) and from the \(I^{(2)}\) part we obtain (65b); from the second diagonal

$$\begin{aligned}{} & {} ({\mathfrak {a}}_{+}\tilde{S}^{[1]}-\tilde{S}^{[1]})\frac{{\mathfrak {a}}_{-}^2H}{\tilde{H}}+\tilde{d}({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-}\tilde{S}^{[1]})({\mathfrak {a}}_{+}\tilde{S}^{[2]}-\tilde{S}^{[2]}-\tilde{S}^{[1]}(\tilde{S}^{[1]}-{\mathfrak {a}}_{-}\tilde{S}^{[1]}))\\{} & {} \quad +\tilde{d}\frac{{\mathfrak {a}}_{-}^2\tilde{H}}{\tilde{H}}=\tilde{d}\frac{{\mathfrak {a}}_{-}^2H}{H}+ \frac{{\mathfrak {a}}_{-}^2 H}{\tilde{H}}({\mathfrak {a}}_{-}S^{[1]} -{\mathfrak {a}}_{-}^2S^{[1]})\\{} & {} \quad +\tilde{d}(S^{[1]}-\tilde{S}^{[1]})(S^{[2]}-{\mathfrak {a}}_{-}S^{[2]}-{\mathfrak {a}}_{-}S^{[1]}({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-}^2S^{[1]})), \end{aligned}$$

from the part coming from \(I^{(1)}\) we obtain (65c) and from the part in \(I^{(2)}\) we obtain (65d); from the third superdiagonal

$$\begin{aligned}{} & {} \frac{{\mathfrak {a}}^3_{-}H}{{\mathfrak {a}}_{-}\tilde{H}} ({\mathfrak {a}}_{+}\tilde{S}^{[2]}-\tilde{S}^{[2]}-\tilde{S}^{[1]}(\tilde{S}^{[1]} -{\mathfrak {a}}_{-}\tilde{S}^{[1]}))+\tilde{d} \frac{{\mathfrak {a}}_{-}^2\tilde{H}}{\tilde{H}}({\mathfrak {a}}_{-}^2S^{[1]} -{\mathfrak {a}}_{-}^2\tilde{S}^{[1]})\\{} & {} \quad =\tilde{d} \frac{{\mathfrak {a}}_{-}^3H}{{\mathfrak {a}}_{-}H}(S^{[1]}-\tilde{S}^{[1]})\\{} & {} \quad +\frac{{\mathfrak {a}}_{-}^2H}{\tilde{H}}({\mathfrak {a}}_{-}S^{[2]}-{\mathfrak {a}}_{-}^2S^{[2]}-{\mathfrak {a}}_{-}^2S^{[1]}({\mathfrak {a}}_{-}^2S^{[1]}-{\mathfrak {a}}_{-}^3S^{[1]})), \end{aligned}$$

from the part in \(I^{(1)}\) we obtain (65e) and from the part in \(I^{(2)}\) we obtain (65f). From (63) we can obtain three non-null diagonals. From the first superdiagonal

$$\begin{aligned} I^{(1)}\frac{{\mathfrak {a}}_{-}H}{{\mathfrak {a}}_{+}\check{H}}-I^{(2)}\frac{{\mathfrak {a}}_{-}^2H}{\check{H}}= & {} \check{d}({\mathfrak {a}}_{+}S^{[2]}-S^{[2]}-{\mathfrak {a}}_{+}\check{S}^{[2]}+\check{S}^{[2]}+\check{S}^{[1]}({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-}\check{S}^{[1]})\\{} & {} +{\mathfrak {a}}_{+}\check{S}^{[1]}(\check{S}^{[1]}-S^{[1]})) \end{aligned}$$

from the part in \(I^{(1)}\) we obtain (66a) and from the part in \(I^{(2)}\) we obtain (66b); from the second superdiagonal

$$\begin{aligned}{} & {} I^{(2)}\frac{{\mathfrak {a}}_{-}^2H}{\check{H}}({\mathfrak {a}}_{+} \check{S}^{[1]}-\check{S}^{[1]})+\check{d}({\mathfrak {a}}_{-}S^{[1]} -{\mathfrak {a}}_{-}\check{S}^{[1]})({\mathfrak {a}}_{+}\check{S}^{[2]} -\check{S}^{[2]}-\check{S}^{[1]}(\check{S}^{[1]} -{\mathfrak {a}}_{-}\check{S}^{[1]}))\\{} & {} \qquad +\check{d}\frac{{\mathfrak {a}}_{-}^2\check{H}}{\check{H}}\\{} & {} \quad =\check{d}\frac{{\mathfrak {a}}_{-}^2H}{H}+\check{d}(S^{[1]} -\check{S}^{[1]})(S^{[2]}-{\mathfrak {a}}_{-}{S}^{[2]}-{\mathfrak {a}}_{-}{S}^{[1]} ({\mathfrak {a}}_{-}{S}^{[1]}-{\mathfrak {a}}_{-}^2{S}^{[1]}))\\{} & {} \qquad +I^{(2)} \frac{{\mathfrak {a}}_{-}^2H}{\check{H}}({\mathfrak {a}}_{-}S^{[1]}-{\mathfrak {a}}_{-}^2S^{[1]}) \end{aligned}$$

from the part in \(I^{(1)}\) we obtain (66c) and from the part in \(I^{(2)}\) we obtain (66d); and from the third superdiagonal

$$\begin{aligned}{} & {} I^{(1)}\frac{{\mathfrak {a}}_{-}^3H}{{\mathfrak {a}}_{-}\check{H}} ({\mathfrak {a}}_{+}\check{S}^{[2]}-\check{S}^{[2]} -\check{S}^{[1]}(\check{S}^{[1]}-{\mathfrak {a}}_{-} \check{S}^{[1]}))+\check{d}\frac{{\mathfrak {a}}_{-}^2 \check{H}}{\check{H}} ({\mathfrak {a}}_{-}^2S^{[1]}-{\mathfrak {a}}_{-}^2\check{S}^{[1]})\\{} & {} \quad =\check{d}(S^{[1]}-\check{S}^{[1]})\frac{{\mathfrak {a}}_{-}^3H}{{\mathfrak {a}}_{-}H}+I^{(2)}\frac{{\mathfrak {a}}_{-}^2H}{\check{H}}({\mathfrak {a}}_{-}S^{[2]}-{\mathfrak {a}}_{-}^2S^{[2]}-{\mathfrak {a}}_{-}^2S^{[1]}({\mathfrak {a}}_{-}^2 S^{[1]}-{\mathfrak {a}}_{-}^3S^{[1]})) \end{aligned}$$

from the part in \(I^{(1)}\) we obtain (66e) and from \(I^{(2)}\) we obtain (66f). The system (67) is obtained analogously from (62). \(\square \)

Remark 3.24

Notice that

$$\begin{aligned} F&=\frac{\tau ^2_{2n}}{\tau _{2n}},&G&=\frac{\tau ^2_{2n+1}}{\tau _{2n+1}}. \end{aligned}$$

As we did in Theorem 3.20 for the multiple Nijhoff–Capel Toda system, we can get a \(\tau \)-function representation of the systems in Theorem 3.23.

4 Laguerre–Freud equations for generalized multiple Charlier and Meixner II

We will begin by applying the previous developments to the multiple orthogonal polynomials discussed in [17], specifically the multiple Charlier and multiple Meixner II families. It has been proven in [17] that in all these cases the system is AT, so that we are dealing with perfect systems. Subsequently, we will study generalizations of these two cases. The original polynomials were found in [25] and [58], respectively.

We will demonstrate that it is possible to discuss Laguerre–Freud equations in all these examples. In this multiple scenario, we extend the concept of Laguerre–Freud equations to encompass nonlinear equations satisfied by the coefficients of the recursion relation in the form:

$$\begin{aligned} \gamma _n&={\mathscr {F}}_n(\gamma _{n-1},\beta _{n-1},\alpha _{n-1},\dots ), \end{aligned}$$
(68a)
$$\begin{aligned} \beta _n&={\mathscr {G}}_n(\gamma _n,\gamma _{n-1},\beta _{n-1}, \alpha _{n-1}, \dots ), \end{aligned}$$
(68b)
$$\begin{aligned} \alpha _n&={\mathscr {H}}_n(\gamma _n,\beta _n,\gamma _{n-1},\beta _{n-1},\alpha _{n-1},\dots ). \end{aligned}$$
(68c)

4.1 Charlier multiple orthogonal polynomials

In this case we set

$$\begin{aligned} w^{(1)}(k)&=\frac{{\eta ^{(1)}}^k}{k!},&w^{(2)}(k)&=\frac{{\eta ^{(2)}}^k}{k!}, \end{aligned}$$

which, for \({\eta ^{(1)}},{\eta ^{(2)}}>0\), form an AT system; the polynomials \(\theta \), \(\sigma ^{(1)}\) and \(\sigma ^{(2)}\) are:

$$\begin{aligned} \theta (k)&=k,&\sigma ^{(1)}(k)&={\eta ^{(1)}},&\sigma ^{(2)}(k)&={\eta ^{(2)}}. \end{aligned}$$
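
Note that the corresponding zero-th moments are simply \(\rho ^{(a)}_0=\sum _{k=0}^{\infty }\frac{({\eta ^{(a)}})^k}{k!}={\text {e}}^{{\eta ^{(a)}}}\), so that the \(\tau \)-function \(\tau _n={\mathscr {W}}_n[\rho ^{(1)}_0,\rho ^{(2)}_0]\) is built from exponential functions in this case.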

Now, \(\sigma ^{(1)}\) and \(\sigma ^{(2)}\) are degree zero polynomials and \(\theta \) is a degree one polynomial; consequently, there are no non-zero subdiagonals and there is only one non-zero superdiagonal, that is:

$$\begin{aligned} \Psi =\psi ^{(0)}+\psi ^{(1)}. \end{aligned}$$

To obtain the Laguerre–Freud matrix we first use \(\Psi =\sigma ^{(1)}(T^{\mathsf {\scriptscriptstyle T}}_1)\Pi ^{\mathsf {\scriptscriptstyle T}}_1+\sigma ^{(2)}(T^{\mathsf {\scriptscriptstyle T}}_2)\Pi ^{\mathsf {\scriptscriptstyle T}}_2\); from this expression we obtain:

$$\begin{aligned} \psi ^{(0)}&=I^{(1)}{\eta ^{(1)}}+I^{(2)}{\eta ^{(2)}},&\psi ^{(1)}&={\eta ^{(1)}}D_1+{\eta ^{(2)}}D_2. \end{aligned}$$

From the expression \(\Psi =\Pi ^{-1}\theta (T)\) we get

$$\begin{aligned} \psi ^{(1)}&=I,&\psi ^{(0)}&=\alpha -{\mathfrak {a}}_{+}D. \end{aligned}$$

In view of the above results, the Laguerre–Freud structure matrix is

$$\begin{aligned} \Psi ={\eta ^{(1)}} I^{(1)}+{\eta ^{(2)}} I^{(2)}+\Lambda . \end{aligned}$$
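
Explicitly, under the natural identifications \(I^{(1)}=\operatorname {diag}(1,0,1,0,\dots )\), \(I^{(2)}=\operatorname {diag}(0,1,0,1,\dots )\) and \(\Lambda \) the shift matrix with ones on its first superdiagonal (the conventions consistent with the element-wise formulas appearing below), this banded structure reads

$$\begin{aligned} \Psi =\begin{pmatrix} {\eta ^{(1)}} & 1 & 0 & 0 &\cdots \\ 0 & {\eta ^{(2)}} & 1 & 0 &\cdots \\ 0 & 0 & {\eta ^{(1)}} & 1 &\cdots \\ 0 & 0 & 0 & {\eta ^{(2)}} &\cdots \\ \vdots & \vdots & \vdots & \vdots &\ddots \end{pmatrix}. \end{aligned}$$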

Let us discuss \([\Psi ,T]=\Psi \). First we obtain that \([\Psi ,T]=\Lambda ^{\mathsf {\scriptscriptstyle T}}\tilde{\psi }^{(-1)}+\tilde{\psi }^{(0)}+\tilde{\psi }^{(1)}\Lambda \), i.e.,

$$\begin{aligned} \tilde{\psi }^{(-1)}&=\gamma -{\mathfrak {a}}_{+}\gamma -\beta ({\eta ^{(1)}}-{\eta ^{(2)}})(I^{(1)}-I^{(2)}),\\ \tilde{\psi }^{(0)}&=\beta -{\mathfrak {a}}_{+}\beta , \\ \tilde{\psi }^{(1)}&={\mathfrak {a}}_{-}\alpha -\alpha +({\eta ^{(1)}}-{\eta ^{(2)}})(I^{(1)}-I^{(2)}), \end{aligned}$$

equating these to the Laguerre–Freud matrix \(\Psi \) we obtain three non-trivial equations:

$$\begin{aligned} {\mathfrak {a}}_{-}\alpha -\alpha&=I+({\eta ^{(1)}}-{\eta ^{(2)}})(I^{(2)}-I^{(1)}), \\ \beta -{\mathfrak {a}}_{+}\beta&={\eta ^{(1)}} I^{(1)}+{\eta ^{(2)}}I^{(2)}, \\ \gamma -{\mathfrak {a}}_{+}\gamma&=\beta ({\eta ^{(1)}}-{\eta ^{(2)}})(I^{(1)}-I^{(2)}).\end{aligned}$$

From the compatibility equations we get

$$\begin{aligned} \alpha _{2n}&=2n+\alpha _0,&\alpha _{2n+1}&=2n+1+{\eta ^{(2)}}-{\eta ^{(1)}}+\alpha _0, \\ \beta _{2n}&=n({\eta ^{(1)}}+{\eta ^{(2)}}),&\beta _{2n+1}&=n({\eta ^{(1)}}+{\eta ^{(2)}})+{\eta ^{(1)}},\\ \gamma _{2n+1}&=n{\eta ^{(2)}}({\eta ^{(2)}}-{\eta ^{(1)}}),&\gamma _{2n+2}&=(n+1){\eta ^{(1)}}({\eta ^{(1)}}-{\eta ^{(2)}}), \end{aligned}$$

where \(n \in \lbrace 0,1,2,\dots \rbrace \). These results have been found previously in [17].
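
As a consistency check, a direct computation from these closed-form expressions yields, for \(n\geqslant 1\), the first-order differences

$$\begin{aligned} \alpha _{n+1}-\alpha _n&=1+(-1)^{n+1}({\eta ^{(1)}}-{\eta ^{(2)}}),\\ \beta _{n+1}-\beta _n&=\frac{{\eta ^{(1)}}+{\eta ^{(2)}}}{2}+(-1)^n\frac{{\eta ^{(1)}}-{\eta ^{(2)}}}{2},\\ \gamma _{n+1}-\gamma _n&=(-1)^{n+1}({\eta ^{(1)}}-{\eta ^{(2)}})\beta _n, \end{aligned}$$

which alternate between the two parameters \({\eta ^{(1)}}\) and \({\eta ^{(2)}}\) according to the parity of \(n\).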

4.2 Meixner of second kind multiple orthogonal polynomials

In this case we have:

$$\begin{aligned} \theta (k)&=k,&\sigma ^{(1)}(k)&=\eta (k+b_1),&\sigma ^{(2)}(k)&=\eta (k+b_2). \end{aligned}$$

The weights are:

$$\begin{aligned} w^{(1)}(k)&=(b_1)_k\frac{\eta ^k}{k!},&w^{(2)}(k)&=(b_2)_k\frac{\eta ^k}{k!}, \end{aligned}$$

if \(b_1,b_2,\eta >0\), according to Lemma 2.4, it is an AT system.
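
In this case the zero-th moments are \(\rho ^{(a)}_0=\sum _{k=0}^{\infty }(b_a)_k\frac{\eta ^k}{k!}=(1-\eta )^{-b_a}\), assuming \(0<\eta <1\) so that the series converges.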

The Laguerre-Freud matrix has four non-trivial diagonals:

$$\begin{aligned} \Psi =(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\psi ^{(-2)}+\dots +\psi ^{(1)}\Lambda . \end{aligned}$$

The main diagonal and the first superdiagonal can be obtained by means of \(\Psi =\Pi ^{-1}\theta (T)\):

$$\begin{aligned} \psi ^{(1)}&=I,&\psi ^{(0)}&=\alpha -{\mathfrak {a}}_{+}D. \end{aligned}$$

Both subdiagonals are obtained through \(\Psi =\sigma ^{(1)}(T_1^{\mathsf {\scriptscriptstyle T}})\Pi _1^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}(T^{\mathsf {\scriptscriptstyle T}}_2)\Pi _2^{\mathsf {\scriptscriptstyle T}}\):

$$\begin{aligned} \psi ^{(0)}&=\eta (\alpha +{\mathfrak {a}}_{+}^2D_1^{[1]}I^{(1)}+{\mathfrak {a}}_{+}^2D_2^{[1]}I^{(2)}+b_1I^{(1)}+b_2I^{(2)}),&\psi ^{(-1)}&=\eta \beta ,&\psi ^{(-2)}&=\eta \gamma . \end{aligned}$$

Using Compatibility I we can obtain three non-trivial equations, coming from the first superdiagonal, the first subdiagonal and the main diagonal, respectively:

$$\begin{aligned} {\mathfrak {a}}_{-}\alpha -\alpha&=\frac{1}{1-\eta }(I+\eta (I^{(2)}+(b_1-b_2)(I^{(2)}-I^{(1)})),\\ {\mathfrak {a}}_{+}\gamma -\gamma&=\frac{\eta }{\eta -1}\beta (I^{(1)}+(b_1-b_2)(I^{(1)}-I^{(2)})),\\ {\mathfrak {a}}_{+}\beta -\beta&=\frac{1}{\eta -1}(\alpha -{\mathfrak {a}}_{+}D). \end{aligned}$$

Equating both expressions for \(\psi ^{(0)}\) we obtain expressions for \(\alpha \):

$$\begin{aligned} \alpha _{2n}&=\frac{1}{1-\eta }2n+\frac{\eta }{1-\eta }(n+b_1),&\alpha _{2n+1}&=\frac{1}{1-\eta }(2n+1)+\frac{\eta }{1-\eta }(n+b_2) \end{aligned}$$

From compatibility conditions we obtain:

$$\begin{aligned} \beta _{2n}&=\frac{\eta }{(1-\eta )^2}(3n^2+n(b_1+b_2-2)), \\ \beta _{2n+1}&=\frac{\eta }{(1-\eta )^2}(3n^2+n(b_1+b_2+1)+b_1),\\ \gamma _{2n}&=\frac{\eta ^2}{(1-\eta )^3}n(n+b_1-1)(n+b_1-b_2),\\ \gamma _{2n+1}&=\frac{\eta ^2}{(1-\eta )^3}n(n+b_2-1)(n+b_2-b_1). \end{aligned}$$

As we can see, the results for \(\alpha _{2n}, \alpha _{2n+1}, \beta _{2n}, \beta _{2n+1}, \gamma _{2n}\) and \(\gamma _{2n+1}\) agree with those obtained in [17].

4.3 Generalized Charlier multiple orthogonal polynomials

In this case we have that:

$$\begin{aligned} \theta (k)&=k(k+c),&\sigma ^{(1)} (k)&=\eta ^{(1)},&\sigma ^{(2)}(k)&=\eta ^{(2)}. \end{aligned}$$

The weights are

$$\begin{aligned} w^{(1)}(k)&= \frac{1}{(c+1)_k} \dfrac{{\eta ^{(1)}}^k}{k!},&w^{(2)}(k)&=\frac{1}{(c+1)_k}\dfrac{{\eta ^{(2)}}^k}{k!} , \end{aligned}$$
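
with zero-th moments given by the generalized hypergeometric functions \(\rho ^{(a)}_0=\sum _{k=0}^{\infty }\frac{({\eta ^{(a)}})^k}{(c+1)_k\,k!}={}_{0}F_{1}(;c+1;{\eta ^{(a)}})\), so that the corresponding \(\tau \)-functions are double Wronskians of \({}_{0}F_{1}\) functions.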

For \(c>-1\) and \(\eta ^{(1)},\eta ^{(2)}>0\) these weights form, according to Lemma 2.4, an AT system. Due to the degrees of the polynomials \(\sigma ^{(1)}\), \(\sigma ^{(2)}\) and \(\theta \), the Laguerre–Freud matrix has three non-trivial diagonals:

$$\begin{aligned} \Psi =\psi ^{(0)}+\psi ^{(1)}\Lambda +\psi ^{(2)}\Lambda ^2. \end{aligned}$$

By means of \(\Psi =\Pi ^{-1}\theta (T)\) we can obtain expressions for the first and second superdiagonals, and through \(\Psi =\Pi ^{\mathsf {\scriptscriptstyle T}}_1\sigma ^{(1)}(T_1^{\mathsf {\scriptscriptstyle T}})+\Pi ^{\mathsf {\scriptscriptstyle T}}_2\sigma ^{(2)}(T_2^{\mathsf {\scriptscriptstyle T}})\) the main diagonal is obtained:

$$\begin{aligned} \psi ^{(0)}&=\eta ^{(1)}I^{(1)}+\eta ^{(2)}I^{(2)},&\psi ^{(1)}&={\mathfrak {a}}_{-}\alpha +\alpha +c-{\mathfrak {a}}_{+}D,&\psi ^{(2)}&=I.\end{aligned}$$

From Compatibility I, non-trivial equations for the first superdiagonal, the main diagonal and the first subdiagonal are obtained:

$$\begin{aligned} {\mathfrak {a}}_{-}\beta -{\mathfrak {a}}_{+}\beta&=(\alpha +I-{\mathfrak {a}}_{-}\alpha ) ({\mathfrak {a}}_{-}\alpha +\alpha +c-{\mathfrak {a}}_{+}D)\\&\quad +(\eta ^{(1)}-{\eta ^{(2)}})(I^{(2)}-I^{(1)}), \\ \eta ^{(1)}I^{(1)}+{\eta ^{(2)}}I^{(2)}&=\gamma -{\mathfrak {a}}_{+}^2\gamma +\beta ({\mathfrak {a}}_{-}\alpha +\alpha +c-{\mathfrak {a}}_{+}D)\\&\quad -{\mathfrak {a}}_{+}\beta (\alpha +{\mathfrak {a}}_{+} \alpha +c-{\mathfrak {a}}_{+}^2D), \\ \beta (\eta ^{(1)}-{\eta ^{(2)}})(I^{(1)}-I^{(2)})&=\gamma ({\mathfrak {a}}_{-}^2\alpha +{\mathfrak {a}}_{-}\alpha +c-D)\\&\quad -{\mathfrak {a}}_{+}\gamma (\alpha +{\mathfrak {a}}_{+} \alpha +c-{\mathfrak {a}}_{+}^2D). \end{aligned}$$

These equations can be written element by element respectively as

$$\begin{aligned}&\beta _{n+2}-\beta _n=(\alpha _n+1-\alpha _{n+1})(\alpha _{n+1}+\alpha _n+c-n)+(-1)^{n+1}(\eta ^{(1)} -{\eta ^{(2)}}),\\&\frac{\eta ^{(1)}+{\eta ^{(2)}}}{2}+(-1)^n\frac{\eta ^{(1)}-{\eta ^{(2)}}}{2}=\gamma _{n+2} -\gamma _n+\beta _{n+1}(\alpha _{n+1}+\alpha _n+c-n)\\&\quad -\beta _n(\alpha _n+\alpha _{n-1}+c-n+1),\\&(-1)^n\beta _{n+1}(\eta ^{(1)}-{\eta ^{(2)}})=\gamma _{n+2} (\alpha _{n+2}+\alpha _{n+1}+c-n-1)\\&\quad -\gamma _{n+1}(\alpha _n+\alpha _{n-1}+c-n+1). \end{aligned}$$

From Compatibility II we obtain three non-trivial equations from the first superdiagonal, the main diagonal and the first subdiagonal, respectively:

$$\begin{aligned}&{\mathfrak {a}}_{-}\beta -{\mathfrak {a}}_{+}\beta =(\alpha +I-{\mathfrak {a}}_{-}\alpha )({\mathfrak {a}}_{-}\alpha +\alpha +c-{\mathfrak {a}}_{+}D)+(\eta ^{(1)}-{\eta ^{(2)}})(I^{(2)}-I^{(1)}), \\&\gamma -{\mathfrak {a}}_{+}^2\gamma =\eta ^{(1)}I^{(1)}+{\eta ^{(2)}} I^{(2)}+{\mathfrak {a}}_{+}\beta (\alpha +{\mathfrak {a}}_{+}\alpha +c-{\mathfrak {a}}_{+}^2D)\\&\quad -\beta ({\mathfrak {a}}_{-}\alpha +\alpha +c-{\mathfrak {a}}_{+}D), \\&\beta (\eta ^{(1)}-{\eta ^{(2)}})(I^{(1)}-I^{(2)})=\gamma ({\mathfrak {a}}_{-}^2\alpha +{\mathfrak {a}}_{-}\alpha +c-D)-{\mathfrak {a}}_{+}\gamma (\alpha +{\mathfrak {a}}_{+}\alpha +c-{\mathfrak {a}}_{+}^2D) . \end{aligned}$$

Proposition 4.1

(Laguerre–Freud equations) Laguerre–Freud Equations (68) for generalized Charlier multiple orthogonal polynomials are

$$\begin{aligned} \alpha _{n+2}&=n+1-c-\alpha _{n+1}\\&\quad +\gamma _{n+2}^{-1}\left( (-1)^{n}\beta _{n+1}(\eta ^{(1)}-{\eta ^{(2)}})+\gamma _{n+1}(\alpha _n+\alpha _{n-1}+c-n+1)\right) ,\\ \beta _{n+2}&=\beta _n+(\alpha _n+1-\alpha _{n+1})(\alpha _{n+1}+\alpha _n+c-n)+(-1)^{n+1}(\eta ^{(1)}-{\eta ^{(2)}}),\\ \gamma _{n+2}&=\gamma _n+\beta _n(\alpha _n+\alpha _{n-1}+c-n+1) -\beta _{n+1}(\alpha _{n+1}+\alpha _n+c-n)\\&\quad +\frac{\eta ^{(1)}+{\eta ^{(2)}}}{2}+(-1)^{n}\frac{\eta ^{(1)}-{\eta ^{(2)}}}{2}. \end{aligned}$$
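
Observe that, given appropriate initial conditions, these relations can be iterated in the order \(\gamma _{n+2}\rightarrow \beta _{n+2}\rightarrow \alpha _{n+2}\): the expressions for \(\beta _{n+2}\) and \(\gamma _{n+2}\) involve only coefficients of index at most \(n+1\), while the one for \(\alpha _{n+2}\) also requires the previously computed \(\gamma _{n+2}\), in agreement with the triangular structure (68).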

4.4 Generalized Meixner of second kind multiple orthogonal polynomials

In this case we have

$$\begin{aligned} \theta (k)&=k(k+c),&\sigma ^{(1)}(k)&=\eta (k+b_1),&\sigma ^{(2)}(k)&=\eta (k+b_2). \end{aligned}$$

The weights are:

$$\begin{aligned} w^{(1)}(k)&=\frac{(b_1)_k}{(c+1)_k}\frac{\eta ^k}{k!},&w^{(2)}(k)&=\frac{(b_2)_k}{(c+1)_k}\frac{\eta ^k}{k!}, \end{aligned}$$

if \(b_1, b_2, \eta >0\) and \(c>-1\), according to Lemma 2.4, it is an AT system.
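
The zero-th moments are now the Kummer confluent hypergeometric functions \(\rho ^{(a)}_0=\sum _{k=0}^{\infty }\frac{(b_a)_k}{(c+1)_k}\frac{\eta ^k}{k!}={}_{1}F_{1}(b_a;c+1;\eta )\), so that the corresponding \(\tau \)-functions are double Wronskians of \({}_{1}F_{1}\) functions.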

In this case the Laguerre–Freud matrix has five non-null diagonals:

$$\begin{aligned} \Psi =(\Lambda ^{\mathsf {\scriptscriptstyle T}})^2\psi ^{(-2)}+\dots +\psi ^{(2)}\Lambda ^2. \end{aligned}$$

The second and first superdiagonals, as well as the main diagonal, are obtained using \(\Psi =\Pi ^{-1}\theta (T)\):

$$\begin{aligned} \psi ^{(2)}&=I, \quad \psi ^{(1)}=\alpha +{\mathfrak {a}}_{-}\alpha +c-{\mathfrak {a}}_{+}D,\\ \psi ^{(0)}&={\mathfrak {a}}_{+}\beta +\beta +\alpha (\alpha +c)-{\mathfrak {a}}_{+}D({\mathfrak {a}}_{+} \alpha +\alpha +c)+{\mathfrak {a}}_{+}^2\Pi ^{[-2]}. \end{aligned}$$

Using \(\Psi =\sigma ^{(1)}(T_1^{\mathsf {\scriptscriptstyle T}})\Pi _1^{\mathsf {\scriptscriptstyle T}}+\sigma ^{(2)}(T^{\mathsf {\scriptscriptstyle T}}_2)\Pi _2^{\mathsf {\scriptscriptstyle T}}\):

$$\begin{aligned} \psi ^{(-2)}&=\eta \gamma ,&\psi ^{(-1)}&=\eta \beta ,&\psi ^{(0)}&=\eta (\alpha +I^{(1)}({\mathfrak {a}}_{+}^2D_1^{[1]}+b_1)+I^{(2)}({\mathfrak {a}}_{+}^2D_2^{[1]}+b_2)). \end{aligned}$$

From Compatibility I we get three non-trivial equations from the first subdiagonal, the main diagonal and the first superdiagonal, respectively:

$$\begin{aligned}{} & {} \eta ({\mathfrak {a}}_{+}\gamma -\gamma )+\gamma ({\mathfrak {a}}_{-}\alpha +{\mathfrak {a}}_{-}^2\alpha +c-D) -{\mathfrak {a}}_{+}\gamma ({\mathfrak {a}}_{+}\alpha +\alpha +c-{\mathfrak {a}}_{+}^2D)\\{} & {} \quad =\eta \beta \left( I^{(1)}+(b_1-b_2)(I^{(1)}-I^{(2)})\right) ,\\{} & {} \qquad \gamma -{\mathfrak {a}}_{+}^2\gamma =\eta \left( \alpha +\beta -{\mathfrak {a}}_{+}\beta +I^{(1)}({\mathfrak {a}}_{+}^2D_1^{[1]}+b_1) +I^{(2)}({\mathfrak {a}}_{+}^2D_2^{[1]}+b_2)\right) \\{} & {} \qquad +{\mathfrak {a}}_{+}\beta ({\mathfrak {a}}_{+}\alpha +\alpha +c-{\mathfrak {a}}_{+}^2D)\\{} & {} \qquad -\beta (\alpha +{\mathfrak {a}}_{-}\alpha +c-{\mathfrak {a}}_{+}D),\\{} & {} \qquad {\mathfrak {a}}_{-}\beta -{\mathfrak {a}}_{+}\beta +({\mathfrak {a}}_{-}\alpha -\alpha ) (\alpha -{\mathfrak {a}}_{-}\alpha +c-{\mathfrak {a}}_{+}D)\\{} & {} \qquad +\eta \left( \alpha -{\mathfrak {a}}_{-}\alpha +(b_1-b_2)(I^{(1)}-I^{(2)})-I^{(2)}\right) =\alpha +{\mathfrak {a}}_{-}\alpha +c-{\mathfrak {a}}_{+}D. \end{aligned}$$

The equations above can be expressed element-wise as

$$\begin{aligned}{} & {} \eta (\gamma _{n+1}-\gamma _{n+2})+\gamma _{n+2}(\alpha _{n+1}+\alpha _{n+2}+c-(n+1))-\gamma _{n+1} (\alpha _{n}+\alpha _{n+1}+c-n)\\{} & {} \quad =\eta \beta _{n+1}\left( \frac{1+(-1)^n}{2}+(-1)^n(b_1-b_2)\right) ,\\{} & {} \qquad \gamma _{n+2}-\gamma _n+\beta _{n+1}(\alpha _n+\alpha _{n+1}+c-n) -\beta _n(\alpha _{n-1}\\{} & {} \quad +\alpha _n+c-(n-1))+\eta (\beta _n-\beta _{n+1})\\{} & {} \quad =\eta \left( \alpha _n+\frac{n-1}{2}+\left( \frac{1}{2}+b_2\right) \frac{1-(-1)^n}{2}+\frac{b_1}{2}(1+(-1)^n)\right) ,\\{} & {} \qquad \alpha _n+\alpha _{n+1}+c-n=\beta _{n+2}-\beta _n+(\alpha _{n+1}-\alpha _n) (\alpha _n-\alpha _{n+1}+c-n)\\{} & {} \qquad +\eta \left( \alpha _n-\alpha _{n+1}+(-1)^n(b_1-b_2)+\frac{(-1)^n-1}{2}\right) , \end{aligned}$$

respectively.

From Compatibility II we obtain

$$\begin{aligned}{} & {} \eta ({\mathfrak {a}}_{+}\gamma -\gamma )+\gamma ({\mathfrak {a}}_{-}\alpha +{\mathfrak {a}}_{-}^2\alpha +c-D)-{\mathfrak {a}}_{+}\gamma ({\mathfrak {a}}_{+}\alpha +\alpha +c-{\mathfrak {a}}_{+}^2D)\\{} & {} \quad =\eta \beta (I^{(1)}+(b_1-b_2)(I^{(1)}-I^{(2)})),\\{} & {} \qquad \gamma -{\mathfrak {a}}_{+}^2\gamma =\eta [\alpha +\beta -{\mathfrak {a}}_{+}\beta +I^{(1)} ({\mathfrak {a}}_{+}^2D_1^{[1]}+b_1)+I^{(2)}({\mathfrak {a}}_{+}^2D_2^{[1]}+b_2)]\\{} & {} \qquad +{\mathfrak {a}}_{+}\beta ({\mathfrak {a}}_{+}\alpha +\alpha +c-{\mathfrak {a}}_{+}^2D)\\{} & {} \qquad -\beta (\alpha +{\mathfrak {a}}_{-}\alpha +c-{\mathfrak {a}}_{+}D)\\{} & {} \qquad \eta (I+{\mathfrak {a}}_{-}^2\alpha -\alpha ) =T^2_{-}\beta +{\mathfrak {a}}_{-}\beta -\beta -{\mathfrak {a}}_{+}\beta +{\mathfrak {a}}_{-}^2\alpha ({\mathfrak {a}}_{-}^2\alpha +c)-\alpha (\alpha +c)\\{} & {} \qquad -{\mathfrak {a}}_{-}D({\mathfrak {a}}_{-}^2\alpha +{\mathfrak {a}}_{-}\alpha ) +{\mathfrak {a}}_{+}D(\alpha +{\mathfrak {a}}_{+}\alpha )\\{} & {} \qquad -2cI+\pi ^{[-2]}-{\mathfrak {a}}_{+}^2\pi ^{[-2]}. \end{aligned}$$

As we can see, the first and second equations already appeared in Compatibility I.

Proposition 4.2

(Laguerre–Freud equations) Laguerre–Freud equations (68) for generalized Meixner II multiple orthogonal polynomials are

$$\begin{aligned} \alpha _{n+2}&=n+1-c-\alpha _{n+1}+\gamma _{n+2}^{-1}\left( \gamma _{n+1} \left( \alpha _n+\alpha _{n+1}+c-n\right) +\eta (\gamma _{n+2}-\gamma _{n+1})\right. \\&\quad \left. +\eta \beta _{n+1}\left( \frac{1+(-1)^n}{2}+(-1)^{n}(b_1-b_2)\right) \right) ,\\ \beta _{n+2}&= \beta _n+\alpha _n+\alpha _{n+1}+c-n-(\alpha _{n+1}-\alpha _n)(\alpha _n-\alpha _{n+1}+c-n)\\&\quad -\eta \Big (\alpha _n-\alpha _{n+1}+(-1)^n(b_1-b_2)+\frac{(-1)^n-1}{2}\Big ),\\ \gamma _{n+2}&= \gamma _n+\beta _n(\alpha _{n-1}+\alpha _n+c-n+1)-\beta _{n+1}(\alpha _n+\alpha _{n+1}+c-n) -\eta (\beta _n-\beta _{n+1})\\&\quad +\eta \left( \alpha _n+\frac{n-1}{2}+\left( \frac{1}{2}+b_2\right) \frac{1-(-1)^n}{2} +\frac{b_1}{2}(1+(-1)^n)\right) . \end{aligned}$$

5 Conclusions and outlook

Adler and van Moerbeke extensively utilized the Gauss–Borel factorization of the moment matrix in their studies of integrable systems and orthogonal polynomials [1,2,3]. We have also employed these ideas in various contexts, including CMV orthogonal polynomials, matrix orthogonal polynomials, multiple orthogonal polynomials, and multivariate orthogonal polynomials [5, 7,8,9,10,11,12,13,14,15,16]. For a comprehensive overview, refer to [53].

In a recent work [55], we applied this approach to investigate the implications of the Pearson equation on the moment matrix and Jacobi matrices. To describe these implications, we introduced a new banded matrix called the Laguerre–Freud structure matrix, which encodes the Laguerre–Freud relations for the recurrence coefficients. We also found that the moments of the weight, and with them the squared norms of the orthogonal polynomials, are determined by generalized hypergeometric functions, and that the contiguous relations satisfied by these functions lead to a discrete Toda hierarchy known as the Nijhoff–Capel equation [59].

In [54], we further examined the role of Christoffel and Geronimus transformations in describing the aforementioned contiguous relations and used Geronimus–Christoffel transformations to characterize shifts in the spectral independent variable of the orthogonal polynomials. Building on this program, we delved deeper into the study of discrete semiclassical cases, discovering Laguerre–Freud relations for the recursion coefficients of three types of discrete orthogonal polynomials: generalized Charlier and generalized Meixner [37], and generalized Hahn of type I [36].

In [35], we completed this program by obtaining Laguerre-Freud structure matrices and equations for three families of hypergeometric discrete orthogonal polynomials.

In this paper, we extend the previous lines of research to multiple orthogonal polynomials on the step line with two weights. We derive hypergeometric \(\tau \)-function representations for these multiple orthogonal polynomials and introduce a third-order integrable nonlinear Toda-type equation for these \(\tau \)-functions. Additionally, we discover systems of nonlinear discrete Nijhoff–Capel Toda-type equations satisfied by these objects. Finally, we apply the Laguerre–Freud matrix structure and compatibilities to derive Laguerre–Freud equations for the multiple generalized Charlier and multiple generalized Meixner of the second type.

As a future outlook, we highlight the following five lines of research:

  1. (i)

    Finding larger families of hypergeometric AT systems.

  2. (ii)

    Analyzing the case of p weights with \(p>2\).

  3. (iii)

    Studying the connections between the topics discussed in this paper and the transformations presented in [21, 22] and quadrilateral lattices [31, 56].

  4. (iv)

    Exploring the relationship with multiple versions of discrete and continuous Painlevé equations involving the coefficients \(\alpha _n\), \(\beta _n\), and \(\gamma _n\). The results presented in this paper constitute progress in this direction, as identifying this connection is a nontrivial problem that can be further investigated in future works.

  5. (v)

    Discussing higher-order integrable flows and finding applications to multicomponent KP-type equations.