1 Introduction

In this paper, we will study the existence of solutions to the system of integral equations

$$\begin{aligned} \left\{ \begin{aligned} u(t)=&\displaystyle \int _{a}^{b} G_{1}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&+\displaystyle \int _{a}^{b} G_{2}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),\\&\quad v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s,\;\; t\in I:=[a,b],\\ v(t)=&\displaystyle \int _{a}^{b} G_{3}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&+\displaystyle \int _{a}^{b} G_{4}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s,\;\; t\in I, \end{aligned}\right. \end{aligned}$$
(1)

where \(n, m\in {\mathbb N}\), \(G_i:I\times I\rightarrow {\mathbb R}\), \(i=1, 2, 3, 4\), are the kernel functions satisfying certain regularity conditions such that \(G_r\in W^{n-1,1}(I\times I)\), \(r=1, 2\), \(G_l\in W^{m-1,1}(I\times I)\), \(l=3, 4\), and \(f_i:I\times {\mathbb R}^{n+m}\rightarrow [0,\infty )\) are \(L^{1}\)-Carathéodory functions.

We emphasize that this type of problem resembles the Hammerstein equations that have been studied by several authors with regard to the existence, non-existence, and multiplicity of solutions, as well as applications of integral equations to real phenomena (see [1, 4–6, 8–10, 12, 16] and the references therein).

The existence and multiplicity of solutions of nonlinear differential systems with separated variables subject to boundary conditions can be recast as the existence and multiplicity of solutions of a system of integral equations. In particular, consider the following nonlinear problem with linear separated boundary conditions

$$\begin{aligned} \left\{ \begin{aligned}&u^{(n)}(t)+a_{1}(t)\, u^{(n-1)}(t)+\dots +a_{n}(t)\, u(t)\\&\quad =f_{1}(t,u(t),\dots , u^{(n-1)}(t), v(t),\dots , v^{(m-1)}(t)),\quad t\in I,\\&v^{(m)}(t)+b_{1}(t)\, v^{(m-1)}(t)+\dots +b_{m}(t)\, v(t)\\&\quad =f_{2}(t,u(t),\dots , u^{(n-1)}(t), v(t),\dots , v^{(m-1)}(t)),\quad t\in I,\\&B_{i}\left( u\right) =h_{i},\quad i=1,\ldots ,n,\\&B_{j}\left( v\right) =q_{j},\quad j=1,\ldots ,m, \end{aligned} \right. \end{aligned}$$
(2)

where \(a_{k}\), \(b_{r}\) are continuous functions for all \(k=1,\ldots ,n\), \(r=1,\ldots ,m\), \(h_{i}, q_{j}\in \mathbb {R}\) for all \(i=1,\ldots ,n\), \(j=1,\ldots ,m\), \(f_{l}:I\times {\mathbb R}^{n+m}\rightarrow {\mathbb R}\), \(l=1, 2\), satisfy certain asymptotic conditions, and the \(B_i\) (and analogously the \(B_j\)) cover the general two-point linear boundary conditions, i.e.,

$$\begin{aligned} B_{i}\left( u\right) =\displaystyle \sum _{j=0}^{n-1} \left( \alpha _{j}^{i} u^{\left( j\right) }\left( a\right) +\beta _{j}^{i} u^{\left( j\right) }\left( b\right) \right) ,\quad i=1,\ldots ,n, \end{aligned}$$

with \(\alpha _{j}^{i}\) and \(\beta _{j}^{i}\) real constants for all \(i=1,\ldots ,n\), \(j=0,\ldots ,n-1\). Then its equivalent integral formulation consists of studying the existence and multiplicity of fixed points of the integral system

$$\begin{aligned} \left\{ \begin{aligned} u(t)&=\displaystyle \int _{a}^{b} G_1(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s), v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s,\\ v(t)&=\displaystyle \int _{a}^{b} G_2(t,s)\,f_{2}(s,u(s),\dots ,u^{(n-1)}(s), v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s, \end{aligned} \right. \end{aligned}$$

where \(G_p\), \(p=1, 2\), denote the Green’s functions related to problem (2) satisfying some appropriate regularity properties. In particular, \(G_1\) will be the Green’s function related to problem

$$\begin{aligned} \left\{ \begin{aligned} u^{(n)}(t)+a_{1}(t)\, u^{(n-1)}(t)+\dots +a_{n}(t)\, u(t)&=0,\quad t\in I,\\ B_{i}\left( u\right)&=h_{i},\quad i=1,\ldots ,n,\\ \end{aligned} \right. \end{aligned}$$

and \(G_2\) will be the Green’s function related to

$$\begin{aligned} \left\{ \begin{aligned}&v^{(m)}(t)+b_{1}(t)\, v^{(m-1)}(t)+\dots +b_{m}(t)\, v(t)=0,\quad t\in I,\\&B_{j}\left( v\right) =q_{j},\quad j=1,\ldots ,m. \end{aligned} \right. \end{aligned}$$

In a more general framework, problem (1) derives from a coupled system of two linear differential equations, one of order n and the other of order m, both of them depending on u and v, in which the nonhomogeneous parts \(f_i\), \(i=1, 2\), are nonlinear functions, subject to \(n+m\) non-local linear boundary conditions. That is, fixed points of problem (1) correspond to solutions of

$$\begin{aligned} \left\{ \begin{aligned} L_1(u,v)(t)&=\sigma _1(t),\quad t\in I,\\ L_2(u,v)(t)&=\sigma _2(t),\\ B_{i}(u,v)&=\delta _{i}\, C_i(u,v),\quad i=1,\ldots ,n+m, \end{aligned} \right. \end{aligned}$$
(3)

where

$$\begin{aligned} \begin{aligned} L_1(u,v)(t):&=u^{(n)}(t)+\displaystyle \sum _{k=0}^{n-1}a_{k}\, u^{(k)}(t)+\displaystyle \sum _{i=0}^{m-1}b_{i}\, v^{(i)}(t),\\ L_2(u,v)(t):&=v^{(m)}(t)+\displaystyle \sum _{k=0}^{n-1}c_{k}\, u^{(k)}(t)+\displaystyle \sum _{i=0}^{m-1}d_{i}\, v^{(i)}(t). \end{aligned} \end{aligned}$$
(4)

Here \(\sigma _{1}\) and \(\sigma _{2}\) are continuous functions on I, \(\delta _{i}\in {\mathbb R}\) for all \(i=1,\ldots ,n+m\), \(a_{k}, c_{k}\in {\mathbb R}\) for all \(k=0,\ldots ,n-1\) and \(b_{i}, d_{i}\in \mathbb {R}\) for all \(i=0,\ldots ,m-1\).

Moreover, \(C_i:C^{n}(I)\times C^{m}(I)\rightarrow \mathbb {R}\) are linear continuous operators and \(B_i\) covers the general two-point linear boundary conditions, i.e.,

$$\begin{aligned} B_{i}(u,v)=&\,\displaystyle \sum _{j=0}^{n-1} \left( \alpha _{j}^{i}\, u^{(j)}(a)+\beta _{j}^{i}\, u^{(j)}(b) \right) \\&+\displaystyle \sum _{l=0}^{m-1} \left( \tilde{\alpha }_{l}^{i}\, v^{(l)}(a)+\tilde{\beta }_{l}^{i}\, v^{(l)}(b) \right) ,\quad i=1,\ldots ,n+m, \end{aligned}$$

where \(\alpha _{j}^{i}\), \(\beta _{j}^{i}\), \(\tilde{\alpha }_{l}^{i}\) and \(\tilde{\beta }_{l}^{i}\) are real constants for all \(j=0,\ldots ,n-1\), \(l=0,\ldots ,m-1\) and \(i=1,\ldots ,n+m\). Note that in this case (contrary to problem (2)), the left-hand side of the equations (that is, the operators \(L_1\) and \(L_2\)) may depend on both u and v and their derivatives.

In [15], the authors studied the existence of solutions to the following coupled system of integral equations of the Hammerstein type

$$\begin{aligned} \left\{ \begin{aligned} u(t)&=\displaystyle \int _{0}^{1} k_{1}(t,s)\,g_1(s)\,f_{1}(s,u(s),\dots ,u^{(m_1)}(s),v(s),\dots ,v^{(n_1)}(s))\,\textrm{d}s,\\ v(t)&=\displaystyle \int _{0}^{1} k_{2}(t,s)\,g_2(s)\,f_{2}(s,u(s),\dots ,u^{(m_2)}(s),v(s),\dots ,v^{(n_2)}(s))\,\textrm{d}s, \end{aligned}\right. \end{aligned}$$
(5)

where \(k_i:[0,1]^{2}\rightarrow {\mathbb R}\), \(i=1, 2\), are the kernel functions such that \(k_i\in W^{r_i,1}([0,1]^{2})\), \(r_1=\max \{m_1,m_2\}\), \(r_2=\max \{n_1,n_2\}\), with \(m_i, n_i\ge 0\), \(g_i\in L^{1}([0,1])\) with \(g_i(t)\ge 0\) for a.e. \(t\in [0,1]\), and \(f_i:[0,1]\times {\mathbb R}^{m_i+n_i+2} \rightarrow [0,\infty )\) are \(L^{\infty }\)-Carathéodory functions.

The solutions of (5) were obtained via the Krasnoselskii–Guo fixed-point theorem. There, the kernels \(k_1\) and \(k_2\), together with their partial derivatives with respect to the first variable up to certain orders, may be discontinuous and change sign on the square \([0,1]\times [0,1]\), since they are only required to be positive on subintervals of [0, 1] that may be degenerate.

In [11], the authors studied the existence and the multiplicity of positive solutions to the nonlinear differential system with perturbed integral boundary conditions

$$\begin{aligned} \left\{ \begin{aligned} x''(t)+f_1(t, x(t), y(t))&=0,\quad t\in (0,1),\\ y''(t)+f_2(t, x(t), y(t))&=0,\quad t\in (0,1),\\ x(0)=0,\quad x(1)&=\lambda \displaystyle \int _{0}^{1} x(s)\,\textrm{d}s,\\ y(0)=0,\quad y(1)&=\lambda \displaystyle \int _{0}^{1} y(s)\,\textrm{d}s, \end{aligned} \right. \end{aligned}$$

where \(\lambda \in (0,2)\) and \(f_1, f_2:(0,1)\times (0,\infty )\times (0,\infty )\rightarrow [0,\infty )\) are continuous functions, which may admit a singularity at the origin, that is,

$$\lim \limits _{(x,y)\rightarrow (0,0)} f_i(t,x,y)=+\infty \quad \text {uniformly in } \;\;t,\;\; i=1, 2. $$

This problem can be seen as a particular case of Hammerstein-type equations. By constructing two classes of cones, and based on the Leray–Schauder Theorem and a well-known fixed-point theorem in cones, the existence of at least one positive solution was obtained.

In [14], Minhós and Coxe investigated the fourth-order coupled system

$$\begin{aligned} \left\{ \begin{aligned} u^{(4)}(t)&=f(t,u(t),u'(t),u''(t),u'''(t),v(t),v'(t),v''(t),v'''(t)),\quad t\in [0,1],\\ v^{(4)}(t)&=h(t,u(t),u'(t),u''(t),u'''(t),v(t),v'(t),v''(t),v'''(t)),\quad t\in [0,1], \end{aligned} \right. \end{aligned}$$

with \(f,h: [0,1]\times {\mathbb R}^{8}\rightarrow {\mathbb R}\) being \(L^{1}\)-Carathéodory functions and satisfying the boundary conditions

$$\begin{aligned} \left\{ \begin{aligned}&u(0)=u'(0)=u''(0)=u''(1)=0,\\&v(0)=v'(0)=v''(0)=v''(1)=0. \end{aligned} \right. \end{aligned}$$

The existence of at least one solution of the previous problem was obtained by transforming it into an equivalent integral system with a certain kernel G and by constructing coupled lower and upper solutions.

Motivated by the works above, we consider problem (1) and study the existence of its solutions. The difference between our work and those mentioned above is that we deal with four Green’s functions and the variables need not be separated, whereas the previous works consider only two Green’s functions with separated variables.

It should be noted that there are problems of type (1) to which our results, based on four Green’s functions, are applicable and which can be transformed into coupled systems with separated variables and two Green’s functions, as in the previous works, by moving the linear coupling terms to the right-hand side of the equations and subtracting them from the \(f_i\), \(i=1, 2\). However, in such cases, the results of the previous works are not applicable, because either the two new Green’s functions do not satisfy the required regularity conditions or the new nonlinearities do not satisfy the required asymptotic conditions. This will be illustrated in Example 4.6.

The existence of problems of this type exhibiting the behavior just described is what makes our method more general than the previous ones and gives it an advantage over the approach that considers only two Green’s functions, since the latter is not always applicable. This shows the novelty and interest of our method.

This paper is organized in the following way: Sect. 2 contains some preliminaries needed to develop the article. In Sect. 3, we obtain the main result on existence and uniqueness of solutions to the linear problem, together with an example of application of the result. Finally, the last section deals with the existence of solutions of the nonlinear problem under certain regularity conditions on the Green’s functions and asymptotic conditions on the nonlinear parts. An example is given that illustrates our existence result, together with an example showing that, when only two Green’s functions are considered, the results of the previous works cannot be applied, as mentioned earlier.

2 Preliminaries

In this paper, we are interested in studying the linear system (3), coupled with non-local boundary conditions involving certain parameters.

In this section, we develop the basic tools that we need to deduce the existence and the uniqueness of the solution to problem (3).

Making a change of variables of the form

$$\begin{aligned} \begin{aligned} x_1&=u,\;\;x_2=x'_1,\,\ldots ,\,x_{n}=x'_{n-1},\\ y_1&=v,\;\;y_2=y'_1,\,\ldots ,\,y_{m}=y'_{m-1}, \end{aligned} \end{aligned}$$

in system (3), we obtain the following equivalent \((n+m)\)-dimensional first-order linear differential system

$$\begin{aligned} z'(t)=A\,z(t)+f(t),\;\; t\in I, \end{aligned}$$
(6)

together with the boundary conditions

$$\begin{aligned} B_i(z)=\delta _i\,C_i(z),\quad i=1,\ldots ,n+m, \end{aligned}$$
(7)

where

$$\begin{aligned} z(t)=\begin{pmatrix} x_1(t)\\ \vdots \\ x_{n-1}(t)\\ x_n(t)\\ y_1(t)\\ \vdots \\ y_{m-1}(t)\\ y_{m}(t) \end{pmatrix},\quad A=\left( \begin{array}{c|c} A_1 &{} A_2 \\ \hline A_3 &{} A_4 \end{array}\right) ,\quad f(t)=\begin{pmatrix} 0\\ \vdots \\ 0\\ \sigma _{1}(t)\\ 0\\ \vdots \\ 0\\ \sigma _{2}(t) \end{pmatrix},\, t\in I, \end{aligned}$$

with

$$\begin{aligned} A_1= & {} \begin{pmatrix} 0 &{} 1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} 1 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 1 \\ -a_{0} &{} -a_{1} &{}-a_{2} &{} \cdots &{} -a_{n-1} \end{pmatrix},\quad A_{2}=\begin{pmatrix} 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ -b_{0} &{} -b_{1} &{}-b_{2} &{} \cdots &{} -b_{m-1} \end{pmatrix},\\ A_3= & {} \begin{pmatrix} 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ -c_{0} &{} -c_{1} &{}-c_{2} &{} \cdots &{} -c_{n-1} \end{pmatrix} \quad \text {and}\quad A_{4}=\begin{pmatrix} 0 &{} 1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} 1 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 1\\ -d_{0} &{} -d_{1} &{}-d_{2} &{} \cdots &{} -d_{m-1} \end{pmatrix}. \end{aligned}$$
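For readers who wish to experiment numerically, the following is a minimal sketch (not part of the original formulation, and relying on numpy as an assumption of this illustration) of how the block matrix A of system (6) can be assembled from the constant coefficients in (4); the helper names companion_block, coupling_block, and build_A are illustrative choices rather than notation from the paper. The test data correspond to Example 3.5 below, where \(u''-v=\sigma _1\) and \(v'-u'=\sigma _2\).

```python
import numpy as np

def companion_block(coeffs):
    """k x k companion-type block: ones on the superdiagonal, -coeffs in the last row."""
    k = len(coeffs)
    B = np.zeros((k, k))
    B[:-1, 1:] = np.eye(k - 1)
    B[-1, :] = -np.asarray(coeffs, dtype=float)
    return B

def coupling_block(rows, coeffs):
    """rows x len(coeffs) block that is zero except for -coeffs in its last row."""
    B = np.zeros((rows, len(coeffs)))
    B[-1, :] = -np.asarray(coeffs, dtype=float)
    return B

def build_A(a, b, c, d):
    """Assemble A = [[A1, A2], [A3, A4]] of system (6); a, c have length n and b, d length m."""
    n, m = len(a), len(b)
    return np.block([[companion_block(a), coupling_block(n, b)],
                     [coupling_block(m, c), companion_block(d)]])

# Example 3.5 below: u'' - v = sigma_1, v' - u' = sigma_2, i.e. n = 2, m = 1,
# a = (0, 0), b = (-1,), c = (0, -1), d = (0,)
print(build_A([0, 0], [-1], [0, -1], [0]))
```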

Remark 2.1

Notice that the boundary conditions \(B_i\), \(i=1,\ldots ,n+m\), depend not only on u and v, but also on \(u',\ldots ,u^{(n-1)}\) and \(v',\ldots ,v^{(m-1)}\). Therefore, we can use the notation \(B_{i}(u,\dots ,u^{(n-1)},v,\dots ,v^{(m-1)})\equiv B_{i}(u,v)\), \(i=1,\ldots , n+m\). The same holds for the operators \(C_i\), i.e., \(C_{i}(u,\dots ,u^{(n-1)},v,\dots ,v^{(m-1)})\equiv C_{i}(u,v)\), \(i=1,\ldots , n+m\).

Remark 2.2

We should note that once system (3) is transformed into problem (6)–(7), the boundary conditions \(B_i\), \(i=1,\ldots ,n+m\), become of the form \(E\, z(a)+F\, z(b)\) with \(E, F\in \mathcal {M}_{(n+m)\times (n+m)} \) given by

$$\begin{aligned}&E=\begin{pmatrix} \alpha _{0}^{1} &{}\quad \cdots &{}\quad \alpha _{n-1}^{1} &{}\quad \tilde{\alpha }_{0}^{1}&{}\quad \cdots &{}\quad \tilde{\alpha }_{m-1}^{1}\\ \vdots &{}\quad \ddots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \alpha _{0}^{n+m} &{}\quad \cdots &{}\quad \alpha _{n-1}^{n+m} &{}\quad \tilde{\alpha }_{0}^{n+m}&{}\quad \cdots &{}\quad \tilde{\alpha }_{m-1}^{n+m} \end{pmatrix} \quad \text {and}\\&F=\begin{pmatrix} \beta _{0}^{1} &{}\quad \cdots &{}\quad \beta _{n-1}^{1} &{}\quad \tilde{\beta }_{0}^{1}&{}\quad \cdots &{}\quad \tilde{\beta }_{m-1}^{1}\\ \vdots &{}\quad \ddots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \beta _{0}^{n+m} &{}\quad \cdots &{} \quad \beta _{n-1}^{n+m} &{}\quad \tilde{\beta }_{0}^{n+m}&{}\quad \cdots &{}\quad \tilde{\beta }_{m-1}^{n+m} \end{pmatrix}. \end{aligned}$$

Let us consider the following homogeneous problem

$$\begin{aligned} z'(t)=A\,z(t),\;\; t\in I,\quad E\, z(a)+F\, z(b)=0, \end{aligned}$$
(8)

and let \(\phi :I\rightarrow \mathcal {M}_{(n+m)\times (n+m)}\) be a fundamental matrix of the system, that is, a regular (nonsingular) matrix-valued function that solves the linear matrix equation

$$\begin{aligned} \phi '(t)=A\,\phi (t),\quad t\in I. \end{aligned}$$

Next, we state a result on existence and uniqueness of solution and the characterization of the spectrum of problem (6)–(7) for \(\delta _i=0\), \(i=1,\ldots ,n+m\).

Theorem 2.3

[2, Theorem 1.2.3] Problem (6)–(7) with \(\delta _i=0\), \(i=1,\ldots , n+m\), that is,

$$\begin{aligned} z'(t)=A\,z(t)+f(t),\; t\in I,\quad B_i(z)=0,\quad i=1,\ldots ,n+m, \end{aligned}$$
(9)

has a unique solution \(z\in C(I,{\mathbb R}^{n+m})\) for any \(f\in C(I,{\mathbb R}^{n+m})\) if and only if

$$\begin{aligned} \det (M_{\phi })\ne 0, \end{aligned}$$
(10)

where \(\phi \) is any fundamental matrix of system (8) and \(M_{\phi }:=E\,\phi (a)+F\,\phi (b)\).

Remark 2.4

The matrices E and F are not unique, since we can replace E and F by \(k\,E\) and \(k\,F\) for any nonzero real number k. It is immediate to verify that condition (10) is independent of the chosen fundamental matrix. Also, condition (10) holds if and only if problem (8) has only the trivial solution.

Let \(\{u_{1},\ldots ,u_{n+m} \}\) denote the set of linearly independent solutions of \(z'(t)=A\, z(t)\) formed by the columns of \(\phi \). As a consequence

$$\begin{aligned} M_{\phi }=E\,\phi (a)+F\,\phi (b)=\begin{pmatrix} B_1(u_1)&{}\quad \cdots &{}\quad B_1(u_{n+m})\\ \vdots &{}\quad \ddots &{}\quad \vdots \\ B_{n+m}(u_1)&{}\quad \cdots &{}\quad B_{n+m}(u_{n+m}) \end{pmatrix}. \end{aligned}$$

Therefore, condition (10) is satisfied if and only if the determinant of the above matrix is not zero. In this case, problem (9) has a unique solution.

Example 2.5

We consider the following system

$$\begin{aligned} \left\{ \begin{aligned} u'(t)+\lambda \,u(t)&=\sigma _1(t),\;\; t\in [0,1],\\ v'(t)+\mu \,v(t)&=\sigma _2(t),\\ u(0)+v(1)&=\delta _{1}\, \int _{0}^{1} u(s)\,\textrm{d}s,\\ u(1)+v(0)&=\delta _{2}\, \int _{0}^{1} u(s)\,\textrm{d}s, \end{aligned} \right. \end{aligned}$$
(11)

with \(\sigma _1\), \(\sigma _2\) two continuous functions, \(\delta _{i}\in {\mathbb R}\), \(i=1,2\) and \(\lambda , \mu \in {\mathbb R}\). In this case, \(B_{1}(u,v)=u(0)+v(1)\), \(B_{2}(u,v)=u(1)+v(0)\) and \(C_{1}(u,v)=C_{2}(u,v)=\int _{0}^{1} u(s)\,\textrm{d}s\). In addition,

$$\begin{aligned} u_{1}(t)=\begin{pmatrix} e^{-\lambda t}\\ 0 \end{pmatrix},\quad u_{2}(t)=\begin{pmatrix} 0\\ e^{-\mu t} \end{pmatrix}, \end{aligned}$$

are two linearly independent solutions of

$$\begin{aligned} \left\{ \begin{aligned} u'(t)+\lambda \,u(t)&=0,\;\; t\in [0,1],\\ v'(t)+\mu \,v(t)&=0,\;\; t\in [0,1], \end{aligned} \right. \end{aligned}$$

that are the columns of the fundamental matrix \(\phi (t)=\begin{pmatrix} e^{-\lambda t}&{}0\\ 0&{}e^{-\mu t} \end{pmatrix}\). Thus, problem (11) for \(\delta _1=\delta _2=0\) has a unique solution if and only if

$$\begin{aligned} \det \begin{pmatrix} B_1(u_1)&{} B_1(u_2)\\ B_2(u_1)&{} B_2(u_2) \end{pmatrix}=\det \begin{pmatrix} 1&{}e^{-\mu }\\ e^{-\lambda }&{}1 \end{pmatrix}=1-e^{-(\lambda +\mu )}\ne 0, \end{aligned}$$

that is, \(\lambda +\mu \ne 0\).
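As a quick symbolic sanity check of this computation (an illustration only, using sympy, which is an assumption of this sketch and not a tool used in the paper), one can read off the matrices E and F of Remark 2.2 from \(B_1\) and \(B_2\) and evaluate condition (10) directly:

```python
import sympy as sp

t, lam, mu = sp.symbols('t lambda mu', real=True)
phi = sp.diag(sp.exp(-lam*t), sp.exp(-mu*t))   # fundamental matrix of the homogeneous system
E = sp.Matrix([[1, 0], [0, 1]])                # coefficients of (u(0), v(0)) in B_1, B_2
F = sp.Matrix([[0, 1], [1, 0]])                # coefficients of (u(1), v(1)) in B_1, B_2
M = E*phi.subs(t, 0) + F*phi.subs(t, 1)        # M_phi = E*phi(0) + F*phi(1)
print(sp.simplify(M.det()))                    # expected: 1 - exp(-lambda - mu)
```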

3 Solutions to the problem (3)

In this section, we will study the existence and uniqueness of solution to the non-local problem (3).

Suppose that problem (8) has only the trivial solution. We know from Theorem 2.3 that problem (9) has a unique solution for any \(f\in C(I,{\mathbb R}^{n+m})\). In such a case, this solution is given by

$$\begin{aligned} w(t)=\displaystyle \int _{a}^{b} g(t,s)\,f(s)\,\textrm{d}s, \end{aligned}$$
(12)

where g denotes the Green’s function matrix related to problem (8) and is given by the expression

$$\begin{aligned} g(t,s)=\left\{ \begin{aligned}&-\phi (t)\,M_{\phi }^{-1}\,F\,\phi (b)\,\phi ^{-1}(s)+\phi (t)\,\phi ^{-1}(s),\;\;&a\le s\le t\le b,\\&-\phi (t)\,M_{\phi }^{-1}\,F\,\phi (b)\,\phi ^{-1}(s),\;\;&a\le t< s\le b. \end{aligned} \right. \end{aligned}$$
(13)

See [2, page 15] for more details.
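Since the coefficients of (3) are constant, \(\phi (t)=e^{A(t-a)}\) is a valid choice of fundamental matrix, so formula (13) can also be evaluated numerically. The sketch below is an illustration only: the helper green_matrix is a hypothetical name, numpy/scipy are assumptions of this example, and the data are those of Example 2.5 with \(\lambda =1\), \(\mu =2\).

```python
import numpy as np
from scipy.linalg import expm

def green_matrix(A, E, F, a, b):
    """Green's matrix g(t, s) of formula (13), taking phi(t) = expm(A*(t - a))."""
    phi = lambda t: expm(A * (t - a))
    M = E @ phi(a) + F @ phi(b)                 # M_phi
    W = np.linalg.solve(M, F @ phi(b))          # M_phi^{-1} F phi(b)
    def g(t, s):
        base = -phi(t) @ W @ np.linalg.inv(phi(s))
        if s <= t:
            base = base + phi(t) @ np.linalg.inv(phi(s))
        return base
    return g

# Example 2.5 with lambda = 1, mu = 2 (so lambda + mu != 0 and (10) holds)
A = np.diag([-1.0, -2.0])
E = np.array([[1.0, 0.0], [0.0, 1.0]])
F = np.array([[0.0, 1.0], [1.0, 0.0]])
g = green_matrix(A, E, F, 0.0, 1.0)
print(g(0.3, 0.7))                              # value of the 2 x 2 Green's matrix at (0.3, 0.7)
```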

As a direct consequence of Theorem 2.3, we obtain the following lemma:

Lemma 3.1

There exists a unique Green’s function matrix g, related to Problem (8), if and only if for any \(i\in \{1, \dots ,n+m\}\), the following problem

$$\begin{aligned} \left\{ \begin{aligned} z'(t)&=A\, z(t),\quad t\in I,\\ B_{j}(z)&=0,\quad j\ne i,\\ B_{i}(z)&=1, \end{aligned} \right. \end{aligned}$$
(14)

has a unique solution, that we denote as \(z_{i}(t)\), \(t\in I\).

Now, we are in a position to prove the existence and uniqueness of solution to problem (6)–(7) under condition (10) in the following result, which generalizes the ideas and the formulas obtained for the scalar case in [3].

Theorem 3.2

Suppose that problem (8) has only the trivial solution and let g be its related Green’s function given in (13). Let \(f \in C(I,{\mathbb R}^{n+m})\), and \(\delta _{i}\in {\mathbb R},\) \(i=1,\dots , n+m\), be such that

$$\begin{aligned} \det (I_{n+m}-D)\ne 0, \end{aligned}$$
(15)

with \(I_{n+m}\) the identity matrix of order \(n+m\) and \(D=(d_{ij})_{(n+m)\times (n+m)}\in \mathcal {M}_{(n+m)\times (n+m)}\) given by

$$d_{ij}=\delta _{j}\,C_i(z_{j}), \quad i,\; j \in \{1, \ldots ,n+m\}.$$

Then problem (6)–(7) has a unique solution \(z\in C(I,{\mathbb R}^{n+m})\), given by the expression

$$\begin{aligned} z(t)=\displaystyle \int _{a}^{b} G(t,s,\delta _{1},\dots ,\delta _{n+m})\, f(s)\, \textrm{d}s, \end{aligned}$$
(16)

where

$$\begin{aligned} G(t,s,\delta _{1},\dots ,\delta _{n+m}):=g(t,s)+\displaystyle \sum _{i=1}^{n+m} \displaystyle \sum _{j=1}^{n+m} \delta _{i}\,r_{ij}\, z_{i}(t)\,h_{j}(s), \end{aligned}$$
(17)

with \(z_j\in C(I,{\mathbb R}^{n+m})\) defined in Lemma 3.1, \(R=\left( r_{ij}\right) _{(n+m)\times (n+m)}=(I_{n+m}-D)^{-1}\), and

$$\begin{aligned}{} & {} h_{j}(s):=\begin{pmatrix} C_{j}(g^{1}(\cdot ,s))&C_{j}(g^{2}(\cdot ,s))&\cdots&C_{j}(g^{n+m}(\cdot ,s)) \end{pmatrix} \in C(I,\mathcal {M}_{1\times (n+m)}),\\{} & {} \quad \quad j=1,\ldots ,n+m, \end{aligned}$$

where \(g^{k}\), \(k=1,\ldots , n+m\), denotes the k-th column of the matrix g.

Proof

Since problem (8) has only the trivial solution, we know that problem (9) has a unique solution w given by the expression (12). By Lemma 3.1, there exists a unique solution \(z_i\), \(i=1,\ldots ,n+m\), of problem (14). Thus, any solution to problem (6)–(7) has the following form

$$\begin{aligned} z(t)=w(t)+\displaystyle \sum _{i=1}^{n+m} z_{i}(t)\, \delta _{i} \, C_i(z),\quad t\in I. \end{aligned}$$
(18)

Applying the linear operator \(C_j\) to both sides of the previous equality, we obtain that

$$\begin{aligned} C_j(z)= & {} C_j(w)+C_j\left( \displaystyle \sum _{i=1}^{n+m} \delta _{i} \,z_{i}(t)\,C_i(z)\right) =C_j(w)+\displaystyle \sum _{i=1}^{n+m} \delta _{i} \, C_j(z_{i})\, C_i(z),\nonumber \\{} & {} \quad j=1,\dots ,n+m, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \begin{aligned} C_j(z) - \displaystyle \sum _{i=1}^{n+m} \delta _{i}\, C_j(z_{i})\, C_i(z) =C_j(w), \quad j=1,\dots ,n+m. \end{aligned} \end{aligned}$$

Therefore, the above equalities are equivalent to the following system:

$$\begin{aligned} (I_{n+m}-D) \, \left( \begin{array}{c} C_1(z) \\ C_2(z) \\ \vdots \\ C_{n+m}(z)\end{array}\right) = \left( \begin{array}{c} C_1(w) \\ C_2(w) \\ \vdots \\ C_{n+m}(w) \end{array}\right) . \end{aligned}$$

Since \(I_{n+m}-D\) is an invertible matrix, we have

$$\begin{aligned} C_i(z)= \displaystyle \sum _{j=1}^{n+m} r_{ij}\, C_j(w), \quad i=1,\dots ,n+m. \end{aligned}$$

Thus, equality (18) becomes

$$\begin{aligned} z(t)=&\, w(t)+ \displaystyle \sum _{i=1}^{n+m} \delta _i\, z_{i}(t) \left( \displaystyle \sum _{j=1}^{n+m} r_{ij}\, C_j(w) \right) \\ =&\, w(t)+ \displaystyle \sum _{i=1}^{n+m} \sum _{j=1}^{n+m} \delta _i\, r_{ij} \, z_{i}(t) \,C_j(w), \quad t \in I. \end{aligned}$$

Due to the fact that \(C_j\) is a continuous linear operator, we have

$$\begin{aligned} C_j(w)=C_j\left( \displaystyle \int _{a}^{b} g(\cdot ,s)\, f(s)\, \textrm{d}s\right) =\displaystyle \int _{a}^{b} C_j\left( g(\cdot ,s)\, f(s) \right) \, \textrm{d}s. \end{aligned}$$
(19)

Since

$$\begin{aligned} \begin{aligned} g(\cdot ,s)\,f(s)&=\begin{pmatrix} g^{1}(\cdot ,s)&g^{2}(\cdot ,s)&\cdots&g^{n+m}(\cdot ,s) \end{pmatrix}\,\begin{pmatrix} f_1(s) \\ f_2(s) \\ \vdots \\ f_{n+m}(s) \end{pmatrix}\\&=f_{1}(s)\,g^{1}(\cdot ,s)+\dots +f_{n+m}(s)\,g^{n+m}(\cdot ,s), \end{aligned} \end{aligned}$$

we have that

$$\begin{aligned} \begin{aligned} C_j\left( g(\cdot ,s)\, f(s) \right)&=f_{1}(s)\,C_{j}(g^{1}(\cdot ,s))+\dots +f_{n+m}(s)\,C_{j}(g^{n+m}(\cdot ,s))\\&=\begin{pmatrix} C_{j}(g^{1}(\cdot ,s))&C_{j}(g^{2}(\cdot ,s))&\cdots&C_{j}(g^{n+m}(\cdot ,s)) \end{pmatrix}\,\begin{pmatrix} f_1(s) \\ f_2(s) \\ \vdots \\ f_{n+m}(s) \end{pmatrix} \\&= h_{j}(s)\,f(s). \end{aligned} \end{aligned}$$

Substituting the previous expression into (19), we obtain that \( C_j(w)=\displaystyle \int _{a}^{b} h_{j}(s)\, f(s)\, \textrm{d}s.\) Therefore,

$$\begin{aligned} z(t)&= w(t)+ \displaystyle \sum _{i=1}^{n+m} \sum _{j=1}^{n+m} \delta _i\, r_{ij} \, z_{i}(t) \,C_j(w) \\&= \displaystyle \int _{a}^{b} g(t,s)\, f(s)\, \textrm{d}s+ \displaystyle \sum _{i=1}^{n+m} \sum _{j=1}^{n+m} \delta _i\, r_{ij} \, z_{i}(t) \, \displaystyle \int _{a}^{b} h_{j}(s)\, f(s)\, \textrm{d}s\\&=\displaystyle \int _{a}^{b} \left( g(t,s)+ \sum _{i=1}^{n+m} \sum _{j=1}^{n+m} \delta _{i} \, r_{ij} \, z_{i}(t)\, h_j(s)\right) \, f(s) \, \textrm{d}s \\&=\displaystyle \int _{a}^{b} G(t,s,\delta _{1},\dots ,\delta _{n+m})\, f(s) \, \textrm{d}s. \end{aligned}$$

Thus, we have proved the existence of at least one solution to problem (6)–(7) given by (16).

Finally, we prove the uniqueness of the solution of problem (6)–(7). Suppose that problem (6)–(7) has two different solutions \(w_1\) and \(w_2\). Thus,

$$\begin{aligned} \left\{ \begin{array}{rlll} (w_1-w_2)'(t)&{}=&{}A\,(w_1-w_2)(t),&{} t\in I,\\ B_{i}(w_1-w_2)&{}=&{} \delta _{i} \, C_i(w_1-w_2),&{} i=1,\ldots ,n+m. \end{array} \right. \end{aligned}$$
(20)

So, from (18), we know that the solution \(w_1-w_2\) is given by

$$\begin{aligned} (w_1-w_2)(t)=\displaystyle \sum _{i=1}^{n+m} \delta _{i} \, z_{i}(t) \, C_i(w_1-w_2), \quad t \in I. \end{aligned}$$

By applying the operator \(C_{j}\) to both sides of the previous equality, we obtain

$$\begin{aligned} C_j(w_1-w_2)=\displaystyle \sum _{i=1}^{n+m} \delta _{i}\, C_j(z_{i}) \, C_i(w_1-w_2), \quad j=1,\dots ,n+m, \end{aligned}$$

or, equivalently,

$$\begin{aligned} (I_{n+m}-D) \, \left( \begin{array}{c} C_1(w_1-w_2) \\ C_2(w_1-w_2) \\ \vdots \\ C_{n+m}(w_1-w_2) \end{array}\right) = \left( \begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \end{array}\right) . \end{aligned}$$

Again, since \(I_{n+m}-D\) is an invertible matrix, we infer that \(C_i(w_1-w_2)=0\) for \(i=1,\dots ,n+m\). Therefore, by (20), we have that \(w_1-w_2\) is a solution of the homogeneous problem

$$\begin{aligned} \left\{ \begin{aligned} (w_1-w_2)'(t)&=A\,(w_1-w_2)(t) ,\;\; t\in I,\\ B_{i}(w_1-w_2)&= 0,\;\; i=1,\ldots ,n+m. \end{aligned} \right. \end{aligned}$$

By hypothesis, we know that the homogeneous problem (8) has only the trivial solution, so it follows that \(w_1=w_2\) on I, and this completes the proof. \(\square \)
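To illustrate the construction of Theorem 3.2 on a concrete case, the following sketch computes the matrix D and condition (15) for Example 2.5, where \(C_1(u,v)=C_2(u,v)=\int _0^1 u(s)\,\textrm{d}s\). It again relies on sympy and on the easily checked (though not explicitly stated above) observation that \(z_j(t)=\phi (t)\,M_{\phi }^{-1}e_j\) solves (14).

```python
import sympy as sp

t = sp.symbols('t', real=True)
lam, mu = sp.symbols('lambda mu', positive=True)
d1, d2 = sp.symbols('delta1 delta2', real=True)

phi = sp.diag(sp.exp(-lam*t), sp.exp(-mu*t))
E = sp.Matrix([[1, 0], [0, 1]])
F = sp.Matrix([[0, 1], [1, 0]])
M = E*phi.subs(t, 0) + F*phi.subs(t, 1)          # M_phi

Z = phi * M.inv()                                # column j is z_j(t), since B(Z e_j) = e_j
C = lambda col: sp.integrate(col[0], (t, 0, 1))  # C_1 = C_2 = integral of the u-component
delta = [d1, d2]
D = sp.Matrix(2, 2, lambda i, j: delta[j] * C(Z[:, j]))
print(sp.simplify((sp.eye(2) - D).det()))        # condition (15) for Example 2.5
```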

In the following result, once the expression for the solution z has been obtained, we retrieve the components u and v of the solution of problem (3).

Corollary 3.3

Assume that problem (8) has only the trivial solution and let g be its related Green’s function. Let \(\sigma _{1},\sigma _2\in C(I)\), and \(\delta _{i}\in {\mathbb R},\) \(i=1,\dots , n+m\), be such that condition (15) holds.

Then system (3) has a unique solution \((u,v)\in C^{n}(I)\times C^{m}(I)\) given by

$$\begin{aligned} \left\{ \begin{aligned} u(t)=&\displaystyle \int _{a}^{b} \Big (G_{1,n}(t,s,\delta _{1},\dots ,\delta _{n+m})\,\sigma _{1}(s)\\&+G_{1,n+m}(t,s,\delta _{1},\dots ,\delta _{n+m})\,\sigma _2(s)\Big )\,\textrm{d}s,\;\; t\in I,\\ v(t)=&\displaystyle \int _{a}^{b} \Big (G_{n+1,n}(t,s,\delta _{1},\dots ,\delta _{n+m})\,\sigma _{1}(s)\\&+G_{n+1,n+m}(t,s,\delta _{1},\dots ,\delta _{n+m})\,\sigma _2(s)\Big )\,\textrm{d}s, \end{aligned}\right. \end{aligned}$$
(21)

where \(G_{i,j}\) denotes the (i, j) entry of the matrix G defined in (17).

Remark 3.4

We emphasize that results similar to those obtained above can be deduced for the general problem:

$$\begin{aligned} \left\{ \begin{aligned} L_i(u_1,\dots ,u_n)(t)&=\sigma _i(t),\quad t\in I,\quad i=1,\ldots ,n,\\ B_{p}(u_1,\dots ,u_n)&=\delta _{p}\, C_p(u_1,\dots ,u_n),\quad p=1,\ldots ,\displaystyle \sum _{i=1}^{n} r_i, \end{aligned} \right. \end{aligned}$$

where

$$\begin{aligned} L_i(u_1,\dots ,u_n)(t):=u_{i}^{(r_i)}(t)+\displaystyle \sum _{k=0}^{r_i-1}a_{k}^{i}\, u_{i}^{(k)}(t)+\displaystyle \sum _{j\ne i} \displaystyle \sum _{l=0}^{r_j-1}b_{lj}^{i}\, u_{j}^{(l)}(t). \end{aligned}$$

Here, \(\sigma _{i}\) are continuous functions on I, \(a_{k}^{i}, b_{lj}^{i}\in {\mathbb R}\), \(r_i\in {\mathbb N}\) for all \(i=1,\ldots ,n\), \(k=0,\ldots ,r_i-1\), \(l=0,\ldots ,r_j-1\) with \(j\in \{1,\ldots ,n\}\), \(j\ne i\), and \(\delta _{p} \in \mathbb {R}\) for all \(p=1,\ldots , \sum _{i=1}^{n} r_i\).

Moreover, \(C_p: \prod _{k=1}^{n} C^{r_k}(I)\rightarrow \mathbb {R}\) is a linear continuous operator and \(B_p\) covers the general two-point linear boundary conditions, i.e.,

$$\begin{aligned} B_{p}(u_1,\dots ,u_n)=\displaystyle \sum _{i=1}^{n} \displaystyle \sum _{j=0}^{r_i-1} \left( \alpha _{ji}^{p}\, u_{i}^{(j)}(a)+\beta _{ji}^{p}\, u_i^{(j)}(b) \right) ,\quad p=1,\ldots ,\displaystyle \sum _{i=1}^{n} r_i, \end{aligned}$$

with \(\alpha _{ji}^{p}\) and \(\beta _{ji}^{p}\) being real constants for all \(j=0,\ldots ,r_i-1\), \(i=1,\ldots ,n\) and \(p=1,\ldots , \sum _{i=1}^{n} r_i\).

Example 3.5

Consider the system

$$\begin{aligned} \left\{ \begin{aligned} u''(t)-v(t)&=e^{t},\;\; t\in [0,1],\\ v'(t)-u'(t)&=t,\;\; t\in [0,1],\\ u(0)&=0,\\ u'(0)&=u'(1),\\ v(0)&=0. \end{aligned} \right. \end{aligned}$$
(22)

For this system, we have that \(\sigma _1(t)=e^{t}\), \(\sigma _2(t)=t\), \(t\in [0,1]\), \(\delta _{1}=0\), \(\delta _{2}=1\), \(\delta _{3}=0\), \(B_{1}(u,v)=u(0)\), \(B_{2}(u,v)=u'(0)\), \(B_{3}(u,v)=v(0)\), \(C_{1}(u,v)=0\), \(C_{2}(u,v)=u'(1)\), and \(C_{3}(u,v)=0\).

Using the formulas in Theorem 3.2 and substituting them in the expression (17), we have

$$\begin{aligned} G(t,s)=\begin{pmatrix} 1&{} \sinh (t-s)+\frac{\cosh (s-1)\,\sinh (t)}{1-\cosh (1)} &{}\cosh (t-s)-\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)}-1\\ 0 &{} \cosh (t-s)+\frac{\cosh (s-1)\,\cosh (t)}{1-\cosh (1)} &{} \sinh (t-s)-\frac{\cosh (t)\,\sinh (s-1)}{1-\cosh (1)} \\ 0 &{}\sinh (t-s)+\frac{\cosh (s-1)\,\sinh (t)}{1-\cosh (1)} &{} \cosh (t-s)-\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)} \end{pmatrix}, \end{aligned}$$

for \(0\le s\le t\le 1\), and

$$\begin{aligned} G(t,s)=\begin{pmatrix} 0&{} \frac{\cosh (s-1)\,\sinh (t)}{1-\cosh (1)} &{}-\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)} \\ 0 &{} \frac{\cosh (s-1)\,\cosh (t)}{1-\cosh (1)} &{} -\frac{\cosh (t)\,\sinh (s-1)}{1-\cosh (1)}\\ 0 &{} \frac{\cosh (s-1)\,\sinh (t)}{1-\cosh (1)} &{} -\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)} \end{pmatrix}, \end{aligned}$$

for \(0\le t< s\le 1\).

Now, making use of the expressions (21), we have the solution of problem (22) given by

$$\begin{aligned} u(t)&=\frac{e^{-t}\,\big (e\,(4e-5)+e^{2t}\,(2+e+e^{2}\,(t-2)+t-2\,e\,t)-(e-1)^{2}\,e^{t}\,(2+t^{2})\big )}{2(e-1)^{2}},\;\; t\in [0,1],\\ v(t)&=\frac{e^{-t}\,\big (e\,(4e-5)+e^{2t}\,(2+e+e^{2}\,(t-2)+t-2\,e\,t)\big )}{2(e-1)^{2}}-1,\;\; t\in [0,1]. \end{aligned}$$
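As an independent cross-check of the closed-form expressions above, one may solve (22) numerically and compare the values with the displayed formulas at a few points. This is a sketch only; scipy.integrate.solve_bvp is an assumption of the illustration and plays no role in the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# State z = (u, u', v); system (22) reads u'' = v + e^t, v' = u' + t.
def rhs(t, z):
    u, up, v = z
    return np.vstack([up, v + np.exp(t), up + t])

def bc(za, zb):
    # boundary conditions: u(0) = 0, u'(0) = u'(1), v(0) = 0
    return np.array([za[0], za[1] - zb[1], za[2]])

mesh = np.linspace(0, 1, 11)
sol = solve_bvp(rhs, bc, mesh, np.zeros((3, mesh.size)))
print(sol.sol(0.5)[[0, 2]])   # numerical u(0.5), v(0.5), to compare with the formulas above
```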

4 Nonlinear problem

In this section, we will study the existence of a nontrivial solution to the nonlinear problem

$$\begin{aligned} \left\{ \begin{aligned} L_1(u,v)(t)&=f_1(t,u(t),\dots ,u^{(n-1)}(t),v(t),\dots ,v^{(m-1)}(t)),\quad t\in I,\\ L_2(u,v)(t)&=f_2(t,u(t),\dots ,u^{(n-1)}(t),v(t),\dots ,v^{(m-1)}(t)),\quad t\in I,\\ B_{i}(u,v)&=\delta _{i}\, C_i(u,v),\quad i=1,\ldots ,n+m, \end{aligned} \right. \end{aligned}$$
(23)

where \(f_{i}:I\times {\mathbb R}^{n+m}\rightarrow [0,\infty )\), \(i=1,2\), are \(L^{1}\)-Carathéodory functions and \(L_1\) and \(L_2\) are given in (4).

The existence of solutions is obtained by applying Krasnoselskii’s fixed-point Theorem on cones in a certain Banach space.

Throughout, we will use the notation

$$\begin{aligned} \begin{array}{llll} G_{1}(t,s)\equiv &{} G_{1,n}(t,s,\delta _{1},\dots ,\delta _{n+m}),&{} G_{2}(t,s)\equiv &{} G_{1,n+m}(t,s,\delta _{1},\dots , \delta _{n+m}),\\ G_{3}(t,s)\equiv &{} G_{n+1,n}(t,s,\delta _{1},\dots ,\delta _{n+m}),&{} G_{4}(t,s)\equiv &{} G_{n+1,n+m}(t,s,\delta _{1},\dots , \delta _{n+m}). \end{array} \end{aligned}$$

It is not difficult to verify that u and v are solutions of problem (23) if and only if they satisfy the system of integral equations (1).

In the sequel, we set out the hypotheses needed to ensure the existence of at least one nontrivial solution to problem (1). We will make use of the following conditions:

  • \((H_{1})\) The Green’s functions \(G_{r}:I\times I\rightarrow {\mathbb R}\), \(r=1,2\), are such that \(G_{r}\in W^{n-1,1}(I\times I)\).

    The \((n-1)\)-th derivative of \(G_{r}\), \(r=1, 2\), satisfies that for every \(\varepsilon >0\) and every fixed \(\mu \in I\), there exists \(\delta >0\) such that, if \(|t-\mu |<\delta \), then

    $$\begin{aligned} \Bigg | \frac{\partial ^{n-1} G_{r}}{\partial t^{n-1}}(t,s)-\frac{\partial ^{n-1} G_{r}}{\partial t^{n-1}}(\mu ,s) \Bigg |<\varepsilon , \end{aligned}$$

    for a.e. \(s<\min \{t,\mu \}\) and for a.e. \(s>\max \{t,\mu \}\).

    Moreover, if \(n\ge 2\), then for \(i=0,\ldots ,n-1\), for every \(\varepsilon >0\) and every fixed \(\mu \in I\), there exists \(\delta >0\) such that \(|t-\mu |<\delta \) implies that

    $$\begin{aligned} \Bigg | \frac{\partial ^{i} G_{r}}{\partial t^{i}}(t,s)-\frac{\partial ^{i} G_{r}}{\partial t^{i}}(\mu ,s) \Bigg |<\varepsilon ,\;\; \text {for a.e.}\;\; s\in I. \end{aligned}$$

    The Green’s functions \(G_{l}:I\times I\rightarrow {\mathbb R}\), \(l=3,4\), are such that \(G_{l}\in W^{m-1,1}(I\times I)\).

    The \((m-1)\)-th derivative of \(G_{l}\), \(l=3, 4\), satisfies that for every \(\varepsilon >0\) and every fixed \(\mu \in I\), there exists \(\delta >0\) such that, if \(|t-\mu |<\delta \), then

    $$\begin{aligned} \Bigg | \frac{\partial ^{m-1} G_{l}}{\partial t^{m-1}}(t,s)-\frac{\partial ^{m-1} G_{l}}{\partial t^{m-1}}(\mu ,s) \Bigg |<\varepsilon , \end{aligned}$$

    for a.e. \(s<\min \{t,\mu \}\) and for a.e. \(s>\max \{t,\mu \}\).

    Moreover, if \(m\ge 2\), then for \(j=0,\ldots ,m-1\), for every \(\varepsilon >0\) and every fixed \(\mu \in I\), there exists \(\delta >0\) such that \(|t-\mu |<\delta \) implies that

    $$\begin{aligned} \Bigg | \frac{\partial ^{j} G_{l}}{\partial t^{j}}(t,s)-\frac{\partial ^{j} G_{l}}{\partial t^{j}}(\mu ,s) \Bigg |<\varepsilon ,\;\; \text {for a.e.}\;\; s\in I. \end{aligned}$$
  • \((H_{2})\) For \(r=1,2\), and for each \(i\in I_{1}\), \(I_{1} \subset J_{1}:=\{0,\ldots ,n-1\}\), \(I_{1}\ne \emptyset \), there exists a subinterval \([m_{i}^{r},n_{i}^{r}] \subset I\) such that

    $$\begin{aligned} \frac{\partial ^{i} G_{r}}{ \partial t^{i}}(t,s) \ge 0,\;\; \text {for all} \;\; t\in [m_{i}^{r},n_{i}^{r}],\;\; s\in I. \end{aligned}$$

    For \(l=3,4\), and for each \(j\in I_{2}\), \(I_{2} \subset J_{2}:=\{0,\ldots ,m-1\}\), \(I_{2}\ne \emptyset \), there exists a subinterval \([m_{j}^{l},n_{j}^{l}] \subset I\) such that

    $$\begin{aligned} \frac{\partial ^{j} G_{l}}{ \partial t^{j}}(t,s) \ge 0,\;\; \text {for all} \;\; t\in [m_{j}^{l},n_{j}^{l}],\;\; s\in I. \end{aligned}$$

    We also admit that these intervals could be degenerate, that is, it is possible that \(m_{i}^{r}=n_{i}^{r}\) and \(m_{j}^{l}=n_{j}^{l}\).

  • \((H_{3})\) For \(r=1,2\) and for each \(i\in J_{1}\), there exist positive functions \(h_{i}^{r}\in L^{1}(I)\) such that

    $$\begin{aligned} \Bigg |\frac{\partial ^{i} G_{r}}{\partial t^{i}}(t,s)\Bigg |\le h_{i}^{r}(s)\;\; \text {for all}\;\;t\in I\;\;\text {and a.e.}\;\; s\in I. \end{aligned}$$

    For \(l=3,4\) and for each \(j\in J_{2}\), there exist positive functions \(h_{j}^{l}\in L^{1}(I)\) such that

    $$\begin{aligned} \Bigg |\frac{\partial ^{j} G_{l}}{\partial t^{j}}(t,s)\Bigg |\le h_{j}^{l}(s)\;\; \text {for all}\;\;t\in I\;\;\text {and a.e.}\;\; s\in I. \end{aligned}$$
  • \((H_{4})\) For \(r=1,2\), and for each \(i\in K_{1}\), \(K_{1} \subset I_{1}\), \(K_{1}\ne \emptyset \), there exist subintervals \([a_{i}^{r},b_{i}^{r}] \subset [m_{i}^{r},n_{i}^{r}]\) and \([c_{i}^{r},d_{i}^{r}]\subset I\), nonnegative functions \(\phi _{i}^{r}:I\rightarrow [0,\infty )\) and \(\xi _{i}^{r}\in (0,1)\) such that

    $$\begin{aligned}{} & {} \Bigg | \frac{\partial ^{i} G_{r}}{ \partial t^{i}}(t,s) \Bigg | \le \phi _{i}^{r}(s),\;\; \text {for all} \;\; t\in [c_{i}^{r},d_{i}^{r}]\;\;\text {and a.e.}\;\; s\in I, \\{} & {} \frac{\partial ^{i} G_{r}}{ \partial t^{i}}(t,s) \ge \xi _{i}^{r}\, \phi _{i}^{r}(s),\;\; \text {for all} \;\; t\in [a_{i}^{r},b_{i}^{r}]\;\;\text {and a.e.}\;\; s\in I. \end{aligned}$$

    For \(l=3,4\), and for each \(j\in K_{2}\), \(K_{2} \subset I_{2}\), \(K_{2}\ne \emptyset \), there exist subintervals \([a_{j}^{l},b_{j}^{l}] \subset [m_{j}^{l},n_{j}^{l}]\) and \([c_{j}^{l},d_{j}^{l}]\subset I\), nonnegative functions \(\phi _{j}^{l}:I\rightarrow [0,\infty )\) and \(\xi _{j}^{l}\in (0,1)\) such that

    $$\begin{aligned}{} & {} \Bigg | \frac{\partial ^{j} G_{l}}{ \partial t^{j}}(t,s) \Bigg | \le \phi _{j}^{l}(s),\;\; \text {for all} \;\; t\in [c_{j}^{l},d_{j}^{l}]\;\;\text {and a.e.}\;\; s\in I, \\{} & {} \frac{\partial ^{j} G_{l}}{ \partial t^{j}}(t,s) \ge \xi _{j}^{l}\, \phi _{j}^{l}(s),\;\; \text {for all} \;\; t\in [a_{j}^{l},b_{j}^{l}]\;\;\text {and a.e.}\;\; s\in I. \end{aligned}$$

    Moreover, one of the following conditions must be met:

    • \((H_{4}^{1})\) There exists some \(i_{0}\in K_{1}\) such that

      $$\begin{aligned} S_{1}:=[a_{i_{0}}^{1},b_{i_{0}}^{1}]\cap [a_{i_{0}}^{2},b_{i_{0}}^{2}]\cap [c_{i_{0}}^{1},d_{i_{0}}^{1}]\cap [c_{i_{0}}^{2},d_{i_{0}}^{2}]\ne \emptyset \end{aligned}$$

      and \([a_{i_0}^{12},b_{i_0}^{12}]:=[a_{i_{0}}^{1},b_{i_{0}}^{1}]\cap [a_{i_{0}}^{2},b_{i_{0}}^{2}]\) is a non-degenerate interval.

    • \((H_{4}^{2})\) There exists some \(j_{0}\in K_{2}\) such that

      $$\begin{aligned} S_{2}:=[a_{j_{0}}^{3},b_{j_{0}}^{3}]\cap [a_{j_{0}}^{4},b_{j_{0}}^{4}]\cap [c_{j_{0}}^{3},d_{j_{0}}^{3}]\cap [c_{j_{0}}^{4},d_{j_{0}}^{4}]\ne \emptyset \end{aligned}$$

      and \([a_{j_0}^{34},b_{j_0}^{34}]:=[a_{j_{0}}^{3},b_{j_{0}}^{3}]\cap [a_{j_{0}}^{4},b_{j_{0}}^{4}]\) is a non-degenerate interval.

  • \((H_{5})\) There exists \(i_{0}\in I_{1}\) such that either \([m_{i_{0}}^{1},n_{i_{0}}^{1}]\cap [m_{i_{0}}^{2},n_{i_{0}}^{2}]\equiv I\) or \([c_{i_{0}}^{1},d_{i_{0}}^{1}]\cap [c_{i_{0}}^{2},d_{i_{0}}^{2}]\equiv I\) and moreover, \(\{0,\ldots ,i_{0} \}\subset J_{1}\).

    There exists \(j_{0}\in I_{2}\) such that either \([m_{j_{0}}^{3},n_{j_{0}}^{3}]\cap [m_{j_{0}}^{4},n_{j_{0}}^{4}]\equiv I\) or \([c_{j_{0}}^{3},d_{j_{0}}^{3}]\cap [c_{j_{0}}^{4},d_{j_{0}}^{4}]\equiv I\) and moreover, \(\{0,\ldots ,j_{0} \}\subset J_{2}\).

  • \((H_{6})\) The functions \(f_{i}:I\times {\mathbb R}^{n+m}\rightarrow [0,\infty )\), \(i=1,2\) are \(L^{1}(I)\)-Carathéodory functions, that is,

    • \(f_{i}(\cdot ,x_1,\dots ,x_{n},y_{1},\dots ,y_{m})\) is measurable for each fixed \((x_1,\dots ,x_{n},y_{1},\dots ,y_{m})\).

    • \(f_{i}(t,\cdot ,\dots ,\cdot )\) is continuous in the last \(n+m\) variables for a.e. \(t\in I\).

    • For each \(R>0\), there exist \(\mu _{R}^{1}, \mu _{R}^{2}\in L^{1}(I)\) such that

      $$\begin{aligned} f_{i}(t,x_1,\dots ,x_n,y_1,\dots ,y_m)\le \mu _{R}^{i}(t), \end{aligned}$$

      for all \((x_1,\dots ,x_{n},y_1,\dots ,y_{m})\in (-R,R)^{n+m},\;\; \text {a.e.}\;\; t\in I\), and \(i=1, 2\).

  • \((H_{7})\) For each \(r=1,2\) and \(i\in J_{1} \), we have that \(h_{i}^{r}\,\mu _{R}^{1}\in L^{1}(I)\) for every \(R>0\).

    For each \(l=3,4\) and \(j\in J_{2} \), we have that \(h_{j}^{l}\,\mu _{R}^{2}\in L^{1}(I)\) for every \(R>0\).

  • \((H_{8})\) For each \(i=1,2\), we have

    $$\begin{aligned}{} & {} \limsup \limits _{\min \{|x_1|,\dots ,|x_n|,|y_1|,\dots ,|y_m|\}\rightarrow \infty } \max _{t\in I} \frac{f_{i}(t,x_1,\dots ,x_{n},y_{1},\dots ,y_{m})}{|x_1|+\dots +|x_n|+|y_1|+\dots +|y_{m}|}=0, \\{} & {} \liminf \limits _{|x_1|,\dots ,|x_n|,|y_1|,\dots ,|y_m|\rightarrow 0} \min _{t\in I} \frac{f_{i}(t,x_1,\dots ,x_{n},y_{1},\dots ,y_{m})}{|x_1|+\dots +|x_n|+|y_1|+\dots +|y_{m}|}=+\infty . \end{aligned}$$

Let us consider the Banach space \(X:=C^{n-1}(I)\times C^{m-1}(I)\) equipped with the norm

$$\begin{aligned} \Vert (u,v)\Vert _{X}=\max \{ \Vert u\Vert _{C^{n-1}(I)},\Vert v\Vert _{C^{m-1}(I)} \}, \end{aligned}$$

where \( \Vert u\Vert _{C^{n-1}(I)}=\max \{\Vert u^{(i)}\Vert _{\infty },\;\; i\in J_{1}\}\) and \( \Vert v\Vert _{C^{m-1}(I)}=\max \{\Vert v^{(j)}\Vert _{\infty }, j\in J_{2}\}\).

Having stated the conditions needed for the development of this section, we define the following cones:

$$\begin{aligned} H_{1}=\left\{ u\in C^{n-1}(I):\;\; \begin{aligned}&u^{(i)}(t)\ge 0,\;\; t\in [m_{i}^{1},n_{i}^{1}]\cap [m_{i}^{2},n_{i}^{2}],\;\; i\in I_{1},\\&\min _{t\in [a_{i}^{1},b_{i}^{1}]\cap [a_{i}^{2},b_{i}^{2}]} u^{(i)}(t)\ge \xi ^{1}\Vert u^{(i)} \Vert _{[c_{i}^{1},d_{i}^{1}]\cap [c_{i}^{2},d_{i}^{2}]},\;\; i \in K_{1} \end{aligned} \right\} , \end{aligned}$$

and

$$\begin{aligned} H_{2}=\left\{ v\in C^{m-1}(I):\;\; \begin{aligned}&v^{(j)}(t)\ge 0,\;\; t\in [m_{j}^{3},n_{j}^{3}]\cap [m_{j}^{4},n_{j}^{4}],\;\; j\in I_{2},\\&\min _{t\in [a_{j}^{3},b_{j}^{3}]\cap [a_{j}^{4},b_{j}^{4}]} v^{(j)}(t)\ge \xi ^{2}\Vert v^{(j)} \Vert _{[c_{j}^{3},d_{j}^{3}]\cap [c_{j}^{4},d_{j}^{4}]},\;\; j\in K_{2} \end{aligned} \right\} , \end{aligned}$$

where \(\xi ^{1}=\min \{\xi _{i}^{r}:\;\;r\in \{1,2\},\, i\in K_{1} \}\) and \(\xi ^{2}=\min \{\xi _{j}^{l}:\;\;l\in \{3,4\},\, j\in K_{2} \}\).

Moreover, the product space \(H:=H_1\times H_2\) is a cone in the Banach space X with the norm \(\Vert (u,v)\Vert _X\).

Remark 4.1

It can be proved that condition \((H_{5})\) guarantees that H is a cone in X.

We will apply the following fixed-point theorem of Krasnoselskii (see [7]) to the operator \(T:H\rightarrow H\) given by \(T=(T_1,T_2)\) with \(T_1:H\rightarrow H_1\) defined as

$$\begin{aligned} \begin{aligned} T_1(u,v)(t):=&\displaystyle \int _{a}^{b} G_{1}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&+\displaystyle \int _{a}^{b} G_{2}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),\\&v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s,\;\; t\in I , \end{aligned} \end{aligned}$$
(24)

and \(T_2:H\rightarrow H_2\) defined as

$$\begin{aligned} \begin{aligned} T_2(u,v)(t):=&\displaystyle \int _{a}^{b} G_{3}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&+\displaystyle \int _{a}^{b} G_{4}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),\\&v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s,\;\; t\in I. \end{aligned} \end{aligned}$$
(25)

Theorem 4.2

(Krasnoselskii) Let X be a Banach space and \( K \subset X \) a cone in X. Let \( \Omega _1, \Omega _2 \subset X \) be open bounded sets such that \( 0 \in \Omega _1 \subset \overline{\Omega _1} \subset \Omega _2 \) and \( T: K \cap (\overline{\Omega _2} {\setminus } \Omega _1) \rightarrow K \) a compact operator that satisfies one of the following properties:

  1. \(\Vert T (u) \Vert \ge \Vert u \Vert , \hspace{1 ex} \forall u \in K \cap \partial \Omega _1 \) and \( \Vert T (u) \Vert \le \Vert u \Vert , \hspace{1 ex} \forall u \in K \cap \partial \Omega _2 \).

  2. \(\Vert T (u) \Vert \le \Vert u \Vert , \hspace{1 ex} \forall u \in K \cap \partial \Omega _1 \) and \( \Vert T (u) \Vert \ge \Vert u \Vert , \hspace{1 ex} \forall u \in K \cap \partial \Omega _2 \).

Then T has a fixed point in \( K \cap (\overline{\Omega _2} {\setminus } \Omega _1) \).

Next, we will prove the following main result on the existence of solutions to problem (1):

Theorem 4.3

Suppose that the conditions \((H_{1})-(H_{8})\) hold. Then problem (1) has at least one nontrivial solution \((u,v)\in X\).

Proof

First, we note that the solutions of problem (1) coincide with fixed points of the operator \(T=(T_1,T_2)\) with \(T_1\) and \(T_2\) defined in (24)–(25).

We will divide the proof into three steps:

Step 1:

\(T:H\rightarrow H\) is a compact operator:

This is proved using standard techniques.

Step 2:

\(T(u,v)\nleq (u,v)\), for all \((u,v)\in H\cap \partial \Omega _1\) with

$$\Omega _1=\{(u,v)\in H:\ \Vert (u,v)\Vert _H<\rho _1\},$$

for some \(\rho _1>0\).

By assumption \((H_4)\), if the first option \((H_{4}^{1})\) is satisfied, there exists \(i_{0}\in K_{1}\) such that \(S_1\ne \emptyset \) and \([a_{i_0}^{12},b_{i_{0}}^{12}]\) is a non-degenerate interval (the case of \((H_{4}^{2})\) with \(j_0\) is proved analogously). Let us define

$$\begin{aligned} \varepsilon _1=\frac{1}{(\xi ^{1})^{2} \displaystyle \int _{a_{i_0}^{12}}^{b_{i_0}^{12}} \Big (\phi _{i_0}^{1}(s)+\phi _{i_{0}}^{2}(s) \Big )\, \textrm{d}s}, \end{aligned}$$

and \([c_{i_{0}}^{12},d_{i_{0}}^{12}]:=[c_{i_{0}}^{1},d_{i_{0}}^{1}]\cap [c_{i_{0}}^{2},d_{i_{0}}^{2}].\)

By the second condition in \((H_{8})\), there exists \(\rho _1>0\) such that, if \(\Vert (u,v)\Vert _H<\rho _1\), we have

$$\begin{aligned}{} & {} f_{i}(t,u(t),\dots ,u^{(n-1)}(t),v(t),\dots ,v^{(m-1)}(t))>\varepsilon _1\,(|u(t)|+\dots +|u^{(n-1)}(t)|\\{} & {} \qquad +|v(t)|+\dots +|v^{(m-1)}(t)|), \end{aligned}$$

for all \(t\in I\) and \(i=1,2\). Consider \((u,v)\in H\cap \partial \Omega _1\). We have that for \(t\in S_1\),

$$\begin{aligned} \begin{aligned}&(T_1(u,v))^{(i_{0})}(t)\\&\quad =\displaystyle \int _{a}^{b} \frac{\partial ^{i_{0}}}{\partial t^{i_0}}G_{1}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\quad \quad +\displaystyle \int _{a}^{b} \frac{\partial ^{i_{0}}}{\partial t^{i_0}} G_{2}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\quad \ge \displaystyle \int _{a_{i_0}^{12}}^{b_{i_{0}}^{12}} \frac{\partial ^{i_{0}}}{\partial t^{i_0}}G_{1}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\quad +\displaystyle \int _{a_{i_{0}}^{12}}^{b_{i_0}^{12}} \frac{\partial ^{i_{0}}}{\partial t^{i_0}} G_{2}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\quad >\varepsilon _1\,\xi ^{1} \int _{a_{i_0}^{12}}^{b_{i_{0}}^{12}} (\phi _{i_0}^{1}(s)+\phi _{i_{0}}^{2}(s))(|u(s)|+\dots +|u^{(n-1)}(s)|+|v(s)|+\dots \\&\quad \quad +|v^{(m-1)}(s)|)\,\textrm{d}s\\&\quad \ge \varepsilon _1\,(\xi ^{1})^{2}\, \Vert u^{(i_{0})}\Vert _{[c_{i_{0}}^{12},d_{i_{0}}^{12}]} \displaystyle \int _{a_{i_0}^{12}}^{b_{i_{0}}^{12}} (\phi _{i_0}^{1}(s)+\phi _{i_{0}}^{2}(s))\,\textrm{d}s= \Vert u^{(i_{0})}\Vert _{[c_{i_{0}}^{12},d_{i_{0}}^{12}]}. \end{aligned} \end{aligned}$$

Therefore, \((T_1(u,v))^{(i_{0})}(t)>u^{(i_{0})}(t)\) for all \(t\in S_{1}\) and so \(T(u,v)\nleq (u,v)\).

Step 3:

\(T(u,v)\ngeq (u,v)\), for all \((u,v)\in H\cap \partial \Omega _2\), where \(\Omega _2\) is an open set to be defined later.

Let

$$\begin{aligned} \varepsilon _2=\min \Bigg \{ \frac{1}{(n+m) \int _{a}^{b} (h_i^{1}(s)+h_{i}^{2}(s))\,\textrm{d}s}:\;\;i \in J_{1} \Bigg \}. \end{aligned}$$

By the first condition of \((H_8)\), there exists \(M>0\) such that, if \(\min \{|u^{(i)}(t)|:\,i\in J_{1}\}\ge M\) and \(\min \{|v^{(j)}(t)|:\,j\in J_{2}\}\ge M\), we have that

$$\begin{aligned} \begin{aligned}&f_{i}(t,u(t),\dots ,u^{(n-1)}(t),v(t),\dots ,v^{(m-1)}(t))\\&\qquad \le \varepsilon _{2}\,(|u(t)|+\dots +|u^{(n-1)}(t)|+|v(t)|+\dots +|v^{(m-1)}(t)|)\\&\qquad \le \varepsilon _{2}\,(n+m)\,\Vert (u,v)\Vert _{H}, \end{aligned} \end{aligned}$$

for all \(t\in I\) and \(i=1, 2\).

Consider \(\rho _{2}>\max \{\rho _1,M\}\). Using similar arguments to those used in [13, Theorem 3], it can be proved that the set

$$\begin{aligned} \Omega _2=\bigcup _{i=0}^{n-1} \bigcup _{j=0}^{m-1}\Bigg \{(u,v)\in H:\,\min _{t\in I} |u^{(i)}(t)|<\rho _{2},\;\; \min _{t\in I} |v^{(j)}(t)|<\rho _2 \Bigg \}, \end{aligned}$$

is unbounded in the cone H, and the fixed-point index of the operator T with respect to \(\Omega _2\) is only defined in the case that the set of fixed points of T in \(\Omega _2\), that is, \((id-T)^{-1}(\{0\})\cap \Omega _2\), is compact.

Now, to prove that \(\Vert T(u,v)\Vert _{H}\le \Vert (u,v)\Vert _H\) for all \((u,v)\in H \cap \partial \Omega _2\), we will show that, for all \((u,v)\in H\cap \partial \Omega _2\),

$$\begin{aligned} \Vert T_1(u,v)\Vert _{C^{n-1}(I)}\le \Vert (u,v)\Vert _{H} \;\;\text {and}\;\; \Vert T_2(u,v)\Vert _{C^{m-1}(I)} \le \Vert (u,v)\Vert _{H}. \end{aligned}$$

The proof will be done only for \(T_1\), as the other case is similar.

Consider \((u,v)\in H\cap \partial \Omega _2\). Therefore,

$$\begin{aligned} \min \Big \{\min _{t\in I} |u^{(i)}(t)|:\, i\in J_{1}\Big \}=\rho _2\;\;\text {and}\;\; \min \Big \{\min _{t\in I} |v^{(j)}(t)|:\, j\in J_{2}\Big \}=\rho _2. \end{aligned}$$

Then, for all \(i\in J_1\), we have

$$\begin{aligned} \begin{aligned}&(T_1(u,v))^{(i)}(t)\\&\quad =\displaystyle \int _{a}^{b} \frac{\partial ^{i}}{\partial t^{i}}G_{1}(t,s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\qquad +\displaystyle \int _{a}^{b} \frac{\partial ^{i}}{\partial t^{i}} G_{2}(t,s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\quad \le \displaystyle \int _{a}^{b} h_{i}^{1}(s)\,f_{1}(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\qquad +\displaystyle \int _{a}^{b} h_{i}^{2}(s)\,f_2(s,u(s),\dots ,u^{(n-1)}(s),v(s),\dots ,v^{(m-1)}(s))\,\textrm{d}s\\&\qquad \le (n+m)\,\varepsilon _2\,\Vert (u,v)\Vert _{H} \displaystyle \int _{a}^{b} (h_{i}^{1}(s)+h_{i}^{2}(s))\,\textrm{d}s\le \Vert (u,v)\Vert _H. \end{aligned} \end{aligned}$$

Thus, \(\Vert T_1(u,v)\Vert _{C^{n-1}(I)}\le \Vert (u,v)\Vert _{H}\), for all \((u,v)\in H\cap \partial \Omega _2\), and, so, \(T(u,v)\ngeq (u,v)\), for all \((u,v)\in H\cap \partial \Omega _2\).

Therefore, from Theorem 4.2, we deduce that the operator T has a fixed point in \(H\cap (\overline{\Omega _2}{\setminus } \Omega _1)\), which in turn is a solution of problem (1).

\(\square \)

As a consequence, we arrive at the following result:

Corollary 4.4

Suppose that the conditions \((H_{1})-(H_{8})\) hold. Then problem (23) has at least one nontrivial solution \((u,v)\in X\).

In the sequel, we present examples to illustrate the applicability of the previous result and its advantages in comparison with previous results for studying nonlinear systems of differential equations.

Example 4.5

Consider the nonlinear problem

$$\begin{aligned} \left\{ \begin{aligned} u''(t)-v(t)+(t^2+1)\,e^{-|u(t)|-|v(t)|}&=0,\;\; t\in [0,1],\\ v'(t)-u'(t)+\frac{e^{t}}{\ln (1+(u'(t))^2+(v(t))^2)}&=0,\;\; t\in [0,1],\\ u(0)&=0,\\ u'(0)&=u'(1),\\ v(0)&=0. \end{aligned} \right. \end{aligned}$$
(26)

In this case, the functions \(f_{1}(t,x,y,z)=(t^2+1)\,e^{-|x|-|z|}\), \(f_{2}(t,x,y,z)=\frac{e^{t}}{\ln (1+y^2+z^2)}\) satisfy conditions \((H_6)\) and \((H_{8})\).

Since the functions \(f_1\) and \(f_2\) are positive and enter the equations of problem (26) with a plus sign, we have to change the sign of the Green’s functions. Using Example 3.5, we obtain that

$$\begin{aligned} G_{1}(t,s)&=\left\{ \begin{aligned}&\sinh (s-t)-\frac{\cosh (s-1)\,\sinh (t)}{1-\cosh (1)} ,&\;\; 0\le s\le t\le 1,\\&-\frac{\cosh (s-1)\,\sinh (t)}{1-\cosh (1)},&\;\;0\le t< s\le 1, \end{aligned} \right. \nonumber \\ G_{2}(t,s)&=\left\{ \begin{aligned}&-\cosh (s-t)+\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)}+1,&\;\; 0\le s\le t\le 1,\\&\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)},&\;\;0\le t< s\le 1, \end{aligned} \right. \nonumber \\ G_{3}(t,s)&=G_{1}(t,s),\;\; \text {for all}\;\; t,s\in [0,1], \end{aligned}$$
(27)

and

$$\begin{aligned} G_{4}(t,s)=\left\{ \begin{aligned}&-\cosh (s-t)+\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)},&\;\; 0\le s\le t\le 1,\\&\frac{\sinh (s-1)\,\sinh (t)}{1-\cosh (1)},&\;\;0\le t< s\le 1. \end{aligned} \right. \end{aligned}$$

In this case \(n=2\), \(m=1\), \(J_1=\{0,1\}\), and \(J_{2}=\{0\}\).

The first derivative of \(G_{1}\) is

$$\begin{aligned} \frac{\partial }{\partial t} G_{1}(t,s)=\left\{ \begin{aligned}&-\cosh (s-t)-\frac{\cosh (s-1)\,\cosh (t)}{1-\cosh (1)} ,&\;\; 0\le s\le t\le 1,\\&-\frac{\cosh (s-1)\,\cosh (t)}{1-\cosh (1)},&\;\;0\le t< s\le 1, \end{aligned} \right. \end{aligned}$$

The functions \(G_{1}\) and \(\frac{\partial }{\partial t}G_{1}\) are positive on \([0,1]\times [0,1]\). So, we can take \([m_{0}^{1},n_{0}^{1}]=[0,1]\) and \([m_{1}^{1},n_{1}^{1}]=[0,1]\).

Let \(\mu \in I\) be fixed. The function \(G_1\) is continuous on \([0,1]\times [0,1]\), so the hypothesis \((H_1)\) is immediate for \(i=0\). The first derivative \(\frac{\partial }{\partial t} G_{1}\) is continuous on \(([0,1]\times [0,1]){\setminus } \{(t,t):\ t\in [0,1]\} \), so for every \(\varepsilon >0\), there exists \(\delta >0\) such that \(|t-\mu |<\delta \) implies that

$$\begin{aligned} \Bigg | \frac{\partial G_{1}}{\partial t}(t,s)-\frac{\partial G_{1}}{\partial t}(\mu ,s) \Bigg |<\varepsilon ,\;\; \text {for all}\;\; s\ne \mu . \end{aligned}$$

Therefore, the hypothesis \((H_1)\) holds for \(G_1\).

Moreover, it can be checked that

$$\begin{aligned} |G_{1}(t,s)|\le h_{0}^{1}(s)=G_{1}(1,s)=\sinh (s-1)-\frac{\cosh (s-1)\,\sinh (1)}{1-\cosh (1)}, \end{aligned}$$

and

$$\begin{aligned} \Big |\frac{\partial }{\partial t} G_{1}(t,s) \Big |\le h_{1}^{1}(s)=\frac{\cosh (s-1)\,\cosh (s)}{\cosh (1)-1}, \end{aligned}$$

for all \(t\in [0,1]\), \(s\in [0,1]\).

If we take \(\phi _{0}^{1}(s)=h_{0}^{1}(s)\), \([c_{0}^{1},d_{0}^{1}]=[0,1]\), and \([a_{0}^{1},b_{0}^{1}]=[0,1]\), it can be verified that for the constant \(\xi _{0}^{1}=\frac{1}{n_1}\),

$$\begin{aligned} G_{1}(t,s)\ge \xi _{0}^{1}\, \phi _{0}^{1}(s)\;\; \text {for all } \ t\in [0,1], \ s\in [0,1], \end{aligned}$$

for \(n_1\) a sufficiently large constant such that \(\xi _{0}^{1}\in (0,1)\).

Analogously, if we choose \(\phi _{1}^{1}(s)=h_{1}^{1}(s)\), \([c_{1}^{1},d_{1}^{1}]=[0,1]\), and \([a_{1}^{1},b_{1}^{1}]=[0,1]\), it can be shown that for the constant \(\xi _{1}^{1}\approx 0.5728\),

$$\begin{aligned} \frac{\partial }{\partial t} G_{1}(t,s)\ge \xi _{1}^{1}\, \phi _{1}^{1}(s)\;\; \text {for all } \ t\in [0,1], \ s\in [0,1]. \end{aligned}$$
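The constant \(\xi _{1}^{1}\approx 0.5728\) can be estimated by a brute-force search over the square. The following sketch (a numerical illustration, not part of the verification required by \((H_4)\)) computes the smallest value of the ratio \(\frac{\partial G_{1}}{\partial t}(t,s)/h_{1}^{1}(s)\) on a grid; the same approach yields the constant \(\xi _{1}^{2}\approx 0.6481\) used for \(\frac{\partial G_{2}}{\partial t}\) below.

```python
import numpy as np

cosh = np.cosh

def dG1_dt(t, s):
    # first t-derivative of G_1 from Example 4.5
    if s <= t:
        return -cosh(s - t) - cosh(s - 1) * cosh(t) / (1 - cosh(1))
    return -cosh(s - 1) * cosh(t) / (1 - cosh(1))

def h11(s):
    return cosh(s - 1) * cosh(s) / (cosh(1) - 1)

grid = np.linspace(0, 1, 401)
xi = min(dG1_dt(t, s) / h11(s) for t in grid for s in grid)
print(xi)   # approximately 0.5728, in agreement with the constant xi_1^1 above
```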

The first derivative of \(G_{2}\) is

$$\begin{aligned} \frac{\partial }{\partial t} G_{2}(t,s)=\left\{ \begin{aligned}&\sinh (s-t)+\frac{\sinh (s-1)\,\cosh (t)}{1-\cosh (1)},&\;\; 0\le s\le t\le 1,\\&\frac{\sinh (s-1)\,\cosh (t)}{1-\cosh (1)},&\;\;0\le t< s\le 1. \end{aligned} \right. \end{aligned}$$

The functions \(G_{2}\) and \(\frac{\partial }{\partial t} G_2\) are positive on \([0,1]\times [0,1]\). So, we can take \([m_{0}^{2},n_{0}^{2}]=[0,1]\) and \([m_{1}^{2},n_{1}^{2}]=[0,1]\).

The function \(G_2\) belongs to \(C^1([0,1]\times [0,1])\), so it trivially satisfies hypothesis \((H_1)\).

It can be seen that

$$\begin{aligned} |G_{2}(t,s)|\le h_{0}^{2}(s)=G_{2}(1,s)=\cosh (s)-\frac{(1+e)\,\sinh (s)}{e-1}+1, \end{aligned}$$

and

$$\begin{aligned} \Big |\frac{\partial }{\partial t} G_{2}(t,s) \Big |\le h_{1}^{2}(s)=\frac{\partial }{\partial t} G_{2}(s,s)=\frac{\cosh (s)\,\sinh (s-1)}{1-\cosh (1)}, \end{aligned}$$

for all \(t\in [0,1]\), \(s\in [0,1]\).

In this case, if we take \(\phi _{0}^{2}(s)=h_{0}^{2}(s)\), \(\phi _{1}^{2}(s)=h_{1}^{2}(s)\) and \([c_{0}^{2},d_{0}^{2}]=[c_{1}^{2},d_{1}^{2}]=[a_{0}^{2},b_{0}^{2}]=[a_{1}^{2},b_{1}^{2}]=[0,1]\), then for the constants \(\xi _{0}^{2}=\frac{1}{n_2}\) and \(\xi _{1}^{2}\approx 0.6481 \), we have

$$\begin{aligned} G_{2}(t,s)\ge \xi _{0}^{2}\, \phi _{0}^{2}(s)\;\; \text {for all } \ t\in [0,1], \ s\in [0,1], \end{aligned}$$

and

$$\begin{aligned} \frac{\partial }{\partial t} G_{2}(t,s) \ge \xi _{1}^{2}\, \phi _{1}^{2}(s)\;\; \text {for all } \ t\in [0,1], \ s\in [0,1], \end{aligned}$$

where \(n_2\) is a sufficiently large constant such that \(\xi _{0}^{2}\in (0,1)\).

Moreover, for \(i_0=0, 1\), we have that \([a_{i_0}^{12},b_{i_0}^{12}]=[0,1]\) are non-degenerate intervals.

The function \(G_3\) fulfills the same assumptions as the function \(G_1\) because they are equal. So, \(G_3\) satisfies the hypothesis \((H_{2})\) and we can take \([m_{0}^{3},n_{0}^{3}]=[0,1]\).

Also, by taking \(\phi _{0}^{3}(s)=h_{0}^{1}(s)\), \([c_{0}^{3},d_{0}^{3}]=[0,1]\), and \([a_{0}^{3},b_{0}^{3}]=[0,1]\), we have, for the constant \(\xi _{0}^{3}:=\xi _{0}^{1}\), that

$$\begin{aligned} G_{3}(t,s)\ge \xi _{0}^{3}\, \phi _{0}^{1}(s)\;\; \text {for all } \ t\in [0,1], \ s\in [0,1]. \end{aligned}$$

Finally, the function \(G_{4}\) is continuous on \(([0,1]\times [0,1]){\setminus } \{(t,t):\ t\in [0,1]\} \) and is nonnegative on the triangle \(\{(t,s)\in [0,1]\times [0,1]:\ t< s\}\). Thus, the hypothesis \((H_{2})\) is fulfilled for \(G_4\) and we can take \([m_{0}^{4},n_{0}^{4}]=\{0\}\).

Analogously, it holds that

$$\begin{aligned} |G_{4}(t,s)|\le h_{0}^{4}(s),\;\; \text {for all }\ t\in [0,1], \ s\in [0,1], \end{aligned}$$

with

$$\begin{aligned} h_{0}^{4}(s):=\max \Bigg \{ \frac{\sinh (s-1)\,\sinh (s)}{1-\cosh (1)},1-\frac{\sinh (s-1)\,\sinh (s)}{1-\cosh (1)}\Bigg \}. \end{aligned}$$

If we choose \(\phi _{0}^{4}(s)=h_{0}^{4}(s)\), \([c_{0}^{4},d_{0}^{4}]=[0,1]\), and \([a_{0}^{4},b_{0}^{4}]=\{0\}\), then for the constant \(\xi _{0}^{4}=\frac{1}{n_{4}}\),

$$\begin{aligned} G_{4}(t,s)\ge \xi _{0}^{4}\, \phi _{0}^{4}(s)\;\; \text {for } \ t=0 \ \text { and all } \ s\in [0,1], \end{aligned}$$

where \(n_4\) is a sufficiently large constant such that \(\xi _{0}^{4}\in (0,1)\).

As a consequence, we deduce that \(I_{1}=K_{1}=\{0,1\}\), \(I_{2}=K_{2}=\{0\}\), and the assumptions \((H_1)-(H_8)\) hold.

Therefore, by Corollary 4.4, there is at least one nontrivial solution \((u,v)\in C^{1}([0,1])\times C([0,1])\) of problem (26).

As mentioned in the Introduction, the following example shows that the results of the previous works are not applicable when the problem of the above example is transformed into one with separated variables and two Green’s functions.

Example 4.6

Let us consider again the problem of the previous example. Problem (26) can be rewritten as

$$\begin{aligned} \left\{ \begin{aligned} u''(t)-u(t)+\tilde{f_1}(t,u(t),u'(t),v(t))&=0,\;\; t\in [0,1],\\ v'(t)-\tilde{f_2}(t,u(t),u'(t),v(t))&=0,\;\; t\in [0,1],\\ u(0)&=0,\\ u'(0)&=u'(1),\\ v(0)&=0, \end{aligned} \right. \end{aligned}$$

where \(\tilde{f_1}(t,x,y,z)=x-z+(t^2+1)\,e^{-|x|-|z|}\) and \(\tilde{f_2}(t,x,y,z)=y-\frac{e^{t}}{\ln (1+y^2+z^2)}\).

Note that the functions \(\tilde{f_1}\) and \(\tilde{f_2}\) do not satisfy the hypothesis \((H_8)\) (the positive sign of \(\tilde{f_1}\) and \(\tilde{f_2}\) is not guaranteed either), since the first condition of \((H_8)\) does not hold.

Indeed, by moving the linear terms to the right-hand side of the equations and subtracting them from the \(f_i\), \(i=1, 2\), the new nonlinearities \(\tilde{f_i}\) have two drawbacks:

  1. They could lose the constant sign.

  2. They would not have a sublinear behavior in the limit.

Constant sign and sublinear behavior are common hypotheses in the previous works that consider only two Green’s functions, and so this problem could not be solved using such methods. This shows the benefit of our approach for dealing with nonlinear systems in which the linear operators depend on both variables u and v.