1 Introduction

While the nonlinear Dirichlet eigenvalue problem for the equation

$$\begin{aligned} \textrm{div}(|\nabla u|^{p-2}\nabla u)+\lambda |u|^{p-2}u=0 \end{aligned}$$

has been thoroughly investigated for scalar functions \(u:\Omega \rightarrow {\mathbb {R}}\) on a bounded domain \(\Omega \) in \({\mathbb {R}}^n\) and for \(p\in (1,\infty )\), the corresponding vectorial problem for \(\textbf{u}=(u_1,u_2,\ldots , u_N)\) has received considerably less attention. The Dirichlet problem of minimizing the Rayleigh quotient

$$\begin{aligned} I_N(\textbf{v})=\frac{\displaystyle \int _\Omega (|\nabla v_1|^2 + |\nabla v_2|^2+\dots +|\nabla v_N|^2)^\frac{p}{2}dx}{\displaystyle \int _\Omega (|v_1|^2 + |v_2|^2+\dots +|v_N|^2)^\frac{p}{2}dx},\qquad 1<p<\infty , \end{aligned}$$
(1.1)

among all vector-valued functions \(\textbf{v}\) in the Sobolev space \(W^{1,p}_0(\Omega ;{\mathbb {R}}^N)\) leads to the Euler-Lagrange equations

$$\begin{aligned} \textrm{div}(|D \textbf{u}|^{p-2}\nabla u_k)+\Lambda |\textbf{u}|^{p-2}u_k=0, \qquad k=1,2,\ldots ,N. \end{aligned}$$
(1.2)

Here \(\textbf{u}\in W^{1,p}_0(\Omega ;{\mathbb {R}}^N)\) is a minimizer, \(|D\textbf{u}|^2=|\nabla u_1|^2+\cdots +|\nabla u_N|^2\) and \(|\textbf{u}|^2=u_1^2+\dots +u_N^2\). The existence of a minimizer comes by the direct method in the calculus of variations and in this case we denote \(I_N(\textbf{u})\) by \(\Lambda _1\). In the linear case (\(p=2\)), the system (1.2) is decoupled into

$$\begin{aligned} \Delta u_k+\Lambda u_k=0, \qquad k=1,2,\ldots , N. \end{aligned}$$

Then everything reduces to the scalar case \(N=1\) with Helmholtz’s equation.
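
As a quick numerical illustration of this reduction (a sketch, not part of the argument: it assumes \(p=2\), \(\Omega =(0,1)\), a uniform forward-difference grid, and the known scalar first eigenfunction \(\sin (\pi x)\) with \(\lambda _1=\pi ^2\)), the discrete Rayleigh quotient (1.1) of \(\textbf{v}=\textbf{c}\omega \) is independent of the constant vector \(\textbf{c}\ne 0\):

```python
import math

# Discrete check that, for p = 2, every v = c*w with w the scalar first
# eigenfunction gives the same Rayleigh quotient, namely lambda_1 = pi^2.
M = 1000                                                # grid intervals on (0,1)
h = 1.0 / M
w = [math.sin(math.pi * i * h) for i in range(M + 1)]   # scalar first eigenfunction

def I2(c1, c2):
    """Forward-difference version of (1.1) for p = 2, N = 2, v = (c1*w, c2*w)."""
    num = sum(((c1 * (w[i + 1] - w[i]) / h) ** 2 +
               (c2 * (w[i + 1] - w[i]) / h) ** 2) * h for i in range(M))
    den = sum(((c1 * w[i]) ** 2 + (c2 * w[i]) ** 2) * h for i in range(M + 1))
    return num / den

assert abs(I2(1.0, 0.0) - math.pi ** 2) < 1e-3   # quotient close to pi^2
assert abs(I2(3.0, -2.0) - I2(1.0, 0.0)) < 1e-9  # independent of c
```

Both quotients agree with \(\pi ^2\approx 9.8696\) up to discretization error, since the constant \(|\textbf{c}|^2\) cancels between numerator and denominator.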

Not every solution of (1.2) is a minimizer: \(\Lambda _1\) is the smallest eigenvalue, and there are also higher eigenfunctions.

Definition 1.1

We say that \({ \textbf{u}}\in W^{1,p}_0(\Omega ;{\mathbb {R}}^N)\) is an eigenfunction if

$$\begin{aligned} \mathop \int \limits _\Omega |D \textbf{u} |^{p-2}D \textbf{u} \cdot D \mathbf{\varphi }\ dx= \Lambda \mathop \int \limits _\Omega |\textbf{u}|^{p-2}{} \textbf{u} \cdot \mathbf{\varphi } \ dx \end{aligned}$$

for all test functions \(\mathbf{\varphi }=(\varphi _1,\ldots ,\varphi _N)\in C_0^\infty (\Omega ; {\mathbb {R}}^N)\). Here \(\Lambda \) is the corresponding eigenvalue.

By elliptic regularity theory [9], it is known that \(\textbf{u} \in C(\Omega ; {\mathbb {R}}^N)\). For irregular domains, however, \(\textbf{u}\) is not always in \(C({{\overline{\Omega }}};{\mathbb {R}}^N)\). In any case, the gradient is known to be well-defined in \(\Omega \) because \(\textbf{u}\in C^{1,\alpha }_{\textrm{loc}}(\Omega ;{\mathbb {R}}^N)\) for a suitable \(\alpha \in (0,1)\) depending on p.

In what follows, we observe that if \(\omega :\Omega \rightarrow {\mathbb {R}}\) is an eigenfunction in the scalar case \(N=1\), then for any constant vector \({{\textbf{c}}}\),

$$\begin{aligned} \textbf{u} =(c_1\omega ,\ldots , c_N\omega )=\textbf{c} \omega \end{aligned}$$

is a vectorial eigenfunction. It turns out that the converse is true for the smallest eigenvalue. The following result was proved by Brock and Manásevich in [3], see also del Pino [4].
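
That \(\textbf{c}\omega \) is indeed an eigenfunction can be checked directly: with \(\textbf{u}=\textbf{c}\omega \) one has \(|D\textbf{u}|=|\textbf{c}||\nabla \omega |\) and \(|\textbf{u}|=|\textbf{c}||\omega |\), so the k-th equation in (1.2) factors as

$$\begin{aligned} \textrm{div}(|D \textbf{u}|^{p-2}\nabla u_k)+\Lambda |\textbf{u}|^{p-2}u_k=c_k|\textbf{c}|^{p-2}\left( \textrm{div}(|\nabla \omega |^{p-2}\nabla \omega )+\Lambda |\omega |^{p-2}\omega \right) , \end{aligned}$$

which vanishes precisely when \(\omega \) is a scalar eigenfunction with eigenvalue \(\lambda =\Lambda \).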

Theorem 1.2

All minimizers \(\textbf{u}\in W^{1,p}_0(\Omega ;{\mathbb {R}}^N)\) of \(I_N\) are of the form \(\textbf{u}=\textbf{c}\omega \), where \(\omega \) is the scalar minimizer.

We shall present a streamlined proof of the theorem above using the identity of Lagrange (2.1). In the case \(N=1\), the scalar minimizer is known to be unique up to a constant factor. While [1] and [7] contain fairly simple proofs for the scalar case, the proof given here does not even require the use of Jensen’s inequality.

The vectorial case with only one independent variable, in which \(\Omega \) is just an open interval, was studied by M. Del Pino, who proved in [4] that all solutions \(\textbf{u}=\textbf{u}(t)=(u_1(t),\dots , u_N(t))\) of the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\left( |\textbf{u}'|^{p-2}{} \textbf{u}'\right) '=\Lambda |\textbf{u}|^{p-2}{} \textbf{u} \quad \text {in } (0,1),\\ \hspace{41.54121pt}\textbf{u}(0)=0,\quad \textbf{u}(1)=0 \end{array}\right. } \end{aligned}$$
(1.3)

are just copies of the scalar ones, i.e., \(\textbf{u}=\textbf{c}\omega \). Thus the previous theorem is valid for all eigenfunctions. Since his proof is not easily available, for the benefit of the reader, we present it in Section 3 below, based on an excerpt of his thesis.

Finally, we study the vectorial fractional Rayleigh quotient

$$\begin{aligned} J_N(\textbf{v})=\frac{\displaystyle \int _{{\mathbb {R}}^n}\!\!\int _{{\mathbb {R}}^n}\frac{|\textbf{v}(y)-\textbf{v}(x)|^p}{|y-x|^{n+sp}} \ dx\ dy}{\displaystyle \int _\Omega |\textbf{v}(x)|^p\ dx} \end{aligned}$$

for \(s\in (0,1)\) and \(\textbf{v}\in W^{s,p}_0(\Omega ;{\mathbb {R}}^N)\) with \(\textbf{v}=0\) in \({\mathbb {R}}^n\setminus \Omega \). The scalar case \(N=1\) was studied in [8] and [6]. We shall show in Theorem 4.2 below that again the vectorial minimizers are just copies of the scalar ones. It is worth mentioning that the isolation of the first eigenvalue (i.e., the minimum of the Rayleigh quotient) is known only in the following cases: the scalar case \(N=1\), the linear case \(p=2\), and the o.d.e. case \(n=1\). Establishing the “spectral gap” for the general case seems to be an open problem.

2 Vectorial minimizers

We will prove Theorem 1.2 by employing Lagrange’s identity, which asserts

$$\begin{aligned} \left| \sum ^N_{i=1}t_i\textbf{v}_i\right| ^2=\sum ^N_{i=1}(t_i)^2\sum ^N_{i=1}\left| \textbf{v}_i\right| ^2-\sum _{1\le i<j\le N}|t_i\textbf{v}_j-t_j\textbf{v}_i|^2 \end{aligned}$$
(2.1)

for \(t_1,\dots , t_N\in {\mathbb {R}}\) and \(\textbf{v}_1,\dots ,\textbf{v}_N\in {\mathbb {R}}^d\). This identity is usually stated for \(d=1\). Nevertheless, it is valid for all dimensions \(d\ge 1\).
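
A minimal numerical sanity check of (2.1) (pure Python; the function name and the random sample are ours, for illustration only):

```python
import random

def lagrange_gap(t, vs):
    """Difference of the two sides of Lagrange's identity (2.1)
    for scalars t_1..t_N and vectors v_1..v_N in R^d; should vanish."""
    N, d = len(t), len(vs[0])
    s = [sum(t[i] * vs[i][k] for i in range(N)) for k in range(d)]
    lhs = sum(x * x for x in s)                      # |sum t_i v_i|^2
    rhs = sum(ti * ti for ti in t) * sum(sum(x * x for x in v) for v in vs)
    for i in range(N):
        for j in range(i + 1, N):                    # subtract the cross terms
            rhs -= sum((t[i] * vs[j][k] - t[j] * vs[i][k]) ** 2 for k in range(d))
    return lhs - rhs

random.seed(0)
for _ in range(100):
    t = [random.uniform(-1, 1) for _ in range(4)]
    vs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    assert abs(lagrange_gap(t, vs)) < 1e-10          # identity holds for d = 3
```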

Proof of Theorem 1.2

Let \(\textbf{u}=(u_1,u_2,\ldots , u_N)\) be a minimizer of the Rayleigh quotient (1.1) on \(W^{1,p}_0(\Omega ;{\mathbb {R}}^N)\) and \(\Lambda _1=I_N(\textbf{u})\). Set \(w=|\textbf{u}|\). It is routine to verify \(w\in W_0^{1,p}(\Omega )\), and direct computation gives

$$\begin{aligned} w\nabla w=\sum ^N_{i=1}u_i\nabla u_i \end{aligned}$$
(2.2)

on \(\{w>0\}\). By Lagrange’s identity (2.1), we also have

$$\begin{aligned} w^2|\nabla w|^2=w^2|D\textbf{u}|^2-\sum _{1\le i<j\le N}|u_i\nabla u_j-u_j\nabla u_i|^2 \end{aligned}$$
(2.3)

on \(\{w>0\}\). As \(\nabla w=0\) almost everywhere on \(\{w=0\}\), we deduce

$$\begin{aligned} |\nabla w|\le |D\textbf{u}| \end{aligned}$$
(2.4)

almost everywhere in \(\Omega \). Furthermore,

$$\begin{aligned} \lambda _1\le \frac{\displaystyle \int _{\Omega }|\nabla w|^pdx}{\displaystyle \int _{\Omega }|w|^pdx}\le \frac{\displaystyle \int _{\Omega }|D\textbf{u} |^pdx}{\displaystyle \int _{\Omega }|\textbf{u}|^pdx}= \Lambda _1. \end{aligned}$$
(2.5)

Here \(\lambda _1=\inf I_1\) is the smallest scalar eigenvalue.

Now suppose \(u\in W^{1,p}_0(\Omega ; {\mathbb {R}})\) is a scalar first eigenfunction and \(\textbf{c}\in {\mathbb {R}}^N\) with \(\textbf{c}\ne 0\). Then \(\textbf{v}=\textbf{c} u\in W^{1,p}_0(\Omega ; {\mathbb {R}}^N)\) and

$$\begin{aligned} \Lambda _1\le \frac{\displaystyle \int _{\Omega }|D\textbf{v} |^pdx}{\displaystyle \int _{\Omega }|\textbf{v}|^pdx}= \frac{\displaystyle |\textbf{c}|^p\int _{\Omega }|\nabla u|^pdx}{\displaystyle |\textbf{c}|^p\int _{\Omega }|u|^pdx}=\frac{\displaystyle \int _{\Omega }|\nabla u|^pdx}{\displaystyle \int _{\Omega }|u|^pdx}=\lambda _1. \end{aligned}$$

In view of (2.5), \(\lambda _1=\Lambda _1\), so w is necessarily a scalar first eigenfunction and equality must hold almost everywhere in (2.4). Harnack’s inequality for the p-Laplacian (see [11]) also gives \(w>0\) in \(\Omega \). Therefore, by (2.3),

$$\begin{aligned} u_i\nabla u_j=u_j\nabla u_i \quad \text { almost everywhere in }\Omega \end{aligned}$$
(2.6)

for all \(i,j=1,\dots , N\). Combining (2.6) and (2.2) gives

$$\begin{aligned} w^2\nabla u_i=\sum ^N_{j=1}u_j(u_j\nabla u_i)= \sum ^N_{j=1} u_j (u_i\nabla u_j)=u_i \sum ^N_{j=1} u_j\nabla u_j=u_i(w \nabla w). \end{aligned}$$

That is,

$$\begin{aligned} w\nabla u_i=u_i \nabla w \end{aligned}$$

almost everywhere for \(i=1,\dots , N\). Since

$$\begin{aligned} \nabla \left( \frac{u_i}{w}\right) =\frac{w\nabla u_i-u_i \nabla w}{w^2}=0 \end{aligned}$$

almost everywhere and \(\Omega \) is connected, \(u_i=c_iw\) for some \(c_i\in {\mathbb {R}}\) and \(i=1,\dots , N\). We conclude

$$\begin{aligned} \textbf{u}=(c_1,\dots , c_N) w=\textbf{c} w. \end{aligned}$$

\(\square \)

3 One independent variable

In this section, we treat the o.d.e. case \(n=1\), based on an excerpt of Manuel del Pino’s work [4] in an interval, say \((0,1)\subset {\mathbb {R}}\). The Euler-Lagrange equations (1.2) for \(\textbf{u}(t)=(u_1(t),\ldots , u_N(t))\) reduce to ordinary differential equations

$$\begin{aligned} -\left( |\textbf{u}'|^{p-2}{} \textbf{u}'\right) '=\Lambda |\textbf{u}|^{p-2}\textbf{u}. \end{aligned}$$
(3.1)

A smooth \(\textbf{u}:(0,\infty )\rightarrow {\mathbb {R}}^N\) which satisfies (3.1),

$$\begin{aligned} \textbf{u}(0)=0,\text { and } \textbf{u}'(0)=\textbf{c} \end{aligned}$$

is

$$\begin{aligned} \textbf{u}(t)=\frac{\textbf{c}}{\Lambda ^{1/p}} \omega \left( \Lambda ^{1/p}t\right) . \end{aligned}$$
(3.2)

Here \(\omega :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} -\left( |\omega '|^{p-2}\omega '\right) '=|\omega |^{p-2}\omega \quad \text { in } (0,1),\\ \qquad \omega (0)=0,\quad \omega '(0)=1. \end{array}\right. } \end{aligned}$$
(3.3)

Such a scalar function can be found through integration. Moreover, each scalar eigenfunction on \(\Omega =(0,1)\) is given by a multiple of

$$\begin{aligned} \omega \left( k\lambda _p^{1/p}t\right) \end{aligned}$$

for some \(k\in {\mathbb {N}}\). The corresponding eigenvalue (see [5, 10]) is \(k^p\lambda _p\) with

$$\begin{aligned} \lambda _p=\frac{(2\pi )^p(p-1)}{[p\sin (\pi /p)]^p}. \end{aligned}$$
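
Since the Dirichlet condition for \(k=1\) forces \(\omega \big (\lambda _p^{1/p}\big )=0\), the number \(\lambda _p^{1/p}\) is the first zero of \(\omega \). The following sketch (our own, not part of the argument) integrates the equivalent first-order system \(\textbf{u}'=|\textbf{v}|^{q-2}\textbf{v}\), \(\textbf{v}'=-|\textbf{u}|^{p-2}\textbf{u}\) by a plain fourth-order Runge–Kutta scheme and recovers \(\lambda _p\) from that first zero:

```python
import math

def first_zero(p, h=1e-4, tmax=6.0):
    """First positive zero of the solution of (3.3), via RK4 on the
    first-order system u' = |v|^{q-2} v, v' = -|u|^{p-2} u with
    u(0) = 0, v(0) = 1 (i.e. omega(0) = 0, omega'(0) = 1)."""
    q = p / (p - 1)
    def f(u, v):
        du = abs(v) ** (q - 2) * v if v != 0.0 else 0.0
        dv = -abs(u) ** (p - 2) * u if u != 0.0 else 0.0
        return du, dv
    u, v, t = 0.0, 1.0, 0.0
    while t < tmax:
        k1 = f(u, v)
        k2 = f(u + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(u + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(u + h * k3[0], v + h * k3[1])
        un = u + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        vn = v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if u > 0.0 and un <= 0.0:
            return t + h * u / (u - un)   # linear interpolation at the crossing
        u, v, t = un, vn, t + h
    raise RuntimeError("no zero found before tmax")

def lambda_p(p):
    """The closed-form eigenvalue from the display above."""
    return (2 * math.pi) ** p * (p - 1) / (p * math.sin(math.pi / p)) ** p

# for p = 2 the first zero is pi and lambda_2 = pi^2; in general T**p = lambda_p
assert abs(first_zero(2.0) - math.pi) < 1e-3
assert abs(first_zero(3.0) ** 3 - lambda_p(3.0)) < 0.3
```

The check for \(p=2\) reproduces \(\omega (t)=\sin t\) with first zero \(\pi \) and \(\lambda _2=\pi ^2\).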

We claim that the initial value problem associated with the o.d.e. (3.1) admits a unique solution. This is essentially a consequence of the following lemma.

Lemma 3.1

Suppose \(\textbf{c}\in {\mathbb {R}}^N\) with \(\textbf{c}\ne 0\) and \(t_0\ge 0\). There is \(\epsilon >0\) so that the initial value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\left( |\textbf{u}'|^{p-2}{} \textbf{u}'\right) '=|\textbf{u}|^{p-2}{} \textbf{u} \quad \text { in } (t_0,t_0+\epsilon ),\\ \qquad \textbf{u}(t_0)=0,\quad \textbf{u}'(t_0)=\textbf{c} \end{array}\right. } \end{aligned}$$
(3.4)

has a unique solution.

Proof

Without any loss of generality, we may suppose \(t_0=0\). We note that the second-order initial value problem (3.4) is equivalent to the following first order one

$$\begin{aligned} {\left\{ \begin{array}{ll} \quad \textbf{u}'=|\textbf{v}|^{q-2}{} \textbf{v} \qquad \text { in } (0,\epsilon ),\\ \quad \textbf{v}'=-|\textbf{u}|^{p-2}{} \textbf{u} \quad \,\text { in } (0,\epsilon ),\\ \quad \textbf{u}(0)=0,\qquad \textbf{v}(0)=|\textbf{c}|^{p-2}{} \textbf{c} \end{array}\right. } \end{aligned}$$

for \(\textbf{u}\) and \(\textbf{v}\). Here \(q=p/(p-1)\). Furthermore, if \(p\ge 2\), then \(\textbf{u}\mapsto |\textbf{u}|^{p-2}{} \textbf{u}\) is continuously differentiable on \({\mathbb {R}}^N\), while \(\textbf{v}\mapsto |\textbf{v}|^{q-2}{} \textbf{v}\) is smooth in a neighborhood of each point in \({\mathbb {R}}^N\) aside from the origin. In particular, the mapping

$$\begin{aligned} F(\textbf{u},\textbf{v})=(|\textbf{v}|^{q-2}{} \textbf{v},-|\textbf{u}|^{p-2}{} \textbf{u}) \end{aligned}$$
(3.5)

on \({\mathbb {R}}^N\times {\mathbb {R}}^N\) is continuously differentiable in a neighborhood of \((\textbf{u},\textbf{v})=(0,|\textbf{c}|^{p-2}{} \textbf{c})\). The claim then follows from the Picard–Lindelöf theorem.

Let us now consider the case \(1<p<2\). In view of (3.2), a solution \(\textbf{u}\) exists globally. We just need to show that it is unique. Suppose \(\textbf{u}\) and \(\textbf{z}\) are two solutions in question for some \(\epsilon >0\). Then

$$\begin{aligned}{} & {} \quad \displaystyle |\textbf{u}'(t)|^{p-2}{} \textbf{u}'(t)-|\textbf{z}'(t)|^{p-2}{} \textbf{z}'(t)\nonumber \\= & {} \displaystyle \mathop \int \limits ^t_0\frac{d}{ds}\left( |\textbf{u}'(s)|^{p-2}{} \textbf{u}'(s)-|\textbf{z}'(s)|^{p-2}{} \textbf{z}'(s)\right) ds\nonumber \\= & {} -\displaystyle \mathop \int \limits ^t_0\left( |\textbf{u}(s)|^{p-2}{} \textbf{u}(s)-|\textbf{z}(s)|^{p-2}{} \textbf{z}(s)\right) ds\quad \text { for }t\in [0,\epsilon ]. \end{aligned}$$
(3.6)

Let us recall two basic inequalities

$$\begin{aligned} ||\textbf{a}|^{p-2}{} \textbf{a}-|\textbf{b}|^{p-2}{} \textbf{b}|&\le (3-p)|\textbf{b}-\textbf{a}|\mathop \int \limits ^1_0|\textbf{a}+\tau (\textbf{b}-\textbf{a})|^{p-2}d\tau \\&\le 2|\textbf{b}-\textbf{a}|\mathop \int \limits ^1_0|\textbf{a}+\tau (\textbf{b}-\textbf{a})|^{p-2}d\tau \end{aligned}$$

and

$$\begin{aligned} ||\textbf{a}|^{p-2}{} \textbf{a}-|\textbf{b}|^{p-2}{} \textbf{b}|\ge (p-1)|\textbf{b}-\textbf{a}|\left( 1+|\textbf{a}|^2+|\textbf{b}|^2\right) ^{\frac{p-2}{2}} \end{aligned}$$

which hold for each \(\textbf{a},\textbf{b}\in {\mathbb {R}}^N\) and \(1<p\le 2\). Combining these inequalities with (3.6) gives

$$\begin{aligned}&(p-1)|\textbf{u}'(t)-\textbf{z}'(t)|\left( 1+|\textbf{u}'(t)|^2+|\textbf{z}'(t)|^2\right) ^{\frac{p-2}{2}}\\&\quad \le \left| |\textbf{u}'(t)|^{p-2}{} \textbf{u}'(t)-|\textbf{z}'(t)|^{p-2}{} \textbf{z}'(t)\right| \\&\quad \le \mathop \int \limits ^t_0\left| |\textbf{u}(s)|^{p-2}{} \textbf{u}(s)-|\textbf{z}(s)|^{p-2}{} \textbf{z}(s)\right| ds\\&\quad \le 2 \max _{0\le s\le t}|\textbf{u}(s)-\textbf{z}(s)|\mathop \int \limits ^t_0\mathop \int \limits ^1_0\frac{1}{|\textbf{u}(s)+\tau (\textbf{z}(s)-\textbf{u}(s))|^{2-p}}d\tau ds\\&\quad \le 2 \max _{0\le s\le t}|\textbf{u}(s)-\textbf{z}(s)|\mathop \int \limits ^\epsilon _{0}\mathop \int \limits ^1_0\frac{1}{|\textbf{u}(s)+\tau (\textbf{z}(s)-\textbf{u}(s))|^{2-p}}d\tau ds\\&\quad =2 \max _{0\le s\le t}|\textbf{u}(s)-\textbf{z}(s)|\mathop \int \limits ^1_0\mathop \int \limits ^\epsilon _{0}\frac{1}{|\textbf{u}(s)+\tau (\textbf{z}(s)-\textbf{u}(s))|^{2-p}}dsd\tau \end{aligned}$$

for each \(0\le t\le \epsilon \).
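
The two vector inequalities recalled above can be spot-checked numerically (a sketch with randomly sampled \(\textbf{a},\textbf{b}\in {\mathbb {R}}^3\) and \(p=3/2\); the \(\tau \)-integral is approximated by a midpoint Riemann sum, so a small slack is allowed):

```python
import random

p, dim, K = 1.5, 3, 1000     # any 1 < p <= 2; K midpoint nodes for the integral

def f(x):
    """The map x -> |x|^{p-2} x on R^dim."""
    n = sum(t * t for t in x) ** 0.5
    return [n ** (p - 2) * t for t in x]

random.seed(3)
for _ in range(200):
    a = [random.uniform(-2, 2) for _ in range(dim)]
    b = [random.uniform(-2, 2) for _ in range(dim)]
    lhs = sum((s - t) ** 2 for s, t in zip(f(a), f(b))) ** 0.5
    diff = sum((s - t) ** 2 for s, t in zip(a, b)) ** 0.5
    na2 = sum(t * t for t in a)
    nb2 = sum(t * t for t in b)
    # lower bound: |f(a)-f(b)| >= (p-1)|b-a|(1+|a|^2+|b|^2)^{(p-2)/2}
    assert lhs >= (p - 1) * diff * (1 + na2 + nb2) ** ((p - 2) / 2) - 1e-12
    # upper bound: midpoint rule for the integral of |a + tau (b-a)|^{p-2}
    integral = sum(sum((a[k] + (j + 0.5) / K * (b[k] - a[k])) ** 2
                       for k in range(dim)) ** ((p - 2) / 2)
                   for j in range(K)) / K
    assert lhs <= 2.0 * diff * integral * 1.05   # 5% slack for quadrature error
```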

As \(\textbf{u}(0)=\textbf{z}(0)=0\) and \(\textbf{u}'(0)=\textbf{z}'(0)=\textbf{c}\ne 0\),

$$\begin{aligned} \textbf{u}(s)+\tau (\textbf{z}(s)-\textbf{u}(s))= \textbf{c}s+o(s) \end{aligned}$$

as \(s\rightarrow 0\) uniformly in \(\tau \in [0,1]\). Reducing \(\epsilon >0\) if necessary, we may suppose

$$\begin{aligned} |\textbf{u}(s)+\tau (\textbf{z}(s)-\textbf{u}(s))|\ge \frac{1}{2}|\textbf{c}|s\quad \text { for }s\in [0,\epsilon ]. \end{aligned}$$

It follows that

$$\begin{aligned} \mathop \int \limits ^\epsilon _{0}\frac{1}{|\textbf{u}(s)+\tau (\textbf{z}(s)-\textbf{u}(s))|^{2-p}}ds\le \left( \frac{1}{2}|\textbf{c}|\right) ^{p-2}\frac{\epsilon ^{p-1}}{p-1}. \end{aligned}$$

Since \(\textbf{u}\) and \(\textbf{z}\) are continuously differentiable on \([0,\epsilon ]\), we also have a constant K such that

$$\begin{aligned} |\textbf{u}'(t)|,|\textbf{z}'(t)|\le K \quad \text { for }t\in [0,\epsilon ]. \end{aligned}$$

Therefore,

$$\begin{aligned} \left( 1+|\textbf{u}'(t)|^2+|\textbf{z}'(t)|^2\right) ^{\frac{p-2}{2}}\ge (1+2K^2)^{\frac{p-2}{2}}. \end{aligned}$$

Putting these estimates together yields

$$\begin{aligned} \max _{0\le t\le \epsilon }|\textbf{u}'(t)-\textbf{z}'(t)|\le & {} \frac{2}{(p-1)^2}\frac{(|\textbf{c}|/2)^{p-2}}{(1+2K^2)^{\frac{p-2}{2}}} \epsilon ^{p-1}\cdot \max _{0\le t\le \epsilon }|\textbf{u}(t)-\textbf{z}(t)|\\= & {} C\epsilon ^{p-1}\cdot \max _{0\le t\le \epsilon }|\textbf{u}(t)-\textbf{z}(t)|. \end{aligned}$$

Furthermore, since \(\textbf{u}(0)=\textbf{z}(0)\), we can integrate \(\textbf{u}'-\textbf{z}'\) over \([0,t]\) to get

$$\begin{aligned} \max _{0\le t\le \epsilon }|\textbf{u}(t)-\textbf{z}(t)|\le C\epsilon ^{p}\cdot \max _{0\le t\le \epsilon }|\textbf{u}(t)-\textbf{z}(t)|. \end{aligned}$$

For \(C\epsilon ^p<1\), we get a contradiction unless \(\max _{0\le t\le \epsilon }|\textbf{u}(t)-\textbf{z}(t)|=0\). \(\square \)

This leads to the following theorem.

Theorem 3.2

Suppose \(\Lambda \ne 0\) and \(\textbf{a},\textbf{b}\in {\mathbb {R}}^N\). The initial value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\left( |\textbf{u}'|^{p-2}{} \textbf{u}'\right) '=\Lambda |\textbf{u}|^{p-2}{} \textbf{u} \quad \text { in } (0,\infty ),\\ \hspace{41.54121pt}\textbf{u}(0)=\textbf{a},\qquad \textbf{u}'(0)=\textbf{b} \end{array}\right. } \end{aligned}$$

has a unique solution.

Proof

By scaling, we may suppose \(\Lambda =1\). This second order initial value problem is equivalent to the first order initial value problem for \(\textbf{u}\) and \(\textbf{v}\)

$$\begin{aligned} {\left\{ \begin{array}{ll} \quad \textbf{u}'=|\textbf{v}|^{q-2}{} \textbf{v} \qquad \text { in } (0,\infty ),\\ \quad \textbf{v}'=-|\textbf{u}|^{p-2}{} \textbf{u} \,\quad \text { in } (0,\infty ),\\ \textbf{u}(0)=\textbf{a},\qquad \textbf{v}(0)=|\textbf{b}|^{p-2}{} \textbf{b}. \end{array}\right. } \end{aligned}$$

Here \(q=p/(p-1)\). Moreover, \(\textbf{v}\) would satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} -\left( |\textbf{v}'|^{q-2}{} \textbf{v}'\right) '= |\textbf{v}|^{q-2}{} \textbf{v} \quad \text {in } (0,\infty ),\\ \hspace{41.54121pt}\textbf{v}(0)=|\textbf{b}|^{p-2}{} \textbf{b},\qquad \textbf{v}'(0)=-|\textbf{a}|^{p-2}{} \textbf{a}. \end{array}\right. } \end{aligned}$$

Note that any solution \(\textbf{u}\) and \(\textbf{v}\) must also fulfill the identity

$$\begin{aligned} \frac{1}{p}|\textbf{u}(t)|^p+\frac{1}{q}|\textbf{v}(t)|^q=\frac{1}{p}|\textbf{a}|^p+\frac{1}{q}|\textbf{b}|^p. \end{aligned}$$
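
Indeed, differentiating along any solution of the first-order system gives

$$\begin{aligned} \frac{d}{dt}\left( \frac{1}{p}|\textbf{u}|^p+\frac{1}{q}|\textbf{v}|^q\right) =|\textbf{u}|^{p-2}{} \textbf{u}\cdot \textbf{u}'+|\textbf{v}|^{q-2}{} \textbf{v}\cdot \textbf{v}'=|\textbf{u}|^{p-2}{} \textbf{u}\cdot |\textbf{v}|^{q-2}{} \textbf{v}-|\textbf{v}|^{q-2}{} \textbf{v}\cdot |\textbf{u}|^{p-2}{} \textbf{u}=0. \end{aligned}$$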

So if \(\textbf{a}=\textbf{b}=0\), then both \(\textbf{u}\) and \(\textbf{v}\) vanish identically and the unique solution in question is \(\textbf{u}=\textbf{v}\equiv 0\). Otherwise, suppose \(\textbf{a}\) or \(\textbf{b}\) is not zero. In this case, \(\textbf{u}(t)\) and \(\textbf{v}(t)\) cannot vanish simultaneously for any solution. If for a given \(t_0\), \(\textbf{u}(t_0)\ne 0\) and \(\textbf{v}(t_0)\ne 0\), then we can uniquely continue this solution on \((t_0,t_0+\epsilon )\) for some \(\epsilon >0\) as the mapping (3.5) is Lipschitz in a neighborhood of \((\textbf{u}(t_0),\textbf{v}(t_0))\). Alternatively, if \(\textbf{u}(t_0)= 0\) and \(\textbf{v}(t_0)\ne 0\), we can appeal to the lemma to uniquely continue the solution on \((t_0,t_0+\epsilon )\). If instead \(\textbf{u}(t_0)\ne 0\) and \(\textbf{v}(t_0)= 0\), we can apply the lemma to \(\textbf{v}\) as this function satisfies the same type of equation. In conclusion, at any time \(t_0\ge 0\), we may uniquely continue the solution to a longer interval. It follows that a solution \(\textbf{u}\) is globally and uniquely defined. \(\square \)

Finally, we are able to conclude that each eigenfunction is a scalar one when \(\Omega =(0,1)\).

Corollary 3.3

Suppose \(\textbf{u}\not \equiv 0\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} -\left( |\textbf{u}'|^{p-2}{} \textbf{u}'\right) '=\Lambda |\textbf{u}|^{p-2}{} \textbf{u} \quad \text {in } (0,1),\\ \hspace{41.54121pt}\textbf{u}(0)=0,\quad \textbf{u}(1)=0 \end{array}\right. } \end{aligned}$$

for some \(\Lambda >0\). Then \(\Lambda =k^p\lambda _p\) for some \(k\in {\mathbb {N}}\) and

$$\begin{aligned} \textbf{u}(t)=\frac{\textbf{c}}{\Lambda ^{1/p}} \omega \left( \Lambda ^{1/p}t\right) \end{aligned}$$

for some \(\textbf{c}\in {\mathbb {R}}^N\), where \(\omega \) is the solution of (3.3).

Proof

By uniqueness, \(\textbf{u}\) has the stated form for \(\textbf{c}=\textbf{u}'(0)\) and is defined on \((0,1)\). Since \(\textbf{u}(1)=0\),

$$\begin{aligned} \omega \left( \Lambda ^{1/p}\right) =0. \end{aligned}$$

This forces \(\Lambda =k^p\lambda _p\) for some \(k\in {\mathbb {N}}\). \(\square \)

4 Vectorial fractional minimizers

We now consider

$$\begin{aligned} \Lambda ^N_{s}=\inf _{\textbf{v}}\frac{\displaystyle \int _{{\mathbb {R}}^n}\int _{{\mathbb {R}}^n}\frac{|\textbf{v}(x)-\textbf{v}(y)|^p}{|x-y|^{n+sp}}dxdy}{\displaystyle \int _{\Omega }|\textbf{v}|^pdx} \end{aligned}$$

for \(N\in {\mathbb {N}}\), \(s\in (0,1)\), and \(p\in (1,\infty )\). In this infimum, \(\textbf{v}=(v_1,\dots , v_N)\in W^{s,p}_0(\Omega ;{\mathbb {R}}^N)\) is assumed not to be identically 0. It is a standard exercise to check that the infimum is attained. Using this fact, we will derive analogues of the assertions made above for the “local” case.

Lemma 4.1

For each \(N\in {\mathbb {N}}\),

$$\begin{aligned} \Lambda ^N_{s}=\Lambda ^1_{s}. \end{aligned}$$

Proof

Suppose \(\textbf{u}\in W^{s,p}_0(\Omega ;{\mathbb {R}}^N)\) is a minimizer for \(\Lambda ^N_{s}\) and set \(\omega =|\textbf{u}| \in W^{s,p}_0(\Omega )\). By the Cauchy–Schwarz inequality,

$$\begin{aligned} |\omega (x)-\omega (y)|^{2}= & {} |\textbf{u}(x)|^2+|\textbf{u}(y)|^2-2|\textbf{u}(x)||\textbf{u}(y)|\nonumber \\\le & {} |\textbf{u}(x)|^2+|\textbf{u}(y)|^2-2\textbf{u}(x)\cdot \textbf{u}(y)\nonumber \\= & {} |\textbf{u}(x)-\textbf{u}(y)|^2. \end{aligned}$$
(4.1)
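
Inequality (4.1) is just the reverse triangle inequality \(\big ||\textbf{u}(x)|-|\textbf{u}(y)|\big |\le |\textbf{u}(x)-\textbf{u}(y)|\); a quick randomized check (illustrative only):

```python
import random

# (4.1): | |u(x)| - |u(y)| | <= |u(x) - u(y)| for vectors in R^N
random.seed(1)
for _ in range(1000):
    ux = [random.uniform(-1, 1) for _ in range(5)]   # sample u(x) in R^5
    uy = [random.uniform(-1, 1) for _ in range(5)]   # sample u(y)
    wx = sum(t * t for t in ux) ** 0.5               # omega(x) = |u(x)|
    wy = sum(t * t for t in uy) ** 0.5               # omega(y) = |u(y)|
    gap = sum((s - t) ** 2 for s, t in zip(ux, uy)) ** 0.5
    assert abs(wx - wy) <= gap + 1e-12
```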

As a result,

$$\begin{aligned} \Lambda ^1_{s}\le \frac{\displaystyle \int _{{\mathbb {R}}^n}\!\!\int _{{\mathbb {R}}^n}\frac{|\omega (x)-\omega (y)|^p}{|x-y|^{n+sp}}dxdy}{\displaystyle \int _{\Omega }|\omega |^pdx}\le \frac{\displaystyle \int _{{\mathbb {R}}^n}\!\!\int _{{\mathbb {R}}^n}\frac{|\textbf{u}(x)-\textbf{u}(y)|^p}{|x-y|^{n+sp}}dxdy}{\displaystyle \int _{\Omega }|\textbf{u}|^pdx}=\Lambda ^N_{s}. \end{aligned}$$

Alternatively, suppose u is an extremal when \(N=1\) and set \(\textbf{v} =\textbf{c} u\) for some \(\textbf{c}\in {\mathbb {R}}^N\) with \(\textbf{c}\ne 0\). Then

$$\begin{aligned} \Lambda ^N_{s} \le \frac{\displaystyle \int _{{\mathbb {R}}^n}\!\!\int _{{\mathbb {R}}^n}\!\!\frac{|\textbf{v}(x)-\textbf{v}(y)|^p}{|x-y|^{n+sp}}dxdy}{\displaystyle \int _{\Omega }|\textbf{v}|^pdx}= \frac{\displaystyle |\textbf{c}|^p\!\!\int _{{\mathbb {R}}^n}\!\!\int _{{\mathbb {R}}^n}\!\!\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}dxdy}{|\textbf{c}|^p\displaystyle \int _{\Omega }|u|^pdx}=\Lambda ^1_{s}. \end{aligned}$$

\(\square \)
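
The two halves of the proof can be mirrored on a discrete stand-in for the quotient (a sketch of our own, replacing the double integral by sums over finitely many sample points on a line): pointwise \(\big ||\textbf{u}(x)|-|\textbf{u}(y)|\big |\le |\textbf{u}(x)-\textbf{u}(y)|\) and equal denominators force the scalar quotient of \(|\textbf{u}|\) to stay below the vector quotient of \(\textbf{u}\):

```python
import random

random.seed(2)
m, N = 30, 3                                   # sample points and vector dimension
p, s = 3.0, 0.5
xs = [(i + 1) / (m + 1) for i in range(m)]     # distinct points in (0,1), n = 1
u = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(m)]
w = [sum(t * t for t in v) ** 0.5 for v in u]  # w = |u| pointwise

def kern(i, j):
    return abs(xs[i] - xs[j]) ** (1 + s * p)   # |x - y|^{n + sp} with n = 1

def J_vec(v):
    """Discrete stand-in for the vector quotient: double sums over sample pairs."""
    num = sum(sum((v[i][k] - v[j][k]) ** 2 for k in range(N)) ** (p / 2) / kern(i, j)
              for i in range(m) for j in range(m) if i != j)
    den = sum(sum(t * t for t in v[i]) ** (p / 2) for i in range(m))
    return num / den

def J_sc(v):
    """Same quotient for scalar samples."""
    num = sum(abs(v[i] - v[j]) ** p / kern(i, j)
              for i in range(m) for j in range(m) if i != j)
    den = sum(abs(t) ** p for t in v)
    return num / den

# pointwise |w_i - w_j| <= |u_i - u_j| and equal denominators give J_sc(w) <= J_vec(u)
assert J_sc(w) <= J_vec(u) + 1e-12
```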

Theorem 4.2

Suppose \(\textbf{u}\in W^{s,p}_0(\Omega ;{\mathbb {R}}^N)\) is a vector minimizer for \(\Lambda _s^N\). Then there is \(\textbf{c}\in {\mathbb {R}}^N{\setminus }\{0\}\) and a scalar minimizer \(\omega \in W^{s,p}_0(\Omega ;{\mathbb {R}})\) for \(\Lambda _s^1\) such that \(\textbf{u}=\textbf{c}\omega \).

Proof

Suppose \(\textbf{u}=(u_1,\dots ,u_N)\). An inspection of our proof of Lemma 4.1 shows that \(|\textbf{u}|\in W^{s,p}_0(\Omega ;{\mathbb {R}})\) is a scalar minimizer and

$$\begin{aligned} \textbf{u}(x)\cdot \textbf{u}(y)=|\textbf{u}(x)|| \textbf{u}(y)| \end{aligned}$$

for almost every \(x,y\in {\mathbb {R}}^n\). By Lagrange’s identity (2.1), it must be that

$$\begin{aligned} u_i(x)u_j(y)=u_j(x)u_i(y) \end{aligned}$$

for all \(i,j=1,\dots , N\) and almost every \(x,y\in {\mathbb {R}}^n\). Since \(\textbf{u}\) does not vanish identically, there is some \(i=1,\dots , N\) and \(y\in {\mathbb {R}}^n\) for which \(u_i(y)\ne 0\). Set \(\omega =u_i\) and note that

$$\begin{aligned} u_j(x)=\frac{u_j(y)}{u_i(y)}u_i(x)=c_j\omega (x) \end{aligned}$$

for almost every \(x\in \Omega \) and \(j=1,\dots , N\). That is, \(\textbf{u} =\textbf{c} \omega \) with \(\textbf{c}=(c_1,\dots ,c_N)\). As \(\textbf{u}\) is a vector minimizer, \(\textbf{c}\ne 0\) and \(\omega \) is a scalar minimizer. \(\square \)

Just for the record, and to answer a question raised by the referee, we point out that in the case of the fractional p-Laplacian our result does not require \(\Omega \) to be connected. Moreover (cf. [2, p. 329f]), in contrast to the local case \(s=1\), scalar first eigenfunctions are everywhere positive and unique modulo constant factors even on disconnected domains.