1 Introduction

We consider affinely adjustable robust (AAR) linear complementarity problems (LCPs). The classic, i.e., deterministic, LCP is defined as follows. Given a matrix \(M \in \mathbb {R}^{n\times n}\) and a vector \(q \in \mathbb {R}^n\), the LCP(q, M) is the problem of finding a vector \(z \in \mathbb {R}^n\) that satisfies the conditions

$$\begin{aligned} z\ge 0, \quad Mz+q\ge 0, \quad z^\top (Mz+q)=0 \end{aligned}$$
(1)

or to show that no such vector exists. In the following, we use the standard \(\perp \)-notation and abbreviate (1) as

$$\begin{aligned} 0 \le z \perp Mz + q \ge 0. \end{aligned}$$
(2)
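In computational terms, checking a candidate vector against (1) amounts to evaluating three residuals. A minimal Python/numpy sketch (the instance data below is invented for illustration and is not part of the paper):

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check z >= 0, Mz + q >= 0, and z^T (Mz + q) = 0 up to a tolerance."""
    w = M @ z + q
    return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(z @ w) <= tol)

# invented 2x2 instance: z = (1/2, 0) yields w = (0, 3/2), so z^T w = 0
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
```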

LCPs are important both in applications and in mathematical theory itself. For instance, they are used to model market equilibrium problems in many applied studies of gas or electricity markets [11] but also play an important role in mathematical optimization, game theory, and general matrix theory. We refer the interested reader to the seminal book [10] for an overview.

Although there is a very strong connection between LCPs and mathematical optimization and although the latter has been studied extensively under data uncertainty in recent decades, the field of LCPs under uncertainty is still in its infancy. Stochastic approaches can be found in [7,8,9, 15] and are mainly based on minimizing the expected residual gap of the uncertain LCP. On the other hand, robust approaches for uncertain LCPs have been considered recently as well. The first rigorous analysis of robust LCPs can be found in [19, 20], where the authors apply the concept of strict robustness [18] to LCPs, which was later used in [16] in the context of Cournot–Bertrand equilibria in power networks. Moreover, in [13, 14], LCPs have been studied using \(\Gamma \)-robustness as introduced in [3, 4, 17]; see [6, 12] for some applications in power markets.

The most recent paper on robust LCPs is, to the best of our knowledge, [5], where robust LCPs are studied using the concept of adjustable robustness [2, 21]. In [5], the authors study adjustable robust LCPs in the simplest setting, namely for affine decision rules and box uncertainty sets. In this short note, we stay with affine decision rules but generalize the results to general convex and compact uncertainty sets \(\mathcal {U}\). In this context, our contribution is twofold. First, we characterize AAR solutions of robust LCPs and, second, we use this characterization to prove that the AAR LCP with a polyhedral uncertainty set is equivalent to a properly chosen mixed-integer linear problem (MILP).

Let us finally note that our study is related to [1], where the authors consider multi-parametric LCPs for sufficient matrices M. However, our robust approach as well as the studied relation to MILPs differ from the concepts and results of [1].

We introduce the problem under consideration in Sect. 2 and derive our main results in Sect. 3. Afterward, we comment on some special cases and extensions in Sect. 4.

2 Problem statement

We now define the adjustable robust LCP with affine decision rules. To this end, let \(M \in \mathbb {R}^{n \times n}\) and \(q \in \mathbb {R}^n\) be as before and let \(T \in \mathbb {R}^{n \times k}\) be given. We assume that q is perturbed by Tu with \(u\in \mathcal {U}\). In what follows, we assume that \(\mathcal {U}\subset \mathbb {R}^k\) is a convex and compact uncertainty set that, w.l.o.g., contains 0 in its relative interior, i.e., \(0 \in \text {relint}(\mathcal {U})\). Then, the affinely adjustable robust LCP(\(q, M, T, \mathcal {U}\)) consists of finding an affine decision rule, i.e., we want to determine \(D \in \mathbb {R}^{n \times k}\) and \(r \in \mathbb {R}^n\) such that \(z(u) = Du + r\) satisfies

$$\begin{aligned} 0 \le z(u) \perp Mz(u) + q(u) \ge 0 \quad \text { for all }u \in \mathcal {U}. \end{aligned}$$
(3)

Equivalently, we can state the problem more explicitly as

$$\begin{aligned} 0 \le Du + r \perp MDu + Mr + q + Tu \ge 0 \quad \text { for all }u \in \mathcal {U}. \end{aligned}$$
(4)

Without loss of generality, we may assume that \(T \in \mathbb {R}^{n \times k}\) has full column rank; see [1]. In many applications, some variables are non-adjustable and thus have to be fixed before the uncertainty is realized. To model these so-called here-and-now variables, we simply require that the first h rows of D are zero for some \(h<n\). For more details, we refer to [5].

We close this section by briefly introducing the following notation. Let \(A \in \mathbb {R}^{m\times n}\), \(b \in \mathbb {R}^m\), and index sets \(I\subseteq [m]\mathrel {{\mathop :}{=}}\{1, \dotsc , m\}\) as well as \(J \subseteq [n]\) be given. Then, \(A_{I,J} \in \mathbb {R}^{|I| \times |J|}\) denotes the submatrix of A consisting of the rows indexed by I and the columns indexed by J. Moreover, \(b_I\) denotes the subvector with components specified by entries in I. If \(I = J\), we also write \(A_I\) instead of \(A_{I,I}\).
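In numerical experiments, this submatrix notation maps directly onto numpy's fancy indexing; a small sketch (note the 0-based indices, and the data here is invented):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)   # A in R^{3 x 4}
b = np.array([10, 20, 30])
I, J = [0, 2], [1, 3]             # index sets (0-based, unlike the paper's [m])
A_IJ = A[np.ix_(I, J)]            # submatrix A_{I,J} in R^{|I| x |J|}
b_I = b[I]                        # subvector b_I
```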

3 Main results

In this section, we state and prove our two main results. The first one is a full characterization of AAR solutions of robust LCPs.

Theorem 1

Assume that \(\mathcal {U}\) is convex and compact with \(0\in \text {relint}(\mathcal {U})\) and let \(\mathcal {B} = \{v^1, \dotsc , v^\ell \}\), \(\ell \in \mathbb {N}\) with \(\ell \le k\), be a basis of the linear hull \(\text {lin}(\mathcal {U})\) of \(\mathcal {U}\). Moreover, let \(z(u) = Du + r\) such that \(z(u) \ge 0\) as well as \(Mz(u) + q + Tu\ge 0\) holds for all \(u \in \mathcal {U}\) and define \(I \mathrel {{\mathop :}{=}}\{i \in [n]:r_i > 0\}\). Then, \(z(u)=Du+r\) is an AAR solution if and only if D and r satisfy the conditions

$$\begin{aligned} M_{I,\varvec{\cdot }}r + q_I&= 0, \end{aligned}$$
(5)
$$\begin{aligned} (M_{I,\varvec{\cdot }}D + T_{I,\varvec{\cdot }})v^j&= 0, \quad j \in [\ell ]. \end{aligned}$$
(6)

Proof

First, let \(z(u) = Du + r\) be an AAR solution. Then, r is a nominal solution (as \(0 \in \mathcal {U}\)) and therefore satisfies \(M_{I,\varvec{\cdot }}r + q_I = 0\), i.e., (5) is fulfilled. For every \(v^j\), \(j \in [\ell ]\), there exists a scalar \(\delta _j > 0\) such that \(\delta _jv^j \in \mathcal {U}\) holds (due to \(0 \in \text {relint}(\mathcal {U})\)) as well as \(\delta _j D_{I,\varvec{\cdot }} v^j + r_I > 0\) (due to \(r_I > 0\)). Thus, for every \(j \in [\ell ]\) the AAR solution z satisfies

$$\begin{aligned} 0 = M_{I,\varvec{\cdot }} z(\delta _jv^j) + q_I + \delta _jT_{I,\varvec{\cdot }}v^j = \delta _jM_{I,\varvec{\cdot }}Dv^j + \delta _jT_{I,\varvec{\cdot }}v^j = \delta _j(M_{I,\varvec{\cdot }}D+T_{I,\varvec{\cdot }})v^j, \end{aligned}$$

where we used (5) for the second equality. Thus, z satisfies (6).

Let now D and r satisfy (5) and (6). From \(0 \in \text {relint}(\mathcal {U})\), it follows that for all \(u \in \mathcal {U}\) there exists an \(\varepsilon > 0\) such that \(-\varepsilon u \in \mathcal {U}\). Hence, nonnegativity of \(z(u)=Du+r\) yields

$$\begin{aligned} \left\{ i \in [n]:\exists u \in \mathcal {U}: D_{i,\varvec{\cdot }}u + r_i > 0\right\} \subseteq I. \end{aligned}$$

Thus, for \(\bar{I} = [n] \setminus I\),

$$\begin{aligned} z_{\bar{I}}(u)^\top (M_{\bar{I},\varvec{\cdot }} z(u) + q_{\bar{I}} + T_{\bar{I},\varvec{\cdot }}u) = 0 \end{aligned}$$

holds for all \(u \in \mathcal {U}\). On the other hand, every \(u \in \mathcal {U}\) can be written as a linear combination \(u = \sum _{j=1}^\ell \lambda _j v^j\) with \(\lambda _j \in \mathbb {R}\). Hence,

$$\begin{aligned} M_{I,\varvec{\cdot }}(Du+r)+q_I+T_{I,\varvec{\cdot }}u = M_{I,\varvec{\cdot }}Du + T_{I,\varvec{\cdot }} u = (M_{I,\varvec{\cdot }}D+T_{I,\varvec{\cdot }}) \left( \sum _{j=1}^\ell \lambda _j v^j\right) = 0 \end{aligned}$$

holds, where we used (5) for the first and (6) for the last equality. Therefore, \(z(u)=Du+r\) fulfills complementarity and is an AAR solution due to the additional assumptions of the theorem. \(\square \)

The last theorem states a rather abstract characterization of AAR solutions, and working with this characterization might be difficult for arbitrary convex and compact uncertainty sets. In more specific cases, however, it can be put to practical use. This is what we do in our second main result, in which we use the characterization of the last theorem to show that for polyhedral uncertainty sets, affinely adjustable robust solutions are exactly the solutions of a properly chosen MILP.

Theorem 2

Let \(\mathcal {U}= \{u \in \mathbb {R}^k:\Theta u \ge \zeta \}\) with \(\Theta \in \mathbb {R}^{g \times k}\) and \(\zeta \in \mathbb {R}^g\) and let \(\mathcal {B} = \{v^1, \dotsc , v^\ell \}\) be a basis of \(\text {lin}(\mathcal {U})\). Furthermore, let \(b \in \mathbb {R}\) be sufficiently large and consider the mixed-integer linear feasibility problem

$$\begin{aligned} \text {Find} \quad&x\in \{0,1\}^n,~D\in \mathbb {R}^{n\times k}, ~r\in \mathbb {R}_{\ge 0}^n,~A,C\in \mathbb {R}_{\ge 0}^{g\times n} \end{aligned}$$
(7a)
$$\begin{aligned} \text {s.t.}\quad&r_i\le bx_i,&i\in [n], \end{aligned}$$
(7b)
$$\begin{aligned}&b(1-x_i)\ge M_{i,\varvec{\cdot }}r+q_i\ge 0,&i\in [n], \end{aligned}$$
(7c)
$$\begin{aligned}&b(1-x_i)\ge (M_{i,\varvec{\cdot }}D+T_{i,\varvec{\cdot }})v^j\ge -b(1-x_i),&i \in [n],\, j\in [\ell ], \end{aligned}$$
(7d)
$$\begin{aligned}&\zeta ^\top A_{\varvec{\cdot },i} + r_i \ge 0,&i\in [n], \end{aligned}$$
(7e)
$$\begin{aligned}&\Theta ^\top A_{\varvec{\cdot },i} = D_{i,\varvec{\cdot }}^\top ,&i\in [n], \end{aligned}$$
(7f)
$$\begin{aligned}&\zeta ^\top C_{\varvec{\cdot },i} + M_{i,\varvec{\cdot }}r +q_i \ge 0,&i\in [n], \end{aligned}$$
(7g)
$$\begin{aligned}&\Theta ^\top C_{\varvec{\cdot },i} = (M_{i,\varvec{\cdot }}D+T_{i,\varvec{\cdot }})^\top ,&i\in [n], \end{aligned}$$
(7h)
$$\begin{aligned}&D_{[h],\varvec{\cdot }}=0. \end{aligned}$$
(7i)

If (7) is feasible, every feasible point yields an AAR solution of the form \(z(u) = Du + r\) of (4). If it is infeasible, no AAR solution exists.

Proof

We show that \(z(u)=Du+r\) is an AAR solution if and only if there exist x, A, and C such that x, D, r, A, C solve (7).

We start by proving complementarity of the solutions. Let \(z(u)=Du+r\) be an AAR solution. We define \(I \mathrel {{\mathop :}{=}}\{i \in [n]:r_i > 0\}\) and set \(x_i=1\) for all \(i\in I\) and \(x_i=0\) for all \(i\in [n]\setminus I\). Then, Theorem 1 implies that x, D, r satisfy the constraints (7b)–(7d) for sufficiently large b. On the other hand, if x, D, r, A, C satisfy the conditions (7b)–(7d), then \(r_i>0\) implies \(x_i=1\) and thus D and r fulfill the conditions (5) and (6) of Theorem 1.

It remains to consider the nonnegativity constraints of (4). First, we prove that the solution is nonnegative, i.e., \(Du+r\ge 0\) for all \(u\in \mathcal {U}\), if and only if there exists a matrix A such that D, r, A satisfy (7e) and (7f). For all \(i\in [n]\), we observe that \(D_{i,\varvec{\cdot }}u+r_i\ge 0\) holds for all \(u\in \mathcal {U}\) if and only if \(\min _{u\in \mathcal {U}} \{D_{i,\varvec{\cdot }}u+r_i\}\ge 0\). We now employ LP duality and obtain that this is equivalent to the statement that there exists a vector \(a\in \mathbb {R}^g_{\ge 0}\) such that \(\zeta ^\top a + r_i \ge 0\) and \(\Theta ^\top a = D_{i,\varvec{\cdot }}^\top \). The matrix \(A\in \mathbb {R}_{\ge 0}^{g\times n}\) then contains the vectors a as columns.

Next, we show that \(MDu+Mr+q+Tu\ge 0\) holds for all \(u\in \mathcal {U}\) if and only if there exists a matrix C such that D, r, C satisfy (7g) and (7h). This step is analogous to the previous one: for every \(i\in [n]\), \(M_{i,\varvec{\cdot }}Du+M_{i,\varvec{\cdot }}r+q_i+T_{i,\varvec{\cdot }}u\ge 0\) for all \(u\in \mathcal {U}\) is equivalent to \(\min _{u\in \mathcal {U}} \{M_{i,\varvec{\cdot }}Du + M_{i,\varvec{\cdot }}r+q_i+T_{i,\varvec{\cdot }}u\} \ge 0\). Again, this holds if and only if there exists a vector \(c\in \mathbb {R}^g_{\ge 0}\) such that \(\zeta ^\top c + M_{i,\varvec{\cdot }}r +q_i \ge 0\) and \(\Theta ^\top c = (M_{i,\varvec{\cdot }}D+T_{i,\varvec{\cdot }})^\top \). The matrix \(C\in \mathbb {R}_{\ge 0}^{g\times n}\) then contains the vectors c as columns.

Finally, the remaining constraint (7i) enforces that the first h variables are non-adjustable. \(\square \)
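For a fixed binary vector x, (7) reduces to a linear feasibility problem, so tiny instances can be handled by enumerating x and calling an LP solver instead of a full MILP solver. The following sketch does this with scipy's linprog on the data of Example 1 below; the big-M value b = 10, the polyhedral encoding of \(\mathcal {U}\), and the choice \(h = 0\) (no here-and-now variables, so (7i) is vacuous) are our own assumptions for this instance.

```python
import numpy as np
from scipy.optimize import linprog

# Data of Example 1 (n = k = 2); T = I since the perturbation enters q directly.
M = np.array([[1.0, -1.0], [1.0, -1.0]])
q = np.array([-1.0, -1.0])
T = np.eye(2)
# U = {u : u1 = u2, -2 <= u1 <= 2} written as Theta u >= zeta (our encoding):
Theta = np.array([[1.0, -1.0], [-1.0, 1.0], [1.0, 0.0], [-1.0, 0.0]])
zeta = np.array([0.0, 0.0, -2.0, -2.0])
V = np.array([[1.0], [1.0]])  # basis of lin(U): the single vector v^1 = (1, 1)

n, k = M.shape[0], T.shape[1]
g, ell = Theta.shape[0], V.shape[1]
b = 10.0  # big-M constant, assumed sufficiently large for this instance

# Variable layout: D (n*k entries, free), r (n, >= 0), A and C (g*n each, >= 0).
nD, nr = n * k, n
nvar = nD + nr + 2 * g * n
def idx_D(i, c): return i * k + c
def idx_r(i): return nD + i
def idx_A(p, i): return nD + nr + i * g + p          # entry A_{p,i}
def idx_C(p, i): return nD + nr + g * n + i * g + p  # entry C_{p,i}

def feasible_for(x):
    """Solve the LP obtained from (7) by fixing the binaries x; None if infeasible."""
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i in range(n):
        row = np.zeros(nvar); row[idx_r(i)] = 1.0
        A_ub.append(row); b_ub.append(b * x[i])                  # (7b)
        row = np.zeros(nvar); row[idx_r(0):idx_r(0) + n] = -M[i]
        A_ub.append(row); b_ub.append(q[i])                      # (7c), lower part
        A_ub.append(-row); b_ub.append(b * (1 - x[i]) - q[i])    # (7c), upper part
        for j in range(ell):                                     # (7d), both parts
            row = np.zeros(nvar)
            for a in range(n):
                for c in range(k):
                    row[idx_D(a, c)] = M[i, a] * V[c, j]
            shift = float(T[i] @ V[:, j])
            A_ub.append(row); b_ub.append(b * (1 - x[i]) - shift)
            A_ub.append(-row); b_ub.append(b * (1 - x[i]) + shift)
        row = np.zeros(nvar)                                     # (7e)
        for p in range(g): row[idx_A(p, i)] = -zeta[p]
        row[idx_r(i)] = -1.0
        A_ub.append(row); b_ub.append(0.0)
        for c in range(k):                                       # (7f)
            row = np.zeros(nvar)
            for p in range(g): row[idx_A(p, i)] = Theta[p, c]
            row[idx_D(i, c)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
        row = np.zeros(nvar)                                     # (7g)
        for p in range(g): row[idx_C(p, i)] = -zeta[p]
        row[idx_r(0):idx_r(0) + n] -= M[i]
        A_ub.append(row); b_ub.append(q[i])
        for c in range(k):                                       # (7h)
            row = np.zeros(nvar)
            for p in range(g): row[idx_C(p, i)] = Theta[p, c]
            for a in range(n): row[idx_D(a, c)] -= M[i, a]
            A_eq.append(row); b_eq.append(T[i, c])
    bounds = [(None, None)] * nD + [(0, None)] * (nvar - nD)
    res = linprog(np.zeros(nvar), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    return res if res.status == 0 else None

# enumerate the binaries by hand (fine for n = 2)
solutions = {x: feasible_for(x) for x in [(1, 1), (1, 0), (0, 1), (0, 0)]}
```

For this instance, the pattern \(x = (1, 1)\) corresponds to the AAR solution given in Example 1, while \(x = (0, 0)\) forces \(r = 0\) and violates (7c).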

Remark 1

The linear hull \(\text {lin}(\mathcal {U})\) of the uncertainty set \(\mathcal {U}\) can be computed in polynomial time if \(\mathcal {U}\) is a polyhedron, i.e., if \(\mathcal {U}= \{u\in \mathbb {R}^k:\Theta u \ge \zeta \}\) as in Theorem 2. To this end, we maximize once in every direction \(\Theta _{j,\varvec{\cdot }}\), \(j\in [g]\), and check whether the optimal value is larger than \(\zeta _j\). If it is equal to \(\zeta _j\), we know \(\zeta _j=0\) due to \(0\in \text {relint}(\mathcal {U})\) and the inequality constraint can be replaced by an equality constraint. We obtain the representation \(\mathcal {U}= \{u\in \mathbb {R}^k:\Phi u=0, \ \Theta ' u \ge \zeta '\}\) with \(\Phi \in \mathbb {R}^{(g-f)\times k}\), \(f\le g\), and \(\Theta '\in \mathbb {R}^{f\times k}\). A basis of \(\text {lin}(\mathcal {U})\) is then given by a basis of \(\ker (\Phi )\).
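This procedure can be sketched in a few lines of Python, detecting the implicit equality constraints with one LP per row of \(\Theta \) and then taking a null-space basis; the tolerance and the test data (one polyhedral encoding of the set \(\mathcal {U}\) from Example 1 below) are our own choices.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import null_space

def lin_hull_basis(Theta, zeta, tol=1e-9):
    """Basis of lin(U) for U = {u : Theta u >= zeta} with 0 in relint(U)."""
    g, k = Theta.shape
    implicit = []  # rows whose inequality holds with equality on all of U
    for j in range(g):
        # maximize Theta_j u over U, i.e., minimize -Theta_j u s.t. -Theta u <= -zeta
        res = linprog(-Theta[j], A_ub=-Theta, b_ub=-zeta, bounds=[(None, None)] * k)
        if res.status == 0 and -res.fun <= zeta[j] + tol:
            implicit.append(j)  # optimal value equals zeta_j (= 0, as argued above)
    if not implicit:
        return np.eye(k)  # no implicit equalities: lin(U) is all of R^k
    return null_space(Theta[implicit])  # columns form a basis of ker(Phi)

# one encoding of U = {u : u1 = u2, -2 <= u1 <= 2} from Example 1 below
Theta = np.array([[1.0, -1.0], [-1.0, 1.0], [1.0, 0.0], [-1.0, 0.0]])
zeta = np.array([0.0, 0.0, -2.0, -2.0])
B = lin_hull_basis(Theta, zeta)  # one column, proportional to (1, 1)
```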

Let us also comment on a difference from the setting considered in [5]. There, the submatrix \(M_I\) has to be invertible for an AAR solution to exist if all entries of q are uncertain; cf. Theorem 4.5 in [5]. This is not the case in our setting, as the following example shows.

Example 1

Consider the uncertain LCP given by

$$\begin{aligned} M = \begin{bmatrix} 1&{}-1\\ 1&{}-1 \end{bmatrix}, \quad q(u) = \begin{pmatrix} -1\\ -1 \end{pmatrix} + \begin{pmatrix} u_1\\ u_2 \end{pmatrix}, \quad \mathcal {U}= \left\{ (u_1, u_2):-2 \le u_1 =u_2 \le 2\right\} . \end{aligned}$$

Then,

$$\begin{aligned} D = \begin{bmatrix} -1&{}0\\ 0&{}0 \end{bmatrix}, \quad r = \begin{pmatrix} 2\\ 1 \end{pmatrix} \end{aligned}$$

is an AAR solution with \(I = \{1,2\}\), but the submatrix \(M_I = M\) is not invertible.
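This claim can be double-checked numerically against the conditions of Theorem 1; a short numpy sketch (0-based indices, with T = I since the perturbation enters q directly):

```python
import numpy as np

M = np.array([[1.0, -1.0], [1.0, -1.0]])
q = np.array([-1.0, -1.0])
T = np.eye(2)
D = np.array([[-1.0, 0.0], [0.0, 0.0]])
r = np.array([2.0, 1.0])
v = np.array([1.0, 1.0])  # basis vector of lin(U) = {u : u1 = u2}

I = np.where(r > 0)[0]    # the set I = {1, 2} of the example (0-based: {0, 1})
cond5 = np.allclose(M[I] @ r + q[I], 0)        # condition (5)
cond6 = np.allclose((M[I] @ D + T[I]) @ v, 0)  # condition (6)

# sanity check of (4) on a sample of U: z(u) = (2 - u1, 1), M z(u) + q + T u = 0
ok = all(
    np.all(D @ u + r >= 0) and np.all(M @ (D @ u + r) + q + T @ u >= -1e-12)
    for u in (np.array([t, t]) for t in np.linspace(-2, 2, 9))
)
```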

Finally, note that if T is the identity matrix and \(\mathcal {U}\) is a box, the MILP (7) is equivalent to the MILP in Theorem 4.7 in [5].

4 Remarks and extensions

In this section, we comment on a special case, namely the one in which M is positive semidefinite, and several possible extensions.

4.1 Positive semidefinite M

We first consider the case that the matrix M is positive semidefinite. In the following, we show that in this setting an AAR solution can be found in polynomial time. The same result was shown for box uncertainties in [5] with similar arguments. For positive semidefinite M, Theorem 3.1.7 (a) in [10] states that

$$\begin{aligned} y^\top (Mz+q) = z^\top (My+q) = 0 \end{aligned}$$
(8)

holds for any \(y,z\in \text {SOL}(q,M)\), where \(\text {SOL}(q,M)\) denotes the set of solutions of the LCP(q, M). Let

$$\begin{aligned} \mathcal {P}= \left\{ i\in [n]:\exists z\in \text {SOL}(q,M) \text { with } z_i>0\right\} . \end{aligned}$$

Due to (8), every nominal solution \({r\in \text {SOL}(q,M)}\) satisfies \({M_{\mathcal {P},\varvec{\cdot }}r + q_\mathcal {P}= 0}\). Therefore, every AAR solution has to satisfy

$$\begin{aligned} M_{\mathcal {P},\varvec{\cdot }} Du + T_{\mathcal {P},\varvec{\cdot }}u = 0 \end{aligned}$$

for all \(u\in \mathcal {U}\) as otherwise there would exist a \(u'\in \mathcal {U}\) with \(M_{i,\varvec{\cdot }}(Du'+r)+q_i+T_{i,\varvec{\cdot }}u'<0\) for some \(i\in \mathcal {P}\). Thus, the set I in Theorem 1 can be replaced by \(\mathcal {P}\) and the MILP (7) simplifies to an LP, as the binary variables are no longer needed.

Furthermore, Theorem 3.1.7 (c) in [10] states that \(\text {SOL}(q,M)\) is given by

$$\begin{aligned} \text {SOL}(q,M) = \left\{ z \in \mathbb {R}^n_{\ge 0}:q+Mz\ge 0, \ q^\top (z-\bar{z}) = 0, \ (M+M^\top )(z-\bar{z}) = 0\right\} , \end{aligned}$$

where \(\bar{z}\in \text {SOL}(q,M)\) is an arbitrary solution. Such a solution \(\bar{z}\) can be found by solving a single convex-quadratic optimization problem. With this polyhedral description of \(\text {SOL}(q,M)\), \(\mathcal {P}\) can be obtained by solving n linear programs in which \(z_i\), \(i \in [n]\), is maximized over \(\text {SOL}(q,M)\), and then checking whether the optimal value is strictly positive. This implies that \(\mathcal {P}\) can be computed in polynomial time and, hence, we can find an AAR solution in polynomial time if M is positive semidefinite.
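This step is easy to prototype. The sketch below takes a positive semidefinite M, the vector q, and a known solution zbar, builds the polyhedral description of \(\text {SOL}(q,M)\) given above, and determines \(\mathcal {P}\) with one LP per index; the instance at the end is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def support_set(M, q, zbar, tol=1e-9):
    """P = {i : z_i > 0 for some z in SOL(q, M)}, for positive semidefinite M."""
    n = len(q)
    # SOL(q, M) as a polyhedron built from the known solution zbar:
    A_ub = -np.vstack([np.eye(n), M])        # encodes z >= 0 and q + M z >= 0
    b_ub = np.concatenate([np.zeros(n), q])
    A_eq = np.vstack([q[None, :], M + M.T])  # q^T (z - zbar) = 0, (M + M^T)(z - zbar) = 0
    b_eq = A_eq @ zbar
    P = []
    for i in range(n):
        c = np.zeros(n); c[i] = -1.0         # maximize z_i over SOL(q, M)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] * n)
        if res.status == 3 or (res.status == 0 and -res.fun > tol):
            P.append(i)                      # unbounded or strictly positive maximum
    return P

# invented instance: M = I is positive definite and zbar = (1, 0) solves LCP(q, M)
P = support_set(np.eye(2), np.array([-1.0, 1.0]), np.array([1.0, 0.0]))
```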

4.2 Discrete uncertainty sets

Next, we briefly discuss discrete uncertainty sets. In the following example, there exists a solution for every uncertainty realization in the discrete set, whereas no solution exists for some realizations in the convex hull of the uncertainty set.

Example 2

Consider the LCP given by

$$\begin{aligned} M= \begin{bmatrix} 0&{}0\\ 1&{}0 \end{bmatrix}, \quad q(u)= \begin{pmatrix} 1+u\\ u \end{pmatrix} \end{aligned}$$

and \(\mathcal {U}= \{\pm 1\}\). Then, for \(u=1\), \(z=(0,0)\) is a solution and for \(u=-1\), \(z=(1,0)\) is a solution. If \(\mathcal {U}' = \text {conv}(\mathcal {U})\), there is, however, no solution for \(u= - 1/2\).
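The solvability of the individual realizations can be confirmed by brute force over the active sets, which is viable for such tiny instances. A sketch (the least-squares step can in principle miss solutions of singular subsystems, so this is a heuristic check, not a general-purpose LCP solver):

```python
import numpy as np
from itertools import product

def lcp_solvable(M, q, tol=1e-9):
    """Brute-force solvability check via active sets (only sensible for tiny n)."""
    n = len(q)
    for active in product([False, True], repeat=n):
        # z_i = 0 for inactive i; (M z + q)_i = 0 for active i
        z = np.zeros(n)
        act = [i for i in range(n) if active[i]]
        if act:
            sub = M[np.ix_(act, act)]
            sol, *_ = np.linalg.lstsq(sub, -q[act], rcond=None)
            if not np.allclose(sub @ sol, -q[act], atol=1e-9):
                continue  # subsystem inconsistent for this active set
            z[act] = sol
        if np.all(z >= -tol) and np.all(M @ z + q >= -tol):
            return True
    return False

M = np.array([[0.0, 0.0], [1.0, 0.0]])  # the matrix of Example 2
```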

This example is in contrast to results for classic robust linear optimization, where one can always replace the uncertainty set with its convex hull. The reason for this behavior can be explained with classic LCP theory. In the literature, the cone of vectors q for which the LCP(q, M) with a given matrix M has a solution is usually denoted by K(M), i.e.,

$$\begin{aligned} K(M) = \left\{ q\in \mathbb {R}^n:\text {SOL}(q,M) \ne \emptyset \right\} . \end{aligned}$$

In general, K(M) is not convex, and hence the convex hull of some points that lie in K(M) is not necessarily contained in K(M). However, K(M) is convex if and only if M is a so-called \(Q_0\)-matrix, cf. Proposition 3.2.1 in [10], and we obtain the following result.

Corollary 1

Suppose that M is a \(Q_0\)-matrix. Then, the uncertain LCP has a solution for all \(u\in \text {conv}(\mathcal {U})\) if it has a solution for all \(u\in \mathcal {U}\).

4.3 Decision-dependent uncertainty sets

The MILP (7) can be extended to cover simple decision-dependent uncertainty sets. To this end, consider the uncertainty set

$$\begin{aligned} \mathcal {U}(r)=\{u\in \mathbb {R}^k:\Theta u\ge \zeta + \Psi r\}, \quad \Psi \in \mathbb {R}^{g\times n}, \end{aligned}$$

that depends on the chosen nominal solution r. If the deviation caused by \(\Psi r\) is not too large, the linear hull may not change. In these cases, we only have to replace the constraints (7e) and (7g) by their respective quadratic versions that include the terms \((\Psi r)^\top A_{\varvec{\cdot }, i}\) and \((\Psi r)^\top C_{\varvec{\cdot }, i}\), respectively. We leave the detailed study of such situations for future work.

4.4 Mixed LCPs

Finally, we discuss so-called mixed LCPs. These problems consist in finding \(z\in \mathbb {R}^n\) and \(y\in \mathbb {R}^m\) such that

$$\begin{aligned} Vz+Wy+p&= 0, \end{aligned}$$
(9a)
$$\begin{aligned} Mz+Ny+q&\ge 0, \end{aligned}$$
(9b)
$$\begin{aligned} z&\ge 0, \end{aligned}$$
(9c)
$$\begin{aligned} z^\top (Mz+Ny+q)&= 0 \end{aligned}$$
(9d)

with \(M\in \mathbb {R}^{n\times n}\), \(N\in \mathbb {R}^{n\times m}\), \(q\in \mathbb {R}^n\), \(V\in \mathbb {R}^{m\times n}\), \(W\in \mathbb {R}^{m\times m}\), \(p\in \mathbb {R}^m\). We refer to [10] for some source problems.
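Analogously to the classic case, a candidate pair (z, y) can be checked against (9) numerically; a minimal sketch with an invented one-dimensional instance:

```python
import numpy as np

def is_mixed_lcp_solution(V, W, p, M, N, q, z, y, tol=1e-9):
    """Check the conditions (9a)-(9d) for a candidate pair (z, y)."""
    w = M @ z + N @ y + q
    return bool(np.allclose(V @ z + W @ y + p, 0, atol=tol)  # (9a)
                and np.all(w >= -tol)                        # (9b)
                and np.all(z >= -tol)                        # (9c)
                and abs(z @ w) <= tol)                       # (9d)

# invented instance (n = m = 1): z + y = 1 and w = z, so (z, y) = (0, 1) solves it
V, W, p = np.array([[1.0]]), np.array([[1.0]]), np.array([-1.0])
M, N, q = np.array([[1.0]]), np.array([[0.0]]), np.array([0.0])
```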

We now briefly demonstrate the necessary adaptations of the MILP (7) to compute an AAR solution to an uncertain version of the mixed LCP (9). As before, we assume that q is affected by uncertainty in the form of \(q(u)=q+Tu\), \(u\in \mathcal {U}\), and that z is affinely adjustable, i.e., \(z(u)=Du+r\). In the case of mixed LCPs, several parameters might be uncertain. In the simplest case, the matrices V, W, M, and N are certain, y is non-adjustable, and only \(p(u)=p+Pu\) is uncertain for some given \(P\in \mathbb {R}^{m\times k}\) and \(u\in \mathcal {U}\). In this case, the constraints (7c) and (7g) have to be extended by the term Ny. Moreover, D, r, and y have to satisfy the resulting slightly adapted version of the MILP (7) as well as the additional constraints

$$\begin{aligned} Vr+Wy+p&= 0, \\ (VD+P)v^j&= 0, \quad j \in [\ell ]. \end{aligned}$$

This also includes the special case in which all additional parameters p, V, W, M, and N are certain and y is non-adjustable. In this case, the second of the above constraints reduces to \(VDv^j=0\) for all \(j \in [\ell ]\). In the case of an adjustable y, i.e., \(y(u)=Eu+s\), the MILP has to be adapted in a similar way. Moreover, z and y have to satisfy the additional constraints

$$\begin{aligned} Vr+Ws+p&= 0, \\ (VD+WE+P)v^j&= 0, \quad j \in [\ell ]. \end{aligned}$$

Compared to the classic LCP, we gain additional freedom on the one hand, since more variables can be chosen, while on the other hand there are additional constraints, some of which might be quite restrictive.