1 Introduction

Linear programming (LP) problems model many real-world problems such as production planning, diet, investment, and budget allocation problems. The conventional LP approach assumes that all coefficients are known precisely. In real-world applications, however, this assumption cannot always be guaranteed. Due to measurement restrictions, noise, model errors, imprecise knowledge (Wendell 1984), and human factors such as the just-noticeable difference (JND) (Stern and Johnson 2010), the coefficients are often known only imprecisely, although their possible ranges are frequently available. In such situations, the decision-maker has a great interest in knowing to what extent a solution preserves its optimality against possible fluctuations of the coefficients.

To address such problems, researchers have proposed many methods. A classical one, sensitivity analysis (Bradley et al. 1977), has been a powerful tool for handling the difficulty of obtaining exact coefficients. It uses shadow prices to analyse the maximum perturbation of a single coefficient while maintaining the basis of the obtained optimal solution. Its drawback is evident, however, since it cannot treat the case of multiple simultaneous perturbations. To overcome this problem, Bradley et al. (1977) identified a particular cone for the multiple-perturbation case and named the approach the 100 percent rule. In this paper, we call this cone the optimality assurance cone, since it assures that the optimal basis of the current solution does not change as long as the multiple perturbations do not exceed the range specified by the cone. This is a remarkable result: if the perturbed coefficients fall into the optimality assurance cone, the basis of the optimal solution is unchanged. Nevertheless, some researchers found the optimality assurance cone hard to utilize, because checking whether the perturbed coefficients lie in the cone is itself difficult.

Given a hyper-box showing the possible range of the objective function coefficient vector, Wendell (1985) proposed the tolerance approach, which finds the maximum hyper-box geometrically similar to the given possible range within the optimality assurance cone. Although this approach finds the maximum geometrically similar hyper-box, there may exist a larger hyper-box, including it, which is not geometrically similar to the given possible range. Wondolowski (1991) and Filippi (2005) proposed modified tolerance approaches that find a maximal hyper-box within the optimality assurance cone. Furthermore, for the case where the basic feasible solution is degenerate, Hladík (2011) gave a useful approach based on support set and optimal partition invariancy.

Approaches different from the tolerance approach have also been proposed for similar problem settings. Inuiguchi and Sakawa (1994) introduced the concepts of possible and necessary optimality for linear programming problems with interval and fuzzy objective function coefficients. A necessarily optimal solution is the most reasonable solution because it is optimal for all possible objective function coefficient vectors. On the other hand, a possibly optimal solution is a minimally reasonable solution because it is optimal for at least one possible objective function coefficient vector. As we prefer a robust solution, we focus on necessary optimality. Checking the necessary optimality of a given feasible solution has been considered to require an enormous effort even in the case of interval coefficients, because necessary optimality is confirmed by checking optimality for all combinations of the bounds of the interval coefficients. Moreover, a necessarily optimal solution does not always exist.

In this paper, we overcome the former difficulty of necessary optimality for non-degenerate basic feasible solutions. To overcome the latter difficulty, several methods have been proposed. One approach is to relax optimality to sub-optimality; from this point of view, the minimax regret solution (Inuiguchi and Sakawa 1995, 1998; Mausser and Laguna 1999; Inuiguchi and Tanino 2001; Rivaz and Yaghoobi 2013) and the best-worst achievement rate solution (Inuiguchi and Sakawa 1997; Henriques and Antunes 2009) were proposed. In those approaches, the worst deviation from optimality is minimized; however, the computational complexity of obtaining the solution remains large. Another way to overcome the non-existence of a necessarily optimal solution is to specify the most probable range of objective function coefficients as a small set, so that the fuzzy subset representing the possible range of objective function coefficients has a small core, where the core of a fuzzy subset is the set of elements having full membership, i.e., membership degree 1. A singleton core frequently makes the set of necessarily optimal solutions non-empty. Some useful definitions of fuzzy numbers (Dubois and Prade 1987; Klir and Yuan 1996) and fuzzy vectors (Inuiguchi and Ramík 2000; Inuiguchi et al. 2003) are utilized for representing such fuzzy subsets.

Note that we consider only single-objective LP problems in this paper. As for multi-objective problems, Hladík (2011) and Henriques et al. (2020) have shown useful results for necessarily efficient solutions, the multi-objective optimization counterpart of necessarily optimal solutions. Moreover, Hladík (2012) proved that checking necessary efficiency is co-NP-hard.

This paper is organized as follows. In the next section, we start with a brief review of well-known results on the optimality condition of a basic solution to LP problems. We also describe results about possible and necessary optimality useful for LP problems with uncertain objective function coefficients. The tolerance analysis (Wendell 1985) is applied to the necessary optimality test. In Sect. 3, we extend the proposed approach to the case where the range of the objective coefficient vector is specified by a fuzzy subset. Using possibility and necessity measures, possible and necessary optimalities are extended to the fuzzy coefficient case. Main results about the calculation of the necessary optimality degree are investigated for several types of fuzzy subsets. The usefulness of the obtained results is demonstrated by examples in this section. Finally, concluding remarks are given in Sect. 4.

1.1 Notations

In this paper, we use the following notations:

  • \({\mathbb {R}}^n\) denotes an n-dimensional Euclidean space.

  • \(\varvec{I}\) denotes an identity matrix of appropriate dimension.

  • \(|\cdot |\) denotes the entry-wise absolute value operator.

  • \(\varvec{A}^{-\mathrm {T}}\) denotes the inverse of the transpose of a non-singular matrix \(\varvec{A}\).

  • \(\mathrm {diag}(\varvec{v})\) denotes the diagonal matrix with the diagonal entries being \(\varvec{v}\).

2 Linear programming problems with uncertain objective function coefficients

2.1 General fundamentals

In this paper, we treat the optimality tests for non-degenerate basic feasible (NBF) solutions in an LP problem with fuzzy objective function coefficients. Before starting the analysis, we give a brief review of the essential fundamentals.

The standard form of an LP problem is written as follows:

$$\begin{aligned} \mathrm{minimize} \ \varvec{c}^{\mathrm {T}} \varvec{x}, \text { subject to } \varvec{A}\varvec{x}=\varvec{b},\ \varvec{x}\ge \varvec{0}, \end{aligned}$$
(1)

where \(\varvec{x}\in {\mathbb {R}}^n\) denotes a decision variable vector, while \(\varvec{A}\in {\mathbb {R}}^{m\times n}\), \(\varvec{b}\in {\mathbb {R}}^m\) and \(\varvec{c}\in {\mathbb {R}}^n\) denote the constant coefficient matrix and vectors.

In the simplex method (Bradley et al. 1977), we consider basic solutions. Let \(\varvec{x}_B^*\) and \(\varvec{x}_N^*\) be the basic and non-basic variable sub-vectors corresponding to a basic solution \(\varvec{x}^*\), respectively. Matrices \(\varvec{A}_B\in {\mathbb {R}}^{m\times m}\) and \(\varvec{A}_N\in {\mathbb {R}}^{m\times (n-m)}\) refer to the basic and non-basic sub-matrices of \(\varvec{A}\) corresponding to the index sets of \(\varvec{x}_B^*\) and \(\varvec{x}_N^*\), respectively. Coefficient vectors \(\varvec{c}_B\in {\mathbb {R}}^m\) and \(\varvec{c}_N\in {\mathbb {R}}^{n-m}\) refer to the basic and non-basic sub-vectors of \(\varvec{c}\) corresponding to the index sets of \(\varvec{x}_B^*\) and \(\varvec{x}_N^*\), respectively. As \(\varvec{A}_B\) should be non-singular, the basic solution \(\varvec{x}^*\) is determined as \(\varvec{x}^*_B = \varvec{A}_B^{-1}\varvec{b}\) and \(\varvec{x}^*_N = \varvec{0}\). When \(\varvec{x}^*_B = \varvec{A}_B^{-1}\varvec{b}\ge \mathbf{0}\), the basic solution \(\varvec{x}^*\) is feasible. Moreover, if \(\varvec{x}^*_B = \varvec{A}_B^{-1}\varvec{b}>\mathbf{0}\), the basic feasible (BF) solution \(\varvec{x}^*\) is said to be non-degenerate. If a BF solution is not non-degenerate, it is said to be degenerate.
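As a quick numerical illustration of the definitions above, the following sketch (assuming NumPy; the data are purely illustrative) computes \(\varvec{x}^*_B = \varvec{A}_B^{-1}\varvec{b}\) for a chosen index set of basic variables and checks feasibility and non-degeneracy:

```python
import numpy as np

def basic_solution(A, b, basis):
    """Basic solution for a given index set of basic variables:
    x_B = A_B^{-1} b, x_N = 0. Also reports feasibility (x_B >= 0)
    and non-degeneracy (x_B > 0)."""
    n = A.shape[1]
    x_B = np.linalg.solve(A[:, basis], b)   # A_B must be non-singular
    x = np.zeros(n)
    x[basis] = x_B
    return x, bool(np.all(x_B >= 0)), bool(np.all(x_B > 0))

# Illustrative standard-form data (m = 3 equality constraints, n = 5 variables)
A = np.array([[3.0, 4.0, 1.0, 0.0, 0.0],
              [3.0, 1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([42.0, 24.0, 9.0])

x, feasible, nondegenerate = basic_solution(A, b, basis=[0, 1, 4])
# x = (6, 6, 0, 0, 3); all basic components are positive, so this BF solution is non-degenerate
```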

The optimality condition of a basic solution is given as follows:

Proposition 1

A basic solution \(\varvec{x}^*\) is optimal if and only if the following conditions are valid:

$$\begin{aligned} \varvec{c}_N - \varvec{A}_N^{\mathrm {T}} \varvec{A}_B^{-\mathrm {T}}\varvec{c}_B \ge \varvec{0} \text { and } \varvec{A}_B^{-1}\varvec{b} \ge \varvec{0}. \end{aligned}$$
(2)

As a result, the optimal solution is obtained by \(\varvec{x}^*_B = \varvec{A}_B^{-1}\varvec{b}\) and \(\varvec{x}^*_N = \varvec{0}\) with the optimal value \(\varvec{c}_B^\mathrm{T} \varvec{A}_B^{-1}\varvec{b}\).

Proposition 1 guarantees the invariance of the index set of basic variables corresponding to \(\varvec{x}_B^*\) regardless of how the coefficients \(\varvec{A}\), \(\varvec{b}\) and \(\varvec{c}\) vary within the ranges derived from (2). However, solution \(\varvec{x}^*\) itself can change under variations of \(\varvec{A}\) and \(\varvec{b}\), as long as \(\varvec{x}^*_B = \varvec{A}_B^{-1}\varvec{b}\) and \(\varvec{x}^*_N = \varvec{0}\) are satisfied. Moreover, let \(\varvec{A}\) and \(\varvec{b}\) be fixed. If \(\varvec{c}\) varies within the range where (2) is satisfied, the optimal solution is invariant but the optimal value changes, because it is given by \(\varvec{c}_B^\mathrm{T} \varvec{A}_B^{-1}\varvec{b}\). For a degenerate basic feasible (DBF) solution, the optimal solution can remain invariant even when \(\varvec{c}\) is changed beyond the range of (2) under fixed \(\varvec{A}\) and \(\varvec{b}\). This is because there are multiple index sets of basic variables for which the associated basic solution becomes the same optimal solution \(\varvec{x}^*\). Namely, corresponding to another index set, we have a different range of (2), and if the varied \(\varvec{c}\) satisfies (2) for that index set, the optimal solution does not change.

For a DBF solution, for each index set of basic variables with which the associated basic solution is optimal, a different range of \(\varvec{c}\) satisfying (2) with fixed \(\varvec{A}\) and \(\varvec{b}\) is obtained. To avoid the complexity caused by such multiple ranges, we concentrate on the analysis of a non-degenerate basic feasible (NBF) solution \(\varvec{x}^*\) as it has a unique range expressed by (2). However, we can extend the proposed analysis to the case of a DBF solution by using some proper enumeration technique of all index sets of basic variables with which the associated basic solution is optimal. For example, we may utilize the support set and the optimal partition invariancy (Hladík 2010).

In this paper, we consider LP problems with an uncertain objective function coefficient vector \(\varvec{\gamma }\). Namely, we consider

$$\begin{aligned} \mathrm{minimize} \ \varvec{\gamma }^{\mathrm {T}} \varvec{x}, \text { subject to } \varvec{A}\varvec{x}=\varvec{b}, \ \varvec{x}\ge {\varvec{0}}, \end{aligned}$$
(3)

where \(\varvec{\gamma }\) denotes the uncertain coefficients while \(\varvec{A}\) and \(\varvec{b}\) denote the constant coefficients.

We note that many approaches (Stancu-Minasian 1984; Inuiguchi and Ramík 2000) to LP problems with uncertain \(\varvec{A}\) and \(\varvec{b}\) have been proposed. When the possible ranges of \(\varvec{A}\) and \(\varvec{b}\) are given by intervals or fuzzy subsets, the constraints of LP problems with uncertain \(\varvec{A}\) and \(\varvec{b}\) are reduced to linear inequalities (see Inuiguchi and Kume 1993; Inuiguchi and Ramík 2000). Therefore, when \(\varvec{A}\) and \(\varvec{b}\) are also uncertain in LP Problem (3), we first apply the results of those approaches so that the constraints with uncertain \(\varvec{A}\) and \(\varvec{b}\) are reduced to linear inequalities, and the given problem is reduced to an LP problem where only the objective function coefficients are uncertain.

When the variation range of \(\varvec{\gamma }\) is specified as a set \(\varPhi \subseteq {\mathbb {R}}^n\), we can apply the concepts of possible and necessary optimality (Inuiguchi and Sakawa 1994) to a feasible solution of Problem (3):

Definition 1

(Possible and Necessary Optimalities) Let \(\varPhi \subseteq {\mathbb {R}}^n\) denote a bounded set composed of all possible objective function coefficient vectors \(\varvec{c}\). Let \(\varvec{x}^*\) be a feasible solution.

  • \(\varvec{x}^*\) is possibly optimal for \(\varPhi \) if and only if it is optimal for at least one \(\varvec{c}\in \varPhi \), and

  • \(\varvec{x}^*\) is necessarily optimal for \(\varPhi \) if and only if it is optimal for every \(\varvec{c}\in \varPhi \).

We note that infeasible solutions are neither possibly optimal nor necessarily optimal.

Definition 2

(Optimality Assurance Cone) Let \(\varvec{x}^*\) be a feasible solution. The optimality assurance cone of \(\varvec{x}^*\), denoted as \({\mathscr {N}}(\varvec{x}^*)\), is defined by

$$\begin{aligned} {\mathscr {N}}(\varvec{x}^*)=\left\{ \varvec{c}\in {\mathbb {R}}^n: \varvec{c}^{\mathrm {T}}\varvec{x}^* =\min \{ \varvec{c}^\mathrm{T}\varvec{x}: \varvec{A}\varvec{x}=\varvec{b}, \ \varvec{x}\ge \mathbf{0} \}\right\} . \end{aligned}$$
(4)

\({\mathscr {N}}(\varvec{x}^*)\) is the set of objective function coefficient vectors to which feasible solution \(\varvec{x}^*\) is optimal under constraints \(\varvec{A}\varvec{x}=\varvec{b}\), \(\varvec{x}\ge \mathbf{0}\).

It is apparent that \({\mathscr {N}}(\varvec{x}^*)\) is the set of objective function coefficient vectors maintaining the optimality of \(\varvec{x}^*\), and that it is a convex cone. Namely, if \({\mathscr {N}}(\varvec{x}^*)\) is non-empty, the feasible solution \(\varvec{x}^*\) is optimal as far as the objective function coefficient vector lies in \({\mathscr {N}}(\varvec{x}^*)\). This is why we call \({\mathscr {N}}(\varvec{x}^*)\) the optimality assurance cone of \(\varvec{x}^*\).

Using the optimality assurance cone \({\mathscr {N}}(\varvec{x}^*)\), we have the following equivalences (Inuiguchi and Sakawa 1994):

$$\begin{aligned} \varvec{x}^* \hbox { is possibly optimal}&\Leftrightarrow \varPhi \cap {\mathscr {N}}(\varvec{x}^*) \ne \emptyset , \end{aligned}$$
(5)
$$\begin{aligned} \varvec{x}^* \hbox { is necessarily optimal}&\Leftrightarrow \varPhi \subseteq {\mathscr {N}}(\varvec{x}^*). \end{aligned}$$
(6)

When \(\varvec{x}^*\) is an NBF solution, using Proposition 1, the optimality assurance cone \({\mathscr {N}}(\varvec{x}^*)\) of \(\varvec{x}^*\) is obtained explicitly as shown in the following proposition.

Proposition 2

Let \(\varvec{x}^*\) be an NBF solution to Problem (3). Then the optimality assurance cone \({\mathscr {N}}(\varvec{x}^*)\) is obtained as

$$\begin{aligned} {\mathscr {M}}(\varvec{x}^*):=\left\{ \varvec{c} \in {\mathbb {R}}^n: \varvec{c}_N - \varvec{A}_N^{\mathrm {T}} \varvec{A}_B^{-\mathrm {T}}\varvec{c}_B \ge \varvec{0} \right\} . \end{aligned}$$
(7)

For convenience, we give a simple expression of \({\mathscr {M}}(\varvec{x}^*)\) using a matrix \(\varvec{M}(\varvec{x}^*)\):

$$\begin{aligned} {\mathscr {M}}(\varvec{x}^*) = \left\{ \varvec{c} \in {\mathbb {R}}^n:\varvec{M}(\varvec{x}^*)\varvec{c} \ge \mathbf{0}\right\} , \end{aligned}$$
(8)

where \(\varvec{M}(\varvec{x}^*)\in {\mathbb {R}}^{(n-m)\times n}\) and when \(\varvec{c}\) is written as \(\varvec{c}^\mathrm {T}= \left( \varvec{c}_B^\mathrm {T}, \ \varvec{c}_N^\mathrm {T}\right) \), by some permutation, we have \(\varvec{M}(\varvec{x}^*) = \left( - \varvec{A}_N^{\mathrm {T}} \varvec{A}_B^{-\mathrm {T}}, \ \varvec{I} \right) \).

Proof

It is straightforwardly obtained from Proposition 1 and the fact that a unique index set of basic variables is associated with the NBF solution \(\varvec{x}^*\). \(\square \)

If \(\varvec{x}^*\) is an NBF solution, we obtain \({\mathscr {N}}(\varvec{x}^*) = {\mathscr {M}}(\varvec{x}^*)\) as shown in Proposition 2. However, if \(\varvec{x}^*\) is a degenerate basic feasible solution, we do not obtain such a concise expression of \({\mathscr {N}}(\varvec{x}^*)\).

Since \({\mathscr {N}}(\varvec{x}^*) = {\mathscr {M}}(\varvec{x}^*)\) for an NBF solution \(\varvec{x}^*\), the possible and necessary optimality of \(\varvec{x}^*\) can be written as

$$\begin{aligned} \varvec{x}^* \hbox { is possibly optimal}&\Leftrightarrow \varPhi \cap {\mathscr {M}}(\varvec{x}^*) \ne \emptyset , \end{aligned}$$
(9)
$$\begin{aligned} \varvec{x}^* \hbox { is necessarily optimal}&\Leftrightarrow \varPhi \subseteq {\mathscr {M}}(\varvec{x}^*). \end{aligned}$$
(10)

By the assumption that \(\varPhi \) is non-empty, \(\varPhi \subseteq {\mathscr {M}}(\varvec{x}^*)\) implies \(\varPhi \cap {\mathscr {M}}(\varvec{x}^*) \ne \emptyset \), which shows that a necessarily optimal solution is also a possibly optimal solution. The optimality assurance cone and the possible and necessary optimalities are illustrated by the following example, originally given by Inuiguchi and Sakawa (1994).

Example 1

Consider the following linear programming problem:

$$\begin{aligned} \mathrm{minimize} \quad&c_1x_1+c_2x_2, \\ \text {subject to} \quad&3x_1+4x_2+x_3 = 42, \\&3x_1+x_2+x_4 = 24, \\&x_2+x_5 = 9, \\&x_i \ge 0,\ i=1,2,\ldots ,5, \end{aligned}$$

where the objective function coefficients \(c_1\) and \(c_2\) are not known exactly but the possible range of \((c_1,c_2)^{\mathrm {T}}\) is known as \(\varPhi \subseteq {\mathbb {R}}^2\). Namely, we know \((c_1,c_2)^{\mathrm {T}}\in \varPhi \).

We consider an NBF solution \(\varvec{x}^*= (6,6,0,0,3)^{\mathrm {T}}\) with the index set of basic variables, \(\{1,2,5 \}\). Then, the optimality assurance cone is obtained as

$$\begin{aligned} {\mathscr {M}}(\varvec{x}^*)=\left\{ (c_1,c_2,c_3,c_4,c_5)^{\mathrm {T}} \ \bigg | \ \begin{array}{r} \frac{1}{9}c_1-\frac{1}{3}c_2+c_3+\frac{1}{3}c_5 \ge 0 \\ -\frac{4}{9}c_1 + \frac{1}{3}c_2 +c_4-\frac{1}{3}c_5 \ge 0 \end{array} \right\} . \end{aligned}$$

As \(c_3\), \(c_4\) and \(c_5\) are fixed at zero, \(x_3\), \(x_4\) and \(x_5\) can be seen as slack variables. Then, the problem can be depicted on the \(x_1\)-\(x_2\) coordinate system. Accordingly, the constraints are rewritten as

$$\begin{aligned} \text {subject to} \quad&3x_1+4x_2 \le 42, \\&3x_1+x_2 \le 24, \\&x_2 \le 9, \\&x_1 \ge 0,\ x_2 \ge 0. \end{aligned}$$

Moreover, from \(c_3=c_4=c_5=0\), the projection of \({\mathscr {M}}(\varvec{x}^*)\) on the \(c_1\)-\(c_2\) coordinate system is obtained as

$$\begin{aligned} \check{\mathscr {M}}(\varvec{x}^*)=\left\{ (c_1,c_2)^\mathrm{T} \ \bigg | \ \begin{array}{r} \frac{1}{9}c_1-\frac{1}{3}c_2 \ge 0 \\ -\frac{4}{9}c_1 + \frac{1}{3}c_2 \ge 0 \end{array} \right\} . \end{aligned}$$

To illustrate the possible and necessary optimalities, we give two cases (a) and (b) shown in Fig. 1, where \({\check{\varPhi }}\) denotes the projection of \(\varPhi \) on the \(c_1\)-\(c_2\) coordinate system. As the coefficients of \(x_3\), \(x_4\) and \(x_5\) are zero in the given problem, we have \(\varPhi =\{\left( c_1,c_2,0,0,0\right) ^{\mathrm {T}} : \left( c_1,c_2\right) ^{\mathrm {T}}\in {\check{\varPhi }}\}\).
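The cone \({\mathscr {M}}(\varvec{x}^*)\) of Example 1 can be reproduced numerically. The sketch below (assuming NumPy) assembles \(\varvec{M}(\varvec{x}^*) = \left( - \varvec{A}_N^{\mathrm {T}} \varvec{A}_B^{-\mathrm {T}}, \ \varvec{I} \right) \) of Proposition 2 in the original variable order for the basis \(\{1,2,5\}\); the first two columns of the result give the projection \(\check{\mathscr {M}}(\varvec{x}^*)\):

```python
import numpy as np

def optimality_matrix(A, basis):
    """M(x*) with columns permuted back to the original variable order, so that
    M @ c >= 0 is the reduced-cost condition c_N - A_N^T A_B^{-T} c_B >= 0."""
    m, n = A.shape
    nonbasis = [j for j in range(n) if j not in basis]
    M = np.zeros((n - m, n))
    # -A_N^T A_B^{-T} = -(A_B^{-1} A_N)^T
    M[:, basis] = -(np.linalg.solve(A[:, basis], A[:, nonbasis])).T
    M[:, nonbasis] = np.eye(n - m)
    return M

# Data of Example 1 (x3, x4, x5 are slack variables); basis {1, 2, 5} -> indices [0, 1, 4]
A = np.array([[3.0, 4.0, 1.0, 0.0, 0.0],
              [3.0, 1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 1.0]])
M = optimality_matrix(A, basis=[0, 1, 4])
# Rows: (1/9, -1/3, 1, 0, 1/3) and (-4/9, 1/3, 0, 1, -1/3), matching the two inequalities above
```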

Fig. 1 Necessarily and possibly optimal solutions

When \({{\check{\varPhi }}}\) is given as in Fig. 1a, we know that \(\varvec{x}^*\) is necessarily optimal, since \({{\check{\varPhi }}} \subseteq {\mathscr {\check{M}}}(\varvec{x}^*)\) implies \(\varPhi \subseteq {\mathscr {M}} (\varvec{x}^*)\). On the other hand, when \({{\check{\varPhi }}}\) is given as in Fig. 1b, we understand that \(\varvec{x}^*\) is only possibly optimal, since \({\check{\varPhi }} \cap {\mathscr {\check{M}}}(\varvec{x}^*) \ne \emptyset \) implies \(\varPhi \cap {\mathscr {M}}(\varvec{x}^*) \ne \emptyset \) but there exists \((c_1,c_2)^\mathrm {T}\in {{\check{\varPhi }}}\) such that \((c_1,c_2)^\mathrm {T}\notin {\mathscr {\check{M}}}(\varvec{x}^*)\).

It is acknowledged that possible optimality is the minimal requirement for an optimal solution while necessary optimality is the ideal requirement (Inuiguchi and Sakawa 1994). Therefore, we consider a necessarily optimal solution as far as it exists. We utilize the following proposition to confirm whether an NBF solution is necessarily optimal:

Proposition 3

Let us consider Problem (3) with \(\varvec{\gamma }\in \varPhi \), where \(\varPhi \) is the set composed of all possible objective function coefficient vectors. Then an NBF solution \(\varvec{x}^*\) is necessarily optimal if and only if \(\varPhi \subseteq {\mathscr {M}} (\varvec{x}^*)\), where \(\mathscr {M} (\varvec{x}^*)\) is the optimality assurance cone of \(\varvec{x}^*\).

Moreover, if \(\varPhi \) is a convex, closed and bounded polytope, Proposition 3 is equivalent to the following one:

Proposition 4

Let us consider Problem (3) with \(\varvec{\gamma }\in \varPhi \), where \(\varPhi \) is a convex, closed and bounded polytope composed of all possible objective function coefficient vectors. Let \(V(\varPhi )\) denote the set of all vertices of \(\varPhi \). Then an NBF solution \(\varvec{x}^*\) is necessarily optimal if and only if \(V(\varPhi ) \subseteq \mathscr {M} (\varvec{x}^*)\).

Proof

Since \(\varPhi \) is a convex, closed and bounded polytope, it is represented by

$$\begin{aligned} \varPhi =\mathrm{Conv}(V(\varPhi ))=\left\{ \varvec{c}\ :\ \varvec{c}=\sum _{k=1}^l \lambda _k \varvec{c}^k,\ \ \sum _{k=1}^l \lambda _k=1,\ \lambda _k \ge 0,\ k=1,2,\ldots ,l \right\} , \end{aligned}$$

where \(\mathrm{Conv}(V(\varPhi ))\) is the convex hull of \(V(\varPhi )=\{\varvec{c}^1,\varvec{c}^2,\ldots ,\varvec{c}^l\}\). Then, \(\varPhi \subseteq {{\mathscr {M}}}(\varvec{x}^*)\) implies \(V(\varPhi )\subseteq \varPhi \subseteq {{\mathscr {M}}}(\varvec{x}^*)\). On the other hand, \(V(\varPhi ) \subseteq {{\mathscr {M}}}(\varvec{x}^*)\) implies \(\varPhi = \mathrm{Conv}(V(\varPhi )) \subseteq {{\mathscr {M}}}(\varvec{x}^*)\) because \({{\mathscr {M}}}(\varvec{x}^*)\) is convex. \(\square \)

Together with (8) in Proposition 2, Proposition 4 implies that, when \(\varPhi \) is a convex, closed and bounded polytope, the necessary optimality of an NBF solution \(\varvec{x}^*\) can be confirmed by checking \(\varvec{M}(\varvec{x}^*)\varvec{v}^i \ge \varvec{0}\) for all vertices \(\varvec{v}^i \in V(\varPhi )\).

However, the computational burden of this procedure remains enormous. For example, if the range of each objective function coefficient is given by a closed interval, so that \(\varPhi \) forms a hyper-rectangle in \({\mathbb {R}}^n\), one needs to check the inequalities at all \(2^n\) vertices of \(\varPhi \). As n grows, the computational effort increases exponentially, and the check cannot be accomplished in a limited time.
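For a hyper-rectangular \(\varPhi \), the vertex check of Proposition 4 can be coded directly; the sketch below (assuming NumPy; the cone matrix is the one obtained in Example 1 and the interval ranges are illustrative) enumerates all \(2^n\) vertex combinations:

```python
import itertools
import numpy as np

def is_necessarily_optimal(M, c_L, c_R, tol=1e-12):
    """Check V(Phi) subset of M(x*) by testing M v >= 0 at every vertex
    of the hyper-rectangle [c_L, c_R] (2^n combinations of bounds)."""
    for v in itertools.product(*zip(c_L, c_R)):
        if np.any(M @ np.array(v) < -tol):
            return False
    return True

# Optimality assurance matrix of x* = (6, 6, 0, 0, 3)^T in Example 1
M = np.array([[ 1/9, -1/3, 1.0, 0.0,  1/3],
              [-4/9,  1/3, 0.0, 1.0, -1/3]])
# Illustrative interval ranges; c3, c4, c5 are fixed at zero
c_L = np.array([-3.5, -2.0, 0.0, 0.0, 0.0])
c_R = np.array([-2.5, -1.5, 0.0, 0.0, 0.0])

ok = is_necessarily_optimal(M, c_L, c_R)   # True: every vertex satisfies M v >= 0
```

The exponential cost of the loop is exactly the burden discussed above; the tolerance approach of the next subsection avoids it.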

2.2 The application of tolerance analysis

To overcome the difficulty mentioned at the end of the previous subsection, we introduce tolerance analysis. Namely, suppose that \(\varPhi \) is a hyper-rectangle, i.e., the range of each objective function coefficient is given by a closed interval \([c_i^\mathrm{L},c_i^\mathrm{R}]\), \(i \in \{1,2,\ldots ,n\}\), and that the optimal solution \(\varvec{x}^*\) with respect to the centre objective function coefficient vector \(\varvec{c}^\mathrm{C}=(\varvec{c}^\mathrm{L}+\varvec{c}^\mathrm{R})/2\) is an NBF solution, where \(\varvec{c}^\mathrm{L}=[c_1^\mathrm{L},\ldots ,c_n^\mathrm{L}]^\mathrm{T}\) and \(\varvec{c}^\mathrm{R}=[c_1^\mathrm{R},\ldots ,c_n^\mathrm{R}]^\mathrm{T}\) are the lower and upper bound vectors of the hyper-rectangle, respectively. Then the condition \(V(\varPhi ) \subseteq {\mathscr {M}}(\varvec{x}^*)\) in Proposition 4 can be examined straightforwardly by the tolerance approach (Wendell 1985).

Before describing the tolerance approach, we present a lemma shown by Filippi (2005):

Lemma 1

Let \({{\mathscr {P}}}^0=\{ \varvec{x}\in {\mathbb {R}}^n: \varvec{A}^0\varvec{x} \le \varvec{b}^0 \}\) be a polyhedron with \(l \times n\) matrix \(\varvec{A}^0=(a_{kj}^0) \ne \varvec{O}\) and \(\varvec{b}^0 \ge \mathbf{0}\), where \(\varvec{O}\) is a zero matrix. For \(k=1,2,\ldots ,l\), we define

$$\begin{aligned} \sigma _k= \left\{ \begin{array}{cl} \displaystyle \frac{b_k^0}{\sum _{j=1}^n |a_{kj}^0|}, &{} \text{ if } \sum _{j=1}^n |a_{kj}^0|>0,\\ 0, &{} \text{ otherwise, } \end{array} \right. \ \ \ \text{ and } \ \ \ \sigma ({{\mathscr {P}}}^0)=\min _{{\mathop {\sum _{j=1}^n |a_{kj}^0| >0}\limits ^{\scriptstyle k=1,2,\ldots ,l}}} \sigma _k. \end{aligned}$$
(11)

Then, a set \({\mathscr {B}}\subseteq {\mathbb {R}}^n\) defined by

$$\begin{aligned} {\mathscr {B}}=\left\{ \varvec{s}=(s_1,s_2,\ldots ,s_n)^\mathrm{T}: s_i \in [-\sigma ({{\mathscr {P}}}^0),\sigma ({{\mathscr {P}}}^0)]\right\} \end{aligned}$$
(12)

is the largest hyper-box contained in \({{\mathscr {P}}}^0\).
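Lemma 1 translates into a few lines of code; the following sketch (assuming NumPy, with a small illustrative polyhedron) computes \(\sigma ({\mathscr {P}}^0)\) as defined in (11):

```python
import numpy as np

def sigma(A0, b0):
    """sigma(P0) of Lemma 1: half-width of the largest hyper-box
    {s : |s_i| <= sigma} contained in P0 = {x : A0 x <= b0}, with b0 >= 0."""
    row_sums = np.abs(A0).sum(axis=1)   # sum_j |a_kj^0| for each row k
    mask = row_sums > 0                 # rows with all-zero coefficients are skipped, as in (11)
    return float(np.min(b0[mask] / row_sums[mask]))

# Illustrative polyhedron {x : x1 + x2 <= 2, -x1 <= 1}
A0 = np.array([[1.0, 1.0], [-1.0, 0.0]])
b0 = np.array([2.0, 1.0])
half_width = sigma(A0, b0)   # min(2/2, 1/1) = 1, so [-1, 1]^2 is the largest box
```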

Applying Wendell’s idea (1985), we consider an objective function coefficient vector \(\varvec{\gamma }=\varvec{c}^\mathrm{C}+\mathrm {diag}(\varvec{c}^\mathrm{S})\varvec{\tau }\) for Problem (3), where \(\varvec{c}^\mathrm{S}=(\varvec{c}^\mathrm{R}-\varvec{c}^\mathrm{L})/2\) and \(\varvec{\tau }= [\tau _1, \tau _2,\ldots ,\tau _n]^\mathrm {T}\) is a parameter vector. Let \(\hat{\varvec{x}}\) denote the optimal NBF solution with respect to the objective function coefficient vector \(\varvec{c}^\mathrm {C}\); then we obtain \(\varvec{c}_N^\mathrm{C} -\varvec{A}_N^\mathrm{T}\varvec{A}_B^{-\mathrm{T}}\varvec{c}_B^\mathrm {C} = \varvec{M}(\hat{\varvec{x}})\varvec{c}^\mathrm{C} \ge \mathbf{0}\) by Proposition 1.

Therefore, using \(\varvec{M}(\hat{\varvec{x}})\) in Proposition 2, we obtain the set of \(\varvec{\tau }\) that preserves the optimality of \(\hat{\varvec{x}}\) when the objective function coefficient vector is perturbed as \(\varvec{c}^\mathrm{C}+\text{ diag }(\varvec{c}^\mathrm{S})\varvec{\tau }\):

$$\begin{aligned} {\mathscr {M}}_\tau (\hat{\varvec{x}})= & {} \{ \varvec{\tau } \in {\mathbb {R}}^n: \varvec{M}(\hat{\varvec{x}})(\varvec{c}^\mathrm{C}+\text{ diag }(\varvec{c}^\mathrm{S})\varvec{\tau })\ge \mathbf{0} \} \nonumber \\= & {} \{ \varvec{\tau } \in {\mathbb {R}}^n: -\varvec{M}(\hat{\varvec{x}})\mathrm{diag}(\varvec{c}^\mathrm{S})\varvec{\tau } \le \varvec{M}(\hat{\varvec{x}})\varvec{c}^\mathrm{C} \}, \end{aligned}$$
(13)

and by applying Lemma 1, we obtain the following theorem:

Theorem 1

For \(k=1,2,\ldots ,n-m\), we define

$$\begin{aligned} \tau _k= & {} \left\{ \begin{array}{cl} \displaystyle \frac{\sum _{j=1}^n M_{kj}(\hat{\varvec{x}})c_j^\mathrm{C}}{\sum _{j=1}^n |M_{kj}(\hat{\varvec{x}})||c_j^\mathrm{S}|}, &{} \text{ if } \sum _{j=1}^n |M_{kj}(\hat{\varvec{x}})||c_j^\mathrm{S}| >0,\\ 0, &{} \text{ otherwise, } \end{array} \right. \end{aligned}$$
(14)
$$\begin{aligned} \tau ^\mathrm{min}= & {} \min _{{\mathop { \sum _{j=1}^n |M_{kj}(\hat{\varvec{x}})||c_j^\mathrm{S}|>0}\limits ^{\scriptstyle k=1,2,\ldots ,n-m}}} \tau _k. \end{aligned}$$
(15)

Then, the set \({\mathscr {B}}_{\tau ^\mathrm{min}} := \left\{ (s_1, s_2, \ldots , s_n)^{\mathrm {T}}: s_i \in [-\tau ^\mathrm{min},\tau ^\mathrm{min}],\ i=1,2,\ldots ,n\right\} \subseteq {\mathbb {R}}^n\) becomes the largest hyper-rectangle contained in \({{\mathscr {M}}}_\tau (\hat{\varvec{x}})\).

Proof

It can be obtained straightforwardly by applying Lemma 1 to \({{\mathscr {M}}}_\tau (\hat{\varvec{x}})\). \(\square \)

Theorem 1 was essentially shown by Wendell (1985). From Theorem 1, we obtain the following theorem, which shows the relation to necessary optimality.

Theorem 2

Let \(\tau ^\mathrm{min}\) denote the one defined in Theorem 1. If \(\tau ^\mathrm{min} \ge 1\), then \(\hat{\varvec{x}}\) is a necessarily optimal solution to the LP problem with uncertain objective function coefficient vector \(\varvec{\gamma }\), whose range is given by the hyper-rectangle defined with \([c_i^\mathrm{L},c_i^\mathrm{R}]\), \(i=1,2,\ldots ,n\).

Proof

When \(\tau ^\mathrm{min} \ge 1\), we have \([-1,1]^n \subseteq {\mathscr {B}}_{\tau ^\mathrm{min}} \subseteq {\mathscr {M}}_\tau (\hat{\varvec{x}})\), which is equivalent to

$$\begin{aligned} \varPhi = [\varvec{c}^\mathrm {L}, \varvec{c}^\mathrm {R}] = \{\varvec{c}^\mathrm{C}+{\mathrm {diag}}(\varvec{c}^\mathrm{S})\varvec{\tau }:\varvec{\tau } \in [-1,1]^n \} \subseteq {{\mathscr {M}}}(\hat{\varvec{x}}) \end{aligned}$$

by the substitution of \(\varvec{c}=\varvec{c}^\mathrm{C}+\mathrm{diag}(\varvec{c}^\mathrm{S})\varvec{\tau }\) resulting from Eq. (13). Hence, we obtain \(\varPhi \subseteq {{\mathscr {M}}}(\hat{\varvec{x}})\), which implies \(\hat{\varvec{x}}\) is necessarily optimal by Proposition 3. \(\square \)

Consequently, Theorems 1 and 2 show that the tolerance approach can easily confirm the necessary optimality of a given NBF solution.
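Formulas (14)–(15) and the test of Theorem 2 amount to a short computation. The sketch below (assuming NumPy) applies them to the optimality assurance matrix of Example 1 with illustrative interval ranges:

```python
import numpy as np

def tolerance_tau_min(M, c_L, c_R):
    """tau^min of Theorem 1 for the hyper-rectangle [c_L, c_R]
    centred at c^C = (c_L + c_R) / 2 with half-widths c^S = (c_R - c_L) / 2."""
    c_C = (c_L + c_R) / 2
    c_S = (c_R - c_L) / 2
    num = M @ c_C                    # M_k . c^C, nonnegative by optimality at c^C
    den = np.abs(M) @ np.abs(c_S)    # sum_j |M_kj| |c_j^S|
    mask = den > 0                   # rows with zero denominator are skipped, as in (15)
    return float(np.min(num[mask] / den[mask]))

# Optimality assurance matrix of x* = (6, 6, 0, 0, 3)^T in Example 1
M = np.array([[ 1/9, -1/3, 1.0, 0.0,  1/3],
              [-4/9,  1/3, 0.0, 1.0, -1/3]])
# Illustrative interval ranges; c3, c4, c5 are fixed at zero
c_L = np.array([-3.5, -2.0, 0.0, 0.0, 0.0])
c_R = np.array([-2.5, -1.5, 0.0, 0.0, 0.0])

tau_min = tolerance_tau_min(M, c_L, c_R)   # 1.8 >= 1, so x* is necessarily optimal by Theorem 2
```

Unlike the vertex enumeration of Proposition 4, this test needs only \(n-m\) ratio evaluations.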

3 Fuzzy linear programming

Intervals \([c_i^\mathrm{L},c_i^\mathrm{R}]\), \(i=1,2,\ldots ,n\), may not be sufficient to represent the decision-maker’s knowledge. S/he may have ranges of most probable values, moderately possible values and somehow conceivable values for each objective function coefficient. In such cases, fuzzy subsets are more suitable than intervals for representing her/his knowledge about the objective function coefficients. We therefore extend the analysis to the case where \(\varPhi \) is generalized to a fuzzy subset. Namely, we consider Problem (3) with \(\varvec{\gamma }\) whose possible range is specified by a fuzzy subset \(\varPhi \). In what follows, \(\mu _{\varPhi }:{\mathbb {R}}^n \rightarrow [0,1]\) denotes the membership function of \(\varPhi \).

To generalize the possible and necessary optimalities under fuzzy subset \(\varPhi \), we introduce the possibility and necessity measures (Inuiguchi and Sakawa 1996) defined as follows.

Definition 3

(Possibility and Necessity Measures) Let A and B be fuzzy subsets of a universal set \(\varOmega \) with membership functions \(\mu _A:\varOmega \rightarrow [0,1]\) and \(\mu _B:\varOmega \rightarrow [0,1]\), respectively. Then the possibility and necessity measures of B under A are defined, respectively, as:

$$\begin{aligned} \varPi _A(B)&= \sup _{r \in \varOmega }\min \{\mu _A(r), \ \mu _B(r)\}, \end{aligned}$$
(16)
$$\begin{aligned} N_A(B)&= \inf _{r \in \varOmega } \max \{1-\mu _A(r), \ \mu _B(r)\}. \end{aligned}$$
(17)

\(\varPi _A(B)\) shows the possibility degree of the realization of event B when the set of possible realizations is given by A. On the other hand, \(N_A(B)\) shows the necessity degree of the realization of event B when the set of possible realizations is given by A.

Possibility and necessity measures of a fuzzy subset B under fuzzy subset A are depicted in Fig. 2. They are closely related to the non-emptiness of the intersection \(A \cap B\) and the inclusion \(A \subseteq B\) as shown in the following proposition (Inuiguchi and Ichihashi 1990).

Proposition 5

We have the following equivalences for any \(h \in {\mathbb {R}}\):

$$\begin{aligned} \varPi _A(B) >h\Leftrightarrow & {} (A)_h \cap (B)_h \ne \emptyset , \end{aligned}$$
(18)
$$\begin{aligned} N_A(B) \ge h\Leftrightarrow & {} (A)_{1-h} \subseteq [B]_h, \end{aligned}$$
(19)

where \((A)_h\) and \([A]_h\) are a strong h-level set and a (weak) h-level set of fuzzy subset A defined by

$$\begin{aligned} (A)_h= & {} \{ r \in \varOmega : \mu _A(r) >h \}, \end{aligned}$$
(20)
$$\begin{aligned} {[A]}_h= & {} \{ r \in \varOmega : \mu _A(r) \ge h \}. \end{aligned}$$
(21)
Fig. 2

Possibility and necessity measures: \(\varPi _A(B)\) and \(N_A(B)\)

The possible and necessary optimalities under fuzzy subset \(\varPhi \) are now defined as follows.

Definition 4

(Possible and Necessary Optimality with Fuzzy Coefficients) The possibly and necessarily optimal solution sets under a fuzzy subset \(\varPhi \) of objective function coefficient vectors are fuzzy subsets whose membership functions are defined by

$$\begin{aligned} \mu _{\varPi S}(\varvec{x})= & {} \left\{ \begin{array}{cl} \varPi _{\varPhi }({\mathscr {N}}(\varvec{x})), &{} \mathrm{if}\ \varvec{x} \ \mathrm{is} \,\mathrm{feasible}, \\ 0, &{} \mathrm{otherwise,} \end{array} \right. \end{aligned}$$
(22)
$$\begin{aligned} \mu _{NS}(\varvec{x})= & {} \left\{ \begin{array}{cl} N_{\varPhi }({\mathscr {N}}(\varvec{x})), &{} \mathrm{if}\ \varvec{x} \ \mathrm{is} \,\mathrm{feasible}, \\ 0, &{} \mathrm{otherwise,} \end{array} \right. \end{aligned}$$
(23)

where \(\varPi _{\varPhi }\) and \(N_{\varPhi }\) are possibility and necessity measures under \(\varPhi \), and \({\mathscr {N}}(\varvec{x})\) is a set of objective function coefficient vectors to which \(\varvec{x}\) is optimal defined by (4).

The correspondences of the possible and necessary optimalities under crisp subset \(\varPhi \) (Definition 1) and those under fuzzy subset \(\varPhi \) (Definition 4) can be understood from the properties shown in Proposition 5. Namely, the possible optimality is associated with the non-empty intersection while the necessary optimality is associated with the satisfaction of the inclusion relation.

Since we are interested in a solution robust against the fluctuations of objective function coefficients, we concentrate on the necessary optimality. Applying Proposition 5 to the necessarily optimal solution set, we obtain that for any \(h\in {\mathbb {R}}\),

$$\begin{aligned} \varvec{x} \in [NS]_h\Leftrightarrow & {} \mu _{NS}(\varvec{x})\ge h \ \Leftrightarrow \ N_{\varPhi }({\mathscr {N}}(\varvec{x}))\ge h \nonumber \\\Leftrightarrow & {} (\varPhi )_{1-h} \subseteq [{\mathscr {N}}(\varvec{x})]_h. \end{aligned}$$
(24)

Together with the fact that \({\mathscr {N}}(\varvec{x})\) is crisp and closed by its definition, (24) implies

$$\begin{aligned} \varvec{x} \in [NS]_h \ \Leftrightarrow \ \mathrm{cl}(\varPhi )_{1-h} \subseteq {\mathscr {N}}(\varvec{x}),\ \mathrm{for}\ h \in (0,1], \end{aligned}$$
(25)

where \(\mathrm{cl}(\varPhi )_{1-h}\) stands for the closure of strong \((1-h)\)-level set \((\varPhi )_{1-h}\).

The result in (25) indicates that the necessary optimality degree of a feasible solution \(\varvec{x}\) is obtained as the maximum value of h such that \(\mathrm {cl}(\varPhi )_{1-h}\subseteq [\mathscr {N}(\varvec{x})]_h\), or equivalently, as the supremum of h such that \(\mathrm {cl}(\varPhi )_{1-h}\subseteq \mathscr {N}(\varvec{x})\), i.e.,

$$\begin{aligned} \mu _{NS}(\varvec{x})= & {} \max \left\{ h: {\mathrm {cl}}(\varPhi )_{1-h} \subseteq [{\mathscr {N}}(\varvec{x})]_h \right\} \nonumber \\= & {} \sup \left\{ h: {\mathrm {cl}}(\varPhi )_{1-h} \subseteq {\mathscr {N}}(\varvec{x}) \right\} . \end{aligned}$$
(26)

Moreover, if we know \(\mu _{NS}(\varvec{x})>0\), i.e., if \({\mathrm {cl}}(\varPhi )_{1-h} \subseteq {\mathscr {N}}(\varvec{x})\) holds for some \(h>0\), we have

$$\begin{aligned} \mu _{NS}(\varvec{x})= & {} \max \left\{ h: {\mathrm {cl}}(\varPhi )_{1-h} \subseteq {\mathscr {N}}(\varvec{x}) \right\} . \end{aligned}$$
(27)

For an NBF solution \(\varvec{x}^*\), we have \(\mathscr {N}(\varvec{x}^*) = \mathscr {M}(\varvec{x}^*)\). Then, from (26), we have

$$\begin{aligned} \mu _{NS}(\varvec{x}^*)= & {} \max \left\{ h: {\mathrm {cl}}(\varPhi )_{1-h} \subseteq [{\mathscr {M}}(\varvec{x}^*)]_h \right\} \nonumber \\= & {} \sup \left\{ h: {\mathrm {cl}}(\varPhi )_{1-h} \subseteq {\mathscr {M}}(\varvec{x}^*) \right\} . \end{aligned}$$
(28)

If we know \(\mu _{NS}(\varvec{x}^*)>0\), we have

$$\begin{aligned} \mu _{NS}(\varvec{x}^*)= & {} \max \left\{ h: {\mathrm {cl}}(\varPhi )_{1-h} \subseteq {\mathscr {M}}(\varvec{x}^*) \right\} . \end{aligned}$$
(29)

In the remainder of this section, we show some useful results for the calculation of \(\mu _{NS}(\varvec{x}^*)\) for several particular classes of fuzzy subset \(\varPhi \). We first introduce L-R fuzzy numbers and oblique fuzzy vectors.

Definition 5

(L-R Fuzzy Number) An L-R fuzzy number \(A\subseteq {\mathbb {R}}\) is a fuzzy subset of the real line \({\mathbb {R}}\) characterized by the following membership function:

$$\begin{aligned} \mu _A(r) = \left\{ \begin{aligned}&L\left( \displaystyle \frac{a^\mathrm{L}-r}{\alpha }\right) ,&\text { if }r<a^\mathrm{L}, \ \alpha> 0, \\&1,&\text { if }a^\mathrm{L} \le r \le a^\mathrm{R}, \\&R\left( \displaystyle \frac{r-a^\mathrm{R}}{\beta }\right) ,&\text { if }r> a^\mathrm{R}, \ \beta > 0, \\&0,&\text { otherwise,} \end{aligned} \right. \end{aligned}$$
(30)

where \(a^\mathrm{L}\) and \(a^\mathrm{R}\), satisfying \(a^\mathrm{L}\le a^\mathrm{R}\), denote the lower and upper bounds of \([A]_1\), respectively. \(\alpha \ge 0\) and \(\beta \ge 0\) denote the left and right spreads of A, respectively. \(L: [0,\infty )\rightarrow [0,1]\) and \(R: [0,\infty )\rightarrow [0,1]\) are reference functions satisfying

  1. \(L(0) = R(0) = 1\), and \(L(r)<1\), \(R(r) <1\) for all \(r > 0\).

  2. \(\lim _{r\rightarrow \infty }L(r) = \lim _{r\rightarrow \infty }R(r) = 0\).

  3. L and R are upper semi-continuous and non-increasing.

For simplification, one can denote an L-R fuzzy number A as \((a^\mathrm{L},a^\mathrm{R},\alpha , \beta )_{LR}\) by its parameters and reference functions.

A typical L-R fuzzy number is illustrated in Fig. 3. When reference functions L and R of an L-R fuzzy number are the same, i.e., \(L = R\), we call it an L-L fuzzy number and denote it as \((a^\mathrm{L},a^\mathrm{R},\alpha , \beta )_{LL}\). Moreover, if both \(L = R\) and \(\alpha = \beta \), we call it a symmetric fuzzy number and denote it as \((a^\mathrm{L},a^\mathrm{R},\alpha , \alpha )_{LL}\).

Fig. 3

An example of an L-R fuzzy number
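Definition 5 translates directly into code. The following sketch evaluates the membership function (30); the triangular reference function and the sample points are illustrative choices, with the symmetric fuzzy number \((-23,-23,8,8)_{LL}\) reappearing later in Example 2:

```python
def lr_membership(r, a_l, a_r, alpha, beta, L, R):
    # Membership of the L-R fuzzy number (a_l, a_r, alpha, beta)_{LR}, cf. (30)
    if a_l <= r <= a_r:
        return 1.0
    if r < a_l and alpha > 0:
        return L((a_l - r) / alpha)
    if r > a_r and beta > 0:
        return R((r - a_r) / beta)
    return 0.0  # outside the support, or a zero spread on that side

# A triangular reference function (any non-increasing, upper semi-continuous
# function with L(0) = 1 and L(r) < 1 for r > 0 would do)
tri = lambda t: max(0.0, 1.0 - t)

print(lr_membership(-23.0, -23.0, -23.0, 8.0, 8.0, tri, tri))  # 1.0 (the core)
print(lr_membership(-27.0, -23.0, -23.0, 8.0, 8.0, tri, tri))  # 0.5
print(lr_membership(-35.0, -23.0, -23.0, 8.0, 8.0, tri, tri))  # 0.0 (outside)
```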

A fuzzy number A is defined by its h-level set satisfying (i) \([A]_1\ne \emptyset \), (ii) \([A]_h\), \(h \in (0,1]\) are closed, (iii) \([A]_h\), \(h \in (0,1]\) are convex and (iv) \([A]_h\), \(h \in (0,1]\) are bounded. These conditions are equivalent to the following conditions about the membership function \(\mu _A\), respectively: (i) \(\exists r\in {\mathbb {R}},\ \mu _A(r)=1\), (ii) \(\mu _A\) is upper semi-continuous, (iii) \(\mu _A\) is quasi-concave and (iv) \(\lim _{r \rightarrow \infty } \mu _A(r)=\lim _{r \rightarrow -\infty } \mu _A(r)=0\). We note that any fuzzy number A can be regarded as an L-R fuzzy number \((a^\mathrm{L},a^\mathrm{R},\alpha , \beta )_{LR}\) with

$$\begin{aligned} \begin{array}{ll} a^\mathrm{L}=\inf [A]_1, &{} \alpha =\left\{ \begin{array}{ll} \inf [A]_1-\inf (A)_0, &{} \text{ if } \inf (A)_0 > -\infty ,\\ \inf [A]_1-\inf [A]_\epsilon , &{} \text{ otherwise }, \end{array} \right. \\ a^\mathrm{R}=\sup [A]_1, &{} \beta =\left\{ \begin{array}{ll} \sup (A)_0-\sup [A]_1, &{} \text{ if } \sup (A)_0 <\infty ,\\ \sup (A)_\epsilon -\sup [A]_1, &{} \text{ otherwise }, \end{array} \right. \end{array} \end{aligned}$$
(31)

and

$$\begin{aligned} L(r)=\mu _A(a^\mathrm{L}-\alpha r),\ r\ge 0; \quad R(r)=\mu _A(a^\mathrm{R}+\beta r),\ r\ge 0, \end{aligned}$$
(32)

where \(\epsilon >0\) denotes a very small positive constant.

Let \(L^*:[0,1] \rightarrow [0,+\infty ]\) and \(R^*:[0,1] \rightarrow [0,+\infty ]\) denote pseudo-inverse functions of reference functions L and R, respectively. By (30) in Definition 5, we can define them as

$$\begin{aligned} \begin{aligned} L^*(h)&=\sup \{ r \in [0,+\infty ): L(r) \ge h\}, \\ R^*(h)&=\sup \{ r \in [0,+\infty ): R(r) \ge h\}. \end{aligned} \end{aligned}$$
(33)

For h-level sets of L-R fuzzy numbers, we have the following proposition:

Proposition 6

A fuzzy subset A can be denoted as \(A = (a^\mathrm{L}, a^\mathrm{R}, \alpha , \beta )_{LR}\) if and only if the following condition is valid:

$$\begin{aligned} \forall h \in (0,1], \ [A]_h=[a^\mathrm{L}-L^*(h)\alpha ,\ a^\mathrm{R} + R^*(h)\beta ]. \end{aligned}$$
(34)

Proof

Let A be an L-R fuzzy number specified by \((a^\mathrm{L},a^\mathrm{R},\alpha ,\beta )_{LR}\). By the definitions of \(L^*\) and \(R^*\) in (33), we obtain that \(\forall h \in (0,1]\), \(\mu _A(r)\ge h\) if and only if r satisfies at least one of the following conditions:

$$\begin{aligned} \text {(a)}&\ \ L\left( \frac{a^\mathrm{L}-r}{\alpha }\right) \ge h,\ r <a^\mathrm{L} \ \text {and} \ \alpha>0, \\ \text {(b)}&\ \ R\left( \frac{r-a^\mathrm{R}}{\beta }\right) \ge h, \ r>a^\mathrm{R} \ \text {and} \ \beta >0, \\ \text {(c)}&\ \ a^\mathrm{L} \le r \le a^\mathrm{R}, \end{aligned}$$

where condition (a) is equivalent to \(a^\mathrm{L}- L^*(h){\alpha } \le r <a^\mathrm{L}\) and \(\alpha >0\), and condition (b) is equivalent to \(a^\mathrm{R} < r \le a^\mathrm{R}+R^*(h)\beta \) and \(\beta >0\).

Because we have \(\alpha \ge 0\) and \(\beta \ge 0\) from the definition of L-R fuzzy numbers, conditions (a), (b) and (c) give the result, \(r \in [a^\mathrm{L}-L^*(h)\alpha ,a^\mathrm{R}+R^*(h)\beta ]\). Therefore, we obtain \(\forall h \in (0,1], \ [A]_h=[a^\mathrm{L}-L^*(h)\alpha ,a^\mathrm{R}+R^*(h)\beta ]\).

On the other hand, if we know \(\forall h \in (0,1], \ [A]_h=[a^\mathrm{L}-L^*(h)\alpha ,a^\mathrm{R}+R^*(h)\beta ]\), then \(\mu _A(r)=\sup \{ h : r \in [A]_h\}\) is the membership function of the L-R fuzzy number \(A=(a^\mathrm{L},a^\mathrm{R},\alpha ,\beta )_{LR}\) by the resolution principle (Zadeh 1971). \(\square \)

We define pseudo-inverse functions of reference functions L and R in a different way with \(L_*:(0,1]\rightarrow [0,+\infty )\) and \(R_*:(0,1]\rightarrow [0,+\infty )\) such that

$$\begin{aligned} \begin{aligned} L_*(h)&= \inf \{ r\in [0,+\infty ) : L(r) \le h\}, \\ R_*(h)&= \inf \{ r\in [0,+\infty ) : R(r) \le h\}. \end{aligned} \end{aligned}$$
(35)

The closure of the strong h-level set of an L-R fuzzy number \(A=(a^\mathrm{L},a^\mathrm{R},\alpha ,\beta )_{LR}\) is obtained by

$$\begin{aligned} \mathrm{cl}(A)_h =[a^\mathrm{L}-L_*(h)\alpha ,a^\mathrm{R}+R_*(h)\beta ],\ \forall h \in [0,1). \end{aligned}$$
(36)

The difference between \(L^*(h)\) and \(L_*(h)\) is depicted in Fig. 4. As shown in Fig. 4, the difference appears at the level of the flat part of reference function L.

Fig. 4

Pseudo-inverse functions \(L^*\) and \(L_*\) of reference function L
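The two pseudo-inverses (33) and (35) can be approximated numerically. The reference function below, with a plateau at level 0.5, is a hypothetical example chosen so that \(L^*\) and \(L_*\) actually differ, as in Fig. 4:

```python
import numpy as np

def L(r):
    # A reference function with a plateau at level 0.5 (hypothetical example)
    if r <= 0.5:
        return 1.0 - r
    if r <= 1.5:
        return 0.5
    return max(0.0, 2.0 - r)

def L_sup(h, L, r_max=10.0, n=100001):
    # L^*(h) = sup{r >= 0 : L(r) >= h}, approximated on a grid, cf. (33)
    r = np.linspace(0.0, r_max, n)
    vals = np.array([L(t) for t in r])
    idx = np.nonzero(vals >= h)[0]
    return r[idx[-1]] if idx.size else 0.0

def L_inf(h, L, r_max=10.0, n=100001):
    # L_*(h) = inf{r >= 0 : L(r) <= h}, approximated on a grid, cf. (35)
    r = np.linspace(0.0, r_max, n)
    vals = np.array([L(t) for t in r])
    idx = np.nonzero(vals <= h)[0]
    return r[idx[0]] if idx.size else r_max

print(L_sup(0.5, L), L_inf(0.5, L))  # ~1.5 vs ~0.5: they differ on the plateau
print(L_sup(0.8, L), L_inf(0.8, L))  # both ~0.2: no plateau at this level
```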

In the literature, non-interactive fuzzy numbers are frequently utilized for representing the possible region of an uncertain variable vector in n-dimensional space \({\mathbb {R}}^n\); together they compose a fuzzy vector. A fuzzy vector \(A\subseteq {\mathbb {R}}^n\) defined by non-interactive fuzzy numbers \(A_i\), \(i = 1,2,\ldots , n\), has the membership function defined by

$$\begin{aligned} \mu _A(\varvec{r})=\min _{i=1,\ldots ,n} \mu _{A_i}(r_i), \end{aligned}$$
(37)

where \(\varvec{r} = (r_1, r_2, \ldots , r_n)^\mathrm {T}\in {\mathbb {R}}^n\) and \(\mu _{A_i}\) denotes the membership function of the fuzzy number \(A_i\). For the sake of simplicity, in what follows, a vector of non-interactive fuzzy numbers \(A_i\), \(i = 1,2,\ldots , n\) is called a non-interactive fuzzy vector A.

In general, the possible range of an uncertain variable may depend on the values of other uncertain variables. In such a case, the uncertain variables are said to be interactive. In a non-interactive fuzzy vector, the range of each uncertain variable is represented by a fixed fuzzy number, independent of the values of the other uncertain variables. Consequently, a non-interactive fuzzy vector cannot appropriately represent the possible range of an uncertain variable vector whose components are interactive.

It is known that a general fuzzy vector whose uncertain variables are arbitrarily interactive cannot be treated easily. Therefore, in this paper we consider a special fuzzy vector called an oblique fuzzy vector (Inuiguchi et al. 2003). The oblique fuzzy vector was proposed to represent some interactions among the uncertain variables without great loss of computational tractability. It is defined as follows.

Definition 6

(Oblique Fuzzy Vector) An oblique fuzzy vector A is defined as a fuzzy subset of \({\mathbb {R}}^n\) having the following membership function:

$$\begin{aligned} \mu _{A}(\varvec{r}) = \min _{i=1,\ldots ,n}\mu _{B_i}(\varvec{d}_i^{\mathrm {T}} \varvec{r}), \end{aligned}$$
(38)

where \(\varvec{d}_i^\mathrm {T}\) is the i-th row of a given non-singular matrix \(\varvec{D}\), called an obliquity matrix, and \(B_i\) is a fuzzy number where \(B_i\) and \(B_j\) are non-interactive for all \(i\ne j\).

Let \(B\subseteq {\mathbb {R}}^n\) be a non-interactive fuzzy vector composed of \(B_1\), \(B_2\), \(\ldots \), \(B_n\). Then (38) in Definition 6 is rewritten as

$$\begin{aligned} \mu _A(\varvec{r}) = \mu _B(\varvec{D} \varvec{r}). \end{aligned}$$
(39)

Two non-interactive fuzzy vectors and two oblique fuzzy vectors in \({\mathbb {R}}^2\) are illustrated in Fig. 5. In Fig. 5, each fuzzy vector is depicted by a 3-dimensional figure and its equivalence curves of the membership function. Figure 5a shows a non-interactive fuzzy vector defined by symmetric fuzzy numbers, while Fig. 5b shows a non-interactive fuzzy vector defined by non-symmetric L-L fuzzy numbers. On the other hand, Fig. 5c shows an oblique fuzzy vector defined by symmetric fuzzy numbers \(B_1\) and \(B_2\), while Fig. 5d shows an oblique fuzzy vector defined by non-symmetric L-L fuzzy numbers \(B_1\) and \(B_2\).

Fig. 5

Non-interactive fuzzy vectors and oblique fuzzy vectors in \({\mathbb {R}}^2\)
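Evaluating (38)-(39) takes one matrix-vector product followed by a componentwise minimum. The obliquity matrix and component fuzzy numbers below are hypothetical illustrations:

```python
import numpy as np

def oblique_membership(r, D, component_mus):
    # mu_A(r) = min_i mu_{B_i}(d_i^T r), cf. (38)-(39): evaluate the
    # non-interactive vector B at the transformed point D @ r.
    s = D @ r
    return min(mu(si) for mu, si in zip(component_mus, s))

# Symmetric triangular fuzzy number with centre c and spread a
tri = lambda c, a: (lambda t: max(0.0, 1.0 - abs(t - c) / a))

# Hypothetical obliquity matrix (non-singular) and component fuzzy numbers
D = np.array([[1.0, 1.0],
              [1.0, -1.0]])
mus = [tri(2.0, 1.0), tri(0.0, 1.0)]  # B_1 around 2, B_2 around 0

print(oblique_membership(np.array([1.0, 1.0]), D, mus))  # D r = (2, 0): degree 1.0
print(oblique_membership(np.array([1.5, 1.0]), D, mus))  # D r = (2.5, 0.5): degree 0.5
```

With a diagonal D this reduces to the non-interactive case (37); a non-diagonal D tilts the level sets, as in Fig. 5c, d.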

3.1 Case where \(\varPhi \) is defined by non-interactive symmetric fuzzy numbers

First, we describe the result when \(\varPhi \) is a non-interactive fuzzy vector defined by symmetric fuzzy numbers. Namely, we consider an LP problem with fuzzy objective function coefficient vector \(\varPhi \), where \(\varPhi \) is composed of symmetric fuzzy numbers \(C_i=(c_i^\mathrm{C},c_i^\mathrm{C}, \alpha _i, \alpha _i)_{LL}\), \(i=1,2,\ldots ,n\). Therefore, the membership function of \(\varPhi \) is expressed as

$$\begin{aligned} \begin{aligned} \mu _{\varPhi }(\varvec{c})&= \min _{i=1,2,\ldots ,n}\mu _{C_i}(c_i) \\&= \left\{ \begin{array}{cl} \displaystyle \min _{i: \alpha _i>0}L \left( \frac{|c_i-c_i^\mathrm{C}|}{\alpha _i} \right) , &{} \text{ if } \forall j \in \{i:\alpha _i=0 \}, c_j=c_j^\mathrm{C},\\ 0, &{} \text{ otherwise, } \end{array} \right. \end{aligned} \end{aligned}$$
(40)

where \(\mu _{\varPhi }\) and \(\mu _{C_i}\) are the membership functions of \(\varPhi \) and \(C_i\), respectively.

In this case, the necessary optimality degree \(\mu _{NS}(\varvec{x}^*)\) of an NBF solution \(\varvec{x}^*\) can be obtained easily as shown in the following theorem.

Theorem 3

Let \(\varvec{x}^*\) be an NBF solution which is optimal to the LP problem with an objective function coefficient vector \(\varvec{c}^\mathrm{C}=(c_1^\mathrm{C},c_2^\mathrm{C},\ldots ,c_n^\mathrm{C})^\mathrm{T}\). When \(\varPhi \) is a fuzzy subset composed of non-interactive symmetric fuzzy numbers \(C_i = (c_i^\mathrm{C},c_i^\mathrm{C},\alpha _i,\alpha _i)_{LL}\), \(i=1,2,\ldots ,n\), and defined by (40), we obtain the necessary optimality degree \(\mu _{NS}(\varvec{x}^*)\) straightforwardly by

$$\begin{aligned} \mu _{NS}(\varvec{x}^*)=1-L\left( \min _{{\mathop { \sum _{j=1}^n |M_{kj}(\varvec{x}^*)||\alpha _j|>0}\limits ^{k=1,2,\ldots ,n-m}}} \displaystyle \frac{\sum _{j=1}^n M_{kj}(\varvec{x}^*)c_j^\mathrm{C}}{ \sum _{j=1}^n |M_{kj}(\varvec{x}^*)||\alpha _j|} \right) , \end{aligned}$$
(41)

where \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _n)^\mathrm{T}\). Furthermore, if \(\mu _{NS}(\varvec{x}^*) = 1\), \(\varvec{x}^*\) is a completely necessarily optimal solution.

Proof

Since \(C_i=(c_i^\mathrm{C},c_i^\mathrm{C},\alpha _i,\alpha _i)_{LL}\), we have \([C_i]_h=[c_i^\mathrm{C}-L^*(h)\alpha _i,c_i^\mathrm{C}+L^*(h)\alpha _i]\) for any \(h \in (0,1]\). Hence, we obtain

$$\begin{aligned} \begin{aligned} {[\varPhi ]}_h&= [C_1]_h\times [C_2]_h \times \cdots \times [C_n]_h \\&= \{\varvec{c}^\mathrm{C}+{\mathrm {diag}}(\varvec{\alpha })\varvec{\tau }: \tau _i \in [-L^*(h),L^*(h)],\ i=1,2,\ldots ,n \}. \end{aligned} \end{aligned}$$
(42)

By Theorem 1, we obtain \(\tau ^{\min }\) of (15) with substitutions of \(\varvec{\alpha }\) and \(\varvec{x}^*\) for \(\varvec{c}^\mathrm{S}\) and \(\hat{\varvec{x}}\). Then, we obtain the following result:

$$\begin{aligned} \hat{\mathscr {B}}_{\tau ^\mathrm{min}}= \{\varvec{c}^\mathrm{C}+\mathrm{diag}(\varvec{\alpha })\varvec{\tau }: \tau _i \in [-\tau ^\mathrm{min},\tau ^\mathrm{min}],\ i=1,2,\ldots ,n \} \subseteq {\mathscr {M}}(\varvec{x}^*), \end{aligned}$$
(43)

where \(\tau = \tau ^\mathrm {min}\) maximizes \(\tau \) subject to \(\hat{\mathscr {B}}_{\tau } \subseteq {\mathscr {M}}(\varvec{x}^*)\). Let \({\bar{h}}=L(\tau ^\mathrm{min})\), then we obtain that \(L^*({\bar{h}})\ge \tau ^\mathrm{min}\) and \(L^*({\bar{h}}+\epsilon ) \le \tau ^\mathrm{min}\) for any \(\epsilon \in (0,1-{\bar{h}}]\). Since we have \((\varPhi )_h=\bigcup _{\epsilon \in (0,1-h]}[\varPhi ]_{h+\epsilon }\) for any \(h \in [0,1)\), from (42), (43) and \(L^*({\bar{h}}+\epsilon ) \le \tau ^\mathrm{min}\) for any \(\epsilon \in (0,1-{\bar{h}}]\), we obtain

$$\begin{aligned} (\varPhi )_{{\bar{h}}} =\bigcup _{\epsilon \in (0,1-{\bar{h}}]}[\varPhi ]_{{\bar{h}}+\epsilon } \subseteq \hat{\mathscr {B}}_{\tau ^\mathrm{min}}. \end{aligned}$$
(44)

Owing to the closedness of \(\hat{\mathscr {B}}_{\tau ^\mathrm{min}}\), we obtain \({\mathrm {cl}}(\varPhi )_{{\bar{h}}}\subseteq \hat{\mathscr {B}}_{\tau ^\mathrm{min}}\). On the other hand, from \(L^*({\bar{h}})\ge \tau ^\mathrm{min}\) and \({\bar{h}}=L(\tau ^\mathrm{min})\), we have

$$\begin{aligned} \mathrm{cl}(\varPhi )_{h}\supset [\varPhi ]_{{\bar{h}}} \supseteq \hat{\mathscr {B}}_{\tau ^\mathrm{min}},\ \forall h < {\bar{h}}. \end{aligned}$$
(45)

Therefore, we obtain \({\mathrm {cl}}(\varPhi )_{{\bar{h}}}\subseteq \hat{\mathscr {B}}_{\tau ^\mathrm{min}}\) and \(\mathrm{cl}(\varPhi )_{h}\supset \hat{\mathscr {B}}_{\tau ^\mathrm{min}}\), \(\forall h < {\bar{h}}\). Together with the fact that \(\tau =\tau ^\mathrm{min}\) maximizes \(\tau \) subject to \(\hat{\mathscr {B}}_{\tau } \subseteq {\mathscr {M}}(\varvec{x}^*)\), we obtain \({\mathrm {cl}}(\varPhi )_{{\bar{h}}}\subseteq {\mathscr {M}}(\varvec{x}^*)\) and \(\mathrm{cl}(\varPhi )_{h} \not \subseteq {\mathscr {M}}(\varvec{x}^*)\), \(\forall h < {\bar{h}}\). By (28), we obtain \(\mu _{NS}(\varvec{x}^*)=1-{\bar{h}}=1-L(\tau ^\mathrm{min})\). From the definition of \(\tau ^\mathrm{min}\) with substitutions of \(\varvec{\alpha }\) and \(\varvec{x}^*\) for \(\varvec{c}^\mathrm{S}\) and \(\hat{\varvec{x}}\), we obtain (41). \(\square \)

As shown in Theorem 3, the necessary optimality degree of an NBF solution of Problem (3) can be straightforwardly obtained by (41) when the possible ranges of objective function coefficients are described by a non-interactive fuzzy vector defined by symmetric fuzzy numbers. The simple calculation of \(\mu _{NS}(\varvec{x}^*)\) shown in Theorem 3 is exemplified as follows.

Example 2

Let us consider the LP problem in Example 1, where the possible ranges of \(c_1\) and \(c_2\) are symmetric fuzzy numbers \(C_1\) and \(C_2\) defined by \(C_1 = (-23,-23,8,8)_{LL}\) and \(C_2 = (-14,-14,5,5)_{LL}\), respectively. The reference function \(L:[0,+\infty )\rightarrow [0,1]\) is defined by

$$\begin{aligned} L(r) = \left\{ \begin{array}{ll} 1-r, &{} \text { if } r\in [0,1], \\ 0, &{} \text { otherwise.} \end{array} \right. \end{aligned}$$
(46)

The objective function coefficients of the decision variables \(x_3\), \(x_4\) and \(x_5\) are zeros, so we set \(C_i=(0,0,0,0)_{LL}\), \(i=3,4,5\). We thus obtain \(\varvec{c}^\mathrm{C}=(-23,-14,0,0,0)^\mathrm{T}\), and the optimal solution of the LP problem with this objective function coefficient vector is \(\varvec{x}^*=(6,6,0,0,3)^\mathrm{T}\), which is an NBF solution.

We calculate the necessary optimality degree \(\mu _{NS}(\varvec{x}^*)\). The optimality assurance cone with respect to \(\varvec{x}^*\) is obtained as \({{\mathscr {M}}}(\varvec{x}^*)\) shown in Example 1. Namely, we obtain

$$\begin{aligned} M(\varvec{x}^*)=\left( \begin{array}{ccccc} \frac{1}{9} &{} -\frac{1}{3} &{} 1 &{} 0 &{} -\frac{1}{3} \\ -\frac{4}{9} &{} \frac{1}{3} &{} 0 &{} 1 &{}- \frac{1}{3} \end{array} \right) \end{aligned}$$

and \(\varvec{\alpha }=(8,5,0,0,0)^\mathrm{T}\). Then we obtain

$$\begin{aligned} \begin{array}{l} \tau _1=\displaystyle \frac{-\frac{1}{9}\cdot 23+\frac{1}{3}\cdot 14}{\frac{1}{9}\cdot 8+\frac{1}{3}\cdot 5}=\frac{19}{23}, \\ \tau _2=\displaystyle \frac{\frac{4}{9}\cdot 23-\frac{1}{3}\cdot 14}{\frac{4}{9}\cdot 8+\frac{1}{3}\cdot 5}=\frac{50}{47}. \end{array} \end{aligned}$$

Thus, we obtain \(\tau ^\mathrm{min}=\frac{19}{23}\). Hence, by (41), we obtain \(\mu _{NS}(\varvec{x}^*)=1-L(\frac{19}{23})=1-\frac{4}{23}=\frac{19}{23}\).

The situation of solution \(\varvec{x}^*\) is depicted in Fig. 6. In Fig. 6, the feasible region is projected on the \(x_1\)–\(x_2\) coordinate system while the assurance cone \({{\mathscr {M}}}(\varvec{x}^*)\) and \(\varPhi \) are projected on the \(c_1\)–\(c_2\) coordinate system. We confirm that the lower-left vertex of the projected \(\mathrm {cl}(\varPhi )_0\) is outside the projected optimality assurance cone.

Fig. 6

Illustration of Example 2
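The computation of Example 2 can be replicated in a few lines directly from (14), (15) and (41); the data are those of the example:

```python
import numpy as np

# Data from Example 2: rows of M(x*) and the centres/spreads of the
# symmetric fuzzy coefficients, with reference function L(r) = max(0, 1 - r).
M = np.array([[1/9, -1/3, 1.0, 0.0, -1/3],
              [-4/9, 1/3, 0.0, 1.0, -1/3]])
c_centre = np.array([-23.0, -14.0, 0.0, 0.0, 0.0])
alpha = np.array([8.0, 5.0, 0.0, 0.0, 0.0])
L = lambda r: max(0.0, 1.0 - r)

num = M @ c_centre            # numerators of tau_k in (14)
den = np.abs(M) @ alpha       # denominators sum_j |M_kj| alpha_j
taus = num[den > 0] / den[den > 0]
tau_min = taus.min()

print(tau_min)                # 19/23 ~ 0.8261 (the other row gives 50/47)
print(1.0 - L(tau_min))       # mu_NS(x*) = 1 - L(tau^min) = 19/23, per (41)
```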

3.2 Case where \(\varPhi \) is defined by non-interactive fuzzy numbers

Now we describe the results when \(\varPhi \) is defined by non-interactive fuzzy numbers. Since any fuzzy number can be seen as an L-R fuzzy number, we investigate the calculation method of \(\mu _{NS}(\varvec{x}^*)\) when \(\varPhi \) is defined by non-interactive L-R fuzzy numbers. Namely, \(\varPhi \) is composed of L-R fuzzy numbers \(C_i=(c_i^\mathrm{L},c_i^\mathrm{R},\alpha _i,\beta _i)_{L_i R_i}\), \(i=1,2,\ldots ,n\) and its membership function is defined by

$$\begin{aligned} \mu _{\varPhi }(\varvec{c})=\min _{i=1,2,\ldots ,n}\mu _{C_i}(c_i), \end{aligned}$$
(47)

where \(\varvec{c}=(c_1,c_2,\ldots ,c_n)^\mathrm{T}\).

Let \({L_{i}}_*:(0,1] \rightarrow [0,+\infty )\) and \({R_{i}}_*: (0,1] \rightarrow [0,+\infty )\) be the pseudo-inverse functions of \(L_i\) and \(R_i\) defined by (35) with substitutions of \(L_i\) and \(R_i\) for L and R.

Then we obtain that \(\forall h \in [0,1), \ \mathrm{cl}(C_i)_{h}=[c_i^\mathrm{L}-{L_i}_*(h) \alpha _i, c_i^\mathrm{R}+{R_i}_*(h) \beta _i]\), where we note that \(\mathrm{cl}(C_i)_1=\emptyset \). Let \(q_i^\mathrm{C}:(0,1]\rightarrow {\mathbb {R}}\) and \(q_i^\mathrm{S}:(0,1]\rightarrow [0,+\infty )\) be functions defined by

$$\begin{aligned} q_i^\mathrm{C}(h)&=\frac{(c_i^\mathrm{R}+{R_i}_*(h) \beta _i)+(c_i^\mathrm{L}-{L_i}_*(h) \alpha _i)}{2},\ h \in (0,1], \end{aligned}$$
(48)
$$\begin{aligned} q_i^\mathrm{S}(h)&=\frac{(c_i^\mathrm{R}+{R_i}_*(h)\beta _i)-(c_i^\mathrm{L}-{L_i}_*(h) \alpha _i)}{2},\ h \in (0,1]. \end{aligned}$$
(49)

Namely, \(q_i^\mathrm{C}(h)\) and \(q_i^\mathrm{S}(h)\) are the centre and spread of \(\mathrm {cl}(C_i)_h\), respectively. We obtain the following theorem.

Theorem 4

Let \(\varvec{x}^*\) be an NBF solution to Problem (3) with the objective function coefficient vector being \((\varvec{c}^\mathrm{L}+\varvec{c}^\mathrm{R})/2\), where \(\varvec{c}^\mathrm{L}=(c_1^\mathrm{L}, c_2^\mathrm{L}, \ldots ,c_n^\mathrm{L})^{\mathrm {T}}\) and \(\varvec{c}^\mathrm{R}=(c_1^\mathrm{R}, c_2^\mathrm{R}, \ldots , c_n^\mathrm{R})^{\mathrm {T}}\). For a given \(h \in (0,1)\), calculate \(\tau ^\mathrm{min}\) by (15) with substitutions of \(q_i^\mathrm{C}(h)\), \(q_i^\mathrm{S}(h)\) and \(\varvec{x}^*\) for \(c_i^\mathrm{C}\), \(c_i^\mathrm{S}\) and \(\hat{\varvec{x}}\) of (14). Then we obtain

  (a) \(\tau ^\mathrm{min}\ge 1\) implies \(\mu _{NS}(\varvec{x}^*) \ge 1-h\), and

  (b) \(\tau ^\mathrm{min} < 1\) implies \(\mu _{NS}(\varvec{x}^*) < 1-h\).

Proof

Since \(\mathrm{cl}(\varPhi )_h=\mathrm{cl}(C_1)_h \times \mathrm{cl}(C_2)_h \times \cdots \times \mathrm{cl}(C_n)_h\) holds, \(\mathrm{cl}(\varPhi )_h\) is a hyper-rectangle defined by \(\mathrm{cl}(C_i)_h=[c_i^\mathrm{L}-{L_i}_*(h) \alpha _i,c_i^\mathrm{R}+{R_i}_*(h) \beta _i]=[c_i^\mathrm{C}-c_i^\mathrm{S},c_i^\mathrm{C}+c_i^\mathrm{S}]\), \(i=1,2,\ldots ,n\).

First we prove (a). When \(\tau ^\mathrm{min}\ge 1\), as shown in the proof of Theorem 2, we have \(\mathrm{cl}(\varPhi )_h \subseteq {{\mathscr {M}}}(\varvec{x}^*)\). Hence by (28), we obtain \( \mu _{NS}(\varvec{x}^*) \ge 1-h\).

We prove (b). The fact \(\tau ^\mathrm{min}< 1\) implies that the hyper-rectangle defined by \([c_i^\mathrm{C}-c_i^\mathrm{S}, c_i^\mathrm{C}+c_i^\mathrm{S}]\), \(i=1,2,\ldots ,n\) is not included in \({{\mathscr {M}}}(\varvec{x}^*)\). Namely, \(\mathrm{cl}(\varPhi )_h \not \subseteq {{\mathscr {M}}}(\varvec{x}^*)\). Therefore by (28), we obtain \( \mu _{NS}(\varvec{x}^*) < 1-h\). \(\square \)

By Theorem 4, we obtain the procedure for calculating \(\mu _{NS}(\varvec{x}^*)\), where \(\varvec{x}^*\) is an NBF solution. The procedure is based on the bisection method, and described in the following algorithm, where we set \(c_i^\mathrm{C}\) and \(c_i^\mathrm{S}\), \(i=1,2,\ldots ,n\) as those described in Theorem 4:

Algorithm A

In Algorithm A, whenever \(h^\mathrm{L}\) and \(h^\mathrm{H}\) have been assigned values, \(1-h^\mathrm{H} \le \mu _{NS}(\varvec{x}^*) < 1-h^\mathrm{L}\) is always satisfied. We use the property of \(\tau ^\mathrm{min}\) shown in Theorem 1, i.e., \(\hat{\mathscr {B}}_{\tau ^\mathrm{min}}\subseteq {\mathscr {M}}(\varvec{x}^*)\). To put this differently, we have

$$\begin{aligned} \{\varvec{q}^\mathrm{C}(h)+{\mathrm {diag}}(\varvec{q}^\mathrm{S}(h))\varvec{\tau }:\varvec{\tau } \in [-\tau ^\mathrm{min},\tau ^\mathrm{min}]^n \} \subseteq {{\mathscr {M}}}(\varvec{x}^*), \end{aligned}$$
(51)

whenever \(\tau ^\mathrm{min}\) is calculated in Algorithm A. Equation (50) calculates the maximum h satisfying

$$\begin{aligned} \mathrm{cl}(\varPhi )_h \subseteq \{\varvec{q}^\mathrm{C}(h)+{\mathrm {diag}}(\varvec{q}^\mathrm{S}(h))\varvec{\tau }:\varvec{\tau } \in [-\tau ^\mathrm{min},\tau ^\mathrm{min}]^n \}. \end{aligned}$$
(52)

We utilize (50) to accelerate the bisection procedure, which makes \(h^\mathrm {H}\) converge faster. When \(\sum _{j=1}^n |M_{kj}(\varvec{x}^*)|\,q_j^\mathrm{S}(h)=0\) for all \(k \in \{1,2,\ldots ,n-m\}\), \(\tau ^\mathrm{min}\) is not defined. To treat this case properly, the second bullet of Step 1 is added. The value of \(\tau ^\mathrm{min}\) at Steps 2 and 4 can be negative, which corresponds to the case where \(\sum _{j=1}^n M_{kj}(\varvec{x}^*)q_j^\mathrm{C}(h) <0\) for some k, i.e., the vector composed of the centres of the h-level sets \([C_j]_h\) is not in \({{\mathscr {M}}}(\varvec{x}^*)\).

To illustrate Algorithm A, we utilize Example 1 again in the following example.

Example 3

Let us consider the LP problem described in Example 1. In this example, we define \(C_1\) and \(C_2\) as non-interactive L-R fuzzy numbers by the following membership functions:

$$\begin{aligned} \mu _{C_1}(r) = \left\{ \begin{array}{cl} 1-\frac{(r+23)^2}{64}, &{} r\in (-31,-15), \\ 0, &{} \text {otherwise,} \end{array} \right. \qquad \mu _{C_2}(r) = \left\{ \begin{array}{cl} \frac{(r+19)^2}{25}, &{} r\in (-19,-14), \\ 1-\frac{(r+14)^2}{25}, &{} r\in [-14,-9), \\ 0, &{} \text {otherwise.} \end{array} \right. \end{aligned}$$

Those membership functions are depicted in Fig. 7.

Fig. 7

Membership function of \(\varPhi \)

Using reference functions

$$\begin{aligned} L(r)=\max (0,1-r^2), \quad \text{ and }\quad R(r)=(\max (0,1-r))^2, \end{aligned}$$
(53)

we have \(C_1=(-23,-23,8,8)_{LL}\) and \(C_2=(-14,-14,5,5)_{RL}\), where the left and right reference functions of \(C_2\) are R and L, respectively. As we know \(C_i=(0,0,0,0)_{LL}\), \(i=3,4,5\), we obtain \(\varvec{c}^\mathrm{C}=(-23,-14,0,0,0)^\mathrm{T}\). Then \(\varvec{x}^*=(6,6,0,0,3)^\mathrm{T}\) is the unique optimal solution to the LP problem with the objective function coefficients \(c_i=q^{\mathrm C}_i (1)=c_i^\mathrm{C}\), \(i = 1, 2, \ldots , 5\). Moreover, \(\varvec{x}^*=(6,6,0,0,3)^\mathrm{T}\) is an NBF solution.

As \(C_2\) is not a symmetric fuzzy number, the result shown in Theorem 3 is not applicable, and thus Algorithm A is applied to calculate \(\mu _{NS}(\varvec{x}^*)\). The calculation process is roughly shown as follows (values are rounded to three decimal places):

  • Initialization Set \(\delta = 1\times 10^{-3}\) and \(\varepsilon =1\times 10^{-4}\). We obtain \(\varvec{q}^\mathrm C (1) = \varvec{c}^\mathrm{C}= (-23,-14,0,0,0)^\mathrm{T}\) and thus, \(\varvec{x}^*=(6,6,0,0,3)^\mathrm{T}\), an NBF solution. \(M(\varvec{x}^*)\) is shown in Example 2.

  • Step 1 Set \(h=1\), which gives \(\varvec{q}^\mathrm C (h) = (-23,-14,0,0,0)^\mathrm{T}\) and \(\varvec{q}^\mathrm S (h) = (0,0,0,0,0)^\mathrm {T}\). As we have \(\sum _{j=1}^n |M_{kj}(\varvec{x}^*)|\,q_j^\mathrm{S}(h)=0\), we recalculate \(\tau ^\mathrm{min}\) with \(c_i^\mathrm{S}=1\), \(i=1,2,\ldots ,n\) in (14). We obtain \(\tau ^\mathrm{min} = 4.75\), and from (50), we obtain \(h^\mathrm{M}=0.647\). Then we set \(h^\mathrm{H} = h^\mathrm{M} = 0.647\).

  • Step 2 Set \(h = \varepsilon \), which gives \(\varvec{q}^\mathrm C (h) = (-23,-13.992,0,0,0)^\mathrm {T}\) and \(\varvec{q}^\mathrm S (h) = (8,4.992,0,0,0)^\mathrm {T}\). We obtain \(\tau ^\mathrm {min} =0.826 < 1\). Namely, \(\varvec{x}^*\) is not a completely necessarily optimal solution, i.e., \(\mu _{NS}(\varvec{x}^*)<1\). From (50), \(h^\mathrm{M}=0.318\). Then, we set \(h^\mathrm {L} = \varepsilon \) and \(h^\mathrm{H}=\min (0.647,0.318)=0.318\).

  • Step 3 \(h^\mathrm {H} - h^\mathrm {L}=0.318 \ge \delta \). Continue the procedure.

  • Step 4 Set \(h = (h^\mathrm {H} + h^\mathrm {L})/2=0.159\). \(\varvec{q}^\mathrm C (h) = (-23,-13.211,0,0,0)^\mathrm {T}\) and \(\varvec{q}^\mathrm S (h) = (7.337,3.796,0,0,0)^\mathrm {T}\). From (14), we obtain \(\tau ^\mathrm {min}=0.888<1\). We update \(h^\mathrm{L}=0.159\). From (50), \(h^\mathrm{M}=0.336\). \(h^\mathrm{H}=\min (0.318,0.336)=0.318\). Return to Step 3.

  • Step 3 \(h^\mathrm {H} - h^\mathrm {L}=0.159 \ge \delta \). Continue the procedure.

  • Step 4 Set \(h = (h^\mathrm {H} + h^\mathrm {L})/2=0.238\). \(\varvec{q}^\mathrm C (h) = (-23,-13.098,0,0,0)^\mathrm {T}\) and \(\varvec{q}^\mathrm S (h) = (6.982,3.461,0,0,0)^\mathrm {T}\). From (14), we obtain \(\tau ^\mathrm {min}=0.938<1\). We update \(h^\mathrm{L}=0.238\). From (50), \(h^\mathrm{M}=0.330\). \(h^\mathrm{H}=\min (0.318,0.330)=0.318\). Return to Step 3.

  • Step 3 \(h^\mathrm {H} - h^\mathrm {L}=0.079 \ge \delta \). Continue the procedure.

  • Continue Steps 3 and 4 for \(h=0.278\), 0.298, 0.308, 0.312, 0.315, 0.317, ..., 0.318. After the 11th iteration, we obtain the approximation \(\mu _{NS}(\varvec{x}^*)\approx 1-0.317592=0.682408\).

As a result, we obtain \(\mu _{NS}(\varvec{x}^*)\approx 0.682408\).
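Since Algorithm A is displayed only as a figure, the following is a sketch of a plain bisection based on Theorem 4 applied to the data of Example 3, omitting the acceleration step (50). The closed-form level sets of \(C_1\) and \(C_2\) follow from \(L_*(h)=\sqrt{1-h}\) and \(R_*(h)=1-\sqrt{h}\) for the reference functions (53), with the left reference of \(C_2\) taken as R to match its membership function:

```python
import numpy as np

# M(x*) from Example 2 (the same LP problem is used in Example 3)
M = np.array([[1/9, -1/3, 1.0, 0.0, -1/3],
              [-4/9, 1/3, 0.0, 1.0, -1/3]])

def level_box(h):
    # Centres q^C(h) and spreads q^S(h) of cl(C_i)_h, cf. (48)-(49), using
    # L_*(h) = sqrt(1-h) and R_*(h) = 1 - sqrt(h) for the references in (53)
    sq, sq1 = np.sqrt(h), np.sqrt(1.0 - h)
    lo = np.array([-23 - 8 * sq1, -14 - 5 * (1 - sq), 0.0, 0.0, 0.0])
    hi = np.array([-23 + 8 * sq1, -14 + 5 * sq1, 0.0, 0.0, 0.0])
    return (hi + lo) / 2, (hi - lo) / 2

def tau_min(h):
    # tau^min of (14)-(15) with q^C(h), q^S(h) in place of c^C, c^S
    qc, qs = level_box(h)
    num, den = M @ qc, np.abs(M) @ qs
    return np.min(num[den > 0] / den[den > 0]) if (den > 0).any() else np.inf

# Plain bisection on h: by Theorem 4, tau_min(h) >= 1 iff mu_NS(x*) >= 1 - h.
lo, hi = 1e-9, 1.0 - 1e-9          # tau_min(lo) < 1 <= tau_min(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if tau_min(mid) < 1.0 else (lo, mid)

print(1.0 - hi)                    # mu_NS(x*) ~ 0.6824
```

With this data the crossing can even be solved in closed form, \(\mu _{NS}(\varvec{x}^*)=(19/23)^2=361/529\approx 0.68242\), which agrees with the bisection value above and with the approximation 0.682408 reported in Example 3 up to its stopping tolerance.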

The next theorem shows that \(\mu _{NS}(\varvec{x}^*)\) can still be calculated rather easily in the case where \(\varPhi \) is composed of non-interactive L-L fuzzy numbers \(C_i=(c_i^\mathrm{L},c_i^\mathrm{R},\alpha _i,\beta _i)_{LL}\), \(i =1,2,\ldots ,n\), even when they are not symmetric, i.e., \(\alpha _i \ne \beta _i\).

Theorem 5

Assume that \(L_i:[0,+\infty )\rightarrow [0,1]\) and \(R_i:[0,+\infty )\rightarrow [0,1]\), \(i=1,2,\ldots ,n\) are all equal to a common reference function \(L:[0,+\infty )\rightarrow [0,1]\). Let \(\varvec{x}^*\) be an NBF solution which is optimal to Problem (3) with objective function coefficient vector \((\varvec{c}^\mathrm{L}+\varvec{c}^\mathrm{R})/2\), where \(\varvec{c}^\mathrm{L}=(c_1^\mathrm{L}, c_2^\mathrm{L},\ldots ,c_n^\mathrm{L})^\mathrm{T}\) and \(\varvec{c}^\mathrm{R}=(c_1^\mathrm{R}, c_2^\mathrm{R},\ldots ,c_n^\mathrm{R})^\mathrm{T}\). If \(\tau ^\mathrm{min}\) with substitutions of \(q_i^\mathrm{C}(1)\), \(q_i^\mathrm{S}(1)\) and \(\varvec{x}^*\) for \(c_i^\mathrm{C}\), \(c_i^\mathrm{S}\) and \(\hat{\varvec{x}}\) of (14) is not less than 1 or \(q_i^\mathrm{S}(1)=0\), \(i\in N\), we obtain the necessary optimality degree as

$$\begin{aligned} \mu _{NS}(\varvec{x}^*)=1-L\left( \min _{{\mathop {w_k >0}\limits ^{\scriptstyle k=1,\ldots ,n-m}}} \displaystyle \frac{r_k}{w_k} \right) , \end{aligned}$$
(54)

where for \(k=1,2,\ldots ,n-m\), \(w_k\) and \(r_k\) are defined by

$$\begin{aligned} w_k&= \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)>0 \end{array}}^n M_{kj}(\varvec{x}^*)\alpha _j - \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)<0 \end{array}}^n M_{kj}(\varvec{x}^*)\beta _j, \end{aligned}$$
(55)
$$\begin{aligned} r_k&= \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)>0 \end{array}}^n M_{kj}(\varvec{x}^*)c_j^\mathrm{L} + \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)<0 \end{array}}^n M_{kj}(\varvec{x}^*)c_j^\mathrm{R}. \end{aligned}$$
(56)

Proof

From Theorem 4(a), we consider the minimal h such that \(\tau ^\mathrm{min} \ge 1\), where \(\tau ^\mathrm{min}\) is obtained from (15) with substitutions of \(q_i^\mathrm{C}(h)\), \(q_i^\mathrm{S}(h)\) and \(\varvec{x}^*\) for \(c_i^\mathrm{C}\), \(c_i^\mathrm{S}\) and \(\hat{\varvec{x}}\). Under these substitutions, for \(k=1,2,\ldots ,n-m\), the denominator of \(\tau _k\) of (14) is always non-negative, and thus, the numerator of \(\tau _k\) of (14) should be non-negative for having \(\tau ^\mathrm{min} \ge 1\). Therefore, from (14) and (15) with substitutions of \(q_j^\mathrm{C}(h)\) defined by (48), \(q_j^\mathrm{S}(h)\) defined by (49) and \(\varvec{x}^*\) for \(c_j^\mathrm{C}\), \(c_j^\mathrm{S}\) and \(\hat{\varvec{x}}\), we have the following equivalences:

$$\begin{aligned}&\tau ^\mathrm{min} \ge 1 \nonumber \\&\quad \Leftrightarrow \sum _{j=1}^n |M_{kj}(\varvec{x}^*)||c_j^\mathrm{R}-c_j^\mathrm{L} + L_*(h) (\alpha _j + \beta _j)| \nonumber \\&\quad \le \sum _{j=1}^n M_{kj}(\varvec{x}^*)(c_j^\mathrm{L} + c_j^\mathrm{R} + L_*(h) (\beta _j - \alpha _j)),\ k=1,2,\ldots ,n-m \nonumber \\&\quad \Leftrightarrow \sum _{j=1}^n |M_{kj} (\varvec{x}^*)| (c_j^\mathrm{R} - c_j^\mathrm{L}) -\sum _{j=1}^n M_{kj}(\varvec{x}^*)(c_j^\mathrm{L} + c_j^\mathrm{R}) \nonumber \\&\quad \le L_*(h)\left( \sum _{j=1}^n M_{kj}(\varvec{x}^*)(\beta _j-\alpha _j) -\sum _{j=1}^n |M_{kj}(\varvec{x}^*)|(\alpha _j+\beta _j) \right) ,\ k=1,2,\ldots ,n-m \nonumber \\&\quad \Leftrightarrow -2 \left( \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)>0 \end{array}}^n M_{kj}(\varvec{x}^*) c_j^\mathrm{L} + \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)<0 \end{array}}^n M_{kj}(\varvec{x}^*) c_j^\mathrm{R} \right) \nonumber \\&\quad \le -2 L_*(h)\left( \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)>0 \end{array}}^n M_{kj}(\varvec{x}^*) \alpha _j - \sum _{\begin{array}{c} j=1 \\ M_{kj}(\varvec{x}^*)<0 \end{array}}^n M_{kj}(\varvec{x}^*) \beta _j \right) ,\ k=1,2,\ldots ,n-m \nonumber \\&\quad \Leftrightarrow r_k \ge L_*(h) w_k, \ k=1, 2, \ldots , n-m. \end{aligned}$$
(57)

Since \(\alpha _j,\beta _j \ge 0\), \(j=1,2,\ldots ,n\), we have \(w_k \ge 0\), \(k=1,2,\ldots ,n-m\). Moreover, \(r_k \ge 0\) is guaranteed by the assumption \(\tau ^\mathrm{min} \ge 1\) with substitutions of \(q_i^\mathrm{C}(1)\), \(q_i^\mathrm{S}(1)\) and \(\varvec{x}^*\) for \(c_i^\mathrm{C}\), \(c_i^\mathrm{S}\) and \(\hat{\varvec{x}}\) in (14), for \(i=1,2,\ldots ,n\). These facts ensure the last equivalence. From \(L_*(h)= \inf \{ r\in [0,+\infty ) : L(r) \le h \}\), \(L_*(h) \le v\) is equivalent to \(h \ge L(v)\). Hence, from (57), the minimal h satisfying \(\tau ^\mathrm{min}\ge 1\) is obtained as

$$\begin{aligned} {\check{h}}=L \left( \min _{{\mathop {w_k>0}\limits ^{k=1,\ldots ,n-m,}}} \frac{r_k}{w_k}\right) . \end{aligned}$$

Hence, we obtain \(\mu _{NS}(\varvec{x}^*)=1-{\check{h}}\) and the theorem is proven. \(\square \)

Example 4

Let us consider the problem in Example 1 again where \(C_1\) and \(C_2\) are non-interactive L-L fuzzy numbers such that \(C_1 = (-23,-23,4,8)_{LL}\), \(C_2 = (-14, -14, 2, 5)_{LL}\) and \(C_i=(0,0,0,0)_{LL}\), \(i=3,4,5\). As for the reference function \(L(\cdot )\), we assume a non-linear one with

$$\begin{aligned} L(r) = \left\{ \begin{array}{ll} (1-r)^2, &{} \text {if } r\in [0,1], \\ 0, &{} \text {otherwise.} \end{array} \right. \end{aligned}$$

We obtain \(\varvec{c}^\mathrm{C}=(-23,-14,0,0,0)^\mathrm{T}\). Thus, we consider an NBF solution \(\varvec{x}^*=(6,6,0,0,3)^\mathrm{T}\) as in the previous examples.

We calculate \(\mu _{NS}(\varvec{x}^*)\). The matrix \(M(\varvec{x}^*)\) defining the associated optimality assurance cone \(\mathscr {M}(\varvec{x}^*)\) is shown in Example 2. As all \(C_i\), \(i=1,2,\ldots ,5\) are L-L fuzzy numbers, we apply Theorem 5. We obtain

$$\begin{aligned}w_1= & {} \frac{1}{9}\cdot 4 - (-\frac{1}{3})\cdot 5 + 1 \cdot 0 + \frac{1}{3}\cdot 0 =\frac{19}{9}, \\ w_2= & {} - (-\frac{4}{9})\cdot 8 + \frac{1}{3}\cdot 2 + 1 \cdot 0 - (-\frac{1}{3})\cdot 0 =\frac{38}{9}, \\ r_1= & {} \frac{1}{9}\cdot (-23) - \frac{1}{3}\cdot (-14) + 1 \cdot 0 + \frac{1}{3}\cdot 0=\frac{19}{9}, \\ r_2= & {} -\frac{4}{9}\cdot (-23) + \frac{1}{3}\cdot (-14) +1\cdot 0 - \frac{1}{3}\cdot 0=\frac{50}{9}. \end{aligned}$$

Then we obtain \(r_1/w_1=19/19=1\) and \(r_2/w_2=25/19>1\). \({\check{h}}=L(\min (1,\frac{25}{19}))=L(1)=0\). Therefore, we obtain \(\mu _{NS}(\varvec{x}^*)=1-{\check{h}}=1\).
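As an illustrative check (not part of the paper), the quantities of (55) and (56) and the resulting degree can be recomputed with exact rational arithmetic; the matrix \(M(\varvec{x}^*)\) below is copied from Example 2.

```python
from fractions import Fraction as F

# M(x*) from Example 2; alpha, beta and c^L = c^R from Example 4.
M = [[F(1, 9), F(-1, 3), F(1), F(0), F(1, 3)],
     [F(-4, 9), F(1, 3), F(0), F(1), F(-1, 3)]]
alpha = [4, 2, 0, 0, 0]
beta = [8, 5, 0, 0, 0]
cL = cR = [-23, -14, 0, 0, 0]

def w_r(row):
    # (55)-(56): split each row by the sign of M_kj(x*)
    w = sum(m * a for m, a in zip(row, alpha) if m > 0) \
        - sum(m * b for m, b in zip(row, beta) if m < 0)
    r = sum(m * c for m, c in zip(row, cL) if m > 0) \
        + sum(m * c for m, c in zip(row, cR) if m < 0)
    return w, r

ratios = [r / w for w, r in map(w_r, M) if w > 0]
L = lambda v: (1 - min(v, 1)) ** 2  # reference function of Example 4
mu_NS = 1 - L(min(ratios))
print(min(ratios), mu_NS)  # -> 1 1
```

This reproduces \(r_1/w_1=1\), \(r_2/w_2=25/19\), and hence \(\mu _{NS}(\varvec{x}^*)=1\).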

3.3 The case of oblique fuzzy vectors

In this subsection, we consider the case where \(\varPhi \) is an oblique fuzzy vector (Inuiguchi et al. 2003) defined by the following membership function:

$$\begin{aligned} \mu _{\varPhi }(\varvec{c})=\min _{i=1,\ldots ,n} \mu _{\varPsi _i}(\varvec{d}_i^\mathrm{T} \varvec{c}) =\mu _{\varPsi }(\varvec{D} \varvec{c}), \end{aligned}$$
(58)

where \(\varvec{D}\) is the obliquity matrix and \(\varvec{d}_i^\mathrm{T}\) is the i-th row of \(\varvec{D}\), i.e., we have

$$\begin{aligned} \varvec{D}=(\varvec{d}_1, \ \varvec{d}_2, \ \ldots , \ \varvec{d}_n)^\mathrm{T}. \end{aligned}$$
(59)

Since an obliquity matrix is non-singular, there always exists \(\varvec{D}^{-1}\). Together with the assumption that \(\varPsi \) is a vector of non-interactive fuzzy numbers \(G_1, G_2,\ldots , G_n\), we obtain the following theorem.

Theorem 6

Let \(\varPhi \) denote an oblique fuzzy vector defined by (58) and \(\varvec{x}^*\) be an NBF solution. Then we obtain

$$\begin{aligned} \mathrm{cl}(\varPhi )_{1-h} \subseteq {{\mathscr {M}}}(\varvec{x}^*) \Leftrightarrow \mathrm{cl}(\varPsi )_{1-h} \subseteq {{\mathscr {M}}}_D(\varvec{x}^*), \end{aligned}$$
(60)

where \({{\mathscr {M}}}(\varvec{x}^*)\) is the cone defined by (8) while \({{\mathscr {M}}}_D(\varvec{x}^*)\) is the cone defined by

$$\begin{aligned} {{\mathscr {M}}}_D(\varvec{x}^*) =\{ \varvec{r} \in {\mathbb {R}}^n: \varvec{M}(\varvec{x}^*)\varvec{D}^{-1}\varvec{r}\ge \mathbf{0} \}. \end{aligned}$$
(61)

Proof

By (8), we have

$$\begin{aligned} {\mathrm {cl}}(\varPhi )_{1-h} \subseteq {{\mathscr {M}}}(\varvec{x}^*) \Leftrightarrow \varvec{c} \in \mathrm{cl}(\varPhi )_{1-h} \text { implies } \varvec{M}(\varvec{x}^*)\varvec{c}\ge \mathbf{0} \end{aligned}$$
(62)

By (58), \(\varvec{c} \in \mathrm{cl} (\varPhi )_{1-h}\) is equivalent to \(\varvec{Dc} \in \mathrm{cl}(\varPsi )_{1-h}\); equivalently, \(\varvec{r} \in \mathrm{cl}(\varPsi )_{1-h}\) if and only if \(\varvec{D}^{-1}\varvec{r} \in \mathrm{cl}(\varPhi )_{1-h}\). This implies

$$\begin{aligned} \mathrm{cl}(\varPhi )_{1-h} \subseteq {{\mathscr {M}}}(\varvec{x}^*)&\Leftrightarrow \varvec{r} \in \mathrm{cl}(\varPsi )_{1-h} \ \text{ implies } \ \varvec{M}(\varvec{x}^*)\varvec{D}^{-1}\varvec{r}\ge \mathbf{0} \nonumber \\&\Leftrightarrow \mathrm{cl}(\varPsi )_{1-h}\subseteq {{\mathscr {M}}}_D(\varvec{x}^*). \end{aligned}$$
(63)

\(\square \)

Applying Theorem 6 to (28), we obtain

$$\begin{aligned} \mu _{NS}(\varvec{x}^*)=\max \{h:\mathrm{cl}(\varPsi )_{1-h}\subseteq {{\mathscr {M}}}_D(\varvec{x}^*)\}. \end{aligned}$$
(64)

Namely, Theorem 6 ensures that the necessary optimality degree \(\mu _{NS} (\varvec{x}^*)\) under an oblique fuzzy vector \(\varPhi \) can be calculated in the same way as the one under a vector \(\varPsi \) of non-interactive fuzzy numbers by replacing \({{\mathscr {M}}}(\varvec{x}^*)\) with \({{\mathscr {M}}}_D(\varvec{x}^*)\). More concretely, \(\mu _{NS}(\varvec{x}^*)\) is calculated by Algorithm A with \(\varPhi \) and \({{\mathscr {M}}}(\varvec{x}^*)\) replaced by \(\varPsi \) and \({{\mathscr {M}}}_D(\varvec{x}^*)\), respectively.
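As a small sketch of this replacement (a hypothetical helper, not from the paper), the matrix characterizing \({{\mathscr {M}}}_D(\varvec{x}^*)\) in (61) is obtained by multiplying \(\varvec{M}(\varvec{x}^*)\) on the right by \(\varvec{D}^{-1}\).

```python
import numpy as np

def oblique_cone_matrix(M, D):
    """Matrix defining the cone M_D(x*) of (61): M(x*) D^{-1}."""
    return M @ np.linalg.inv(D)

# Sanity check: with D = I the oblique case reduces to the
# non-interactive case, i.e., the cone matrix is M(x*) itself.
M = np.array([[1.0, -1.0, 0.5],
              [0.0, 2.0, -0.5]])
assert np.allclose(oblique_cone_matrix(M, np.eye(3)), M)
```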

When \(\varPsi \) is a vector with symmetric fuzzy numbers \(G_i=(g_i^\mathrm{C},g_i^\mathrm{C}, \alpha _i,\alpha _i)_{LL}\), we can apply (40) with substitutions of \(\varPsi \) and \({{\mathscr {M}}}_D(\varvec{x}^*)\) for \(\varPhi \) and \({{\mathscr {M}}}(\varvec{x}^*)\). Similarly, when \(\varPsi \) is defined by fuzzy numbers \(G_i=(g_i^\mathrm{L},g_i^\mathrm{R},\alpha _i,\beta _i)_{L L}\), \(i=1,2,\ldots ,n\), we can apply (54) with substitutions of \(\varPsi \) and \({{\mathscr {M}}}_D(\varvec{x}^*)\) for \(\varPhi \) and \({{\mathscr {M}}}(\varvec{x}^*)\).

Example 5

Let us consider the problem in Example 1 again with an oblique fuzzy vector \(\varPhi \) defined by an obliquity matrix \(\varvec{D}\),

$$\begin{aligned} D =\left( \begin{array}{ccccc} 10 &{} 1 &{} 0 &{} 0 &{} 0 \\ -8 &{} 15 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1 \end{array} \right) \end{aligned}$$
(65)

and non-interactive L-L fuzzy numbers \(G_1 = (-399,-399,18,140)_{LL}\), \(G_2 = (-139, -139, 112, 204)_{LL}\) and \(G_i=(0,0,0,0)_{LL}\), \(i=3,4,5\) (see (38)). Letting \(g_i^\mathrm{C}=(g_i^\mathrm{L}+g_i^\mathrm{R})/2\), \(i=1,2,\ldots ,5\), we have \(\varvec{g}^\mathrm{C}=(g_1^\mathrm{C},g_2^\mathrm{C},\ldots ,g_5^\mathrm{C})^\mathrm{T} =(-399,-139,0,0,0)^\mathrm{T}\). Then we have \(\varvec{c}^\mathrm{C}=\varvec{D}^{-1}\varvec{g}^\mathrm{C}= (-37,-29,0,0,0)^\mathrm{T}\). We consider the NBF solution \(\varvec{x}^* = (6,6,0,0,3)^\mathrm {T}\), which solves the LP problem with objective function coefficient vector \(\varvec{c}^\mathrm{C}\). We calculate \(\mu _{NS}(\varvec{x}^*)\) using (60) or, equivalently, (64).

The cone \({{\mathscr {M}}}_D(\varvec{x}^*)\) is characterized by the following matrix:

$$\begin{aligned} M(\varvec{x}^*)D^{-1}= & {} \left( \begin{array}{ccccc} \frac{1}{9} &{} -\frac{1}{3} &{} 1 &{} 0 &{} \frac{1}{3} \\ -\frac{4}{9} &{} \frac{1}{3} &{} 0 &{} 1 &{} -\frac{1}{3} \end{array} \right) \left( \begin{array}{ccccc}10 &{} 1 &{} 0 &{} 0 &{} 0 \\ -8 &{} 15 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1 \end{array} \right) ^{-1} \\= & {} \left( \begin{array}{ccccc} \frac{1}{9} &{} -\frac{1}{3} &{} 1 &{} 0 &{} \frac{1}{3} \\ -\frac{4}{9} &{} \frac{1}{3} &{} 0 &{} 1 &{} -\frac{1}{3} \end{array}\right) \left( \begin{array}{ccccc} \frac{15}{158} &{} \frac{-1}{158} &{} 0 &{} 0 &{} 0 \\ \frac{4}{79} &{} \frac{5}{79} &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1 \end{array} \right) \\= & {} \left( \begin{array}{ccccc} -\frac{1}{158} &{} -\frac{31}{1422} &{} 1 &{} 0 &{} \frac{1}{3} \\ -\frac{2}{79} &{} \frac{17}{711} &{} 0 &{} 1 &{} -\frac{1}{3} \end{array} \right) . \end{aligned}$$

As \(\varPsi \) is composed of L-L fuzzy numbers \(G_i\), \(i=1,2,\ldots ,5\), we may apply Theorem 5 to obtain \(\mu _{NS}(\varvec{x}^*)\) by (64). We compute \(w_1\), \(w_2\), \(r_1\) and \(r_2\) as

$$\begin{aligned} w_1= & {} -(-\frac{1}{158})\cdot 140 - (-\frac{31}{1422})\cdot 204 + 1\cdot 0 + \frac{1}{3}\cdot 0 =\frac{16}{3}, \\ w_2= & {} -(-\frac{2}{79})\cdot 140 + \frac{17}{711}\cdot 112+ 1 \cdot 0 -(-\frac{1}{3}) \cdot 0 =\frac{56}{9}, \\ r_1= & {} -\frac{1}{158}\cdot (-399) - \frac{31}{1422}\cdot (-139) + 1 \cdot 0 +\frac{1}{3}\cdot 0=\frac{50}{9}, \\ r_2= & {} -\frac{2}{79}\cdot (-399) + \frac{17}{711}\cdot (-139) +1\cdot 0 - \frac{1}{3}\cdot 0=\frac{61}{9}. \end{aligned}$$

Then, we compute \(r_1/w_1\!=\!25/24 \!>\! 1\) and \(r_2/w_2\!=\!61/56\!>\!1\). \({\check{h}}\!=\!L(\min (\frac{25}{24},\frac{61}{56}))\!=\!0\). Therefore, we obtain \(\mu _{NS}(\varvec{x}^*)=1-{\check{h}}=1\).
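The computations of Example 5 can likewise be verified with exact rational arithmetic. The following illustrative sketch (not from the paper) recomputes \(M(\varvec{x}^*)\varvec{D}^{-1}\) and the quantities of (55) and (56) applied to \(\varPsi \).

```python
from fractions import Fraction as F

# M(x*) from Example 2 and D^{-1} from Example 5; the 2x2 block
# [[10, 1], [-8, 15]] of D has determinant 158.
M = [[F(1, 9), F(-1, 3), 1, 0, F(1, 3)],
     [F(-4, 9), F(1, 3), 0, 1, F(-1, 3)]]
Dinv = [[F(15, 158), F(-1, 158), 0, 0, 0],
        [F(4, 79), F(5, 79), 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1]]
MD = [[sum(M[i][k] * Dinv[k][j] for k in range(5)) for j in range(5)]
      for i in range(2)]

alpha = [18, 112, 0, 0, 0]
beta = [140, 204, 0, 0, 0]
g = [F(-399), F(-139), 0, 0, 0]  # here g^L = g^R = g^C

ratios = []
for row in MD:
    # (55)-(56) applied to the transformed cone matrix M(x*) D^{-1}
    w = sum(m * a for m, a in zip(row, alpha) if m > 0) \
        - sum(m * b for m, b in zip(row, beta) if m < 0)
    r = sum(m * c for m, c in zip(row, g) if m > 0) \
        + sum(m * c for m, c in zip(row, g) if m < 0)
    ratios.append(r / w)

print(ratios)  # -> [Fraction(25, 24), Fraction(61, 56)]
```

Both ratios exceed 1, so \({\check{h}}=L(\min (\frac{25}{24},\frac{61}{56}))=0\) and \(\mu _{NS}(\varvec{x}^*)=1\), in agreement with the example.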

The situation of the necessarily optimal solution \(\varvec{x}^* = (6,6,0,0,3)^\mathrm {T}\) is depicted in Fig. 8 on the \(x_1\)\(x_2\) coordinate. In this figure, we observe \(\mathrm{cl}({\check{\varPhi }})_0 \subseteq {\mathscr {\check{M}}}(\varvec{x}^*)\), where \({\check{\varPhi }}\) and \({\mathscr {\check{M}}}(\varvec{x}^*)\) are the projected sets of \({\varPhi }\) and \({{\mathscr {M}}}(\varvec{x}^*)\) on the \(c_1\)\(c_2\) coordinate. This confirms that \(\mu _{NS}(\varvec{x}^*)=1\).

Fig. 8
figure 8

The necessarily optimal solution in Example 5

4 Conclusion

In this paper, we investigated a method for computing the necessary optimality degrees of a non-degenerate basic solution of a linear programming problem with fuzzy objective function coefficients. After reviewing the tolerance analysis, we applied it to a linear programming problem with interval objective function coefficients. We showed that the necessary optimality of a non-degenerate basic feasible solution can be tested easily by the tolerance analysis. We extended this approach to the calculation of the necessary optimality degrees of non-degenerate basic feasible solutions to linear programming problems with several kinds of fuzzy objective function coefficient vectors.

The main results obtained in this paper are as follows:

  • When the fuzzy objective coefficient vector is a non-interactive fuzzy vector composed of non-interactive symmetric fuzzy numbers, the necessary optimality degree of a non-degenerate basic feasible solution is obtained by simple calculations.

  • When the fuzzy objective coefficient vector is a non-interactive fuzzy vector composed of non-interactive L-L fuzzy numbers, the necessary optimality degree of a non-degenerate basic feasible solution is obtained also by simple calculations.

  • When the fuzzy objective coefficient vector is a non-interactive fuzzy vector composed of non-interactive general fuzzy numbers, the necessary optimality degree of a non-degenerate basic feasible solution is calculated by an iterative procedure based on a bisection method.

  • When the fuzzy objective coefficient vector is an oblique fuzzy vector composed of an obliquity matrix and non-interactive fuzzy numbers, the necessary optimality degree of a non-degenerate basic feasible solution is calculated in the same way as the case of a non-interactive fuzzy vector by replacing the optimality assurance cone with the linearly transformed optimality assurance cone by the inverse of the obliquity matrix.

From those results, we have found that the necessary optimality degree of a non-degenerate basic feasible solution is obtained easily when the fuzzy objective function coefficient vector is a non-interactive fuzzy vector or an oblique fuzzy vector.

We could extend the proposed approach to degenerate basic feasible solutions: it would work well in that case if all basic representations of the solution are obtained. An efficient calculation in this case would be a future topic. Moreover, since the oblique fuzzy vector is one of the most basic interactive fuzzy subsets, we may try to extend the proposed approach to the case of a more general interactive fuzzy set. One conceivable generalization is a fuzzy polytope such that its h-level sets are convex polytopes. In this case, we should consider an efficient method that avoids enumerating all vertices of the convex polytopes of the h-level sets, which would be one of our future topics.