1 Introduction

The fundamental concepts in uncertainty theory are uncertain measure (Liu, 2007) and product uncertain measure (Liu, 2009). Based on uncertain measure, another fundamental concept, uncertain variable (Liu, 2007), was proposed to represent uncertain quantities. To characterize an uncertain variable, the concept of uncertainty distribution was presented by Liu (2007). Following that, the sufficient and necessary condition for a real-valued function to be an uncertainty distribution was proved by Peng and Iwamura (2010) and revised by Liu and Lio (2020).

To obtain an uncertainty distribution from an expert’s experimental data, Liu (2010) initiated the study of uncertain statistics by designing a questionnaire survey for the domain expert, and Chen and Ralescu (2012) showed that the questionnaire survey works. Moreover, Liu (2010) proposed the principle of least squares to estimate the unknown parameters of an uncertainty distribution with a known functional form, while Wang and Peng (2014) suggested the method of moments. For the case with multiple domain experts, the Delphi method was revised by Wang et al. (2012) to determine the uncertainty distributions from the experts’ experimental data.

Essentially, uncertain statistics is a set of mathematical tools to collect, analyze and interpret data based on uncertainty theory, and its techniques can also be applied to observed data. In order to determine whether or not a statistical hypothesis is appropriate in view of observed data, uncertain hypothesis testing was initiated by Ye and Liu (2021, 2022). Another main topic of uncertain statistics is uncertain regression analysis, a family of techniques that apply uncertainty theory to exploring the relationship between response variables and explanatory variables. The study of uncertain regression was initiated by Yao and Liu (2018), and Lio and Liu (2018) further suggested an approach to estimating the uncertainty distribution of the disturbance term in an uncertain regression model. For predicting future values from previously observed data, uncertain time series analysis was first studied by Yang and Liu (2019). Furthermore, many methods have been proposed to connect observed data with uncertain differential equations, for example, the method of moments (Liu & Liu, 2021; Yao & Liu, 2020), minimum cover estimation (Yang et al., 2020), least squares estimation (Sheng et al., 2020), generalized moment estimation (Liu, 2021), and maximum likelihood estimation (Liu & Liu, 2022).

The Bayesian rule is a standard method for connecting observed data with a distribution function by updating the prior distribution to the posterior distribution. In most practical cases, however, the estimated distribution function is not close enough to the real frequency in an ever-changing world and should be treated as an uncertainty distribution. How could a prior uncertainty distribution be updated to a posterior distribution? The classical Bayesian method is not applicable when the prior distribution is regarded as an uncertainty distribution. Therefore, this paper proposes a new method to obtain the posterior uncertainty distribution from the prior uncertainty distribution with given observed data. The rest of the paper is organized as follows: Sect. 2 presents preliminaries on the likelihood function in the sense of uncertainty theory. Section 3 presents the method for updating the prior uncertainty distribution to the posterior uncertainty distribution based on the likelihood function and observed data. Sections 4 to 7 provide theorems and examples for calculating the posterior uncertainty distributions arising from some special prior distributions and populations. Section 8 applies the posterior uncertainty distribution to an uncertain urn problem and compares it with the posterior probability distribution. Finally, a concise conclusion is drawn in Sect. 9.

2 Preliminary

To define the posterior uncertainty distribution, preliminaries on the likelihood function are first presented in this section.

Theorem 1

[Lio and Liu (2020), Likelihood Function] Suppose \(\eta _{1},\eta _{2},\ldots ,\eta _{n}\) are iid uncertain variables with uncertainty distribution \(F(y|\theta )\) where \(\theta \) is an unknown parameter, and have observed values \(y_{1},y_{2},\ldots ,y_{n}\), respectively. If \(F(y|\theta )\) is differentiable at \(y_{1},y_{2},\ldots ,y_{n}\), then the likelihood function associated with \(y_{1},y_{2},\ldots ,y_{n}\) is

$$\begin{aligned} \text{ L }\left( \theta | y_{1},y_{2},\ldots ,y_{n}\right) = \bigwedge \limits _{i=1}^{n} F'(y_{i} | \theta ). \end{aligned}$$
(1)

Example 1

Suppose \(\eta _{1},\eta _{2},\ldots ,\eta _{n}\) are iid uncertain variables which are normal uncertain variables \(\text{ N }(e,\sigma )\) with unknown parameters e and \(\sigma \), and have observed values \(y_{1},y_{2},\ldots ,y_{n}\), respectively. Then the likelihood function associated with \(y_{1},y_{2},\ldots ,y_{n}\) is

$$\begin{aligned} \begin{aligned} \text{ L }\left( e,\sigma | y_{1},y_{2},\ldots ,y_{n}\right) =\displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}\sigma } \exp \left( \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n}\left| e-y_{i}\right| \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n}\left| e-y_{i}\right| \right) \right) ^{2}}. \\ \end{aligned} \end{aligned}$$
(2)

Example 2

(Linear Likelihood) Suppose \(\eta _{1},\eta _{2},\ldots ,\eta _{n}\) are iid uncertain variables which are linear uncertain variables \(L (a,b)\) with unknown parameters a and b, and have observed values \(y_{1},y_{2},\ldots ,y_{n}\), respectively. Then the likelihood function associated with \(y_{1},y_{2},\ldots ,y_{n}\) is

$$\begin{aligned} \text{ L }(a,b | y_{1},y_{2},\ldots ,y_{n})=\left\{ \begin{array}{cl} \displaystyle \frac{1}{b-a}, \quad &{}\text{ if } \ a \le \bigwedge \limits _{k=1}^{n} y_{k} \ \ \text{ and } \ \ b \ge \bigvee \limits _{k=1}^{n} y_{k} \\ 0, \quad &{}\text{ otherwise. } \end{array}\right. \end{aligned}$$
(3)
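
As a computational illustration (our own sketch, not part of the original derivations; it assumes NumPy, and all function names are ours), the generic likelihood of Eq. (1) can be checked against the closed form of Eq. (2) for a normal population:

```python
import numpy as np

SQRT3 = np.sqrt(3)

def normal_deriv(y, e, sigma=1.0):
    # Derivative of the normal uncertainty distribution N(e, sigma) at y.
    z = np.pi * (e - y) / (SQRT3 * sigma)
    return np.pi / (SQRT3 * sigma) * np.exp(z) / (1 + np.exp(z)) ** 2

def likelihood(deriv, ys):
    # Eq. (1): the minimum of F'(y_i | theta) over the observed values.
    return min(deriv(y) for y in ys)

ys = [173.0, 175.0]
L1 = likelihood(lambda y: normal_deriv(y, e=174.0), ys)
# Closed form of Eq. (2) with sigma = 1:
z = np.pi / SQRT3 * max(abs(174.0 - y) for y in ys)
L2 = np.pi / SQRT3 * np.exp(z) / (1 + np.exp(z)) ** 2
print(L1, L2)  # both approximately 0.2186
```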

3 Likelihood function and posterior uncertainty distribution

Suppose a prior uncertainty distribution \(\Phi (x)\), which can be regarded as the uncertainty distribution of an uncertain variable \(\xi \), is available, and some iid uncertain variables \(\eta _1,\eta _2,\ldots ,\eta _n\) with uncertainty distribution \(F(y | \xi )\) have observed values \(y_1,y_2,\ldots ,y_n\), respectively. What is the posterior uncertainty distribution \(\Psi (x|y_1,y_2,\ldots ,y_n)\)?

To answer this question, the likelihood function proposed by Lio and Liu (2020) should be applied. Essentially, the likelihood function \(\text{ L }(x|y_1,y_2,\ldots ,y_n)\) represents the likelihood that the unknown quantity takes the value x given the observed data. Note that the derivative of a posterior uncertainty distribution can also represent this likelihood. Moreover, since the posterior uncertainty distribution is updated from the prior uncertainty distribution, in the sense of uncertainty theory it is natural to require that the derivative of the posterior uncertainty distribution be proportional to the minimum of the likelihood function and the derivative of the prior uncertainty distribution, i.e.,

$$\begin{aligned} \Psi '(x | y_1,y_2,\ldots ,y_n) \propto \text{ L }(x |y_1,y_2,\ldots ,y_n) \wedge \Phi '(x)= \bigwedge \limits _{i=1}^{n} F'(y_i |x) \wedge \Phi '(x). \end{aligned}$$
(4)

Unfortunately, if a distribution function is defined by

$$\begin{aligned} \int \limits _{-\infty }^{x} \bigwedge \limits _{i=1}^{n} F'(y_i |s) \wedge \Phi '(s) \mathrm{d}s, \end{aligned}$$
(5)

then the resulting function is strictly smaller than 1 for every \(x\in \Re \). It seems that we have no alternative but to divide Eq. (5) by

$$\begin{aligned} \int \limits _{-\infty }^{+\infty } \bigwedge \limits _{i=1}^{n} F'(y_i |s) \wedge \Phi '(s) \mathrm{d}s. \end{aligned}$$

Hence the definition of the posterior uncertainty distribution is given as follows:

Definition 1

Suppose \(\xi \) is an uncertain variable with prior uncertainty distribution \(\Phi (x)\), and \(\eta _1,\eta _2,\ldots ,\eta _n\) are iid uncertain variables from a population with uncertainty distribution \(F(y | \xi )\). Suppose \(\Phi '(x)\) and \(F'(y | \xi )\) can be obtained, and \(\eta _1,\eta _2,\ldots ,\eta _n\) have observed values \(y_1,y_2,\ldots ,y_n\), respectively. Then the posterior uncertainty distribution is defined by

$$\begin{aligned} \begin{aligned} \Psi (x | y_1,y_2,\ldots ,y_n)&= \frac{\displaystyle \int _{-\infty }^{x} \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \Phi '(s) \mathrm{d}s}{\displaystyle \int _{-\infty }^{+\infty } \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \Phi '(s) \mathrm{d}s} \\&=\frac{\displaystyle \int _{-\infty }^{x} \bigwedge \limits _{i=1}^{n} F'(y_i |s) \wedge \Phi '(s) \mathrm{d}s}{\displaystyle \int _{-\infty }^{+\infty } \bigwedge \limits _{i=1}^{n} F'(y_i |s) \wedge \Phi '(s) \mathrm{d}s}. \end{aligned} \end{aligned}$$
(6)

It is clear that if

$$\begin{aligned} \int \limits _{-\infty }^{+\infty } \bigwedge \limits _{i=1}^{n} F'(y_i |s) \wedge \Phi '(s) \mathrm{d}s\ne 0, \end{aligned}$$

then the posterior uncertainty distribution defined by Eq. (6) is a continuous monotone increasing function satisfying

$$\begin{aligned} \begin{array}{cl} 0 \le \Psi (x | y_1,y_2,\ldots ,y_n) \le 1, \\ \Psi (x | y_1,y_2,\ldots ,y_n) \not \equiv 0,\\ \Psi (x | y_1,y_2,\ldots ,y_n) \not \equiv 1.\\ \end{array} \end{aligned}$$

It then follows from the criterion proved by Peng and Iwamura (2010) and revised by Liu and Lio (2020) that Eq. (6) is indeed an uncertainty distribution.
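
Definition 1 can also be evaluated numerically. The following Python sketch (ours, assuming NumPy and SciPy; the names posterior, F_prime and Phi_prime are not from the paper) computes Eq. (6) by one-dimensional quadrature, and is reused in the numerical checks below:

```python
import numpy as np
from scipy.integrate import quad

def posterior(x, ys, F_prime, Phi_prime, lo=-np.inf, hi=np.inf):
    """Numerically evaluate the posterior uncertainty distribution of Eq. (6).

    F_prime(y, s): derivative in y of the population distribution F(y | s);
    Phi_prime(s):  derivative of the prior uncertainty distribution.
    """
    integrand = lambda s: min(min(F_prime(y, s) for y in ys), Phi_prime(s))
    num, _ = quad(integrand, lo, x)    # numerator of Eq. (6)
    den, _ = quad(integrand, lo, hi)   # normalizing constant of Eq. (6)
    return num / den
```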

4 Linear prior distribution and linear population

A definition was provided by Liu (2010) that a linear uncertain variable has an uncertainty distribution

$$\begin{aligned} \Phi (x) = \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le a \\ \displaystyle \frac{x-a}{b-a}, \quad &{}\text{ if } \ a < x \le b \\ 1, \quad &{}\text{ if } \ x > b \end{array}\right. \end{aligned}$$

denoted by \(L (a,b)\) where a and b are real numbers with \(a<b\). This section illustrates the calculation for the posterior uncertainty distribution when the prior distribution is assumed to be linear and the data are observed from a population whose distribution function is also linear.

Theorem 2

Suppose \(\xi \) is an uncertain variable with linear prior uncertainty distribution \(L (a,b)\), and \(\eta _1,\eta _2,\ldots ,\eta _n\) are iid uncertain variables from a population with linear uncertainty distribution \(L (\xi -c,\xi +d), \ c,d \ge 0\) and observed values \(y_1,y_2,\ldots ,y_n\), respectively. If it is assumed that

$$\begin{aligned} \bigvee \limits _{i=1}^{n} (y_i -d)\vee a \le \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b, \end{aligned}$$
(7)

then the posterior uncertainty distribution is

$$\begin{aligned} L \left( \bigvee \limits _{i=1}^{n} (y_i -d)\vee a, \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b\right) . \end{aligned}$$

Proof

It follows from Example 2, Definition 1 and

$$\begin{aligned} \Phi ' (x) = \left\{ \begin{array}{cl} \displaystyle \frac{1}{b-a}, \quad &{} \text{ if } \ x \in [a,b] \\ 0, \quad &{}\text{ otherwise } \end{array}\right. \end{aligned}$$

that the posterior uncertainty distribution satisfies

$$\begin{aligned} \begin{aligned}&\quad \Psi '(x | \ y_1,y_2,\ldots ,y_n) \\&\quad = \frac{ \text{ L } (x|y_1,y_2,\ldots ,y_n) \wedge \Phi '(x)}{\displaystyle \int _{-\infty }^{\infty } \text{ L } (s|y_1,y_2,\ldots ,y_n) \wedge \Phi '(s) \ \mathrm{d}s} \\&\quad =\left\{ \begin{array}{cl} \frac{ \displaystyle \frac{\displaystyle 1}{c+d} \wedge \frac{1}{b-a}}{\displaystyle \int _{\bigvee \limits _{i=1}^{n} (y_i -d)\vee a}^{\bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b} \frac{1}{c+d} \wedge \frac{1}{b-a} \mathrm{d}s}, \quad &{}\text{ if } \ \bigvee \limits _{i=1}^{n} (y_i -d)\vee a \le x \le \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b \\ 0, \quad &{}\text{ otherwise } \end{array} \right. \\&\quad =\left\{ \begin{array}{cl} \frac{\displaystyle 1}{\displaystyle \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b-\bigvee \limits _{i=1}^{n} (y_i -d)\vee a}, \quad &{}\text{ if } \ \bigvee \limits _{i=1}^{n} (y_i -d)\vee a \le x \le \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b \\ 0, \quad &{}\text{ otherwise. } \end{array} \right. \end{aligned} \end{aligned}$$

Then we have

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2,\ldots ,y_n) \\&= \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le \bigvee \limits _{i=1}^{n} (y_i -d)\vee a \\ \frac{\displaystyle x-\bigvee \limits _{i=1}^{n} (y_i -d)\vee a}{\displaystyle \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b-\bigvee \limits _{i=1}^{n} (y_i -d)\vee a}, \quad &{}\text{ if } \ \bigvee \limits _{i=1}^{n} (y_i -d)\vee a< x \le \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b \\ 1, \quad &{}\text{ if } \ \displaystyle \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b < x. \end{array} \right. \end{aligned} \end{aligned}$$

That means, the posterior uncertainty distribution is

$$\begin{aligned} L \left( \bigvee \limits _{i=1}^{n} (y_i -d)\vee a, \bigwedge \limits _{i=1}^{n} (y_i +c)\wedge b\right) \end{aligned}$$

and the theorem is proved. \(\square \)

Example 3

Suppose \(\xi \) is an uncertain variable with linear prior uncertainty distribution \(L (171,175)\), and \(\eta _1,\eta _2,\eta _3\) are iid uncertain variables from a population with linear uncertainty distribution \(L (\xi -1,\xi +2)\) and observed values \(y_1=173, y_2=172, y_3=174\), respectively. Then it follows from Theorem 2 that the posterior uncertainty distribution is \(L (172,173)\), i.e.,

$$\begin{aligned} \begin{aligned} \Psi (x | \ y_1,y_2,y_3) = \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le 172 \\ \displaystyle x-172, \quad &{}\text{ if } \ 172< x \le 173 \\ 1,\quad &{}\text{ if } \ 173< x. \end{array}\right. \end{aligned} \end{aligned}$$
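
As a quick numerical check (a sketch reusing the posterior helper from Sect. 3; the finite integration bounds are ours), the posterior \(L (172,173)\) of Example 3 can be reproduced:

```python
# Prior L(171, 175), so Phi'(s) = 1/4 on [171, 175];
# population L(xi - 1, xi + 2), so F'(y | s) = 1/3 for s - 1 <= y <= s + 2.
Phi_prime = lambda s: 0.25 if 171 <= s <= 175 else 0.0
F_prime = lambda y, s: 1 / 3 if s - 1 <= y <= s + 2 else 0.0

ys = [173, 172, 174]
# The posterior is L(172, 173), so e.g. Psi(172.5) should be 0.5.
print(posterior(172.5, ys, F_prime, Phi_prime, lo=170, hi=176))  # ~0.5
```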

5 Normal prior distribution and linear population

A definition was provided by Liu (2010) that a normal uncertain variable has an uncertainty distribution

$$\begin{aligned} \Phi (x) = \left( 1+\exp \left( \frac{\pi (e-x)}{\sqrt{3}\sigma }\right) \right) ^{-1}, \quad x \in \Re \end{aligned}$$

denoted by \(\text{ N }(e,\sigma )\) where e and \(\sigma \) are real numbers with \(\sigma >0\). This section illustrates the calculation for the posterior uncertainty distribution when the prior distribution is assumed to be normal and the data are observed from a population whose distribution function is linear.

Theorem 3

Suppose \(\xi \) is an uncertain variable with normal prior uncertainty distribution \(\text{ N }(e,\sigma )\), and \(\eta _1,\eta _2,\ldots ,\eta _n\) are iid uncertain variables from a population with linear uncertainty distribution \(L (\xi -c,\xi +d), \ c,d\ge 0\) and observed values \(y_1,y_2,\ldots ,y_n\), respectively. If it is assumed that

$$\begin{aligned} \bigvee \limits _{i=1}^{n} y_i -d \le \bigwedge \limits _{i=1}^{n} y_i +c, \end{aligned}$$
(8)

then the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2,\ldots ,y_n) \\&=\left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le \bigvee \limits _{i=1}^{n} y_i -d \\ \frac{\displaystyle \int _{\bigvee \limits _{i=1}^{n} y_i -d}^{x} \frac{1}{c+d} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi (e-s)}{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi (e-s)}{\sqrt{3}\sigma }\right) ^{2}} \mathrm{d}s}{\displaystyle \int _{\bigvee \limits _{i=1}^{n} y_i -d}^{\bigwedge \limits _{i=1}^{n} y_i +c} \frac{1}{c+d} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi (e-s)}{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi (e-s)}{\sqrt{3}\sigma }\right) ^{2}} \mathrm{d}s}, \quad &{}\text{ if } \ \bigvee \limits _{i=1}^{n} y_i -d < x \le \bigwedge \limits _{i=1}^{n} y_i +c \\ 1, \quad &{}\text{ if } \ \bigwedge \limits _{i=1}^{n} y_i +c < x. \end{array} \right. \end{aligned} \end{aligned}$$

Proof

It follows from Example 2, Definition 1 and

$$\begin{aligned} \Phi ' (x) = \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( e-x\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( e-x\right) }{\sqrt{3}\sigma }\right) ^{2}} \end{aligned}$$

that the posterior uncertainty distribution satisfies

$$\begin{aligned} \begin{aligned}&\quad \Psi '(x | \ y_1,y_2,\ldots ,y_n) \\&= \frac{ \text{ L }(x|y_1,y_2,\ldots ,y_n) \wedge \Phi '(x)}{\displaystyle \int _{-\infty }^{\infty } \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \Phi '(s) \ \mathrm{d}s} \\&\quad =\left\{ \begin{array}{cl} \frac{ \displaystyle \frac{\displaystyle 1}{c+d} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( e-x\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( e-x\right) }{\sqrt{3}\sigma }\right) ^{2}}}{\displaystyle \int _{\bigvee \limits _{i=1}^{n} y_i -d}^{\bigwedge \limits _{i=1}^{n} y_i +c} \frac{1}{c+d} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( e-s\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( e-s\right) }{\sqrt{3}\sigma }\right) ^{2}} \mathrm{d}s}, \quad &{}\text{ if } \ \bigvee \limits _{i=1}^{n} y_i -d \le x \le \bigwedge \limits _{i=1}^{n} y_i +c \\ 0, \quad &{}\text{ otherwise }. \end{array} \right. \\ \end{aligned} \end{aligned}$$

Since

$$\begin{aligned} \Psi (x | \ y_1,y_2,\ldots ,y_n)=\int \limits _{-\infty }^{x} \Psi '(s | \ y_1,y_2,\ldots ,y_n) \mathrm{d}s, \end{aligned}$$

the theorem is verified. \(\square \)

Example 4

Suppose \(\xi \) is an uncertain variable with normal prior uncertainty distribution \(\text{ N }(174,1)\), and \(\eta _1,\eta _2\) are iid uncertain variables from a population with linear uncertainty distribution \(L (\xi -2,\xi +1)\) and observed values \(y_1=174, y_2=173\), respectively. Then it follows from Theorem 3 that the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2) \\&= \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le 173 \\ \frac{\displaystyle \int _{173}^{x} \frac{1}{3} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3}}\exp \left( \displaystyle \frac{\pi (174-s)}{\sqrt{3}}\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi (174-s)}{\sqrt{3}}\right) \right) ^{2}} \mathrm{d}s}{\displaystyle \int _{173}^{175} \frac{1}{3} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3}}\exp \left( \displaystyle \frac{\pi (174-s)}{\sqrt{3}}\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi (174-s)}{\sqrt{3}}\right) \right) ^{2}} \mathrm{d}s}, \quad &{}\text{ if } \ 173 < x \le 175 \\ 1, \quad &{}\text{ if } \ 175 < x. \end{array} \right. \end{aligned} \end{aligned}$$

In fact, by taking

$$\begin{aligned} z=\exp \left( \frac{\pi (174-x)}{\sqrt{3}} \right) , \end{aligned}$$

the equation

$$\begin{aligned} \frac{\displaystyle \frac{\pi }{\sqrt{3}} \exp \left( \frac{\pi (174-x)}{\sqrt{3}} \right) }{\displaystyle \left( 1+\exp \left( \frac{\pi (174-x)}{\sqrt{3}} \right) \right) ^2} = \frac{1}{3} \end{aligned}$$
(9)

can be simplified as

$$\begin{aligned} z^2 + \left( 2-\sqrt{3}\pi \right) z + 1 =0. \end{aligned}$$

Hence we can take \(x^*\) to be the root of Eq. (9) satisfying \(x^* \le 174\), i.e., \(x^* \approx 173.3725\); by symmetry about \(x=174\), the other root is approximately \(174.6275\). Then for any \(173 < x \le 175\), it can be further obtained that

$$\begin{aligned} \Psi (x | \ y_1,y_2) \approx \left\{ \begin{array}{cl} \frac{\displaystyle \Phi (x)-0.1402}{\displaystyle 0.6233}, \quad &{}\text{ if } \ 173 < x \le 173.3725 \\ \frac{\displaystyle (x-173.0651)/3}{\displaystyle 0.6233}, \quad &{}\text{ if } \ 173.3725 < x \le 174.6275 \\ \frac{\displaystyle \Phi (x)-0.2365}{\displaystyle 0.6233}, \quad &{}\text{ if } \ 174.6275 < x \le 175 \end{array}\right. \end{aligned}$$

where \(\Phi \) is the uncertainty distribution \(\text{ N }(174,1)\). Moreover, for any \(173 < x \le 175\) (otherwise, \(\Psi ' (x | \ y_1,y_2)=0\)), we have

$$\begin{aligned} \Psi ' (x | \ y_1,y_2) \approx \left\{ \begin{array}{cl} \frac{\displaystyle \Phi ' (x)}{\displaystyle 0.6233}, \quad &{}\text{ if } \ 173 < x \le 173.3725 \ \text{ or } \ 174.6275 < x \le 175 \\ \frac{\displaystyle 1}{\displaystyle 1.8699}, \quad &{}\text{ if } \ 173.3725 < x \le 174.6275 \end{array}\right. \end{aligned}$$

where \(\Phi \) is the uncertainty distribution \(\text{ N }(174,1)\). The posterior uncertainty distribution \(\Psi \) and its derivative \(\Psi '\) are revealed in Figs. 1 and 2, respectively.
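
The constants 173.3725 and 0.6233 above can be reproduced numerically; a minimal sketch (ours, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

k = np.pi / np.sqrt(3)
# Derivative of the prior N(174, 1).
phi_prime = lambda s: k * np.exp(k * (174 - s)) / (1 + np.exp(k * (174 - s))) ** 2

x_star = brentq(lambda x: phi_prime(x) - 1 / 3, 173.0, 174.0)  # root of Eq. (9)
norm, _ = quad(lambda s: min(1 / 3, phi_prime(s)), 173, 175)   # normalizing constant
print(x_star, norm)  # approximately 173.3725 and 0.6233
```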

Fig. 1 Function \(\Psi \) in Example 4

Fig. 2 Function \(\Psi '\) in Example 4

6 Linear prior distribution and normal population

This section illustrates the calculation for the posterior uncertainty distribution when the prior distribution is assumed to be linear and the data are observed from a population whose distribution function is normal.

Theorem 4

Suppose \(\xi \) is an uncertain variable with linear prior uncertainty distribution \(L (a,b)\), and \(\eta _1,\eta _2,\ldots ,\eta _n\) are iid uncertain variables from a population with normal uncertainty distribution \(\text{ N }(\xi ,\sigma )\) and observed values \(y_1,y_2,\ldots ,y_n\), respectively. Then the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2,\ldots ,y_n)\\&=\left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le a \\ \frac{\displaystyle \int _{a}^{x} \displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}\sigma } \exp \left( \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n}\left| s-y_{i}\right| \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n}\left| s-y_{i}\right| \right) \right) ^{2}} \wedge \frac{1}{b-a} \mathrm{d}s}{\displaystyle \int _{a}^{b} \displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}\sigma } \exp \left( \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n}\left| s-y_{i}\right| \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n}\left| s-y_{i}\right| \right) \right) ^{2}} \wedge \frac{1}{b-a} \mathrm{d}s}, \quad &{}\text{ if } \ a< x \le b \\ 1, \quad &{}\text{ if } \ b<x. \end{array} \right. \end{aligned} \end{aligned}$$

Proof

It follows from Definition 1 and

$$\begin{aligned} \Phi ' (x) = \left\{ \begin{array}{cl} \displaystyle \frac{1}{b-a}, \quad &{} \text{ if } \ x \in [a,b] \\ 0, \quad &{}\text{ otherwise } \end{array}\right. \end{aligned}$$

that the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned} \Psi (x | \ y_1,y_2,\ldots ,y_n)&= \frac{\displaystyle \int _{-\infty }^{x} \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \Phi '(s)\mathrm{d}s}{\displaystyle \int _{-\infty }^{\infty } \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \Phi '(s) \ \mathrm{d}s} \\&=\left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le a \\ \frac{\displaystyle \int _{a}^{x} \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \frac{1}{b-a} \mathrm{d}s}{\displaystyle \int _{a}^{b} \text{ L }(s|y_1,y_2,\ldots ,y_n) \wedge \frac{1}{b-a} \mathrm{d}s}, \quad &{}\text{ if } \ a< x \le b \\ 1, \quad &{}\text{ if } \ b<x. \end{array} \right. \\ \end{aligned} \end{aligned}$$

The theorem follows from Example 1 immediately. \(\square \)

Example 5

Suppose \(\xi \) is an uncertain variable with linear prior uncertainty distribution \(L (171,178)\), and \(\eta _1,\eta _2\) are iid uncertain variables from a population with normal uncertainty distribution \(\text{ N }(\xi ,1)\) and observed values \(y_1=173, y_2=175\), respectively. Then it follows from Theorem 4 that the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2)\\&= \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ x \le 171 \\ \frac{\displaystyle \int _{171}^{x} \displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}} \exp \left( \frac{\pi }{\sqrt{3}}(|s-173| \vee |s-175|) \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}}(|s-173| \vee |s-175|)\right) \right) ^{2}} \wedge \frac{1}{7} \mathrm{d}s}{\displaystyle \int _{171}^{178} \displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}} \exp \left( \frac{\pi }{\sqrt{3}}(|s-173| \vee |s-175|)\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}}(|s-173| \vee |s-175|)\right) \right) ^{2}} \wedge \frac{1}{7} \mathrm{d}s}, \quad &{}\text{ if } \ 171< x \le 178 \\ 1,\quad &{}\text{ if } \ 178< x. \end{array}\right. \end{aligned} \end{aligned}$$

Since the function

$$\begin{aligned} G(s)= \displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}} \exp \left( \frac{\pi }{\sqrt{3}}(|s-173| \vee |s-175|) \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}}(|s-173| \vee |s-175|)\right) \right) ^{2}} \end{aligned}$$

is symmetric about \(s=(173+175)/2=174\), reaches its maximum at \(s=174\), and decreases as s moves away from \(s=174\), we can take \(x^*\) to be the root of the equation

$$\begin{aligned} \begin{aligned}&\quad \displaystyle \frac{\displaystyle \frac{\pi }{\sqrt{3}} \exp \left( \frac{\pi }{\sqrt{3}}(|x-173| \vee |x-175|) \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}}(|x-173| \vee |x-175|)\right) \right) ^{2}}\\&= \frac{\displaystyle \frac{\pi }{\sqrt{3}} \exp \left( \frac{\pi (x-175)}{\sqrt{3}} \right) }{\displaystyle \left( 1+\exp \left( \frac{\pi (x-175)}{\sqrt{3}} \right) \right) ^2} = \frac{1}{7} \end{aligned} \end{aligned}$$
(10)

satisfying \(x^* \le 174\), i.e., \(x^*\approx 173.6983\); by symmetry about \(x=174\), the other root is approximately \(174.3017\). Then for \(171 < x \le 178\), we have

$$\begin{aligned} \Psi (x | \ y_1,y_2) \approx \left\{ \begin{array}{cl} \frac{\displaystyle \Phi _{2}(x)-0.0007}{\displaystyle 0.2578}, \quad &{}\text{ if } \ 171 < x \le 173.6983 \\ \frac{\displaystyle (x-173.0999)/7}{\displaystyle 0.2578}, \quad &{}\text{ if } \ 173.6983 < x \le 174.3017 \\ \frac{\displaystyle \Phi _{1}(x)-0.7421}{\displaystyle 0.2578}, \quad &{}\text{ if } \ 174.3017 < x \le 178 \end{array}\right. \end{aligned}$$

where \(\Phi _{1}\) and \(\Phi _{2}\) are the uncertainty distributions \(\text{ N }(173,1)\) and \(\text{ N }(175,1)\), respectively. Moreover, for \(171 < x \le 178\) (otherwise, \(\Psi ' (x | \ y_1,y_2)=0\)), it can be obtained that

$$\begin{aligned} \Psi ' (x | \ y_1,y_2) \approx \left\{ \begin{array}{cl} \frac{\displaystyle \Phi '_{2}(x)}{\displaystyle 0.2578}, \quad &{}\text{ if } \ 171 < x \le 173.6983 \\ \frac{\displaystyle 1}{\displaystyle 1.8046}, \quad &{}\text{ if } \ 173.6983 < x \le 174.3017 \\ \frac{\displaystyle \Phi '_{1}(x)}{\displaystyle 0.2578}, \quad &{}\text{ if } \ 174.3017 < x \le 178 \end{array}\right. \end{aligned}$$

where \(\Phi _{1}\) and \(\Phi _{2}\) are the uncertainty distributions \(\text{ N }(173,1)\) and \(\text{ N }(175,1)\), respectively. The posterior uncertainty distribution \(\Psi \) and its derivative \(\Psi '\) are revealed in Figs. 3 and 4, respectively.
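
Likewise, the constants 173.6983 and 0.2578 of Example 5 can be reproduced numerically (a sketch under the same assumptions as before):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

k = np.pi / np.sqrt(3)
G = lambda z: k * np.exp(-z) / (1 + np.exp(-z)) ** 2    # symmetric in z
lik = lambda s: G(k * max(abs(s - 173), abs(s - 175)))  # likelihood of Example 5

x_star = brentq(lambda x: lik(x) - 1 / 7, 173.0, 174.0)  # root of Eq. (10)
norm, _ = quad(lambda s: min(1 / 7, lik(s)), 171, 178)   # normalizing constant
print(x_star, norm)  # approximately 173.6983 and 0.2578
```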

Fig. 3 Function \(\Psi \) in Example 5

Fig. 4 Function \(\Psi '\) in Example 5

7 Normal prior distribution and normal population

This section illustrates the calculation for the posterior uncertainty distribution when the prior distribution is assumed to be normal and the data are observed from a population whose distribution function is also normal.

Theorem 5

Suppose \(\xi \) is an uncertain variable with normal prior uncertainty distribution \(\text{ N }(e,\sigma )\), and \(\eta _1,\eta _2,\ldots ,\eta _n\) are iid uncertain variables from a population with normal uncertainty distribution \(\text{ N }(\xi ,\sigma )\) and observed values \(y_1,y_2,\ldots ,y_n\), respectively. Then the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2,\ldots ,y_n) \\&= \left\{ \begin{array}{cl} \frac{\displaystyle \Phi _M (x) }{\displaystyle \Phi _M ((m+M)/2) +1- \Phi _m ((m+M)/2) }, \quad &{}\text{ if } \ x \le (m+M)/2 \\ \frac{\displaystyle \Phi _M ((m+M)/2) + \Phi _m(x) - \Phi _m((m+M)/2) }{\displaystyle \Phi _M ((m+M)/2) +1- \Phi _m ((m+M)/2) }, \quad &{}\text{ if } \ x > (m+M)/2 \\ \end{array} \right. \end{aligned} \end{aligned}$$
(11)

where \(\Phi _M\) and \(\Phi _m\) are the uncertainty distributions \(\text{ N }(M,\sigma )\) and \(\text{ N }(m,\sigma )\), respectively, and

$$\begin{aligned} M= \bigvee \limits _{i=1}^{n} y_i \vee e, \quad m= \bigwedge \limits _{i=1}^{n} y_i \wedge e. \end{aligned}$$
(12)

Proof

Since the function

$$\begin{aligned} G(z) = \frac{\exp (z)}{(1+\exp (z))^2} \end{aligned}$$

reaches its maximum at \(z=0\), decreases as z goes away from 0 and is symmetric about 0, it follows from Example 1, Definition 1 and

$$\begin{aligned} \Phi ' (x) = \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( e-x\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( e-x\right) }{\sqrt{3}\sigma }\right) ^{2}} \end{aligned}$$

that the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned}&\quad \Psi (x | \ y_1,y_2,\ldots ,y_n) \\&= \frac{ \displaystyle \int _{-\infty }^{x} \bigwedge \limits _{i=1}^{n} \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( s-y_i\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( s-y_i\right) }{\sqrt{3}\sigma }\right) ^{2}} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( e-s\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( e-s\right) }{\sqrt{3}\sigma }\right) ^{2}} \mathrm{d}s}{ \displaystyle \int _{-\infty }^{+\infty } \bigwedge \limits _{i=1}^{n} \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( s-y_i\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( s-y_i\right) }{\sqrt{3}\sigma }\right) ^{2}} \wedge \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi \left( e-s\right) }{\sqrt{3}\sigma }\right) }{\left( 1+\exp \displaystyle \frac{\pi \left( e-s\right) }{\sqrt{3}\sigma }\right) ^{2}} \mathrm{d}s} \\&= \frac{ \displaystyle \int _{-\infty }^{x} \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) \right) ^{2}}\mathrm{d}s}{ \displaystyle \int _{-\infty }^{+\infty } \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) \right) ^{2}} \mathrm{d}s}. \\ \end{aligned} \end{aligned}$$

For any \(s \le (m+M)/2\) and \(s>(m+M)/2\), it is clear that

$$\begin{aligned} \bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s| = M-s \end{aligned}$$

and

$$\begin{aligned} \bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s| = s-m, \end{aligned}$$

respectively. Since \(G\) is symmetric about zero, this means

$$\begin{aligned} \begin{aligned}&\quad \displaystyle \int \limits _{-\infty }^{+\infty } \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) \right) ^{2}}\mathrm{d}s \\&= \displaystyle \int \limits _{-\infty }^{ (m+M)/2} \!\! \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi (M-s)}{\sqrt{3}\sigma } \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi (M-s)}{\sqrt{3}\sigma }\right) \right) ^{2}}\mathrm{d}s\!+\!\! \displaystyle \int _{(m+M)/2}^{+\infty } \!\!\frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi (m-s)}{\sqrt{3}\sigma } \right) }{\left( 1+\exp \left( \displaystyle \frac{\pi (m-s)}{\sqrt{3}\sigma }\right) \right) ^{2}}\mathrm{d}s \\&=\Phi _M ((m+M)/2) +1- \Phi _m ((m+M)/2). \end{aligned} \end{aligned}$$

Similarly, it is clear that

$$\begin{aligned} \begin{aligned}&\quad \displaystyle \int \limits _{-\infty }^{x} \frac{\displaystyle \frac{\pi }{\sqrt{3} \sigma }\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) }{\left( 1+\exp \left( \displaystyle \frac{\pi }{\sqrt{3}\sigma }\bigvee \limits _{i=1}^{n} |y_i-s| \vee |e-s|\right) \right) ^{2}}\mathrm{d}s \\&= \left\{ \begin{array}{cl} \Phi _M (x), \quad &{}\text{ if } \ x \le (m+M)/2 \\ \Phi _M ((m+M)/2) + \Phi _m(x) - \Phi _m((m+M)/2), \quad &{}\text{ if } \ x > (m+M)/2. \\ \end{array} \right. \end{aligned} \end{aligned}$$

Then the theorem follows immediately. \(\square \)

Example 6

Suppose \(\xi \) is an uncertain variable with normal prior uncertainty distribution \(\text{ N }(174,1)\), and \(\eta _1,\eta _2\) are iid uncertain variables from a population with normal uncertainty distribution \(\text{ N }(\xi ,1)\) and observed values \(y_1=171, y_2=175\), respectively. Then it follows from Theorem 5 that the posterior uncertainty distribution is

$$\begin{aligned} \begin{aligned} \Psi (x | \ y_1,y_2) = \left\{ \begin{array}{cl} \frac{\displaystyle \Phi _{2} (x) }{\displaystyle \Phi _{2} (173) +1- \Phi _{1} (173) }, \quad &{}\text{ if } \ x \le 173 \\ \frac{\displaystyle \Phi _{2} (173) + \Phi _{1}(x) - \Phi _{1}(173) }{\displaystyle \Phi _{2} (173) +1- \Phi _{1} (173) }, \quad &{}\text{ if } \ x > 173 \\ \end{array}\right. \end{aligned} \end{aligned}$$

where \(\Phi _{1}\) and \(\Phi _{2}\) are the uncertainty distributions \(\text{ N }(171,1)\) and \(\text{ N }(175,1)\), respectively. It can be further obtained that

$$\begin{aligned} \begin{aligned} \Psi ' (x | \ y_1,y_2) = \left\{ \begin{array}{cl} \frac{\displaystyle \Phi '_{2} (x) }{\displaystyle \Phi _{2} (173) +1- \Phi _{1} (173) }, \quad &{}\text{ if } \ x \le 173 \\ \frac{\displaystyle \Phi '_{1}(x) }{\displaystyle \Phi _{2} (173) +1- \Phi _{1} (173) }, \quad &{}\text{ if } \ x > 173 \\ \end{array} \right. \end{aligned} \end{aligned}$$

where \(\Phi _{1}\) and \(\Phi _{2}\) are the uncertainty distributions \(\text{ N }(171,1)\) and \(\text{ N }(175,1)\), respectively. The posterior uncertainty distribution \(\Psi \) and its derivative \(\Psi '\) are revealed in Figs. 5 and 6, respectively.
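
The closed form of Theorem 5 makes this posterior especially easy to evaluate; a short sketch for Example 6 (ours, assuming NumPy):

```python
import numpy as np

k = np.pi / np.sqrt(3)
Phi = lambda x, e: 1 / (1 + np.exp(k * (e - x)))  # normal uncertainty distribution N(e, 1)

m, M = 171, 175     # m and M of Eq. (12) for y = (171, 175) and e = 174
mid = (m + M) / 2   # = 173
den = Phi(mid, M) + 1 - Phi(mid, m)

def Psi(x):
    # Eq. (11) specialized to Example 6.
    if x <= mid:
        return Phi(x, M) / den
    return (Phi(mid, M) + Phi(x, m) - Phi(mid, m)) / den

print(Psi(173.0), Psi(175.0))  # 0.5 at the midpoint; about 0.99 at x = 175
```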

Fig. 5 Function \(\Psi \) in Example 6

Fig. 6 Function \(\Psi '\) in Example 6

8 Application to uncertain urn problem

Assume Jason has filled an uncertain urn with 5 balls which are either red or black, and the distribution function of the number of red balls is completely unknown. In this case, the Laplace criterion makes you assign equal likelihood to each possible number of red balls in the urn. That means the number of red balls should be represented by

$$\begin{aligned} \xi = i \ \text{ with } \text{ likelihood } \ \frac{1}{6}, \quad i=0,1,\ldots ,5. \end{aligned}$$
(13)

Next, one ball is drawn from the urn, and the color of the drawn ball can be represented by

$$\begin{aligned} \eta =\left\{ \begin{array}{cl} 0,\quad \text{ if } \text{ the } \text{ color } \text{ of } \text{ the } \text{ drawn } \text{ ball } \text{ is } \text{ black } \\ 1,\quad \text{ if } \text{ the } \text{ color } \text{ of } \text{ the } \text{ drawn } \text{ ball } \text{ is } \text{ red. } \end{array} \right. \end{aligned}$$
(14)

According to the drawn ball’s color, the likelihood function is shown in Table 1.

Table 1 Likelihood function of the uncertain urn problem

$$\begin{aligned} \text{ L }(i \ | \ \eta =1)=\frac{i}{5}, \quad \text{ L }(i \ | \ \eta =0)=\frac{5-i}{5}, \quad i=0,1,\ldots ,5 \end{aligned}$$

By probability theory, if the drawn ball is red, then the posterior probability distribution is

$$\begin{aligned} \Psi (i | \eta =1) = \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ i<1 \\ \displaystyle \frac{1+2+\cdots +\lfloor i \rfloor }{15}, \quad &{}\text{ if } \ 1 \le i < 5 \\ 1, \quad &{}\text{ if } \ i \ge 5; \end{array} \right. \end{aligned}$$
(15)

if the drawn ball is black, then the posterior probability distribution is

$$\begin{aligned} \Psi (i | \eta =0) = \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ i<0 \\ \displaystyle \frac{5+4+\cdots +(5-\lfloor i \rfloor )}{15}, \quad &{}\text{ if } \ 0 \le i < 4 \\ 1, \quad &{}\text{ if } \ i \ge 4. \end{array} \right. \end{aligned}$$
(16)

By uncertainty theory, if the drawn ball is red, then the posterior uncertainty distribution is

$$\begin{aligned} \Psi (i | \eta =1) = \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ i<1 \\ \displaystyle \frac{\lfloor i \rfloor }{5}, \quad &{}\text{ if } \ 1 \le i < 5 \\ 1, \quad &{}\text{ if } \ i \ge 5; \end{array} \right. \end{aligned}$$
(17)

if the drawn ball is black, then the posterior uncertainty distribution is

$$\begin{aligned} \Psi (i | \eta =0) = \left\{ \begin{array}{cl} 0, \quad &{}\text{ if } \ i<0 \\ \displaystyle \frac{\lfloor i+1 \rfloor }{5}, \quad &{}\text{ if } \ 0 \le i < 4 \\ 1, \quad &{}\text{ if } \ i \ge 4. \end{array} \right. \end{aligned}$$
(18)

Obviously, the posterior probability distribution and the posterior uncertainty distribution are different. Which result is more reasonable? To answer this question, let us consider the following three options:

A: You receive $5 if the total number of red balls is 5 and lose $2 otherwise;

B: You receive $5 if the total number of black balls is 5 and lose $2 otherwise;

C: Don’t bet.

Which of A, B and C is the best choice? If the drawn ball is red and probability theory is used, then the expected incomes of A and B are

$$\begin{aligned} A=-2 \times \frac{2}{3} + 5 \times \frac{1}{3} = \frac{1}{3} \end{aligned}$$

and

$$\begin{aligned} B=-2 \times 1 + 5 \times 0 = -2, \end{aligned}$$

respectively. Since the income of C is always 0, probability theory tells you to choose A in this case; if the drawn ball is black and probability theory is used, then the expected incomes of A and B are

$$\begin{aligned} A=-2 \times 1 + 5 \times 0 = -2 \end{aligned}$$

and

$$\begin{aligned} B=-2 \times \frac{2}{3} + 5 \times \frac{1}{3} = \frac{1}{3}, \end{aligned}$$

respectively. Hence probability theory tells you to choose B. If the drawn ball is red and uncertainty theory is used, then the expected incomes of A and B are

$$\begin{aligned} A=-2 \times \frac{4}{5} + 5 \times \frac{1}{5} = -\frac{3}{5} \end{aligned}$$

and

$$\begin{aligned} B=-2 \times 1 + 5 \times 0 = -2, \end{aligned}$$

respectively. Since the income of C is always 0, uncertainty theory tells you to choose C in this case; if the drawn ball is black and uncertainty theory is used, then the expected incomes of A and B are

$$\begin{aligned} A=-2 \times 1 + 5 \times 0 = -2 \end{aligned}$$

and

$$\begin{aligned} B=-2 \times \frac{4}{5} + 5 \times \frac{1}{5} = -\frac{3}{5}, \end{aligned}$$

respectively. Again, uncertainty theory tells you to choose C.
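
The expected incomes above reduce to one-line arithmetic; a minimal sketch (the helper name is ours):

```python
def expected_income(p_win, win=5, lose=-2):
    # Expected income of a bet that pays `win` with chance p_win, else `lose`.
    return win * p_win + lose * (1 - p_win)

# Drawn ball is red:
print(expected_income(1 / 3))  # option A, posterior probability:   1/3
print(expected_income(0))      # option B, posterior probability:  -2
print(expected_income(1 / 5))  # option A, posterior uncertainty:  -3/5
print(expected_income(0))      # option B, posterior uncertainty:  -2
```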

To compare the results, let us now see how Jason filled the uncertain urn. First, Jason chose a distribution function,

$$\begin{aligned} \Upsilon (x) = \left\{ \begin{array}{cl} 0, \quad \text{ if } \ x<3 \\ 1, \quad \text{ if } \ x \ge 3 \end{array} \right. \end{aligned}$$
(19)

which is essentially the constant 3. Next, a number k was generated from the distribution function \(\Upsilon \), and k red balls and \(5-k\) black balls were put into the urn. Hence there are 3 red balls and 2 black balls in the urn. No matter what color the drawn ball is, you would lose money if probability theory is applied (you would win if and only if the number of red balls is 0 or 5). Could you still doubt that the posterior distribution obtained by uncertainty theory is better? The core reason for the failure of probability theory in this case is that the uniform prior is not close enough to the real frequency of the number of red balls (i.e., the distribution function chosen by Jason). In this case, the prior distribution should be regarded as an uncertainty distribution, and the posterior uncertainty distribution can be obtained by the method proposed in this paper.

9 Conclusion

This paper made a connection between the posterior uncertainty distribution and the likelihood function, and suggested a new method to obtain the posterior uncertainty distribution from the prior uncertainty distribution based on given observed data. Theorems and examples with special uncertainty distributions, as well as an uncertain urn problem, were provided to illustrate the application of the method.