Article

Invariance of the Mathematical Expectation of a Random Quantity and Its Consequences

by
Pierpaolo Angelini
Dipartimento di Scienze Statistiche, Università La Sapienza, Piazzale Aldo Moro 5, 00185 Roma, Italy
Risks 2024, 12(1), 14; https://doi.org/10.3390/risks12010014
Submission received: 10 November 2023 / Revised: 8 January 2024 / Accepted: 12 January 2024 / Published: 18 January 2024
(This article belongs to the Special Issue Risks Journal: A Decade of Advancing Knowledge and Shaping the Future)

Abstract

Possibility and probability are the two aspects of uncertainty, where uncertainty represents the ignorance of a given individual. The notion of alternative (or event) belongs to the domain of possibility. An event is intrinsically subdivisible, and a quadratic metric, whose value is intrinsic or invariant, is used to study it. By subdividing the notion of alternative, a joint (bivariate) distribution of mass appears. The mathematical expectation of X is proved to be invariant using joint distributions of mass. The same is true for $X_{12}$ and $X_{12 \cdots m}$. This paper describes the notion of α-product, which refers to joint distributions of mass, as a way to connect the concept of probability with multilinear matters that can be treated through statistical inference. This multilinear approach is a meaningful innovation with regard to the current literature. Linear spaces over R with different dimensions can be used as elements of probability spaces. In this study, a more general expression for a measure of variability referred to a single random quantity is obtained. This multilinear measure is obtained using different joint distributions of mass, which are all considered together.

1. Introduction

Possibility and probability are the two aspects of uncertainty, where uncertainty represents the ignorance of a given individual (Capotorti et al. 2014). Possibility means that one is interested in establishing a given set of alternatives, where each alternative contained in this set is an event (Egidi et al. 2022). An event is not a repeatable fact whose meaning is collective. An event is not even a measurable set according to measure theory. In this paper, an event is a single case, whose meaning is atomistic (de Finetti 1982b). Possibility obeys the rules of ordinary logic or two-valued logic. Probability is distributed over a given set of alternatives (Fortini et al. 2018). It obeys the rules of the logic of prevision (Berti et al. 2020), which is a many-valued logic. In this paper, ordinary logic is connected with linear spaces over R. Moreover, the logic of prevision is shown to be connected with linear spaces over R. The linear spaces over R treated in this paper are endowed with a quadratic metric, which is used to obtain indices of a statistical nature related to random quantities. Bounded and continuous random quantities, whose possible alternatives belong to a closed interval, can be estimated via discrete random quantities identifying a finite partition of elements. The same is true with respect to bounded random quantities containing a countable infinity of possible alternatives. In this study, the author was therefore interested in evaluating the probability over those incompatible and exhaustive elements that identify the possible alternatives for a discrete random quantity (Gilio and Sanfilippo 2014). Finitely additive probabilities are chosen by taking all the different elements of the partition into account. Everything is known about probability and its formal properties, but researchers have often not been concerned with defining it. In this paper, the notion of probability is shown to be an appropriate tool and valuable guide for estimating and choosing under uncertainty and riskiness, so it cannot be a first principle, unlike point and line in geometry. Probability is not something objective that can be substituted for the expectations and sensations associated with a given individual. The only probability existing in all cases is the one depending on psychological expectations and sensations (Viscusi and Evans 2006). A subjective opinion is a reasonable object of serious study. For this reason, the methodological rigour associated with the theory of decision making based on subjective probabilities cannot be denied (Angelini and Maturo 2022a). What is contained in the two paradigmatic collections "Studies in Subjective Probability", edited by Kyburg and Smokler, must still be taken into account (Cassese et al. 2020). To believe that what has been said in these two collections and in the numerous papers in accordance with them is negligible, as several scholars dealing with uncertainty and riskiness tend to think nowadays, is not useful for current quantitative research in economics and finance.

Study Objectives

In Section 2, mathematical aspects related to the space of alternatives are treated. The space of possible alternatives for a given individual is embedded in E n , where E n is an n-dimensional Euclidean space. The latter is generated by n linearly independent basis vectors. A possible event or alternative is a possible value for a random quantity. It is preferable to speak about random quantity instead of random variable. This is because it is necessary to think of the numerical interpretation of single events. This interpretation is vectorial whenever one refers to a finite partition of alternatives. Section 3 focuses on financial assets. They are random goods. Their conceptual and mathematical characteristics are handled. Financial assets are studied under uncertainty and riskiness outside the budget set of a given decision maker whenever the structure of the expected utility function describing an individual’s specific attitude toward risk is taken into account. Section 4 shows why events are intrinsically subdivisible. Their subdivision is pursued as soon as it is sufficient for obtaining a joint (bivariate) distribution of mass. A more general expression for a measure of variability, referring to a single random quantity, is obtained by subdividing the notion of event. In Section 5, the mathematical expectation of X is proved to be invariant. Preliminary results to the proof of Proposition 1 are provided in its subsections. By subdividing the notion of alternative, it is possible to prove that the mathematical expectation of X 12 and X 12 m is invariant too. The mathematical expectation of X, X 12 , and X 12 m is always obtained inside linear spaces over R provided with a quadratic metric. An appropriate dimension of each linear space over R is taken into account. It is known that the notion of exchangeability referring to events is a way to connect the concept of subjective probability with the classical course of action of statistical inference. This paper shows that the notion of α -product, referring to bivariate distributions of mass, is a way to connect the concept of subjective probability with multilinear matters that can be treated using statistical inference. The α -product between two n-dimensional vectors, where the latter represent the possible values for two marginal random quantities, identifies the distance between two marginal distributions of mass. Finally, Section 6 provides conclusions and future perspectives.

2. Single Events Studied in the Space of Alternatives

2.1. Preliminaries

Let x 1 , x 2 , , x n be ordered real variables. All the n-tuples of real numbers being taken by such variables identify a set. Each n-tuple is a point in the real n-space denoted by R n . The real numbers of each n-tuple are the coordinates of a point. Let
P j = P ( x 1 j , x 2 j , , x n j )
be a point of R n , whereas the point denoted by
O = O ( 0 , 0 , , 0 )
is the origin of R n . If the rank of the matrix given by
P 1 , P 2 , , P n
is equal to n, then the dimension of R n is equal to n. A located vector of E n at its origin is defined as a pair of points written as O P j . O P j is also denoted by x ( j ) or x j , where the n real numbers expressed by x i ( j ) , i = 1 , 2 , , n are the components of x j . The dimension of E n is equal to n. E n is a closed structure with respect to two binary operations (see von Neumann 1936 for another closed structure related to E n ). Thus, it is endowed with a component-wise addition denoted by
x + y = { x 1 + y 1 , x 2 + y 2 , , x n + y n } .
Moreover, it is endowed with a component-wise scalar multiplication denoted by
α x = { α x 1 , α x 2 , , α x n } ,
where α is any real number. A point of R n is a vector of E n . The same ordered n-tuple of real numbers identifies them. Suppose that the coordinates of (1) are pairwise orthogonal. Since the same hypothesis holds for any P R n , the n real variables denoted by x 1 , x 2 , , x n identify a Cartesian coordinate system for R n . The structure of E n is of a Euclidean nature, so it is possible to use the Pythagorean theorem in order to determine the distance of P from O. One writes
$d(O, P) = \sqrt{\sum_{i=1}^{n} x_i^{2}}.$
With respect to $E_n$, this distance identifies the length of $x$, denoted by $\|x\|$.
Let $P_j$ and $P_h$ be two points belonging to $\mathbb{R}^n$, whose coordinates are pairwise orthogonal. A located vector denoted by $OP_j$ corresponds to $P_j$. It is visualized as an arrow between $O$ and $P_j$. $O$ is called its beginning point, and $P_j$ is its endpoint. The located vector under consideration is denoted as $x_j \in E_n$. Similarly, the located vector denoted by $OP_h$ corresponds to $P_h$. It is visualized as an arrow between $O$ and $P_h$. $O$ is called its beginning point, and $P_h$ is its endpoint. This located vector is denoted by $x_h \in E_n$. $P_j$ and $P_h$ also characterize $P_j P_h$; hence, $x_h - x_j$. With respect to $\mathbb{R}^n$, the square of the distance between $P_j$ and $P_h$ is given by
$d^2(P_j, P_h) = \sum_{i=1}^{n} \left( x_i^{(j)} - x_i^{(h)} \right)^2 = \sum_{i=1}^{n} \left( x_i^{(j)} \right)^2 + \sum_{i=1}^{n} \left( x_i^{(h)} \right)^2 - 2 \sum_{i=1}^{n} x_i^{(j)} x_i^{(h)}.$
With respect to $E_n$, the norm of $x_h - x_j$ is given by
$\| x_h - x_j \|^2 = \| x_h \|^2 + \| x_j \|^2 - 2\, \langle x_h, x_j \rangle,$
where $\langle x_h, x_j \rangle$ is called the scalar or inner product between $x_h$ and $x_j$. If $\langle x_h, x_j \rangle = 0$, then the two vectors are orthogonal. Given $\alpha \in \mathbb{R}$, the function expressed by $f(\alpha) = \| x_h - \alpha x_j \|^2$ is such that $f(\alpha) \geq 0$. The inequalities denoted by
$| \langle x_h, x_j \rangle | \leq \| x_h \| \, \| x_j \|,$
and
$\| x_h + x_j \| \leq \| x_h \| + \| x_j \|$
can therefore be written. The former is called the Schwarz inequality, whereas the latter is called the triangle inequality. The metric structure of $E_n$ is now complete. From (9), it follows that
$\frac{\langle x_h, x_j \rangle}{\| x_h \| \, \| x_j \|} \leq 1,$
so there exists one and only one angle denoted by $\Theta$, $0 \leq \Theta \leq \pi$, such that
$\cos \Theta = \frac{\langle x_h, x_j \rangle}{\| x_h \| \, \| x_j \|}.$
The angle between x h and x j is expressed by Θ .
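The following minimal NumPy sketch checks these metric notions numerically for two illustrative vectors of $E_3$ (the numerical values are chosen for illustration and do not come from the paper): the Pythagorean length, the Schwarz and triangle inequalities, and the angle $\Theta$.

```python
import numpy as np

# Two illustrative located vectors of E_3 (any n works).
x_h = np.array([1.0, 2.0, 2.0])
x_j = np.array([3.0, 0.0, 4.0])

# Length (Euclidean norm) obtained from the Pythagorean theorem.
norm_h = np.sqrt(np.sum(x_h ** 2))        # ||x_h|| = 3
norm_j = np.sqrt(np.sum(x_j ** 2))        # ||x_j|| = 5

# Scalar (inner) product and the Schwarz inequality.
inner = float(np.dot(x_h, x_j))           # <x_h, x_j> = 11
assert abs(inner) <= norm_h * norm_j

# Triangle inequality.
assert np.linalg.norm(x_h + x_j) <= norm_h + norm_j

# The unique angle Theta, 0 <= Theta <= pi, between x_h and x_j.
theta = np.arccos(inner / (norm_h * norm_j))
print(norm_h, norm_j, inner, theta)
```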
Since the dimension of E n is equal to n, there are at most n linearly independent vectors. Given n vectors denoted by e i , i = 1 , 2 , , n , if the following expression
$\lambda^1 e_1 + \lambda^2 e_2 + \cdots + \lambda^n e_n = \lambda^i e_i = 0$
implies that λ i = 0 for every i, with i = 1 , 2 , , n , then the n vectors denoted by e 1 , e 2 , , e n are linearly independent. They form a basis of E n denoted by
$B_n = \{ e_i \mid i \in I_n = \{ 1, 2, \ldots, n \} \}.$
The zero vector is expressed by $0 = x + (-x)$ for any $x \in E_n$. The $n + 1$ vectors denoted by $e_1, e_2, \ldots, e_n, x$ are linearly dependent, so, in the following linear combination given by
$x^i e_i + (-x) = 0,$
the coefficients expressed by x i are not all equal to 0. This means that, using the Einstein summation notation, one has
x = x i e i ,
where x i are the contravariant components of x with respect to B n . A fundamental result is the following. With respect to B n , suppose that x can be represented by
x = y i e i .
The two sides of the following expression
$0 = (x^i - y^i)\, e_i$
are obtained from the left-hand side of (16) minus the left-hand side of (17) together with the right-hand side of (16) minus the right-hand side of (17). Nevertheless, e 1 , , e n are linearly independent, so x i = y i for every i, with i = 1 , 2 , , n . The conclusion is therefore as follows:
Remark 1.
With respect to a basis of E n denoted by B n , every vector of E n is uniquely expressed by one and only one set of contravariant components.
It is always possible to establish an orthonormal basis of E n using the Gram-Schmidt process. If B n is an orthonormal basis of E n , then one writes
$\langle e_i, e_j \rangle = \delta_{ij},$
where δ i j is the Kronecker delta. The Kronecker delta is also called the metric tensor. If x = x i e i , with e i B n , i = 1 , 2 , , n , then its Euclidean norm is given by
$\| x \|^2 = \langle x, x \rangle = \langle x^i e_i, x^j e_j \rangle = x^i x^j \delta_{ij} = \sum_{i=1}^{n} (x^i)^2.$
The contravariant components of a vector are geometrically its projections by parallelism onto the basis vectors. Conversely, the covariant components of a vector are, geometrically, its orthogonal projections onto the basis vectors. Let x be a vector. Let B n = { e i } be a basis of E n . Without loss of generality, suppose that all vectors of B n have a norm equal to 1. Hence, the ith covariant component of x is given by
$x_i = \frac{\langle x, e_i \rangle}{\| e_i \|^2} = \langle x, e_i \rangle = \langle x^j e_j, e_i \rangle = x^j \langle e_j, e_i \rangle = x^j g_{ji},$
where g j i is the metric tensor. It is not usually necessary to distinguish the metric tensor from its symmetric components. The number of its distinct and symmetric components is
$\frac{n(n+1)}{2}.$
This is because such a number coincides with a 2-combination with repetitions from a set of size n. It is possible to note the following:
Remark 2.
The components of the metric tensor coincide with the Kronecker delta whenever an orthonormal basis of E n is used. Whenever linear spaces over R are of a Euclidean nature, the contravariant and covariant components of a same vector are given by the same numbers.
What was introduced by Ricci and Levi-Civita is used in this paper, so the contravariant components of a vector are denoted by upper indices, whereas the covariant ones are denoted by lower indices.
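As a hedged illustration of Remark 2, the sketch below builds an orthonormal basis through a QR factorization (a numerical stand-in for the Gram-Schmidt process), checks that the metric tensor reduces to the Kronecker delta, and verifies that the contravariant and covariant components of a same vector coincide; the starting vectors are arbitrary illustrative choices.

```python
import numpy as np

# Three linearly independent (but not orthonormal) vectors of E_3; illustrative values.
v = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# The Gram-Schmidt process, realized here via a QR factorization: the columns of e
# form an orthonormal basis of E_3.
e, _ = np.linalg.qr(v.T)

# Metric tensor g_ij = <e_i, e_j>: with an orthonormal basis it is the Kronecker delta.
g = e.T @ e
assert np.allclose(g, np.eye(3))

# With an orthonormal basis, the contravariant and covariant components of a same
# vector are given by the same numbers.
x_contra = np.array([2.0, -1.0, 3.0])                        # contravariant components
vec = e @ x_contra                                           # the vector itself
x_cov = np.array([np.dot(vec, e[:, i]) for i in range(3)])   # orthogonal projections
assert np.allclose(x_cov, x_contra)
```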

2.2. From Propositions (Logical Entities) to Real Numbers and Vice Versa

The space of alternatives contains what is objectively possible. In this paper, what is objectively possible is studied together with what is subjectively probable. The structure is mathematically the same. The space of alternatives is a linear space over R of a Euclidean nature. Possibility never depends on a subjective opinion (Coletti et al. 2016). Conversely, probability always depends on a subjective opinion. If X is a random quantity, then a linear combination of n incompatible and exhaustive events denoted by E 1 , E 2 , , E n is written as
$X = x^1 |E_1| + x^2 |E_2| + \cdots + x^n |E_n|,$
where I ( X ) = { x 1 , x 2 , , x n } is the set of possible values for X. | E i | , i = 1 , 2 , , n , is the indicator of E i . Its values are 1 or 0 when E i is true or false, respectively. E i is true or false whenever uncertainty ceases. The elements of I ( X ) = { x 1 , x 2 , , x n } , where one can observe x 1 < x 2 < < x n , are the contravariant components of a vector of E n denoted by
x = ( x 1 , x 2 , , x n ) .
If X is a vector of E n , then it is possible to write
$X = x^1 |E_1|\, e_1 + x^2 |E_2|\, e_2 + \cdots + x^n |E_n|\, e_n,$
where B n = { e i } is an orthonormal basis of E n . With respect to B n , one writes
$x = x^1 e_1 + x^2 e_2 + \cdots + x^n e_n = x^i e_i.$
The covariant components of a vector of E n denoted by
p = ( p 1 , p 2 , , p n )
represent n masses associated with n possible values for X. In the first phase, each mass can be found between 0 and 1, endpoints included. The mathematical expectation of X is given by the following scalar or inner product
$P(X) = x^1 p_1 + x^2 p_2 + \cdots + x^n p_n = x^i p_i,$
where $x^1 \leq P(X) \leq x^n$ (Berti et al. 2001). Even though contravariant and covariant components are used, $x$ and $p$ belong to the same Euclidean space. This means that it is also possible to use the covariant components of $x$ together with the contravariant ones of $p$. Nothing changes. The two vectors $x$ and $p$ express the two aspects of uncertainty, which are studied inside $E_n$. An attribute can be transformed into a number. A number can be transformed into a proposition containing an attribute. This proposition is either true or false whenever uncertainty ceases. The possible values for X are real numbers. They can be related to a random vector given by
$\left( |E_1| \;\; |E_2| \;\; \cdots \;\; |E_n| \right)$
identifying a finite partition of alternatives, where E 1 is an unequivocally established proposition containing x 1 as an attribute, , E n is an unequivocally established proposition containing x n as an attribute. One writes
X = x 1 | E 1 | | E 2 | | E n | + x 2 | E 1 | | E 2 | | E n | + + x n | E 1 | | E 2 | | E n | ,
so n random vectors expressed in the form given by (29) are considered in E n . Within this context, the term “random” means that a given element is not known by an individual and consequently it is uncertain or possible for him or her, but well determined in itself (de Finetti 1982a). If Y = | E 1 | + | E 2 | + + | E n | is the number of successes, then an arithmetic operation expressed by the addition is used in connection with events. It is also possible to apply logical or Boolean operations to real numbers. In the field of real numbers, it is possible to define
$x \vee y = \max(x, y),$
$x \wedge y = \min(x, y),$
and
$\tilde{x} = 1 - x,$
for any $x$ and $y$ that are real numbers. If one writes $P(\tilde{E}) = P(1 - E) = 1 - P(E)$, where $\tilde{E} = 1 - E$ and $E$ are events studied together with their probabilities denoted by $P$, then a logical or Boolean operation given by the negation is treated together with an arithmetic one expressed by the subtraction. After remarking on a unification between events and numbers, a unification between logical or Boolean operations and arithmetic operations is also used. Logical product, logical sum, and negation are not applicable to 0 and 1 only, but they are applicable to all real numbers.
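To close this subsection, a brief computational sketch: it evaluates $P(X) = x^i p_i$ as a scalar product (the values and masses reproduce those of Table 1, so that $P(X) = 5.2$) and applies the logical operations just extended to real numbers; the probability 0.3 used at the end is illustrative.

```python
import numpy as np

# Possible values for X (contravariant components of x) and the masses p_i assigned
# to them; the numbers reproduce Table 1.
x = np.array([0.0, 4.0, 6.0])
p = np.array([0.0, 0.4, 0.6])

# First phase: each mass lies between 0 and 1 (endpoints included), and the masses
# of a finite partition of incompatible and exhaustive events sum to 1.
assert np.all((p >= 0.0) & (p <= 1.0)) and np.isclose(p.sum(), 1.0)

# The mathematical expectation P(X) as the scalar (inner) product x^i p_i.
P_X = float(np.dot(x, p))
assert x.min() <= P_X <= x.max()          # x^1 <= P(X) <= x^n
print(P_X)                                # 5.2

# Logical operations extended from indicators to all real numbers:
# logical sum = max, logical product = min, negation of x = 1 - x.
logical_sum = max
logical_product = min

def negation(a):
    return 1 - a

E = 1                                     # indicator of a true event
assert logical_sum(E, negation(E)) == 1   # E or (not E) is true
assert logical_product(E, negation(E)) == 0
p_E = 0.3                                 # illustrative probability of E
assert abs((1 - p_E) - 0.7) < 1e-12       # P(not E) = 1 - P(E)
```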

3. Marginal and Bivariate Random Quantities: Conceptual and Mathematical Aspects

In this paper, a random quantity X is a financial asset. It is firstly studied under conditions of uncertainty. X is then a random good. Its expected return has to be estimated by a given decision maker based on returns that have been observed by him or her in the past. X is a mathematical function such that its image is the set of those real numbers assigned by X to a sample space denoted by $\Omega$. The latter is a finite set. Suppose that the possible values for X belonging to $I(X) = \Omega$ are all nonnegative, so one writes $X \geq 0$. This means that $\inf I(X) \geq 0$. In this paper, X is a bounded random good, so $\sup I(X) < +\infty$. Since it is always possible to write
$X = X_1 - X_2,$
where one has
$X_1 = 0 \vee X,$
and
$X_2 = | 0 \wedge X |,$
X is a random quantity that is certainly nonnegative. The set of the coherent previsions of (34) is uncountable. This set is a closed line segment. If 1 X and 2 X are two financial assets, then the possible values for 1 X belonging to I ( 1 X ) and the possible ones for 2 X belonging to I ( 2 X ) are all nonnegative. Two perpendicular straight lines are considered. 1 X and 2 X are linearly independent. In this paper, 1 X and 2 X are two marginal random goods giving rise to a bivariate random good denoted by 1 X 2 X . Its possible values are expressed by I ( 1 X ) × I ( 2 X ) . A subdivision of alternatives takes place in this way. The set of the coherent previsions of 1 X , 2 X , and 1 X 2 X is a right triangle belonging to the first quadrant of the Cartesian plane, where the vertex of its right angle is given by the point ( 0 , 0 ) . Every point belonging to such a triangle is denoted by
$P({}_1X\, {}_2X) = \left( P({}_1X),\, P({}_2X) \right).$
This triangle contains all fair estimations being made by a given individual. Given $P({}_1X)$ and $P({}_2X)$, it is possible to estimate the joint masses identifying $P({}_1X\, {}_2X)$ in such a way that one out of three values, $-1$, $0$, and $1$, associated with the correlation coefficient appears (usually, $n \geq 10$). The correlation coefficient is equal to the covariance between ${}_1X$ and ${}_2X$ divided by the product of their standard deviations. With respect to the correlation coefficient, the possible values for ${}_1X$ and ${}_2X$ are deviations. Nevertheless, all masses underlying every point denoted by (37) do not change. If $1$ is the value being taken by the correlation coefficient, then a given individual making a specific estimation of joint masses inside his or her budget set is a risk averter; if $0$ is the value being taken, then the individual is risk neutral. If the correlation coefficient is equal to $-1$, then a given decision maker is a risk lover. The right triangle where all fair estimations appear is determined by a straight line, whose slope is negative. This line is a hyperplane. Its horizontal and vertical intercepts are established. From the ratio of these intercepts, it is possible to estimate the ratio of the prices of the two marginal random goods. The space where all fair estimations appear is endowed with boundary points. It is a subset of a linear space over R. It is also possible to choose a specific point maximizing the subjective utility related to the objects of the decision maker's estimation. This choice always depends on empirical aspects. Thus, it is essential to distinguish two phases: a formal and an empirical phase. The formal phase firstly establishes all fair estimations, whereas the empirical one secondly identifies a specific point belonging to the right triangle. The hyperplane never separates this specific point from all points expressing possible alternatives. In other words, the hyperplane under consideration leaves this specific point on the same side together with all points expressing possible alternatives. Applying the Pythagorean theorem to the points of the right triangle, it is possible to prove that the notion of ordinal utility is a distance. If ${}_1X$ and ${}_2X$ are the components of a multiple good of order 2 denoted by $X_{12}$, then $2^2 = 4$ bivariate distributions of mass have to be considered. This is because it is possible to exchange ${}_1X$ for $P({}_1X)$ and ${}_2X$ for $P({}_2X)$. This implies that it is necessary to exchange $X_{12}$ for $P(X_{12})$. The expected return on $X_{12}$, where $X_{12}$ is a portfolio containing two financial assets, is denoted by $P(X_{12})$. The latter index summarizes $2^2 = 4$ bivariate distributions of mass. This result is innovative. By considering more than two marginal random goods, similar indices are obtained. What is observed inside the right triangle can be used to handle the notion of expected utility (cardinal utility) outside it. Hence, choice problems related to financial assets that are studied under uncertainty and riskiness outside the budget set of a given decision maker depend on the structure of the expected utility function describing an individual's specific attitude toward risk. This expected utility function is treated inside a subset of a linear space over R. The notion of cardinal utility is based on a distance too. It is of interest to take the following Figure 1 as a guide.
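As an illustration of the role of the correlation coefficient, the following sketch computes it from a bivariate distribution of mass; the marginal values reuse those of Table 2, while the joint masses are chosen here, purely for illustration, so that the coefficient equals $+1$ (a risk averter according to the classification above).

```python
import numpy as np

# Joint masses chosen (illustratively) so that the correlation coefficient equals +1:
# the whole mass sits on the pairs (8, 6) and (9, 7).
x1 = np.array([0.0, 8.0, 9.0])          # possible values for 1X
x2 = np.array([0.0, 6.0, 7.0])          # possible values for 2X
p = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5]])         # p[i, j] = joint mass of (x1[i], x2[j])

p1, p2 = p.sum(axis=1), p.sum(axis=0)   # marginal masses
m1, m2 = x1 @ p1, x2 @ p2               # P(1X), P(2X)
cov = (np.outer(x1, x2) * p).sum() - m1 * m2
sd1 = np.sqrt((x1 - m1) ** 2 @ p1)
sd2 = np.sqrt((x2 - m2) ** 2 @ p2)
rho = cov / (sd1 * sd2)

# Per the text: +1 -> risk averter, 0 -> risk neutral, -1 -> risk lover.
attitude = {1: "risk averter", 0: "risk neutral", -1: "risk lover"}[int(round(rho))]
print(rho, attitude)                    # 1.0 risk averter
```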

4. A Marginal Random Quantity Studied as a Double Quantity: The Central Role of Bivariate Distributions of Mass

Given two sets of alternatives, each of them denoted by $\{a, b, c\}$, it is possible to study the Cartesian product given by $\{a, b, c\} \times \{a, b, c\}$. If such a product is handled, then $3^2 = 9$ elements appear. A subdivision of alternatives appears in this way. Given a marginal random quantity denoted by ${}_1X$, to study it as a double one denoted by $X_{12}$ means that four bivariate random quantities denoted by ${}_1X\,{}_1X$, ${}_1X\,{}_2X$, ${}_2X\,{}_1X$, and ${}_2X\,{}_2X$ are treated. One writes
$X_{12} = \{ {}_1X, {}_2X \}.$
The components of X 12 are 1 X and 2 X . A multilinear approach is used. This approach is a meaningful innovation with respect to those in the current literature (Grechuk et al. 2012). This approach is useful for obtaining multilinear measures. Such measures can be used to study new types of relationships between variables. If a marginal random quantity has n values, then a bivariate random quantity has n 2 values (see Pompilj 1957 in connection with the notion of independence). One has
dim ( E n ) = n ,
so one observes
$\dim(E_n \otimes E_n) = n^2,$
where E n E n is a linear space over R . From (25), it follows that
${}_1X\,{}_2X = {}^{(1)}x^1\, {}^{(2)}x^1\, |{}^{(1)}E_1|\, |{}^{(2)}E_1|\; e_1 \otimes e_1 + \cdots + {}^{(1)}x^n\, {}^{(2)}x^n\, |{}^{(1)}E_n|\, |{}^{(2)}E_n|\; e_n \otimes e_n.$
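A small sketch of the $n^2$-dimensional representation follows; it arranges the possible values for ${}_1X\,{}_2X$ over the coordinates of $E_n \otimes E_n$ through a Kronecker product and recovers $P({}_1X\,{}_2X) = 55.2$ from the joint masses of Table 2 (used here only as a worked illustration, ahead of Example 1 below).

```python
import numpy as np

# The joint (bivariate) distribution lives in E_n (x) E_n, whose dimension is n^2.
n = 3
x1 = np.array([0.0, 8.0, 9.0])          # values for 1X (as in Table 2)
x2 = np.array([0.0, 6.0, 7.0])          # values for 2X

# All n^2 products (1)x^{i1} (2)x^{i2}, i.e. the possible values for 1X 2X,
# arranged as a vector over the n^2 coordinates of the tensor product space.
joint_values = np.kron(x1, x2)
assert joint_values.size == n ** 2

# Joint masses p_{i1 i2} of Table 2, flattened over the same n^2 coordinates.
p = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.2, 0.3],
              [0.0, 0.3, 0.2]]).reshape(-1)

# The prevision of the bivariate random good 1X 2X (reproduces 55.2).
print(float(joint_values @ p))
```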
Two conditions hold if one wants to represent 1 X as X 12 . The first condition is the following:
$P({}_1X) = \sum_{i_1=1}^{n} {}^{(1)}x^{i_1}\, p_{i_1}.$
The following expression
$p = \left( p_{i_1 i_2} \right)$
is an affine tensor of order 2. Its covariant components express all joint masses. The following relationship
$\sum_{i_1=1}^{n} {}^{(1)}x^{i_1}\, p_{i_1} = \sum_{i_1, i_2=1}^{n} {}^{(1)}x^{i_1}\, p_{i_1 i_2}$
holds if (43) is used. The two sides of (44) are equal if and only if
$\sum_{i_1=1}^{n} p_{i_1} = \sum_{i_1, i_2=1}^{n} p_{i_1 i_2}.$
One finally writes
$\sum_{i_1=1}^{n} p_{i_1} = \sum_{i_1, i_2=1}^{n} p_{i_1 i_2} = 1,$
so 1 X and 1 X 2 X are two finite partitions of incompatible and exhaustive alternatives. The second condition is as follows: 1 X and 1 X 2 X must have the same mean value (Nunke and Savage 1952). One writes
$\sum_{i_1, i_2=1}^{n} {}^{(1)}x^{i_1}\, {}^{(2)}x^{i_2}\, p_{i_1 i_2} = \sum_{i_1, i_2=1}^{n} {}^{(1)}x^{i_1}\, p_{i_1 i_2},$
so the two sides of (47) are equal if and only if
${}^{(2)}x^{i_2} = 1, \quad i_2 \in I_n.$
Hence, it is possible to note the following:
Remark 3.
Let B n = { e i } , i = 1 , , n , be an orthonormal basis of E n . The possible values for the other random quantity, such that 1 X is studied as X 12 , form the set denoted by
$\{ 1^i \}.$
The number of elements of (49) is equal to n. Such components depend on the basis of $E_n$ being arbitrarily chosen. From $B_n$ to $B'_n = \{ e_{i'} \}$, $i' = 1, \ldots, n$, one has
$1^{i'} = a^{i'}_{i}\, 1^i = \sum_{i=1}^{n} a^{i'}_{i},$
where $A = ( a^{i'}_{i} )$ is an $n \times n$ matrix expressing a change of basis.
Remark 4.
The vector of E n with contravariant components forming the set expressed by
$\{ \phi^1 = 1, \phi^2 = 1, \ldots, \phi^n = 1 \}$
is denoted by ϕ .

A Measure of Variability Obtained Using a Multilinear Approach: A Numerical Example

A marginal distribution of mass of 1 X can coincide with a bivariate distribution of 1 X and 2 X = ϕ . Nonparametric distributions of mass are used. For instance, Table 1 gives P ( 1 X ) = P ( 1 X 2 X ) = P ( 2 X 1 X ) = 5.2 . Since P ( 1 X 1 X ) = 28 and P ( 2 X 2 X ) = 1 are observed, the variance of 1 X is given by
$\sigma^2_{{}_1X} = \begin{vmatrix} P({}_1X\,{}_1X) & P({}_1X\,{}_2X) \\ P({}_2X\,{}_1X) & P({}_2X\,{}_2X) \end{vmatrix} = \begin{vmatrix} 28 & 5.2 \\ 5.2 & 1 \end{vmatrix} = 0.96.$
The variance of 1 X is established as if 1 X coincides with X 12 = { 1 X , ϕ } . A measure of variability, related to a single random quantity, is obtained using a multilinear approach. The notion of alternative is subjected to a subdivision, so four bivariate distributions are treated. A quadratic metric is used (Berkhouch et al. 2018; Gerstenberger and Vogel 2015).
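A sketch reproducing this numerical example follows; the $\alpha$-product is computed here as the double sum $\sum_{i,j} a^i\, p_{ij}\, b^j$, which matches the computation pattern of Example 1 below, and the joint masses are those of Table 1.

```python
import numpy as np

# Table 1: marginal values of 1X and the all-ones quantity 2X = phi, with the joint
# masses placed on the pairs (0, 1), (4, 1), (6, 1).
x1 = np.array([0.0, 4.0, 6.0])
phi = np.array([1.0, 1.0, 1.0])
p = np.diag([0.0, 0.4, 0.6])             # p[i, j] = joint mass of (x1[i], phi[j])

def alpha(a, b, masses):
    """alpha-product <a, b>_alpha summarizing a bivariate distribution of mass."""
    return float(a @ masses @ b)

P_11 = alpha(x1, x1, p)                  # P(1X 1X) = 28
P_12 = alpha(x1, phi, p)                 # P(1X 2X) = 5.2
P_22 = alpha(phi, phi, p)                # P(2X 2X) = 1

# Variance of 1X as the determinant of the 2x2 matrix of alpha-products.
var_1X = np.linalg.det(np.array([[P_11, P_12],
                                 [P_12, P_22]]))
print(P_11, P_12, P_22, var_1X)          # 28.0 5.2 1.0 ~0.96
```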

5. Invariance of the Notion of Mathematical Expectation of a Random Quantity

5.1. Change of Basis

Let $B_n = \{ e_i \}$, $i = 1, \ldots, n$, be an orthonormal basis of $E_n$. Let $B'_n = \{ e_{j'} \}$, $j' = 1, \ldots, n$, be another orthonormal basis of $E_n$. Each vector of $B'_n$ can be expressed as a linear combination of the vectors of $B_n$, so one writes
$e_{j'} = A^i_{j'}\, e_i.$
If $i$ varies, then the set given by $\{ A^i_{j'} \}$ identifies the $n$ contravariant components of $e_{j'}$ with respect to the vectors of $B_n$. If $j'$ varies together with $i$, then the set given by $\{ A^i_{j'} \}$ identifies $n \times n$ real numbers leading to the notion of a square matrix of order n. If a basis of $E_n$ changes, then $x \in E_n$ does not change. Conversely, its contravariant components change, so the following equalities appear
$x = x^i e_i = x^{j'} e_{j'} = x^{j'} A^i_{j'}\, e_i.$
From
$x^i e_i = x^{j'} A^i_{j'}\, e_i,$
where $e_i$ is found in both sides of what one has just written, it follows that it is possible to obtain
$x^i = A^i_{j'}\, x^{j'}.$
(56) gives how the contravariant components of $x \in E_n$ change whenever one passes from $B_n$ to $B'_n$. From
$x^{j'} e_{j'} = x^{j'} A^i_{j'}\, e_i,$
where $x^{j'}$ is found in both sides of what one has just written, it follows that it is possible to obtain the expression given by (53). $A = \{ A^i_{j'} \}$ is an orthogonal matrix. The inverse of $A$ is therefore equal to its transpose.

5.2. Invariant or Intrinsic Metric

If the expression given by (56) is considered, then the Euclidean norm of x E n identified with (20) is invariant. This is because one writes
$\sum_{i=1}^{n} (x^i)^2 = \sum_{i=1}^{n} A^i_{j'} A^i_{h'}\, x^{j'} x^{h'}.$
This happens because both $B_n$ and $B'_n$ are considered. The set given by the following numbers
$g_{j'h'} = \sum_{i=1}^{n} A^i_{j'} A^i_{h'} = \langle e_{j'}, e_{h'} \rangle$
identifies the components of the metric tensor, whose nature is symmetric. Accordingly, one writes
$g_{j'h'} = g_{h'j'}.$
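The sketch below verifies this invariance numerically: for a random orthogonal change-of-basis matrix (illustrative), the components of a vector change while its Euclidean norm does not, and the metric tensor keeps its Kronecker-delta form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Contravariant components of x with respect to an orthonormal basis B_n, and an
# orthogonal matrix A describing the change to another orthonormal basis B'_n
# (illustrative values; any orthogonal A works).
x_old = np.array([2.0, -1.0, 3.0])
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal matrix

# g_{j'h'} = sum_i A^i_{j'} A^i_{h'}: for an orthogonal A this is the identity,
# i.e. the metric tensor keeps its Kronecker-delta form in the new basis.
assert np.allclose(A.T @ A, np.eye(3))

# The contravariant components of x change under the change of basis ...
x_new = A.T @ x_old
# ... but the Euclidean norm (the quadratic, intrinsic metric) does not.
assert np.allclose(np.sum(x_old ** 2), np.sum(x_new ** 2))
print(x_old, x_new, np.sum(x_new ** 2))
```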

5.3. Change-of-Basis Matrices

Let $B_n = \{ e_j \}$ and $B'_n = \{ e_{h'} \}$ be two orthonormal bases of $E_n$. Since one writes
$e_j = B^{h'}_j\, e_{h'},$
and
$e_{h'} = C^i_{h'}\, e_i,$
the second expressions can be put into the first ones. This means that one has
$e_j = B^{h'}_j C^i_{h'}\, e_i.$
(63) can also be written in the following form
$e_i\, \delta^i_j = B^{h'}_j C^i_{h'}\, e_i,$
so one has
$\delta^i_j = B^{h'}_j C^i_{h'}.$
It follows that $B = \{ B^{h'}_j \}$ is the inverse of $C = \{ C^i_{h'} \}$ and vice versa, where $B$ and $C$ are two square matrices of order n. Let $x \in E_n$ be a vector. Its contravariant components identify a family of n incompatible and exhaustive alternatives. Such a vector can be expressed by
$x = x^i e_i$
with respect to the vectors of $B_n$. The same vector $x \in E_n$ can be expressed by
$x = x^{h'} e_{h'}$
with respect to the vectors of $B'_n$. If one passes from $B_n$ to $B'_n$, then one has
$x^{h'} = C^{h'}_i\, x^i.$
The vector $x \in E_n$ is always the same. Only its contravariant components change whenever one passes from a basis of $E_n$ to the other one. One observes
$x = x^i e_i = ( C^{h'}_i x^i )( C^j_{h'} e_j ) = ( C^{h'}_i C^j_{h'} )( x^i e_j )$
because (68) and (62) are put into (67). Since
$C^{h'}_i\, C^j_{h'} = \delta^j_i,$
the square matrix, whose generic element is given by $C^{h'}_i$, is the inverse of the square matrix, whose generic element is expressed by $C^j_{h'}$.

5.4. Invariant Scalar Product

With respect to $B_n$, the scalar or inner product between $x, y \in E_n$ is denoted by $\langle x, y \rangle$. With respect to $B'_n$, the scalar or inner product between the same vectors of $E_n$ is conversely denoted by $\langle x, y \rangle'$. Thus, it is possible to prove the following:
Proposition 1.
The scalar or inner product between $x, y \in E_n$ is invariant, so $\langle x, y \rangle = \langle x, y \rangle'$.
Proof. 
With respect to $B_n$, one has
$\langle x, y \rangle = \langle x^i e_i, y^j e_j \rangle = x^i y^j \langle e_i, e_j \rangle = x^i y^j g_{ij}.$
Similarly, with respect to $B'_n$, one writes
$\langle x, y \rangle' = x^{i'} y^{j'} g_{i'j'}.$
If one passes from $B_n$ to $B'_n$, then the components of the metric tensor become
$g_{i'j'} = \langle e_{i'}, e_{j'} \rangle = \langle C^h_{i'} e_h, C^k_{j'} e_k \rangle = C^h_{i'} C^k_{j'}\, g_{hk}.$
Since
$\langle x, y \rangle' = ( C^{i'}_r x^r )( C^{j'}_s y^s )\, C^h_{i'} C^k_{j'}\, g_{hk} = ( C^{i'}_r C^h_{i'} )( C^{j'}_s C^k_{j'} )\, x^r y^s g_{hk} = \delta^h_r \delta^k_s\, x^r y^s g_{hk} = x^h y^k g_{hk} = \langle x, y \rangle,$
the equality given by
$\langle x, y \rangle' = \langle x, y \rangle$
proves that the scalar or inner product between $x, y \in E_n$ is invariant.  □
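A numerical check of Proposition 1 (with illustrative random vectors and a random orthogonal change-of-basis matrix) can be written as follows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative vectors of E_4 through their components w.r.t. an orthonormal
# basis, and a random orthogonal matrix C giving the change to another one.
x = rng.normal(size=4)
y = rng.normal(size=4)
C, _ = np.linalg.qr(rng.normal(size=(4, 4)))

# Components of the same vectors in the new basis.
x_prime = C.T @ x
y_prime = C.T @ y

# Proposition 1: the scalar (inner) product is invariant, <x, y>' = <x, y>.
assert np.isclose(np.dot(x_prime, y_prime), np.dot(x, y))
print(np.dot(x, y))
```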

5.5. The Notion of α -Product

A marginal random quantity denoted by X can always be expressed using a bivariate distribution of mass. The mathematical expectation of X denoted by P ( X ) is obtained by summarizing a bivariate distribution of mass through the notion of α -product. The α -product is a scalar or inner product. Its four formal properties are as follows: it is a commutative, associative, distributive, and orthogonal scalar or inner product. The mathematical expectation of X 12 = { 1 X , 2 X } is denoted by P ( X 12 ) . One has
$P(X_{12}) = \begin{vmatrix} P({}_1X\,{}_1X) & P({}_1X\,{}_2X) \\ P({}_2X\,{}_1X) & P({}_2X\,{}_2X) \end{vmatrix},$
where X 12 is a double random quantity. Four α -products are P ( 1 X 1 X ) , P ( 1 X 2 X ) , P ( 2 X 1 X ) , and P ( 2 X 2 X ) . Each of them is intrinsically of a two-dimensional nature. Each of them is then expressed using an ordered pair of real numbers. This is because each of them is a measure of a bilinear nature that is decomposed into two linear measures. P ( X 12 ) is, conversely, a real number. It is the determinant of a square matrix of order 2. In general, the mathematical expectation of a multiple random quantity of order m denoted by
$X_{12 \cdots m} = \{ {}_1X, {}_2X, \ldots, {}_mX \}$
is expressed as P ( X 12 m ) . One has
$P(X_{12 \cdots m}) = \begin{vmatrix} P({}_1X\,{}_1X) & P({}_1X\,{}_2X) & \cdots & P({}_1X\,{}_mX) \\ P({}_2X\,{}_1X) & P({}_2X\,{}_2X) & \cdots & P({}_2X\,{}_mX) \\ \vdots & \vdots & \ddots & \vdots \\ P({}_mX\,{}_1X) & P({}_mX\,{}_2X) & \cdots & P({}_mX\,{}_mX) \end{vmatrix},$
where $m^2$ α-products are used. (75) is the α-norm of a particular tensor of order 2, whereas (77) is the α-norm of a particular tensor of order m. In either case, bivariate distributions of mass are always used. How the notion of α-product between ${}^{(1)}x$ and ${}^{(2)}x$, with ${}^{(1)}x, {}^{(2)}x \in E_n$, works is explained as follows:
Example 1.
Table 2 gives P ( 1 X 2 X ) = 55.2 . The contravariant components of ( 2 ) x identify the following column vector
$\begin{pmatrix} 0 \\ 6 \\ 7 \end{pmatrix},$
whereas its covariant components are given by
0 · 0 + 6 · 0 + 7 · 0 = 0 ,
0 · 0 + 6 · 0.2 + 7 · 0.3 = 3.3 ,
and
0 · 0 + 6 · 0.3 + 7 · 0.2 = 3.2 .
It is then possible to observe
$\left\langle \begin{pmatrix} 0 \\ 8 \\ 9 \end{pmatrix}, \begin{pmatrix} 0 \\ 3.3 \\ 3.2 \end{pmatrix} \right\rangle = \langle {}^{(1)}x, {}^{(2)}x \rangle_{\alpha} = P({}_1X\,{}_2X) = 55.2.$
After calculating the covariant components of ( 1 ) x in a similar way, one writes
$\left\langle \begin{pmatrix} 0 \\ 4.3 \\ 4.2 \end{pmatrix}, \begin{pmatrix} 0 \\ 6 \\ 7 \end{pmatrix} \right\rangle = \langle {}^{(1)}x, {}^{(2)}x \rangle_{\alpha} = P({}_1X\,{}_2X) = 55.2.$
The notion of α-norm is as follows: Table 3 gives $\| {}^{(1)}x \|^2_{\alpha} = P({}_1X\,{}_1X) = 72.5$, whereas Table 4 gives $\| {}^{(2)}x \|^2_{\alpha} = P({}_2X\,{}_2X) = 42.5$. It follows that
$P(X_{12}) = \begin{vmatrix} P({}_1X\,{}_1X) & P({}_1X\,{}_2X) \\ P({}_2X\,{}_1X) & P({}_2X\,{}_2X) \end{vmatrix} = \begin{vmatrix} 72.5 & 55.2 \\ 55.2 & 42.5 \end{vmatrix} = 34.21.$
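The sketch below reproduces Example 1: it derives the covariant components of ${}^{(2)}x$ from the joint masses of Table 2, evaluates the α-products and α-norms, and obtains $P(X_{12}) = 34.21$ as a determinant.

```python
import numpy as np

# Example 1: values for the two marginal random quantities and the joint masses.
x1 = np.array([0.0, 8.0, 9.0])
x2 = np.array([0.0, 6.0, 7.0])
p12 = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.2, 0.3],
                [0.0, 0.3, 0.2]])       # Table 2
p11 = np.diag([0.0, 0.5, 0.5])          # Table 3: 1X combined with itself
p22 = np.diag([0.0, 0.5, 0.5])          # Table 4: 2X combined with itself

# Covariant components of (2)x obtained through the joint masses: (0, 3.3, 3.2).
x2_cov = p12 @ x2
print(x2_cov)

# alpha-products and alpha-norms.
P_12 = float(x1 @ x2_cov)               # <(1)x, (2)x>_alpha = P(1X 2X) = 55.2
P_11 = float(x1 @ p11 @ x1)             # ||(1)x||^2_alpha  = P(1X 1X) = 72.5
P_22 = float(x2 @ p22 @ x2)             # ||(2)x||^2_alpha  = P(2X 2X) = 42.5

# P(X_12) as the determinant of the 2x2 matrix of alpha-products.
P_X12 = np.linalg.det(np.array([[P_11, P_12],
                                [P_12, P_22]]))
print(P_12, P_11, P_22, P_X12)          # 55.2 72.5 42.5 ~34.21
```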
Given two orthonormal bases of E n that are arbitrarily chosen, if one passes from an orthonormal basis of E n to the other one, then the contravariant and covariant components of a same vector of E n change. Proposition 1 shows that it is possible to verify that P ( X ) , P ( X 12 ) , and P ( X 12 m ) do not change. The notion of mathematical expectation of a random quantity is therefore invariant. The joint masses of a bivariate distribution of mass never change after having been chosen by a given individual. The covariant components of a vector of E n change because its contravariant components firstly change whenever one passes from an orthonormal basis of E n to the other one. The joint masses of a bivariate distribution of mass are used in order to obtain the covariant components of a vector of E n (Table 2). The conclusion is that the essential nature of finite partitions of alternatives is represented by the degrees of belief attributed by a given individual (de Finetti 1981). Such a nature does not change whenever one passes from an orthonormal basis of E n to the other one. In this paper, statistical indices that were put forward by Gini are developed (La Haye and Zizler 2019). They are extended to probabilistic matters connected with uncertainty and riskiness. It is not admissible to make the conditions of coherence more restrictive (Berti and Rigo 2002; de Finetti 1989). The marginal masses do not change after having been established by a given individual, whereas the joint ones can change. The possible alternatives for the two marginal random quantities do not change. It is convenient to compare this with what happens when one passes from a given basis of E n to another one. If one passes from an orthonormal basis of E n to another one, then the possible alternatives for the two marginal random quantities change. The marginal and joint masses do not change. A multilinear approach is useful. This is because several topics can be studied in a broader way using it (see Chambers et al. 2017; Echenique 2020; Maturo and Angelini 2023; Nishimura et al. 2017 with respect to revealed preference theory). The two-variable linear model can be studied geometrically (Nelder and Wedderburn 1972). It is possible to extend the least-squares model by studying multilinear relationships between variables based on the notion of α -product. However, mean quadratic differences (Angelini and Maturo 2023), the correlation coefficient (Hotelling 1936), Jensen’s inequality, ordinal and cardinal utilities (Angelini 2023), and principal component analysis (Angelini and Maturo 2022b; Jolliffe and Cadima 2016; Pasini 2017; Rao 1964) can be based on conditions of uncertainty and riskiness studied through the notion of α -product. Linear spaces over R with a different dimension are used as elements of probability spaces. A probability space associated with a multiple random quantity of order 2 denoted by X 12 is formally given by
$\left( \Omega, \mathcal{F}, P \right),$
where $\Omega = I({}_iX) \times I({}_jX)$, with $i, j = 1, 2$, and $\mathcal{F} = \{ \emptyset, \Omega \}$. One has
$X_{12} \in {}^{(2)}S^{(2)}.$
The basic case is expressed by X 12 . Since
${}^{(2)}S^{(2)} \subset E_n \otimes E_n,$
an ordered triple denoted by
$\left( {}^{(1)}x, {}^{(2)}x, p \right)$
is technically established. From (82), it follows that ( 2 ) S ( 2 ) is decomposed. ( 1 ) x , ( 2 ) x E n are two vectors representing the possible values for two marginal random quantities. They belong to the set denoted by ( 2 ) S ( 1 ) . One has
${}^{(2)}S^{(1)} \subset E_n,$
so Ω = I ( 1 X ) = ( 1 ) x 1 ( 1 ) x n E n , as well as Ω = I ( 2 X ) = ( 2 ) x 1 ( 2 ) x n E n . The set given by ( 2 ) S ( 1 ) contains the marginal random quantities denoted by 1 X and 2 X , which are the components of X 12 . p is an affine tensor of order 2. In particular, if a vector of the ordered triple given by (82) has all its contravariant components equal to 1, then the mathematical expectation of X denoted by P ( X ) , where X ( 1 ) S , is obtained using a bivariate distribution of mass only. One has
${}^{(1)}S \subset E_n.$
Whenever the mathematical expectation of a multiple random quantity of order m is obtained using $m^2$ bivariate distributions of mass, each of them is identified with an ordered triple. Its first two elements are vectors of $E_n$, whereas the third one is an affine tensor of order 2. Each marginal random quantity denoted by ${}_1X, \ldots, {}_mX$ can be assumed to have n possible values.

6. Conclusions and Future Perspectives

In this paper, $P(X)$ is defined using the notion of α-product. Given the masses associated with all possible values for X, $P(X)$ is expressed as their function. Two phases arise, characterizing Bayes' theorem. The formal phase establishes all fair estimations of X denoted by $P(X)$, whereas the empirical phase identifies a specific point of a convex set. With respect to each mass, the formal phase is characterized by infinitely many values between 0 and 1, endpoints included. Even though a specific point of a convex set is chosen in the empirical phase because a process of convergence occurs, an essentially and exclusively psychological value is attributed to each mass by a given individual. In this paper, summarized distributions of mass were handled. They are of a bivariate nature. They are two-dimensional distributions. Two phases still occur. A coherent extension of the domain of definition of a function of estimation denoted by $P$ is treated. The statistical indices used in this paper are therefore based on a subdivision of alternatives. $P(X_{12})$ and $P(X_{12 \cdots m})$ are defined as well, where both $P(X_{12})$ and $P(X_{12 \cdots m})$ are barycenters of masses obtained by considering the α-norm of two particular tensors. $P(X_{12})$ is also the area of a two-parallelepiped, whose edges are given by ${}^{(1)}x$ and ${}^{(2)}x$. Similarly, $P(X_{12 \cdots m})$ is also the m-volume of an m-parallelepiped, whose edges are given by ${}^{(1)}x$, ${}^{(2)}x$, ..., ${}^{(m)}x$. Since the metric that is used is proved to be intrinsic or invariant, only the affine properties make sense. The statistical indices obtained inside linear spaces over R are therefore not dependent on the arbitrary choice of the coordinate system. Simple probabilistic rules are used. They are basically common-sense rules. All the theorems of probability calculus are based on them. Another remarkable consequence of what is stated in this paper is the following: fair estimations can also be used in an axiomatic or qualitative way that is independent of any coordinate system.
This paper focuses on a methodological approach that was put forward by Gini. My current work involves extending this approach to issues connected with the theory of decision making, the correlation coefficient, the Sharpe ratio, Jensen's inequality, measures of variability, regression models, principal component analysis, the mean-variance model, and so on. In the econometric investigation of the relationships between variables, statistical indices connected with the notion of α-product play an essential role. A stochastic model of choice based on positive and negative errors related to a specific function of estimation denoted by $P$ is a further continuation of what is stated in this paper. Such errors are normally distributed, and their mean value is equal to zero. Every fair estimation being made by a given individual and based on real data is a zero-sum game taking place under imperfect information. This is shown in another paper of mine that has recently been completed. Real data can be treated as sample data.

Funding

This study received no external funding.

Data Availability Statement

All relevant data are included in the article.

Conflicts of Interest

The author declares that he has no conflicts of interest.

References

1. Angelini, Pierpaolo. 2023. Probability spaces identifying ordinal and cardinal utilities in problems of an economic nature: New issues and perspectives. Mathematics 11: 4280.
2. Angelini, Pierpaolo, and Fabrizio Maturo. 2022a. Jensen's inequality connected with a double random good. Mathematical Methods of Statistics 31: 74–90.
3. Angelini, Pierpaolo, and Fabrizio Maturo. 2022b. The price of risk based on multilinear measures. International Review of Economics and Finance 81: 39–57.
4. Angelini, Pierpaolo, and Fabrizio Maturo. 2023. Tensors associated with mean quadratic differences explaining the riskiness of portfolios of financial assets. Journal of Risk and Financial Management 16: 369.
5. Berkhouch, Mohammed, Ghizlane Lakhnati, and Marcelo Brutti Righi. 2018. Extended Gini-type measures of risk and variability. Applied Mathematical Finance 25: 295–314.
6. Berti, Patrizia, and Pietro Rigo. 2002. On coherent conditional probabilities and disintegrations. Annals of Mathematics and Artificial Intelligence 35: 71–82.
7. Berti, Patrizia, Emanuela Dreassi, and Pietro Rigo. 2020. A notion of conditional probability and some of its consequences. Decisions in Economics and Finance 43: 3–15.
8. Berti, Patrizia, Eugenio Regazzini, and Pietro Rigo. 2001. Strong previsions of random elements. Statistical Methods and Applications (Journal of the Italian Statistical Society) 10: 11–28.
9. Capotorti, Andrea, Giulianella Coletti, and Barbara Vantaggi. 2014. Standard and nonstandard representability of positive uncertainty orderings. Kybernetika 50: 189–215.
10. Cassese, Gianluca, Pietro Rigo, and Barbara Vantaggi. 2020. A special issue on the mathematics of subjective probability. Decisions in Economics and Finance 43: 1–2.
11. Chambers, Christopher P., Federico Echenique, and Eran Shmaya. 2017. General revealed preference theory. Theoretical Economics 12: 493–511.
12. Coletti, Giulianella, Davide Petturiti, and Barbara Vantaggi. 2016. When upper conditional probabilities are conditional possibility measures. Fuzzy Sets and Systems 304: 45–64.
13. de Finetti, Bruno. 1981. The role of "Dutch Books" and of "proper scoring rules". The British Journal of Psychology of Sciences 32: 55–56.
14. de Finetti, Bruno. 1982a. Probability: The different views and terminologies in a critical analysis. In Logic, Methodology and Philosophy of Science VI. Edited by L. Jonathan Cohen, Jerzy Łoś, Helmut Pfeiffer and Klaus-Peter Podewski. Amsterdam: North-Holland Publishing Company, pp. 391–94.
15. de Finetti, Bruno. 1982b. The proper approach to probability. In Exchangeability in Probability and Statistics. Edited by Giorgio Koch and Fabio Spizzichino. Amsterdam: North-Holland Publishing Company, pp. 1–6.
16. de Finetti, Bruno. 1989. Probabilism: A critical essay on the theory of probability and on the value of science. Erkenntnis 31: 169–223.
17. Echenique, Federico. 2020. New developments in revealed preference theory: Decisions under risk, uncertainty, and intertemporal choice. Annual Review of Economics 12: 299–316.
18. Egidi, Leonardo, Francesco Pauli, and Nicola Torelli. 2022. Avoiding prior-data conflict in regression models via mixture priors. The Canadian Journal of Statistics 50: 491–510.
19. Fortini, Sandra, Sonia Petrone, and Polina Sporysheva. 2018. On a notion of partially conditionally identically distributed sequences. Stochastic Processes and Their Applications 128: 819–46.
20. Gerstenberger, Carina, and Daniel Vogel. 2015. On the efficiency of Gini's mean difference. Statistical Methods & Applications 24: 569–96.
21. Gilio, Angelo, and Giuseppe Sanfilippo. 2014. Conditional random quantities and compounds of conditionals. Studia Logica 102: 709–29.
22. Grechuk, Bogdan, Anton Molyboha, and Michael Zabarankin. 2012. Mean-deviation analysis in the theory of choice. Risk Analysis: An International Journal 32: 1277–92.
23. Hotelling, Harold. 1936. Relations between two sets of variates. Biometrika 28: 321–77.
24. Jolliffe, Ian T., and Jorge Cadima. 2016. Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374: 20150202.
25. La Haye, Roberta, and Petr Zizler. 2019. The Gini mean difference and variance. Metron 77: 43–52.
26. Maturo, Fabrizio, and Pierpaolo Angelini. 2023. Aggregate bound choices about random and nonrandom goods studied via a nonlinear analysis. Mathematics 11: 2498.
27. Nelder, John A., and Robert W. M. Wedderburn. 1972. Generalized linear models. Journal of the Royal Statistical Society, Series A (General) 135: 370–84.
28. Nishimura, Hiroki, Efe A. Ok, and John K.-H. Quah. 2017. A comprehensive approach to revealed preference theory. American Economic Review 107: 1239–63.
29. Nunke, Ronald J., and Leonard J. Savage. 1952. On the set of values of a nonatomic, finitely additive, finite measure. Proceedings of the American Mathematical Society 3: 217–18.
30. Pasini, Giorgia. 2017. Principal component analysis for stock portfolio management. International Journal of Pure and Applied Mathematics 115: 153–67.
31. Pompilj, Giuseppe. 1957. On intrinsic independence. Bulletin of the International Statistical Institute 35: 91–97.
32. Rao, C. Radhakrishna. 1964. The use and interpretation of principal component analysis in applied research. Sankhya: The Indian Journal of Statistics, Series A 26: 329–58.
33. Viscusi, W. Kip, and William N. Evans. 2006. Behavioral probabilities. Journal of Risk and Uncertainty 32: 5–15.
34. von Neumann, John. 1936. Examples of continuous geometries. Proceedings of the National Academy of Sciences of the United States of America 22: 101–8.
Figure 1. A flowchart representing an optimal decision making process.
Table 1. A univariate distribution transformed into a bivariate distribution.

1X \ 2X = ϕ      1      1      1      Sum
0                0      0      0      0
4                0      0.4    0      0.4
6                0      0      0.6    0.6
Sum              0      0.4    0.6    1
Table 2. Random quantity 1 combined with random quantity 2.

Random Quantity 1 \ Random Quantity 2      0      6      7      Sum
0                                          0      0      0      0
8                                          0      0.2    0.3    0.5
9                                          0      0.3    0.2    0.5
Sum                                        0      0.5    0.5    1
Table 3. Random quantity 1 combined with itself.

Random Quantity 1 \ Random Quantity 2      0      8      9      Sum
0                                          0      0      0      0
8                                          0      0.5    0      0.5
9                                          0      0      0.5    0.5
Sum                                        0      0.5    0.5    1
Table 4. Random quantity 2 combined with itself.

Random Quantity 1 \ Random Quantity 2      0      6      7      Sum
0                                          0      0      0      0
6                                          0      0.5    0      0.5
7                                          0      0      0.5    0.5
Sum                                        0      0.5    0.5    1
