Convergence Conditions for the Dynamics of Reflexive Collective Behavior in a Cournot Oligopoly Model under Incomplete Information

  • CONTROL IN SOCIAL ECONOMIC SYSTEMS
  • Published in Automation and Remote Control

Abstract

This paper considers a Cournot oligopoly model with an arbitrary number of rational agents under incomplete information in the classical case (linear cost and demand functions). Within the dynamic model of reflexive collective behavior, at each time instant every agent adjusts its output, taking a step towards the maximum profit expected under the anticipated choice of the competitors. Convergence to a Cournot–Nash equilibrium is analyzed using the error transition matrices of the dynamics. Restrictions on the ranges of the agents' steps are imposed, and their effect on the convergence properties of the dynamics is demonstrated. Finally, a method is proposed for determining the maximum step ranges that ensure convergent dynamics of collective behavior for an arbitrary number of agents.



Author information


Correspondence to G. I. Algazin or D. G. Algazina.

Additional information

This paper was recommended for publication by D.A. Novikov, a member of the Editorial Board.

APPENDIX

Proof of Proposition 3. This result is established by transforming the right-hand side of (12):

$$\begin{gathered} \sum\limits_{i \in N} \left\{ \left( 1 - \beta _i^{t + 1} \right)\left[ \varepsilon _i - \frac{\vec \gamma _i^{\,t + 1}}{2}\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) \right]^2 + \beta _i^{t + 1}\left[ \varepsilon _i - \frac{\overleftarrow \gamma _i^{\,t + 1}}{2}\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) \right]^2 \right\} \\ = \sum\limits_{i \in N} (\varepsilon _i)^2 - \sum\limits_{i \in N} \varepsilon _i\vec \gamma _i^{\,t + 1}\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) + \sum\limits_{i \in N} \left( \frac{\vec \gamma _i^{\,t + 1}}{2} \right)^2\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right)^2 \\ - \,\,\sum\limits_{i \in N} \beta _i^{t + 1}\left\{ \left[ \varepsilon _i - \frac{\vec \gamma _i^{\,t + 1}}{2}\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) \right]^2 - \left[ \varepsilon _i - \frac{\overleftarrow \gamma _i^{\,t + 1}}{2}\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) \right]^2 \right\} \\ \end{gathered} $$
$$\begin{gathered} = 1 - \sum\limits_{i \in N} \varepsilon _i\vec \gamma _i^{\,t + 1}\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) + \sum\limits_{i \in N} \left( \frac{\vec \gamma _i^{\,t + 1}}{2} \right)^2\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right)^2 \\ - \,\,\sum\limits_{i \in N} \beta _i^{t + 1}\left[ 2\varepsilon _i - \left( \frac{\overleftarrow \gamma _i^{\,t + 1}}{2} + \frac{\vec \gamma _i^{\,t + 1}}{2} \right)\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) \right]\left( \frac{\overleftarrow \gamma _i^{\,t + 1}}{2} - \frac{\vec \gamma _i^{\,t + 1}}{2} \right)\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right) \\ = 1 - \sum\limits_{i \in N} 2\varepsilon _i\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right)\left[ \frac{\vec \gamma _i^{\,t + 1}}{2} + \beta _i^{t + 1}\left( \frac{\overleftarrow \gamma _i^{\,t + 1}}{2} - \frac{\vec \gamma _i^{\,t + 1}}{2} \right) \right] \\ \end{gathered} $$
$$\begin{gathered} + \,\,\sum\limits_{i \in N} \left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right)^2\left[ \left( \frac{\vec \gamma _i^{\,t + 1}}{2} \right)^2 + \beta _i^{t + 1}\left( \left( \frac{\overleftarrow \gamma _i^{\,t + 1}}{2} \right)^2 - \left( \frac{\vec \gamma _i^{\,t + 1}}{2} \right)^2 \right) \right] \\ = 1 - \sum\limits_{i \in N} 2\varepsilon _i\left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right)\mu _i^{t + 1} + \sum\limits_{i \in N} \left( \varepsilon _i + \sum\limits_{j \in N} \varepsilon _j \right)^2\eta _i^{t + 1} \\ = 1 - 2\sum\limits_{i \in N} (\varepsilon _i)^2\mu _i^{t + 1} - \sum\limits_{i \in N} 2\varepsilon _i\mu _i^{t + 1}\sum\limits_{j \in N} \varepsilon _j + \sum\limits_{i \in N} (\varepsilon _i)^2\eta _i^{t + 1} + \sum\limits_{i \in N} 2\varepsilon _i\eta _i^{t + 1}\sum\limits_{j \in N} \varepsilon _j + \sum\limits_{i \in N} \left( \sum\limits_{j \in N} \varepsilon _j \right)^2\eta _i^{t + 1} \\ \end{gathered} $$
$$\begin{gathered} = 1 - \sum\limits_{i \in N} (\varepsilon _i)^2\left( 4\mu _i^{t + 1} - 3\eta _i^{t + 1} - \sum\limits_{k \in N} \eta _k^{t + 1} \right) \\ - \,\,\sum\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\}} \varepsilon _i\varepsilon _j\left[ \left( \mu _i^{t + 1} - \eta _i^{t + 1} \right) + \left( \mu _j^{t + 1} - \eta _j^{t + 1} \right) - \sum\limits_{k \in N} \eta _k^{t + 1} \right]. \\ \end{gathered} $$

If the leading principal minors of the matrix (15) are positive, then the quadratic form

$$\sum\limits_{i \in N} (\varepsilon _i)^2\left( 4\mu _i^{t + 1} - 3\eta _i^{t + 1} - \sum\limits_{k \in N} \eta _k^{t + 1} \right) + \sum\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\}} \varepsilon _i\varepsilon _j\left[ (\mu _i^{t + 1} - \eta _i^{t + 1}) + (\mu _j^{t + 1} - \eta _j^{t + 1}) - \sum\limits_{k \in N} \eta _k^{t + 1} \right]$$

is positive definite and \(||{{B}^{{t + 1}}}||\) < 1 due to (7) and (10). In other words, the process (4), (5) converges for the given parameter values \(\vec {\gamma }_{i}^{{t + 1}}\), \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\), and \(\beta _{i}^{{t + 1}}\).

The proof of Proposition 3 is complete.

Proof of Lemma 2. We take advantage of the following properties of determinants.

(1) The determinant of a matrix does not change when a multiple of one row is added to another row.

(2) The determinant of a triangular matrix equals the product of its diagonal entries.

Adding to each row the next row multiplied by (–1), we obtain

$$\det (A) = \left| {\begin{array}{*{20}{c}} a&b&b& \cdots &b \\ b&a&b& \cdots &b \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ b&b&b& \cdots &a \end{array}} \right| = \left| {\begin{array}{*{20}{c}} {a - b}&{b - a}&0& \cdots &0 \\ b&a&b& \cdots &b \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ b&b&b& \cdots &a \end{array}} \right| = \left| {\begin{array}{*{20}{c}} {a - b}&{b - a}&0& \cdots &0 \\ 0&{a - b}&{b - a}& \cdots &0 \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ b&b&b& \cdots &a \end{array}} \right|.$$

Using the expansion along the last row and the triangular determinant formula, we finally arrive at \(\det (A) = a{{(a - b)}^{m - 1}} + (m - 1)b{{(a - b)}^{m - 1}} = {{(a - b)}^{m - 1}}[a + (m - 1)b]\). The proof of Lemma 2 is complete.
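The closed-form determinant of Lemma 2 can be spot-checked numerically. The sketch below (not part of the original proof; it assumes Python with NumPy) compares the formula against a direct determinant computation for several sample matrices:

```python
import numpy as np

def lemma2_det(a: float, b: float, m: int) -> float:
    """Closed-form determinant from Lemma 2: (a - b)^(m-1) * (a + (m-1)b)."""
    return (a - b) ** (m - 1) * (a + (m - 1) * b)

def build_matrix(a: float, b: float, m: int) -> np.ndarray:
    """The m x m matrix with a on the diagonal and b in every off-diagonal entry."""
    return b * np.ones((m, m)) + (a - b) * np.eye(m)

for a, b, m in [(2.0, 0.5, 2), (1.5, -0.3, 4), (0.7, 0.2, 6)]:
    direct = np.linalg.det(build_matrix(a, b, m))
    assert abs(direct - lemma2_det(a, b, m)) < 1e-9, (a, b, m)
print("Lemma 2 determinant formula confirmed on sample matrices")
```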

Proof of Lemma 3. We have \(\vec {\gamma }_{i}^{{t + 1}}\) = \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\); in view of (15), for \(\beta _{i}^{{t + 1}}\) = 1,

$$\mu _{i}^{{t + 1}} = \frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2},\quad \eta _{i}^{{t + 1}} = {{\left( {\frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2}} \right)}^{2}},$$
$$f_{{ii}}^{{t + 1}} = 2\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}} - (3 + n){{\left( {\frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2}} \right)}^{2}},\quad f_{{ij}}^{{t + 1}} = \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}} - (2 + n){{\left( {\frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2}} \right)}^{2}},\quad i,j \in N,\quad i \ne j.$$

According to Lemma 2, det(Ft+1) = (1 + n)\({{\left( {\frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2}} \right)}^{n}}{{\left( {2 - \frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2}} \right)}^{{n - 1}}}\left[ {2 - (1 + n)\frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}}}{2}} \right]\). The determinant is positive for \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\) < \(\frac{4}{{1 + n}}\). Under this inequality, the kth leading principal minor of the matrix F  t + 1 (k < n) is positive as well: by Lemma 2, it equals \({{(a - b)}^{k - 1}}[a + (k - 1)b]\) with a – b > 0 and a + (k – 1)b = \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\)[2(1 + k) – (1 + 2k + nk)\(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\)/2]/2.

For \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\) = \(\frac{4}{{1 + n}}\), it follows that a + (k – 1)b = \(\frac{{4(n - k)}}{{{{{(1 + n)}}^{2}}}}\) > 0. For \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } _{i}^{{t + 1}}\) < \(\frac{4}{{1 + n}}\), the positivity of the determinant of the kth principal minor is obvious.

The proof of Lemma 3 is complete.
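The closed-form determinant of Lemma 3 can also be verified numerically. The following sketch (not part of the original proof; it assumes Python with NumPy) builds F from the diagonal and off-diagonal entries given above, compares the direct determinant with the closed form, and checks that the determinant is positive below the bound 4/(1 + n) and vanishes at it:

```python
import numpy as np

# With identical steps g and beta_i = 1, F has diagonal f_ii = 2g - (3+n)(g/2)^2
# and off-diagonal f_ij = g - (2+n)(g/2)^2; Lemma 2 then gives the closed form below.
def det_F(g: float, n: int) -> float:
    a = 2 * g - (3 + n) * (g / 2) ** 2   # diagonal entry
    b = g - (2 + n) * (g / 2) ** 2       # off-diagonal entry
    F = b * np.ones((n, n)) + (a - b) * np.eye(n)
    return float(np.linalg.det(F))

def det_F_closed(g: float, n: int) -> float:
    return (1 + n) * (g / 2) ** n * (2 - g / 2) ** (n - 1) * (2 - (1 + n) * g / 2)

for n in (2, 3, 5):
    bound = 4 / (1 + n)
    for g in (0.1, 0.5 * bound, 0.99 * bound):
        assert abs(det_F(g, n) - det_F_closed(g, n)) < 1e-9
        assert det_F_closed(g, n) > 0        # positive below the bound 4/(1+n)
    assert abs(det_F_closed(bound, n)) < 1e-9  # vanishes at the bound
print("Lemma 3 determinant formula and positivity bound confirmed")
```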

Proof of Proposition 4. When restricting the step ranges, it is desirable to have their right bounds as close to 1 as possible. Let all agents have the same step range at all time instants. Therefore, we omit the superscript (t + 1) for f and F.

Based on Lemma 3, for the Cournot duopoly, the right bound \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } \) of the step range should be taken equal to 1; then  f(1, 1) < 1.

Consider  f(\(\vec {\gamma }\), 1). The matrix (15) corresponding to this quadratic form is defined for β1 = 0 and β2 = 1: F = \(\left( {\begin{array}{*{20}{c}} {2\vec {\gamma } - {{{\vec {\gamma }}}^{2}} - 0.25}&{\vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2} \\ {\vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{1 - {{{\vec {\gamma }}}^{2}}{\text{/}}4} \end{array}} \right)\). For the convenience of calculations, the determinant of this matrix can be simplified to det(F) = \(\left| {\begin{array}{*{20}{c}} {3.75}&{\vec {\gamma }{\text{/}}2 - 2} \\ {\vec {\gamma }{\text{/}}2 - 2}&{1 - {{{\vec {\gamma }}}^{2}}{\text{/}}4} \end{array}} \right|\). The matrix is positive definite if \(\vec {\gamma }\) ≥ 0.136. Therefore,  f(0.136, 1) < 1 \(\forall \)ε1, ε2.

Naturally,  f(0.136, 1) = f(1, 0.136) < 1.

Consider  f(\(\vec {\gamma }\), \(\vec {\gamma }\)). The matrix (15) corresponding to this quadratic form is defined for β1 = β2 = 0: F = \(\left( {\begin{array}{*{20}{c}} {2\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{\vec {\gamma } - {{{\vec {\gamma }}}^{2}}} \\ {\vec {\gamma } - {{{\vec {\gamma }}}^{2}}}&{2\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4} \end{array}} \right)\). The matrix is positive definite for any \(\vec {\gamma }\) ∈ (0, 1]. Therefore,  f(0.136, 0.136) < 1 \(\forall \)ε1, ε2.

The desired result follows from (11), and the proof of Proposition 4 is complete.
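The duopoly threshold 0.136 can be recovered numerically. The sketch below (an illustration under the stated assumptions, not part of the original proof; it assumes Python with NumPy) bisects on the smallest step for which the matrix of the quadratic form f(\(\vec {\gamma }\), 1) stays positive definite:

```python
import numpy as np

# Matrix (15) for f(g, 1) in the duopoly with beta_1 = 0, beta_2 = 1.
def is_pd(g: float) -> bool:
    F = np.array([[2*g - g**2 - 0.25, g/2 - g**2/2],
                  [g/2 - g**2/2,      1 - g**2/4]])
    return bool(np.all(np.linalg.eigvalsh(F) > 0))

lo, hi = 0.05, 0.5     # F is not positive definite at 0.05 but is at 0.5
while hi - lo > 1e-6:  # bisect on the positive-definiteness boundary
    mid = (lo + hi) / 2
    if is_pd(mid):
        hi = mid
    else:
        lo = mid
print(f"smallest step keeping F positive definite: {hi:.3f}")  # close to 0.136
assert abs(hi - 0.136) < 1e-3
```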

Proof of Proposition 5.

Consider  f(1, 1, \(\vec {\gamma }\)). The matrix (15) corresponding to this quadratic form is defined for β1 = β2 = 1 and β3 = 0: F = \(\left( {\begin{array}{*{20}{c}} {0.75 - {{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - {{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + \vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2} \\ { - {{{\vec {\gamma }}}^{2}}{\text{/}}4}&{0.75 - {{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + \vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2} \\ { - 0.25 + \vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{ - 0.25 + \vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{ - 0.5 + 2\vec {\gamma } - {{{\vec {\gamma }}}^{2}}} \end{array}} \right)\). For the convenience of calculations, the determinant of this matrix can be simplified to det(F) = \(\left| {\begin{array}{*{20}{c}} {0.75}&{ - {{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + \vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2} \\ 0&{0.75 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{ - 0.5 + \vec {\gamma } - {{{\vec {\gamma }}}^{2}}} \\ 0&{ - 0.25 + \vec {\gamma }{\text{/}}2 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{ - 0.5 + 2\vec {\gamma } - {{{\vec {\gamma }}}^{2}}} \end{array}} \right|\).

The minimum value of \(\vec {\gamma }\) under which the matrix F remains positive definite is 0.334. Hence,  f(1, 1, 0.334) < 1 \(\forall \)ε1, ε2, ε3.

In addition, we have f(1, 1, 0.334) = f(1, 0.334, 1) = f(0.334, 1, 1) < 1 \(\forall \)ε1, ε2, ε3.
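The triopoly threshold can be recovered the same way. The sketch below (an illustration, not part of the original proof; it assumes Python with NumPy) bisects on the smallest step for which the 3 × 3 matrix of f(1, 1, \(\vec {\gamma }\)) stays positive definite; numerically the boundary sits at 1/3, consistent with the rounded value 0.334 used above:

```python
import numpy as np

# Matrix (15) for f(1, 1, g) in the triopoly with beta_1 = beta_2 = 1, beta_3 = 0.
def F(g: float) -> np.ndarray:
    c = -0.25 + g/2 - g**2/2
    return np.array([[0.75 - g**2/4, -g**2/4,       c],
                     [-g**2/4,       0.75 - g**2/4, c],
                     [c,             c,             -0.5 + 2*g - g**2]])

def is_pd(g: float) -> bool:
    return bool(np.all(np.linalg.eigvalsh(F(g)) > 0))

lo, hi = 0.25, 0.9     # F is not positive definite at 0.25 but is at 0.9
while hi - lo > 1e-6:  # bisect on the positive-definiteness boundary
    mid = (lo + hi) / 2
    if is_pd(mid):
        hi = mid
    else:
        lo = mid
print(f"smallest step keeping F positive definite: {hi:.3f}")
assert abs(hi - 1/3) < 2e-3   # consistent with the paper's rounded 0.334
```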

Consider  f(1, \(\vec {\gamma }\), \(\vec {\gamma }\)). The matrix (15) corresponding to this quadratic form is defined for β1 = 1 and β2 = β3 = 0: F = \(\left( {\begin{array}{*{20}{c}} {1 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{\vec {\gamma }{\text{/}}2 - 3{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{\vec {\gamma }{\text{/}}2 - 3{{{\vec {\gamma }}}^{2}}{\text{/}}4} \\ {\vec {\gamma }{\text{/}}2 - 3{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + 2\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + \vec {\gamma } - {{{\vec {\gamma }}}^{2}}} \\ {\vec {\gamma }{\text{/}}2 - 3{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + \vec {\gamma } - {{{\vec {\gamma }}}^{2}}}&{ - 0.25 + 2\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4} \end{array}} \right)\). For the convenience of calculations, the determinant of this matrix can be simplified to det(F) = \(\left| {\begin{array}{*{20}{c}} {1 - {{{\vec {\gamma }}}^{2}}{\text{/}}2}&{\vec {\gamma } - 3{{{\vec {\gamma }}}^{2}}{\text{/}}2}&{1 - 3{{{\vec {\gamma }}}^{2}}{\text{/}}4} \\ {\vec {\gamma }{\text{/}}2 - 3{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.5 + 3\vec {\gamma } - 9{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{ - 0.25 + \vec {\gamma } - {{{\vec {\gamma }}}^{2}}} \\ 0&0&{\vec {\gamma } - {{{\vec {\gamma }}}^{2}}{\text{/}}4} \end{array}} \right|\).

The minimum value of \(\vec {\gamma }\) under which the matrix F remains positive definite is 0.22. Hence,  f(1, 0.22, 0.22) < 1 \(\forall \)ε1, ε2, ε3.

Also,  f(0.22, 1, 0.22) = f(0.22, 0.22, 1) = f(1, 0.22, 0.22) < 1 \(\forall \)ε1, ε2, ε3.

Consider f(\(\vec {\gamma }\), \(\vec {\gamma }\), \(\vec {\gamma }\)). The matrix (15) corresponding to this quadratic form is defined for β1 = β2 = β3 = 0: F = \(\left( {\begin{array}{*{20}{c}} {2\vec {\gamma } - 3{{{\vec {\gamma }}}^{2}}{\text{/}}2}&{\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4} \\ {\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{2\vec {\gamma } - 3{{{\vec {\gamma }}}^{2}}{\text{/}}2}&{\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4} \\ {\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{\vec {\gamma } - 5{{{\vec {\gamma }}}^{2}}{\text{/}}4}&{2\vec {\gamma } - 3{{{\vec {\gamma }}}^{2}}{\text{/}}2} \end{array}} \right)\).

By Lemma 2, det(F) = \({{\vec {\gamma }}^{3}}{{\left( {2 - \vec {\gamma }{\text{/}}2} \right)}^{2}}\left( {1 - \vec {\gamma }} \right)\). The matrix is positive definite for 0 < \(\vec {\gamma }\) < 1. From these lower bounds we choose the maximum value, i.e., 0.334.

Based on Lemma 3, for the Cournot oligopoly model with three agents, the right bound \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\leftarrow}$}}{\gamma } \) of the step range should be chosen less than 1.

The desired result follows from (11), and the proof of Proposition 5 is complete.


Cite this article

Algazin, G.I., Algazina, D.G. Convergence Conditions for the Dynamics of Reflexive Collective Behavior in a Cournot Oligopoly Model under Incomplete Information. Autom Remote Control 84, 486–496 (2023). https://doi.org/10.1134/S000511792305003X
