Optimum Complexing of Measurements when Maintaining a Maneuvering Object in Statistically Uncertain Situations

  • CONTROL IN STOCHASTIC SYSTEMS AND UNDER UNCERTAINTY
  • Journal of Computer and Systems Sciences International

Abstract

The problem of synthesizing optimal and quasi-optimal algorithms for complex information processing is solved using the methods of the Markov theory of random-process estimation for the case of maintaining a maneuvering object under two-channel vector observation with violations in statistically uncertain situations. The problem is solved for a discrete-continuous Markov process whose continuous part is a vector Markov sequence and whose discrete part is a three-component discrete Markov process, each component of which is described by a Markov chain with several states. A block diagram of quasi-optimal complex information processing is given. A simple simulation example demonstrates the performance of the quasi-optimal algorithm in statistically uncertain situations.


REFERENCES

  1. Y. Bar-Shalom, P. K. Willett, and X. Tian, Tracking and Data Fusion: A Handbook of Algorithms (YBS Publ., Storrs, CT, 2011).

  2. Y. Bar-Shalom, T. Kirubarajan, and X. R. Li, Estimation with Applications to Tracking and Navigation (Wiley, New York, 2001).

  3. Z. Sutton, P. Willett, and Y. Bar-Shalom, “Target tracking applied to extraction of multiple evolving threats from a stream of surveillance data,” IEEE Trans. Comput. Soc. Syst. 8, 434–450 (2021).

  4. S. Schoenecker, P. Willett, and Y. Bar-Shalom, “Resolution limits for tracking closely spaced targets,” IEEE Trans. Aerospace Electron. Syst. 54, 2900–2910 (2018).

  5. Y. Gao, Y. Liu, and X. R. Li, “Tracking-aided classification of targets using multihypothesis sequential probability ratio test,” IEEE Trans. Aerospace Electron. Syst. 54, 233–245 (2018).

  6. W. Aftab and L. Mihaylova, “A learning Gaussian process approach for maneuvering target tracking and smoothing,” IEEE Trans. Aerospace Electron. Syst. 57, 278–292 (2021).

  7. A. Buelta, A. Olivares, E. Staffetti, W. Aftab, and L. Mihaylova, “A Gaussian process iterative learning control for aircraft trajectory tracking,” IEEE Trans. Aerospace Electron. Syst. 57, 3962–3973 (2021).

  8. M. A. Mironov, Markov Theory of Optimal Estimation of Random Processes (GosNIIAS, Moscow, 2013) [in Russian].

  9. R. Rezaie and X. R. Li, “Destination-directed trajectory modeling, filtering, and prediction using conditionally Markov sequences,” IEEE Trans. Aerospace Electron. Syst. 57, 820–833 (2021).

  10. S. Li, Y. Cheng, D. Brown, and R. Tharmarasa, “Comprehensive time-offset estimation for multisensor target tracking,” IEEE Trans. Aerospace Electron. Syst. 56, 2351–2373 (2020).

  11. M. Kowalski, P. Willett, T. Fair, and Y. Bar-Shalom, “CRLB for estimating time-varying rotational biases in passive sensors,” IEEE Trans. Aerospace Electron. Syst. 56, 343–355 (2020).

  12. E. Taghavi, R. Tharmarasa, T. Kirubarajan, and Y. Bar-Shalom, “Track-to-track fusion with cross-covariances from radar and IR/EO sensor,” in Proceedings of the 22nd International Conference on Information Fusion (FUSION), Ottawa, ON, Canada, 2019.

  13. M. Rashid and M. Ali Sebt, “Tracking a maneuvering target in the presence of clutter by multiple detection radar and infrared sensor,” in Proceedings of the 25th Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2017, pp. 1917–1922.

  14. A. N. Detkov, “Optimal evaluation of discrete-continuous Markov processes from observed digital signals,” J. Commun. Technol. Electron. 66, 914 (2021).

  15. A. N. Detkov, “Optimization of digital filtration algorithms for mixed Markov processes under guidance of a maneuvering object,” J. Comput. Syst. Sci. Int. 36, 228 (1997).

  16. V. I. Tikhonov and M. A. Mironov, Markov Processes (Sov. Radio, Moscow, 1977) [in Russian].

  17. V. A. Bukhalev, Recognition, Estimation and Control in Systems with a Random Jump Structure (Nauka, Moscow, 1996) [in Russian].

  18. E. A. Rudenko, “Finite-dimensional recurrent algorithms for optimal nonlinear logical-dynamical filtering,” J. Comput. Syst. Sci. Int. 55, 36 (2016).

  19. S. Ya. Zhuk, Optimization Methods for Discrete Dynamical Systems with Random Structure (NTUU KPI, Kiev, 2008) [in Russian].

  20. V. G. Repin and G. P. Tartakovskii, Statistical Synthesis under A Priori Uncertainty and Adaptation of Information Systems (Sov. Radio, Moscow, 1978) [in Russian].

  21. F. R. Gantmakher, Matrix Theory (Fizmatlit, Moscow, 2010) [in Russian].

Author information

Correspondence to A. N. Detkov.

Ethics declarations

The author declares that he has no conflicts of interest.

APPENDIX

Let us introduce into consideration the posterior distribution of the mixed state vector \({{\left[ {{\mathbf{X}}_{k}^{{\text{T}}},{\mathbf{S}}_{k}^{{\text{T}}}} \right]}^{{\,{\text{T}}}}} = {{\left[ {{\mathbf{X}}_{k}^{{\text{T}}},{{A}_{k}},{{B}_{k}},{{C}_{k}}} \right]}^{{\,{\text{T}}}}}\):

$$f\left( {{\mathbf{x}}_{k}^{{}},{{A}_{k}} = {{a}_{j}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}\,{\text{|}}\,{\mathbf{y}}_{1}^{k}} \right) = f\left( {{\mathbf{x}}_{k}^{{}},{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{\mathbf{y}}_{1}^{k}} \right) = w_{{jmd}}^{*}\left( {{{{\mathbf{x}}}_{k}}} \right).$$

According to the definition of conditional distributions, we have

$$f\left( {{\mathbf{x}}_{k}^{{}},{{{\mathbf{s}}}_{k}},{\mathbf{y}}_{1}^{k}} \right) = f\left( {{\mathbf{y}}_{1}^{k}} \right){\kern 1pt} {\kern 1pt} f\left( {{\mathbf{x}}_{k}^{{}},{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{\mathbf{y}}_{1}^{k}} \right) = f\left( {{\mathbf{y}}_{1}^{k}} \right){\kern 1pt} {\kern 1pt} w_{{jmd}}^{*}\left( {{{{\mathbf{x}}}_{k}}} \right).$$
(A.1)

Using (A.1) and the main properties of Markov processes, we write an expression for the posterior distribution of the mixed state vector \({{\left[ {{\mathbf{X}}_{k}^{{\text{T}}},{\mathbf{S}}_{k}^{{\text{T}}}} \right]}^{{\,{\text{T}}}}}\)

$$\begin{gathered}
w_{jmd}^{*}(\mathbf{x}_{k}) = \frac{1}{f(\mathbf{y}_{1}^{k})}\,f(\mathbf{x}_{k},\mathbf{s}_{k},\mathbf{y}_{1}^{k}) = \frac{1}{f(\mathbf{y}_{1}^{k})}\sum\limits_{\mathbf{s}_{k-1}} \cdots \sum\limits_{\mathbf{s}_{1}} \int \cdots \int f\left(\mathbf{x}_{k},\mathbf{s}_{k},\mathbf{y}_{k},\ldots,\mathbf{x}_{1},\mathbf{s}_{1},\mathbf{y}_{1}\right)\prod\limits_{g=1}^{k-1} d\mathbf{x}_{g} \\
= \frac{f(\mathbf{y}_{1}^{k-1})}{f(\mathbf{y}_{1}^{k})}\sum\limits_{\mathbf{s}_{k-1}} \int f\left(\mathbf{x}_{k},\mathbf{s}_{k},\mathbf{y}_{k}\,|\,\mathbf{x}_{k-1},\mathbf{s}_{k-1},\mathbf{y}_{k-1}\right) f(\mathbf{x}_{k-1},\mathbf{s}_{k-1}\,|\,\mathbf{y}_{1}^{k-1})\,d\mathbf{x}_{k-1} \\
= \frac{f(\mathbf{y}_{1}^{k-1})}{f(\mathbf{y}_{1}^{k})}\sum\limits_{\mathbf{s}_{k-1}} \int f\left(\mathbf{x}_{k},\mathbf{s}_{k},\mathbf{y}_{k}\,|\,\mathbf{x}_{k-1},\mathbf{s}_{k-1},\mathbf{y}_{k-1}\right) w_{ine}^{*}\left(\mathbf{x}_{k-1}\right)\,d\mathbf{x}_{k-1},
\end{gathered}$$
(A.2)

where the integration over the variable x is carried out over the domain \({{\Re }^{{{{n}_{x}}}}}\).

We transform the first conditional probability density in the integrand:

$$f\left( {{{{\mathbf{x}}}_{k}},{{{\mathbf{s}}}_{k}},{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{s}}}_{{k - 1}}},{{{\mathbf{y}}}_{{k - 1}}}} \right) = f\left( {{{{\mathbf{x}}}_{k}},{{{\mathbf{s}}}_{k}},{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{s}}}_{{k - 1}}}} \right).$$

Here, for a given \({{{\mathbf{x}}}_{{k - 1}}}\), omitting \({{{\mathbf{y}}}_{{k - 1}}}\) from the condition changes nothing. Using the property of conditional probability densities, we write

$$f\left( {{{{\mathbf{x}}}_{k}},{{{\mathbf{s}}}_{k}},{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{s}}}_{{k - 1}}}} \right) = f\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{s}}}_{{k - 1}}},{{{\mathbf{s}}}_{k}}} \right){\kern 1pt} {\kern 1pt} {\text{P}}\left( {{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{{{\mathbf{s}}}_{{k - 1}}},{{{\mathbf{x}}}_{{k - 1}}}} \right)f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{{\mathbf{S}}}_{k}},{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{S}}}_{{k - 1}}}} \right).$$
(A.3)

Taking into account the identities

$${\text{P}}\left( {{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{{{\mathbf{s}}}_{{k - 1}}},{{{\mathbf{x}}}_{{k - 1}}}} \right) \equiv {\text{P}}\left( {{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{{{\mathbf{s}}}_{{k - 1}}}} \right),$$
$$f\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{s}}}_{{k - 1}}},{{{\mathbf{s}}}_{k}}} \right) \equiv f\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{A}_{k}} = {{a}_{j}}} \right) = {{f}_{j}}\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}}} \right),$$
$$f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{{\mathbf{S}}}_{k}},{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{S}}}_{{k - 1}}}} \right) \equiv f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}} \right),$$

which are satisfied according to the conditions of the problem statement, we rewrite (A.3) in the form

$$f\left( {{{{\mathbf{x}}}_{k}},{{{\mathbf{s}}}_{k}},{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}},{{{\mathbf{s}}}_{{k - 1}}}} \right) = {\text{P}}\left( {{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{{{\mathbf{s}}}_{{k - 1}}}} \right){{f}_{j}}\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}}} \right)f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}} \right).$$
(A.4)

After substituting (A.4) into (A.2), we have

$$w_{{jmd}}^{*}({\mathbf{x}}_{k}^{{}}) = \frac{{f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{B}_{k}},{{C}_{k}}} \right)}}{{f({{{\mathbf{y}}}_{k}}\,{\text{|}}\,{\mathbf{y}}_{1}^{{k - 1}})}}\sum\limits_{{{{\mathbf{S}}}_{{k - 1}}}} {{\text{P}}\left( {{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{{{\mathbf{s}}}_{{k - 1}}}} \right)\int {{{f}_{j}}\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}}} \right){\kern 1pt} {\kern 1pt} w_{{ine}}^{*}\left( {{{{\mathbf{x}}}_{{k - 1}}}} \right){\kern 1pt} {\kern 1pt} d{{{\mathbf{x}}}_{{k - 1}}}} } .$$
(A.5)

Taking into account the fact that \({\text{P}}\left( {{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{{{\mathbf{s}}}_{{k - 1}}}} \right) = \pi _{{ij}}^{a}\pi _{{nm}}^{b}\pi _{{ed}}^{c}\), we rewrite (A.5):

$$w_{jmd}^{*}\left(\mathbf{x}_{k}\right) = \frac{f\left(\mathbf{y}_{k}\,|\,\mathbf{x}_{k}, B_{k}=b_{m}, C_{k}=c_{d}\right)}{f\left(\mathbf{y}_{k}\,|\,\mathbf{y}_{1}^{k-1}\right)}\sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\int f_{j}\left(\mathbf{x}_{k}\,|\,\mathbf{x}_{k-1}\right) w_{ine}^{*}\left(\mathbf{x}_{k-1}\right)\,d\mathbf{x}_{k-1},$$
(A.6)

where \({{f}_{j}}\left( {{{{\mathbf{x}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{{k - 1}}}} \right)\) is the conditional PD determined from equation (1.3); \(w_{{ine}}^{*}\left( {{{{\mathbf{x}}}_{{k - 1}}}} \right)\) is the posterior distribution of the mixed state vector \({{\left[ {{\mathbf{X}}_{k}^{{\text{T}}},{\mathbf{S}}_{k}^{{\text{T}}}} \right]}^{{\,{\text{T}}}}}\) obtained at the previous (k – 1)-th step from the measurement sequence y1, y2, …, \({{{\mathbf{y}}}_{{k - 1}}}\); \(f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}} \right)\) is the one-step likelihood function; and \(f({{{\mathbf{y}}}_{k}}\,{\text{|}}\,{\mathbf{y}}_{1}^{{k - 1}})\) is the normalizing factor.

Let us introduce into consideration the extrapolated distribution of the mixed state vector \({{\left[ {{\mathbf{X}}_{k}^{{\text{T}}},{\mathbf{S}}_{k}^{{\text{T}}}} \right]}^{{\,{\text{T}}}}}\)

$$\tilde {w}_{{jmd}}^{{}}\left( {{{{\mathbf{x}}}_{k}}} \right) = f({\mathbf{x}}_{k}^{{}},{{A}_{k}} = {{a}_{j}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}\,{\text{|}}\,{\mathbf{y}}_{1}^{{k - 1}}) = f({\mathbf{x}}_{k}^{{}},{{{\mathbf{s}}}_{k}}\,{\text{|}}\,{\mathbf{y}}_{1}^{{k - 1}}).$$
(A.7)

According to (A.7), we rewrite (A.6) as a system of two recurrent equations:

$$\tilde{w}_{jmd}\left(\mathbf{x}_{k}\right) = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\int f_{j}\left(\mathbf{x}_{k}\,|\,\mathbf{x}_{k-1}\right) w_{ine}^{*}\left(\mathbf{x}_{k-1}\right)\,d\mathbf{x}_{k-1},$$
(A.8)
$$w_{{jmd}}^{*}\left( {{{{\mathbf{x}}}_{k}}} \right) = \frac{{f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{{{\mathbf{x}}}_{k}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}} \right)}}{{f\left( {{{{\mathbf{y}}}_{k}}\,{\text{|}}\,{\mathbf{y}}_{1}^{{k - 1}}} \right)}}\tilde {w}_{{jmd}}^{{}}\left( {{{{\mathbf{x}}}_{k}}} \right).$$
(A.9)

Equation (A.8) describes the evolution of the extrapolated distribution \({{\tilde {w}}_{{jmd}}}\left( {{{{\mathbf{x}}}_{k}}} \right)\) of the mixed state vector \({{\left[ {{\mathbf{X}}_{k}^{{\text{T}}},{\mathbf{S}}_{k}^{{\text{T}}}} \right]}^{{\,{\text{T}}}}}\). Relation (A.9) then refines the extrapolated distribution with the obtained measurement yk and determines the posterior distribution \(w_{{jmd}}^{*}\left( {{{{\mathbf{x}}}_{k}}} \right)\).
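
To make the structure of (A.8) and (A.9) concrete, the following minimal NumPy sketch propagates the joint posterior \(w_{s}^{*}(x)\) on a grid for a scalar state. Collapsing the three chains A, B, C into a single three-state chain and all numerical values are simplifying assumptions made here only for illustration.

```python
import numpy as np

x_grid = np.linspace(-10.0, 10.0, 401)
dx = x_grid[1] - x_grid[0]

PI = np.array([[0.90, 0.05, 0.05],      # mode transition probabilities pi_ij
               [0.10, 0.80, 0.10],
               [0.05, 0.05, 0.90]])
phi = np.array([0.95, 1.00, 0.90])      # per-mode dynamics: x_k = phi_j x_{k-1} + w_k
q = np.array([0.30, 0.80, 0.30])        # per-mode process-noise std
r = 0.50                                # measurement-noise std: y_k = x_k + v_k

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# initial joint posterior w*_s(x): equal mode weights, N(0, 1) in x
w = np.stack([gauss(x_grid, 0.0, 1.0) / 3.0 for _ in range(3)])

for y_k in (0.3, 0.5, 1.1):
    # (A.8): extrapolation -- mode mixing plus the Chapman-Kolmogorov integral
    w_tilde = np.zeros_like(w)
    for j in range(3):
        # transition kernel f_j(x_k | x_{k-1}) tabulated on the grid
        K = gauss(x_grid[:, None], phi[j] * x_grid[None, :], q[j])
        for i in range(3):
            w_tilde[j] += PI[i, j] * (K @ w[i]) * dx
    # (A.9): correction by the one-step likelihood and renormalization;
    # the total mass plays the role of f(y_k | y_1^{k-1})
    w = w_tilde * gauss(y_k, x_grid, r)
    w /= w.sum() * dx

print("posterior mode probabilities:", np.round(w.sum(axis=1) * dx, 3))
```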

Further transformation of expressions (A.8) and (A.9) can be performed taking into account the properties of conditional distributions. In this case, equation (A.6) can be represented by the method in [19] as the following system of recursive equations:

$${{{{\tilde {P}}}}_{{jmd}}}\left( k \right) = {\text{P}}\left( {{{A}_{k}} = {{a}_{j}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}\,{\text{|}}\,{\mathbf{y}}_{1}^{{k - 1}}} \right) = \sum\limits_i {\sum\limits_n {\sum\limits_e {\pi _{{ij}}^{a}} } } \pi _{{nm}}^{b}\pi _{{ed}}^{c}{\text{P}}_{{ine}}^{*}\left( {k - 1} \right),$$
(A.10)
$${\text{P}}_{jmd}^{*}\left(k\right) = {\text{P}}\left(A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}\,|\,\mathbf{y}_{1}^{k}\right) = \frac{f\left(\mathbf{y}_{k}\,|\,A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}, \mathbf{y}_{1}^{k-1}\right)}{f\left(\mathbf{y}_{k}\,|\,\mathbf{y}_{1}^{k-1}\right)}\,\tilde{P}_{jmd}\left(k\right),$$
(A.11)
$$\begin{gathered}
\tilde{f}_{jmd}\left(\mathbf{x}_{k}\right) = f(\mathbf{x}_{k}\,|\,A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}, \mathbf{y}_{1}^{k-1}) \\
= \frac{1}{\tilde{P}_{jmd}(k)}\sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\int f_{j}\left(\mathbf{x}_{k}\,|\,\mathbf{x}_{k-1}\right) w_{ine}^{*}\left(\mathbf{x}_{k-1}\right)\,d\mathbf{x}_{k-1}, \\
\end{gathered}$$
(A.12)
$$f_{jmd}^{*}\left(\mathbf{x}_{k}\right) = f\left(\mathbf{x}_{k}\,|\,A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}, \mathbf{y}_{1}^{k}\right) = \frac{f\left(\mathbf{y}_{k}\,|\,\mathbf{x}_{k}, B_{k}=b_{m}, C_{k}=c_{d}\right)}{f\left(\mathbf{y}_{k}\,|\,A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}, \mathbf{y}_{1}^{k-1}\right)}\,\tilde{f}_{jmd}\left(\mathbf{x}_{k}\right),$$
(A.13)
$$f\left(\mathbf{y}_{k}\,|\,A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}, \mathbf{y}_{1}^{k-1}\right) = \int f\left(\mathbf{y}_{k}\,|\,\mathbf{x}_{k}, B_{k}=b_{m}, C_{k}=c_{d}\right)\tilde{f}_{jmd}\left(\mathbf{x}_{k}\right)\,d\mathbf{x}_{k},$$
(A.14)
$$f\left(\mathbf{y}_{k}\,|\,\mathbf{y}_{1}^{k-1}\right) = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\sum\limits_{j}\sum\limits_{m}\sum\limits_{d}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,f\left(\mathbf{y}_{k}\,|\,A_{k}=a_{j}, B_{k}=b_{m}, C_{k}=c_{d}, \mathbf{y}_{1}^{k-1}\right){\text{P}}_{ine}^{*}\left(k-1\right),$$
(A.15)

where \(\tilde {f}_{{jmd}}^{{}}\left( {{{{\mathbf{x}}}_{k}}} \right)\) is the conditional extrapolated PD of the vector xk under the condition \({{A}_{k}} = {{a}_{j}}\), \({{B}_{k}} = {{b}_{m}}\), \({{C}_{k}} = {{c}_{d}}\); \(f_{{jmd}}^{*}\left( {{{{\mathbf{x}}}_{k}}} \right)\) is the conditional posterior PD of the vector xk under the same condition; \(P_{{jmd}}^{*}\left( k \right)\) is the posterior probability of the event \({{A}_{k}} = {{a}_{j}}\), \({{B}_{k}} = {{b}_{m}}\), \({{C}_{k}} = {{c}_{d}}\), calculated by integrating (A.6) over the variable xk; and \({{\tilde {P}}_{{jmd}}}\left( k \right)\) is the extrapolated probability of this event.

Thus, expression (A.6) corresponds to formula (2.1), expressions (A.8) and (A.9) correspond to formulas (2.2) and (2.3), and expressions (A.10) and (A.11) correspond to formulas (2.4) and (2.5).
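
The probability recursions (A.10) and (A.11) reduce to tensor contractions over the three chains. The sketch below illustrates them for three two-state chains; the transition matrices and the stand-in likelihood values are assumptions chosen only for illustration.

```python
import numpy as np

# (A.10): P_tilde[j,m,d] = sum_{i,n,e} pi_a[i,j] pi_b[n,m] pi_c[e,d] P_post[i,n,e]
pi_a = np.array([[0.9, 0.1], [0.2, 0.8]])          # maneuver chain A (assumed)
pi_b = np.array([[0.95, 0.05], [0.30, 0.70]])      # channel-1 violation chain B (assumed)
pi_c = np.array([[0.95, 0.05], [0.30, 0.70]])      # channel-2 violation chain C (assumed)

P_post = np.full((2, 2, 2), 1 / 8)                 # P*_{ine}(k-1), uniform start

P_tilde = np.einsum('ij,nm,ed,ine->jmd', pi_a, pi_b, pi_c, P_post)
assert np.isclose(P_tilde.sum(), 1.0)              # still a probability mass

# (A.11): correction with per-hypothesis likelihoods L[j,m,d], which stand in
# for the integral (A.14); the values below are arbitrary positive numbers
L = np.array([[[1.0, 0.2], [0.5, 0.1]],
              [[0.8, 0.3], [0.4, 0.2]]])
P_star = P_tilde * L
P_star /= P_star.sum()                             # divide by f(y_k | y_1^{k-1})
print(np.round(P_star, 3))
```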

To derive quasi-optimal filtering algorithms for the DNMP components, we use the assumption that the posterior distribution \(w_{{ine}}^{*}\left( {{{{\mathbf{x}}}_{{k - 1}}}} \right)\) at the previous (k – 1)-th filtering step is normal [15]:

$$\begin{gathered} w_{{ine}}^{*}\left( {{{{\mathbf{x}}}_{{k - 1}}}} \right) = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}}}}}\det \{ {\mathbf{R}}_{{ine}}^{*}\left( {k - 1} \right)\} } }} \\ \, \times \exp \left\{ { - \frac{1}{2}{{{({{{\mathbf{x}}}_{{k - 1}}} - {\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right))}}^{{\text{T}}}}{{{({\mathbf{R}}_{{ine}}^{*}\left( {k - 1} \right))}}^{{ - 1}}}({{{\mathbf{x}}}_{{k - 1}}} - {\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right))} \right\}, \\ \end{gathered} $$
(A.16)

where \({\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right) \equiv {\mathbf{M}}\left\{ {{{{\mathbf{X}}}_{{k - 1}}},{{A}_{{k - 1}}} = {{a}_{i}},{{B}_{{k - 1}}} = {{b}_{n}},{{C}_{{k - 1}}} = {{c}_{e}}\,{\text{|}}\,{\mathbf{Y}}_{1}^{{k - 1}}} \right\}\) is the mathematical expectation, \({\mathbf{R}}_{{ine}}^{*}\left( {k - 1} \right) \equiv {\mathbf{cov}}\left\{ {{{{\mathbf{X}}}_{{k - 1}}},{{{\mathbf{X}}}_{{k - 1}}},{{A}_{{k - 1}}} = {{a}_{i}},{{B}_{{k - 1}}} = {{b}_{n}},{{C}_{{k - 1}}} = {{c}_{e}}\,{\text{|}}\,{\mathbf{Y}}_{1}^{{k - 1}}} \right\}\) is the covariance matrix of the posterior distribution \(w_{{ine}}^{*}\left( {{{{\mathbf{x}}}_{{k - 1}}}} \right)\), and \(\det\{\mathbf{R}\}\) is the determinant of the matrix R.

Based on (A.16) and the linear transformation (1.3) of the conditionally Gaussian random vector \({{{\mathbf{X}}}_{{k - 1}}}\), we rewrite (A.12):

$$\begin{gathered}
\tilde{f}_{jmd}\left(\mathbf{x}_{k}\right) = \frac{1}{\sqrt{\left(2\pi\right)^{n_{x}}}}\,\frac{1}{\tilde{P}_{jmd}(k)}\sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,{\text{P}}_{ine}^{*}(k-1)\frac{1}{\sqrt{\det\left\{\breve{\mathbf{R}}_{jmd}\left(k\right)\right\}}} \\
\times \exp\left\{-\frac{1}{2}\left(\mathbf{x}_{k} - \breve{\mathbf{X}}_{jmd}\left(k\right)\right)^{\text{T}}\breve{\mathbf{R}}_{jmd}^{-1}\left(k\right)\left(\mathbf{x}_{k} - \breve{\mathbf{X}}_{jmd}\left(k\right)\right)\right\}, \\
\end{gathered}$$
(A.17)

where

$$\breve{\mathbf{X}}_{jmd}\left(k\right) = \mathbf{\Phi}(A_{k}=a_{j})\,\mathbf{X}_{ine}^{*}\left(k-1\right),$$
$$\breve{\mathbf{R}}_{jmd}\left(k\right) = \mathbf{\Phi}(A_{k}=a_{j})\,\mathbf{R}_{ine}^{*}\left(k-1\right)\,\mathbf{\Phi}^{\text{T}}(A_{k}=a_{j}) + \mathbf{\Gamma}(A_{k}=a_{j})\,\mathbf{\Gamma}^{\text{T}}(A_{k}=a_{j}).$$
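
For a single hypothesis \(A_{k}=a_{j}\), these breve quantities are an ordinary Kalman time update. A small sketch under an assumed near-constant-velocity model (the matrices Φ and Γ below are illustrative, not the paper's concrete matrices from (1.3)):

```python
import numpy as np

T = 1.0                                            # sampling period, assumed
Phi = np.array([[1.0, T], [0.0, 1.0]])             # Phi(A_k = a_j), assumed
Gamma = np.array([[T**2 / 2], [T]]) * 0.5          # Gamma(A_k = a_j), assumed

X_prev = np.array([10.0, 1.0])                     # X*_ine(k-1)
R_prev = np.diag([1.0, 0.1])                       # R*_ine(k-1)

X_breve = Phi @ X_prev                             # predicted (breve) mean
R_breve = Phi @ R_prev @ Phi.T + Gamma @ Gamma.T   # predicted (breve) covariance
print(X_breve, "\n", R_breve)
```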

As a result of the two-moment Gaussian approximation of the extrapolated distribution (2.3), we write

$${{\tilde {f}}_{{jmd}}}\left( {{{{\mathbf{x}}}_{k}}} \right) = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}}}}}\det \left\{ {{{{{\mathbf{\tilde {R}}}}}_{{jmd}}}\left( k \right)} \right\}} }}\exp \left\{ { - \frac{1}{2}{{{\left( {{{{\mathbf{x}}}_{k}} - {{{{\mathbf{\tilde {X}}}}}_{{jmd}}}\left( k \right)} \right)}}^{{\text{T}}}}{\mathbf{\tilde {R}}}_{{jmd}}^{{ - 1}}\left( k \right)\left( {{{{\mathbf{x}}}_{k}} - {{{{\mathbf{\tilde {X}}}}}_{{jmd}}}\left( k \right)} \right)} \right\},$$
(A.18)

where \({{{\mathbf{\tilde {X}}}}_{{jmd}}}\left( k \right) \equiv {\mathbf{M}}\left\{ {{{{\mathbf{X}}}_{k}},{{A}_{k}} = {{a}_{j}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}\,{\text{|}}\,{\mathbf{Y}}_{1}^{{k - 1}}} \right\}\) is the conditionally predictive mathematical expectation at \({{A}_{k}} = {{a}_{j}}\), \({{B}_{k}} = {{b}_{m}}\), \({{C}_{k}} = {{c}_{d}}\), which, taking into account the weighted sum of conditional distributions, has the form:

$${{{\mathbf{\tilde {X}}}}_{{jmd}}}\left( k \right) = \sum\limits_i {\sum\limits_n {\sum\limits_e {\pi _{{ij}}^{a}} } } \pi _{{nm}}^{b}\pi _{{ed}}^{c}\frac{{{\text{P}}_{{ine}}^{*}\left( {k - 1} \right)}}{{{{{{{\tilde {P}}}}}_{{jmd}}}\left( k \right)}}{\mathbf{\Phi }}({{A}_{k}} = {{a}_{j}}){\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right),$$
(A.19)

\({{{\mathbf{\tilde {R}}}}_{{jmd}}}\left( k \right) \equiv {\mathbf{cov}}\left\{ {{{{\mathbf{X}}}_{k}},{{{\mathbf{X}}}_{k}},{{A}_{k}} = {{a}_{j}},{{B}_{k}} = {{b}_{m}},{{C}_{k}} = {{c}_{d}}\,{\text{|}}\,{\mathbf{Y}}_{1}^{{k - 1}}} \right\}\) is the covariance matrix of prediction errors for \({{A}_{k}} = {{a}_{j}}\), \({{B}_{k}} = {{b}_{m}}\), \({{C}_{k}} = {{c}_{d}}\), which can be calculated by the formula [14]

$${{{\mathbf{\tilde {R}}}}_{{jmd}}}\left( k \right) = \int\limits_{ - \infty }^\infty {\left( {{{{\mathbf{x}}}_{k}} - {{{{\mathbf{\tilde {X}}}}}_{{jmd}}}\left( k \right)} \right)} {{\left( {{{{\mathbf{x}}}_{k}} - {{{{\mathbf{\tilde {X}}}}}_{{jmd}}}\left( k \right)} \right)}^{{\text{T}}}}{{\tilde {f}}_{{jmd}}}\left( {{{{\mathbf{x}}}_{k}}} \right)d{{{\mathbf{x}}}_{k}}.$$

Using the representation of the extrapolated probability density \({{\tilde {f}}_{{jmd}}}\left( {{{{\mathbf{x}}}_{k}}} \right)\) as a weighted sum of conditional densities, and also taking into account the fact that

$${{{\mathbf{x}}}_{k}} - {{{\mathbf{\tilde {X}}}}_{{jmd}}}\left( k \right) \equiv ({{{\mathbf{x}}}_{k}} - {\mathbf{\Phi }}({{A}_{k}} = {{a}_{j}}){\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right)) + ({\mathbf{\Phi }}({{A}_{k}} = {{a}_{j}}){\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right) - {{{\mathbf{\tilde {X}}}}_{{jmd}}}\left( k \right)),$$

it can be shown that the covariance matrix of prediction errors \({{{\mathbf{\tilde {R}}}}_{{jmd}}}\left( k \right)\) has the form

$$\begin{gathered}
\tilde{\mathbf{R}}_{jmd}\left(k\right) = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,\frac{{\text{P}}_{ine}^{*}(k-1)}{\tilde{P}_{jmd}(k)} \\
\times \left\{\mathbf{\Phi}(A_{k}=a_{j})\,\mathbf{R}_{ine}^{*}\left(k-1\right)\,\mathbf{\Phi}^{\text{T}}(A_{k}=a_{j}) + \mathbf{\Gamma}(A_{k}=a_{j})\,\mathbf{\Gamma}^{\text{T}}(A_{k}=a_{j})\right. \\
\left. + \left(\mathbf{\Phi}(A_{k}=a_{j})\,\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{X}}_{jmd}\left(k\right)\right)\left(\mathbf{\Phi}(A_{k}=a_{j})\,\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{X}}_{jmd}\left(k\right)\right)^{\text{T}}\right\}. \\
\end{gathered}$$
(A.20)
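
Equations (A.19) and (A.20) are the standard moment matching of a Gaussian mixture: the matched covariance is the weighted within-component covariance plus the spread-of-means term. A minimal sketch with assumed weights, means, and covariances; the means play the role of \(\mathbf{\Phi}(a_{j})\mathbf{X}_{ine}^{*}(k-1)\) and the covariances the role of \(\mathbf{\Phi}\mathbf{R}_{ine}^{*}\mathbf{\Phi}^{\text{T}} + \mathbf{\Gamma}\mathbf{\Gamma}^{\text{T}}\):

```python
import numpy as np

mu = np.array([0.6, 0.3, 0.1])                     # mixture weights (sum to 1)
m = [np.array([1.0, 0.0]), np.array([1.5, 0.2]), np.array([0.5, -0.1])]
R = [np.eye(2) * s for s in (0.2, 0.4, 0.3)]

m_bar = sum(w * mi for w, mi in zip(mu, m))        # (A.19): weighted mean
# (A.20): within-component covariance plus spread-of-means term
R_bar = sum(w * (Ri + np.outer(mi - m_bar, mi - m_bar))
            for w, mi, Ri in zip(mu, m, R))
print("matched mean:", m_bar)
print("matched covariance:\n", R_bar)
```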

Based on the linear measurement equation (1.4), it is easy to obtain from (A.19) and (A.20) expressions for the conditionally predicted measurement vectors \({\mathbf{\tilde {Y}}}_{{jmd}}^{{(1)}}\left( k \right)\) and \({\mathbf{\tilde {Y}}}_{{jmd}}^{{(2)}}\left( k \right)\) and for the conditional covariance matrix \({{{\mathbf{\tilde {W}}}}_{{jmd}}}\left( k \right)\) of the measurement-prediction errors, respectively:

$$\tilde{\mathbf{Y}}_{jmd}^{(1)}\left(k\right) = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,\frac{{\text{P}}_{ine}^{*}(k-1)}{\tilde{P}_{jmd}(k)}\,\mathbf{H}_{1}\left(b_{m}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right),$$
(A.21)
$${\mathbf{\tilde {Y}}}_{{jmd}}^{{(2)}}\left( k \right) = \sum\limits_i {\sum\limits_n {\sum\limits_e {\pi _{{ij}}^{a}} } } \pi _{{nm}}^{b}\pi _{{ed}}^{c}\frac{{{\text{P}}_{{ine}}^{*}(k - 1)}}{{{{{{{\tilde {P}}}}}_{{jmd}}}(k)}}{{{\mathbf{H}}}_{2}}\left( {{{c}_{d}}} \right){\mathbf{\Phi }}\left( {{{a}_{j}}} \right){\mathbf{X}}_{{ine}}^{*}\left( {k - 1} \right),$$
(A.22)
$${{{\mathbf{\tilde {W}}}}_{{jmd}}}\left( k \right) = {\mathbf{H}}{{{\mathbf{\tilde {R}}}}_{{jmd}}}\left( k \right){{{\mathbf{H}}}^{{\text{T}}}} + {\mathbf{V}}\left( {{{b}_{m}},{{c}_{d}}} \right){{{\mathbf{V}}}^{{\text{T}}}}\left( {{{b}_{m}},{{c}_{d}}} \right).$$
(A.23)

Taking into account the fact that \({{{\mathbf{Y}}}_{k}} = {{\left[ {{\mathbf{Y}}_{k}^{{(1){\text{T}}}},{\mathbf{Y}}_{k}^{{(2){\text{T}}}}} \right]}^{{\,{\text{T}}}}}\), the conditional covariance matrix \({{{\mathbf{\tilde {W}}}}_{{jmd}}}\left( k \right)\) of measurement-prediction errors has a block form

$${{{\mathbf{\tilde {W}}}}_{{jmd}}}\left( k \right) = \left[ {\begin{array}{*{20}{c}} {{\mathbf{\tilde {W}}}_{{jmd}}^{{(11)}}}&{{\mathbf{\tilde {W}}}_{{jmd}}^{{(12)}}} \\ {{\mathbf{\tilde {W}}}_{{jmd}}^{{(21)}}}&{{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \end{array}} \right].$$
(A.24)

Using (A.20) and (A.23), it is easy to obtain expressions for the elements of the block matrix \({{{\mathbf{\tilde {W}}}}_{{jmd}}}\left( k \right)\):

$$\begin{gathered}
\tilde{\mathbf{W}}_{jmd}^{(11)} = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,\frac{{\text{P}}_{ine}^{*}(k-1)}{\tilde{P}_{jmd}(k)} \\
\times \left\{\mathbf{H}_{1}\left(b_{m}\right)\left[\mathbf{\Phi}\left(a_{j}\right)\mathbf{R}_{ine}^{*}\left(k-1\right)\mathbf{\Phi}^{\text{T}}\left(a_{j}\right) + \mathbf{\Gamma}\left(a_{j}\right)\mathbf{\Gamma}^{\text{T}}\left(a_{j}\right)\right]\mathbf{H}_{1}^{\text{T}}\left(b_{m}\right) + \mathbf{V}_{1}\left(b_{m}\right)\mathbf{V}_{1}^{\text{T}}\left(b_{m}\right)\right. \\
\left. + \left[\mathbf{H}_{1}\left(b_{m}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(1)}\left(k\right)\right]\left[\mathbf{H}_{1}\left(b_{m}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(1)}\left(k\right)\right]^{\text{T}}\right\}, \\
\end{gathered}$$
(A.25)
$$\begin{gathered}
\tilde{\mathbf{W}}_{jmd}^{(12)} = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,\frac{{\text{P}}_{ine}^{*}(k-1)}{\tilde{P}_{jmd}(k)}\left\{\mathbf{H}_{1}\left(b_{m}\right)\left[\mathbf{\Phi}\left(a_{j}\right)\mathbf{R}_{ine}^{*}\left(k-1\right)\mathbf{\Phi}^{\text{T}}\left(a_{j}\right) + \mathbf{\Gamma}\left(a_{j}\right)\mathbf{\Gamma}^{\text{T}}\left(a_{j}\right)\right]\mathbf{H}_{2}^{\text{T}}\left(c_{d}\right)\right. \\
\left. + \left[\mathbf{H}_{1}\left(b_{m}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(1)}\left(k\right)\right]\left[\mathbf{H}_{2}\left(c_{d}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(2)}\left(k\right)\right]^{\text{T}}\right\}, \\
\end{gathered}$$
(A.26)
$$\begin{gathered}
\tilde{\mathbf{W}}_{jmd}^{(21)} = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,\frac{{\text{P}}_{ine}^{*}(k-1)}{\tilde{P}_{jmd}(k)}\left\{\mathbf{H}_{2}\left(c_{d}\right)\left[\mathbf{\Phi}\left(a_{j}\right)\mathbf{R}_{ine}^{*}\left(k-1\right)\mathbf{\Phi}^{\text{T}}\left(a_{j}\right) + \mathbf{\Gamma}\left(a_{j}\right)\mathbf{\Gamma}^{\text{T}}\left(a_{j}\right)\right]\mathbf{H}_{1}^{\text{T}}\left(b_{m}\right)\right. \\
\left. + \left[\mathbf{H}_{2}\left(c_{d}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(2)}\left(k\right)\right]\left[\mathbf{H}_{1}\left(b_{m}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(1)}\left(k\right)\right]^{\text{T}}\right\}, \\
\end{gathered}$$
(A.27)
$$\begin{gathered}
\tilde{\mathbf{W}}_{jmd}^{(22)} = \sum\limits_{i}\sum\limits_{n}\sum\limits_{e}\pi_{ij}^{a}\pi_{nm}^{b}\pi_{ed}^{c}\,\frac{{\text{P}}_{ine}^{*}(k-1)}{\tilde{P}_{jmd}(k)} \\
\times \left\{\mathbf{H}_{2}\left(c_{d}\right)\left[\mathbf{\Phi}\left(a_{j}\right)\mathbf{R}_{ine}^{*}\left(k-1\right)\mathbf{\Phi}^{\text{T}}\left(a_{j}\right) + \mathbf{\Gamma}\left(a_{j}\right)\mathbf{\Gamma}^{\text{T}}\left(a_{j}\right)\right]\mathbf{H}_{2}^{\text{T}}\left(c_{d}\right) + \mathbf{V}_{2}\left(c_{d}\right)\mathbf{V}_{2}^{\text{T}}\left(c_{d}\right)\right. \\
\left. + \left[\mathbf{H}_{2}\left(c_{d}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(2)}\left(k\right)\right]\left[\mathbf{H}_{2}\left(c_{d}\right)\mathbf{\Phi}\left(a_{j}\right)\mathbf{X}_{ine}^{*}\left(k-1\right) - \tilde{\mathbf{Y}}_{jmd}^{(2)}\left(k\right)\right]^{\text{T}}\right\}. \\
\end{gathered}$$
(A.28)

To determine the matrix of the optimal transmission coefficient of the complex discrete filter [14]

$$\mathbf{K}_{jmd}(k) = \tilde{\mathbf{R}}_{jmd}(k)\,\mathbf{H}^{\text{T}}\tilde{\mathbf{W}}_{jmd}^{-1}\left(k\right) = \left[\begin{array}{c} \mathbf{K}_{jmd}^{(1)}\left(k\right) \\ \mathbf{K}_{jmd}^{(2)}\left(k\right) \end{array}\right] = \tilde{\mathbf{R}}_{jmd}\left[\begin{array}{c} \mathbf{H}_{1}^{\text{T}}\left(b_{m}\right) \\ \mathbf{H}_{2}^{\text{T}}\left(c_{d}\right) \end{array}\right]\left[\begin{array}{cc} \tilde{\mathbf{W}}_{jmd}^{(11)} & \tilde{\mathbf{W}}_{jmd}^{(12)} \\ \tilde{\mathbf{W}}_{jmd}^{(21)} & \tilde{\mathbf{W}}_{jmd}^{(22)} \end{array}\right]^{-1}$$
(A.29)

we represent the inverse of the block covariance matrix \({\mathbf{\tilde {W}}}_{{jmd}}^{{ - 1}}\) using the Frobenius formula [21]:

$${{\left[ {\begin{array}{*{20}{c}} {{\mathbf{\tilde {W}}}_{{jmd}}^{{(11)}}}&{{\mathbf{\tilde {W}}}_{{jmd}}^{{(12)}}} \\ {{\mathbf{\tilde {W}}}_{{jmd}}^{{(21)}}}&{{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \end{array}} \right]}^{{ - 1}}} = \left[ {\begin{array}{*{20}{c}} {{{{{\mathbf{\tilde {D}}}}}^{{ - 1}}}}&{ - {{{{\mathbf{\tilde {D}}}}}^{{ - 1}}}{\mathbf{\tilde {W}}}_{{jmd}}^{{(12)}}{{{\left( {{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \right)}}^{{ - 1}}}} \\ { - {{{\left( {{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \right)}}^{{ - 1}}}{\mathbf{\tilde {W}}}_{{jmd}}^{{(21)}}{{{{\mathbf{\tilde {D}}}}}^{{ - 1}}}}&{{{{\left( {{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \right)}}^{{ - 1}}} + {{{\left( {{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \right)}}^{{ - 1}}}{\mathbf{\tilde {W}}}_{{jmd}}^{{(21)}}{{{{\mathbf{\tilde {D}}}}}^{{ - 1}}}{\mathbf{\tilde {W}}}_{{jmd}}^{{(12)}}{{{\left( {{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \right)}}^{{ - 1}}}} \end{array}} \right],$$
(A.30)

where \({\mathbf{\tilde {D}}} = {\mathbf{\tilde {W}}}_{{jmd}}^{{(11)}} - {\mathbf{\tilde {W}}}_{{jmd}}^{{(12)}}{{\left( {{\mathbf{\tilde {W}}}_{{jmd}}^{{(22)}}} \right)}^{{ - 1}}}{\mathbf{\tilde {W}}}_{{jmd}}^{{(21)}}\).
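
The Frobenius formula (A.30) is easy to verify numerically; the sketch below checks it against a direct inverse on a random symmetric positive-definite matrix split into 2 × 2 blocks.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
W = M @ M.T + 4 * np.eye(4)                        # SPD block matrix
W11, W12, W21, W22 = W[:2, :2], W[:2, 2:], W[2:, :2], W[2:, 2:]

W22_inv = np.linalg.inv(W22)
D = W11 - W12 @ W22_inv @ W21                      # Schur complement D_tilde
D_inv = np.linalg.inv(D)

top = np.hstack([D_inv, -D_inv @ W12 @ W22_inv])
bot = np.hstack([-W22_inv @ W21 @ D_inv,
                 W22_inv + W22_inv @ W21 @ D_inv @ W12 @ W22_inv])
W_inv_blocks = np.vstack([top, bot])

assert np.allclose(W_inv_blocks, np.linalg.inv(W))
print("Frobenius formula reproduces the direct inverse.")
```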

Substituting into (A.29) the values (A.25)–(A.28) and taking into account (A.30), we obtain an expression for the optimal transmission coefficients of the complex discrete filter:

$$\mathbf{K}_{jmd}^{(1)}\left(k\right) = \tilde{\mathbf{R}}_{jmd}\left(k\right)\mathbf{H}_{1}^{\text{T}}\left(b_{m}\right)\tilde{\mathbf{D}}^{-1} - \tilde{\mathbf{R}}_{jmd}\left(k\right)\mathbf{H}_{2}^{\text{T}}\left(c_{d}\right)\left(\tilde{\mathbf{W}}_{jmd}^{(22)}\right)^{-1}\tilde{\mathbf{W}}_{jmd}^{(21)}\tilde{\mathbf{D}}^{-1},$$
(A.31)
$$\mathbf{K}_{jmd}^{(2)}\left(k\right) = -\tilde{\mathbf{R}}_{jmd}\mathbf{H}_{1}^{\text{T}}\left(b_{m}\right)\tilde{\mathbf{D}}^{-1}\tilde{\mathbf{W}}_{jmd}^{(12)}\left(\tilde{\mathbf{W}}_{jmd}^{(22)}\right)^{-1} + \tilde{\mathbf{R}}_{jmd}\mathbf{H}_{2}^{\text{T}}\left(c_{d}\right)\left[\left(\tilde{\mathbf{W}}_{jmd}^{(22)}\right)^{-1} + \left(\tilde{\mathbf{W}}_{jmd}^{(22)}\right)^{-1}\tilde{\mathbf{W}}_{jmd}^{(21)}\tilde{\mathbf{D}}^{-1}\tilde{\mathbf{W}}_{jmd}^{(12)}\left(\tilde{\mathbf{W}}_{jmd}^{(22)}\right)^{-1}\right],$$

where \({\mathbf{\tilde {D}}}\) is the Schur complement defined after (A.30).
(A.32)

The current conditional estimate \({\mathbf{X}}_{{jmd}}^{*}\left( k \right)\) of the continuous component of the DNMP at \({{A}_{k}} = {{a}_{j}}\), \({{B}_{k}} = {{b}_{m}}\), \({{C}_{k}} = {{c}_{d}}\) is determined, taking into account (A.19), (A.21), (A.22), (A.31), and (A.32), from the measurement results \({\mathbf{Y}}_{k}^{{(1)}}\) and \({\mathbf{Y}}_{k}^{{(2)}}\) [15]:

$${\mathbf{X}}_{{jmd}}^{*}\left( k \right) = {{{\mathbf{\tilde {X}}}}_{{jmd}}}\left( k \right) + {\mathbf{K}}_{{jmd}}^{{(1)}}\left( k \right)({\mathbf{Y}}_{k}^{{(1)}} - {\mathbf{\tilde {Y}}}_{{jmd}}^{{(1)}}\left( k \right)) + {\mathbf{K}}_{{jmd}}^{{(2)}}\left( k \right)({\mathbf{Y}}_{k}^{{(2)}} - {\mathbf{\tilde {Y}}}_{{jmd}}^{{(2)}}\left( k \right)).$$
(A.33)

The covariance matrix \({\mathbf{R}}_{{jmd}}^{*}\left( k \right)\) of the errors in estimating the continuous component of the DNMP under the condition \({{A}_{k}} = {{a}_{j}}\), \({{B}_{k}} = {{b}_{m}}\), \({{C}_{k}} = {{c}_{d}}\) is determined taking into account (1.4), (A.20), (A.24), and (A.30) [15]:

$$\mathbf{R}_{jmd}^{*}\left(k\right) = \tilde{\mathbf{R}}_{jmd}\left(k\right) - \left[\begin{array}{c} \mathbf{H}_{1}\left(b_{m}\right)\tilde{\mathbf{R}}_{jmd} \\ \mathbf{H}_{2}\left(c_{d}\right)\tilde{\mathbf{R}}_{jmd} \end{array}\right]^{\text{T}}\left[\begin{array}{cc} \tilde{\mathbf{W}}_{jmd}^{(11)} & \tilde{\mathbf{W}}_{jmd}^{(12)} \\ \tilde{\mathbf{W}}_{jmd}^{(21)} & \tilde{\mathbf{W}}_{jmd}^{(22)} \end{array}\right]^{-1}\left[\begin{array}{c} \mathbf{H}_{1}\left(b_{m}\right)\tilde{\mathbf{R}}_{jmd} \\ \mathbf{H}_{2}\left(c_{d}\right)\tilde{\mathbf{R}}_{jmd} \end{array}\right].$$
(A.34)
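
The chain (A.29), (A.33), (A.34) for one hypothesis (j, m, d) is compactly expressible in matrix form. The following sketch assumes a 2D state observed by two scalar channels and, for brevity, builds \({{{\mathbf{\tilde {W}}}}_{{jmd}}}\) directly as \(\mathbf{H}\tilde{\mathbf{R}}\mathbf{H}^{\text{T}} + \operatorname{diag}(V_{1}^{2}, V_{2}^{2})\), i.e., it omits the mixture spread terms of (A.25)–(A.28); all matrices are illustrative assumptions.

```python
import numpy as np

R_tilde = np.array([[1.0, 0.3], [0.3, 0.5]])       # prediction covariance (A.20)
X_tilde = np.array([2.0, 0.5])                     # predicted state (A.19)
H1 = np.array([[1.0, 0.0]])                        # channel-1 observation row, assumed
H2 = np.array([[1.0, 0.5]])                        # channel-2 observation row, assumed
V1, V2 = 0.4, 0.7                                  # per-channel noise std, assumed

H = np.vstack([H1, H2])                            # stacked two-channel model
W = H @ R_tilde @ H.T + np.diag([V1**2, V2**2])    # simplified block matrix (A.24)

K = R_tilde @ H.T @ np.linalg.inv(W)               # (A.29): K = [K1 | K2]
K1, K2 = K[:, :1], K[:, 1:]

y1, y2 = np.array([2.3]), np.array([2.1])          # channel measurements
y1_pred, y2_pred = H1 @ X_tilde, H2 @ X_tilde      # predicted measurements

# (A.33): combined correction by both channels
X_post = X_tilde + (K1 @ (y1 - y1_pred)) + (K2 @ (y2 - y2_pred))
# (A.34): posterior covariance R* = R_tilde - R_tilde H^T W^{-1} H R_tilde
R_post = R_tilde - R_tilde @ H.T @ np.linalg.inv(W) @ H @ R_tilde
print("X*:", X_post)
print("R*:\n", R_post)
```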


Cite this article

Detkov, A.N. Optimum Complexing of Measurements when Maintaining a Maneuvering Object in Statistically Uncertain Situations. J. Comput. Syst. Sci. Int. 61, 902–917 (2022). https://doi.org/10.1134/S1064230722060077
