On Minimax Detection of Gaussian Stochastic Sequences with Imprecisely Known Means and Covariance Matrices

  • METHODS OF SIGNAL PROCESSING
  • Published in Problems of Information Transmission

Abstract

We consider the problem of detecting (testing) Gaussian stochastic sequences (signals) with imprecisely known means and covariance matrices. The alternative is a sequence of independent identically distributed zero-mean Gaussian random variables with unit variance. For a given false alarm (1st-kind error) probability, the quality of minimax detection is measured by the best exponent of the miss probability (2nd-kind error probability) as the observation horizon grows. We study the maximal set of means and covariance matrices (a composite hypothesis) such that its minimax testing can be replaced with testing a single pair consisting of a mean and a covariance matrix (a simple hypothesis) without degrading the detection exponent, and we describe this maximal set completely.


References

  1. Wald, A., Statistical Decision Functions, New York: Wiley, 1950. Translated under the title Statisticheskie reshayushchie funktsii, in Pozitsionnye igry (Positional Games), Moscow: Nauka, 1967, pp. 300–522.

  2. Lehmann, E.L., Testing Statistical Hypotheses, New York: Wiley, 1959. Translated under the title Proverka statisticheskikh gipotez, Moscow: Nauka, 1979.

  3. Poor, H.V., An Introduction to Signal Detection and Estimation, New York: Springer-Verlag, 1994, 2nd ed.

  4. Zhang, W. and Poor, H.V., On Minimax Robust Detection of Stationary Gaussian Signals in White Gaussian Noise, IEEE Trans. Inform. Theory, 2011, vol. 57, no. 6, pp. 3915–3924. https://doi.org/10.1109/TIT.2011.2136210

5. Burnashev, M.V., On Detection of Gaussian Stochastic Sequences, Probl. Peredachi Inf., 2017, vol. 53, no. 4, pp. 49–68 [Probl. Inf. Transm. (Engl. Transl.), 2017, vol. 53, no. 4, pp. 349–367]. https://doi.org/10.1134/S0032946017040044

  6. Bellman, R., Introduction to Matrix Analysis, New York: McGraw-Hill, 1960. Translated under the title Vvedenie v teoriyu matrits, Moscow: Nauka, 1976.

  7. Horn, R.A. and Johnson, C.R., Matrix Analysis, Cambridge: Cambridge Univ. Press, 1985. Translated under the title Matrichnyi analiz, Moscow: Mir, 1989.

  8. Burnashev, M.V., On Minimax Detection of Gaussian Stochastic Sequences and Gaussian Stationary Signals, Probl. Peredachi Inf., 2021, vol. 57, no. 3, pp. 55–72 [Probl. Inf. Transm. (Engl. Transl.), 2021, vol. 57, no. 3, pp. 248–264]. https://doi.org/10.1134/S0032946021030042

  9. Kullback, S., Information Theory and Statistics, New York: Wiley, 1959. Translated under the title Teoriya informatsii i statistika, Moscow: Nauka, 1967.

  10. Burnashev, M.V., On Stein’s Lemma in Hypotheses Testing in General Non-Asymptotic Case, Stat. Inference Stoch. Process., 2022, Online First article. https://doi.org/10.1007/s11203-022-09278-4

  11. Burnashev, M.V., On the Minimax Detection of an Inaccurately Known Signal in a White Gaussian Noise Background, Teor. Veroyatnost. i Primenen., 1979, vol. 24, no. 1, pp. 106–118 [Theory Probab. Appl. (Engl. Transl.), 1979, vol. 24, no. 1, pp. 107–119]. https://doi.org/10.1137/1124008

  12. Burnashev, M.V., Discrimination of Hypotheses for Gaussian Measures, and a Geometrical Characterization of Gaussian Distribution, Mat. Zametki, 1982, vol. 32, no. 4, pp. 549–556 [Math. Notes (Engl. Transl.), 1982, vol. 32, no. 4, pp. 757–761]. https://doi.org/10.1007/BF01152385

  13. Petrov, V.V., Summy nezavisimykh sluchainykh velichin, Moscow: Nauka, 1972. Translated under the title Sums of Independent Random Variables, Berlin: Springer, 1975.

Funding

Supported in part by the Russian Foundation for Basic Research, project no. 19-01-00364.

Additional information

Translated from Problemy Peredachi Informatsii, 2022, Vol. 58, No. 3, pp. 70–84. https://doi.org/10.31857/S0555292322030068

Appendix: Proof of Lemma 1

Let \(\boldsymbol{\xi}_n\) be a Gaussian random vector with the distribution \(\boldsymbol{\xi}_n \sim{\mathcal{N}}({\bf0},\boldsymbol{I}_n)\), and let \(\boldsymbol{A}_n\) be a symmetric \(n\times n\) matrix with eigenvalues \(\{a_i\}\). Consider the quadratic form \((\boldsymbol{\xi}_n,\boldsymbol{A}_n\boldsymbol{\xi}_n)\). There exists an orthogonal matrix \(\boldsymbol{T}_n\) such that \(\boldsymbol{T}_n'\boldsymbol{A}_n\boldsymbol{T}_n=\boldsymbol{B}_n\), where \(\boldsymbol{B}_n\) is the diagonal matrix with diagonal entries \(\{a_i\}\) [6, Section 4.7]. Since \(\boldsymbol{T}_n\boldsymbol{\xi}_n \sim{\mathcal{N}}({\bf0},\boldsymbol{I}_n)\), the quadratic forms \((\boldsymbol{\xi}_n,\boldsymbol{A}_n\boldsymbol{\xi}_n)\) and \((\boldsymbol{\xi}_n,\boldsymbol{B}_n\boldsymbol{\xi}_n)\) have the same distribution. Therefore, by equation (12) we have

$$\ln\frac{p^{}_{\boldsymbol{I}_n}}{p^{}_{\boldsymbol{a}_n,\boldsymbol{M}_n}} (\boldsymbol{y}_n)\stackrel{d}{=} \frac{1}{2}\bigl[\ln|\boldsymbol{M}_n|+(\boldsymbol{a}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n)+\eta_n\bigr],$$
(71)

where

$$\eta_n= (\boldsymbol{y}_n, (\boldsymbol{M}_n^{-1}-\boldsymbol{I}_n)\boldsymbol{y}_n) -2(\boldsymbol{y}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n).$$
(72)
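
As a sanity check, identity (71)–(72) follows directly from the two Gaussian log-densities and can be verified numerically. The Python sketch below is a minimal illustration only: the mean \(\boldsymbol{a}_n\) and the covariance matrix \(\boldsymbol{M}_n\) are arbitrary stand-ins, not parameters from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    # Stand-in parameters: a mean vector a_n and a positive definite M_n.
    a = rng.standard_normal(n)
    G = rng.standard_normal((n, n))
    M = G @ G.T + n * np.eye(n)
    Minv = np.linalg.inv(M)
    logdetM = np.linalg.slogdet(M)[1]

    y = rng.standard_normal(n)

    # Log-densities of N(0, I_n) and N(a_n, M_n) at the point y.
    log_p_I = -0.5 * (n * np.log(2 * np.pi) + y @ y)
    log_p_aM = -0.5 * (n * np.log(2 * np.pi) + logdetM
                       + (y - a) @ Minv @ (y - a))

    # eta_n from (72) and the right-hand side of (71).
    eta = y @ (Minv - np.eye(n)) @ y - 2 * (y @ Minv @ a)
    rhs = 0.5 * (logdetM + a @ Minv @ a + eta)

    print(np.isclose(log_p_I - log_p_aM, rhs))   # True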

Introduce the quantity (see (31))

$$\alpha_{\mu}=\operatorname{\mathbf P}\nolimits_{\boldsymbol{I}_n}\biggl\{\ln\frac{p^{}_{\boldsymbol{I}_n}}{p^{}_{\boldsymbol{a}_n,\boldsymbol{M}_n}}(\boldsymbol{x}_n)\le D(\boldsymbol{I}_n\,\|\,\boldsymbol{a}_n,\boldsymbol{M}_n)-\mu\biggr\}.$$
(73)

Then, by (71), (72), and (17), for \(\alpha_{\mu}\) from (73) we have

$$\begin{aligned}[b]\alpha_{\mu}&\le \operatorname{\mathbf P}\nolimits_{\boldsymbol{\xi}_n}\Biggl\{\Biggl| (\boldsymbol{\xi}_n, (\boldsymbol{M}_n^{-1}-\boldsymbol{I}_n)\boldsymbol{\xi}_n) - 2(\boldsymbol{\xi}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n)-\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)\Biggr|>2\mu\Biggr\}\\ &=\operatorname{\mathbf P}\nolimits_{\boldsymbol{\xi}_n}\Biggl\{\Biggl|\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)(\xi_i^2-1)-2(\boldsymbol{\xi}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n)\Biggr|>2\mu\Biggr\}\le P_1+P_2,\end{aligned}$$
(74)

where

$$\begin{aligned}P_1&=\operatorname{\mathbf P}\nolimits_{\boldsymbol{\xi}_n}\Biggl\{\Biggl|\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)(\xi_i^2-1)\Biggr|>\mu\Biggr\},\\ P_2&=\operatorname{\mathbf P}\nolimits_{\boldsymbol{\xi}_n}\bigl\{\bigl|(\boldsymbol{\xi}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n)\bigr|>\mu/2\bigr\}.\end{aligned}$$
(75)

To estimate \(P_1\) in (75), we use the following result [13, Section III.5.15]: let \(\zeta_1,\ldots,\zeta_n\) be independent random variables with \(\operatorname{\mathbf E}\nolimits\zeta_i=0\), \(i=1,\ldots,n\). Then for any \(1\le p\le 2\) we have

$$\operatorname{\mathbf E}\nolimits\Biggl|\sum_{i=1}^n\zeta_i\Biggr|^p\le 2\sum_{i=1}^n\operatorname{\mathbf E}\nolimits|\zeta_i|^p.$$
(76)
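
Inequality (76) is easy to probe by simulation. As an assumed test case (not taken from [13]), the sketch below uses \(\zeta_i=\xi_i^2-1\) with \(\xi_i\sim\mathcal{N}(0,1)\), the centered variables that appear in \(P_1\), and \(p=3/2\).

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, trials = 50, 1.5, 200_000

    # zeta_i = xi_i^2 - 1: independent, zero-mean, as in the estimate of P_1.
    zeta = rng.standard_normal((trials, n)) ** 2 - 1

    lhs = np.mean(np.abs(zeta.sum(axis=1)) ** p)   # E |sum_i zeta_i|^p
    rhs = 2 * n * np.mean(np.abs(zeta) ** p)       # 2 sum_i E |zeta_i|^p
    print(lhs <= rhs, lhs, rhs)                    # bound (76) holds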

Therefore, using Chebyshev's inequality and (76) for \(P_1\), we obtain

$$\begin{aligned}[b]P_1&\le\mu^{-p}\operatorname{\mathbf E}\nolimits\Biggl|\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)(\xi_i^2-1)\Biggr|^p\\ &\le2\mu^{-p}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p\operatorname{\mathbf E}\nolimits |\xi_i^2-1|^p\\ &\le2\mu^{-p}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p\bigl(\operatorname{\mathbf E}\nolimits|\xi^2-1|^2\bigr)^{p/2}\\ &\le 2\mu^{-p}6^{p/2}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p\\ &\le12\mu^{-p}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p.\end{aligned}$$
(77)
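
Bound (77) can likewise be checked empirically. In the following sketch the eigenvalues \(\lambda_i\) of \(\boldsymbol{M}_n\) are hypothetical stand-ins drawn from \([1/2,2]\), and \(\mu\) and \(p\) are chosen arbitrarily; the simulated \(P_1\) should not exceed \(12\mu^{-p}\sum_{i=1}^n|1/\lambda_i-1|^p\).

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, mu, trials = 20, 1.5, 30.0, 500_000

    # Hypothetical eigenvalues of M_n, bounded away from zero.
    lam = rng.uniform(0.5, 2.0, n)
    w = 1.0 / lam - 1.0

    # P_1 = P{ |sum_i w_i (xi_i^2 - 1)| > mu }, estimated by simulation.
    xi2 = rng.standard_normal((trials, n)) ** 2
    P1 = (np.abs((xi2 - 1.0) @ w) > mu).mean()

    bound = 12 * mu ** (-p) * np.sum(np.abs(w) ** p)   # right-hand side of (77)
    print(P1 <= bound, P1, bound)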

To estimate \(P_2\) in (74) and (75), note that

$$(\boldsymbol{\xi}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n)\sim\mathcal{N}\bigl(0,\|\boldsymbol{M}_n^{-1}\boldsymbol{a}_n\|^2\bigr),$$

and so

$$(\boldsymbol{\xi}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n) \stackrel{d}{=}\|\boldsymbol{M}_n^{-1}\boldsymbol{a}_n\|\xi,$$

where \(\xi \sim \mathcal{N}(0,1)\).

Therefore, using the standard bound

$$\operatorname{\mathbf P}\nolimits(|\xi|\ge z)\le e^{-z^2/2},\quad z\ge 0,$$

we obtain

$$P_2=\operatorname{\mathbf P}\nolimits_{\boldsymbol{\xi}_n}\bigl\{|(\boldsymbol{\xi}_n,\boldsymbol{M}_n^{-1}\boldsymbol{a}_n)|>\mu/2\bigr\}\le e^{-\mu^2/(8\|\boldsymbol{M}_n^{-1}\boldsymbol{a}_n\|^2)}.$$
(78)
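
The standard tail bound used in (78) is also simple to confirm numerically; this sketch merely checks \(\operatorname{\mathbf P}(|\xi|\ge z)\le e^{-z^2/2}\) over a grid of thresholds.

    import numpy as np

    rng = np.random.default_rng(3)
    xi = rng.standard_normal(1_000_000)

    # Compare the empirical two-sided tail with exp(-z^2 / 2) on a grid.
    z = np.linspace(0.0, 4.0, 9)
    emp = np.array([(np.abs(xi) >= t).mean() for t in z])
    print(np.all(emp <= np.exp(-z ** 2 / 2)))   # True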

For the condition \(\alpha_{\mu}\le\alpha\) to be satisfied, we choose \(\mu\) so that \(\max\{P_1,P_2\}\le\alpha/2\). For that, by (77) and (78), it suffices to take \(\mu\) satisfying (32).

Cite this article

Burnashev, M. On Minimax Detection of Gaussian Stochastic Sequences with Imprecisely Known Means and Covariance Matrices. Probl Inf Transm 58, 265–278 (2022). https://doi.org/10.1134/S0032946022030061
