
k-Sparse Vector Recovery via \(\ell _1-\alpha \ell _2\) Local Minimization


Abstract

This paper studies the \(\ell _1-\alpha \ell _2\) local minimization model for \(\alpha \in (0,2]\); to the best of our knowledge, this is the first work to consider the case \(\alpha >1\). We obtain necessary and sufficient conditions for a fixed sparse signal to be recovered by this model. Building on this condition, we also obtain necessary and sufficient conditions for every k-sparse signal to be recovered by the \(\ell _1-\alpha \ell _2\) local minimization model with \(0<\alpha <1\), \(\alpha =1\) and \(1<\alpha \le 2\). The experimental data show that the success rate of signal recovery is positively correlated with the size of \(\alpha \).
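As a quick illustration of the model (a minimal sketch of our own, not the authors' code; the sizes m, n, k and the value of \(\alpha \) below are arbitrary), one can evaluate the \(\ell _1-\alpha \ell _2\) objective along a null-space direction of A, which is exactly the kind of perturbation the local minimization model concerns:

```python
import numpy as np

rng = np.random.default_rng(0)

def obj(x, alpha):
    # The l1 - alpha*l2 objective studied in the paper.
    return np.linalg.norm(x, 1) - alpha * np.linalg.norm(x, 2)

# Hypothetical problem sizes; not the paper's experimental setup.
m, n, k, alpha = 20, 40, 3, 1.5
A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Rows m..n-1 of Vt span ker A (A is generically full rank, so dim ker A = n - m).
_, _, Vt = np.linalg.svd(A)
h = Vt[m:].T @ rng.standard_normal(n - m)
h /= np.linalg.norm(h)

# Local minimality of x means obj(x + t*h) > obj(x) for every such h
# and all small t > 0 (cf. Theorem 1); we print the difference for one h.
t = 1e-6
print(obj(x + t * h, alpha) - obj(x, alpha))
```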



Acknowledgements

Jia Li’s work is partially supported by the National Key R&D Program of China (No. 2021YFA1001300), the Guangdong-Hong Kong-Macau Applied Math Center under Grant No. 2020B1515310011, and NSFC general Grant No. 12171496. Kaihao Liang’s work is partially supported by the NSF of Guangdong under Grant No. 2018A0303130136 and the Science and Technology Planning Project of Guangdong under Grant Nos. 2015A070704059 and 2015A030402008.

The authors have no relevant financial or non-financial interests to disclose. All the authors contributed to the conception and proofs in this study. The first draft of the manuscript was written by Shaohua Xie, and all the authors commented on previous versions of the manuscript. All the authors have read and approved the final manuscript. All data generated or analyzed during this study are included in this published article.

Author information


Corresponding author

Correspondence to Jia Li.

Additional information

Communicated by Lam M. Nguyen.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

1.1 Proof of Theorem 1

Proof

Necessity. From (7) and Remark 1,

$$\begin{aligned} f(x,th)=t\left( \sum \limits _{i\in S} \textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle -\alpha g(t)\right) >0. \end{aligned}$$

Hence, from (8)

$$\begin{aligned} \sum \limits _{i\in S} \textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle >\alpha g(t)\ge 0. \end{aligned}$$

Sufficiency. Suppose (9) holds for all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\). By compactness, the minimum below is attained at some \(h^*\in \textrm{ker} A\) with \(\Vert h^*\Vert _2=1\):

$$\begin{aligned}&\min \limits _{h\in \textrm{ker} A,\,\Vert h\Vert _2=1}\left\{ \sum \limits _{i\in S} \textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle \right\} \\&\quad =\sum \limits _{i\in S} \textrm{sign}(x_i)h^*_i+\Vert h^*_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h^*\right\rangle =:\rho >0. \end{aligned}$$

On the other hand,

$$\begin{aligned} 0\le g(t)&=\frac{t+2\langle x,h\rangle }{\Vert x+th\Vert _2+\Vert x\Vert _2}-\left\langle \frac{x}{\Vert x\Vert _2}, h\right\rangle \\&=\frac{(t+2\langle x,h\rangle )\Vert x\Vert _2-\langle x,h\rangle (\Vert x+th\Vert _2+\Vert x\Vert _2)}{(\Vert x+th\Vert _2+\Vert x\Vert _2)\Vert x\Vert _2}\\&=\frac{t\Vert x\Vert _2+\langle x, h\rangle \Vert x\Vert _2-\langle x,h\rangle \Vert x+th\Vert _2}{\Vert x\Vert _2(\Vert x+th\Vert _2+\Vert x\Vert _2)}\\&\le \frac{t\Vert x\Vert _2+|\langle x, h\rangle ||\Vert x\Vert _2-\Vert x+th\Vert _2|}{\Vert x\Vert _2(\Vert x+th\Vert _2+\Vert x\Vert _2)}\\&\le \frac{t\Vert x\Vert _2+|\langle x, h\rangle |\Vert th\Vert _2}{\Vert x\Vert _2(\Vert x+th\Vert _2+\Vert x\Vert _2)} \le \frac{2t\Vert x\Vert _2}{\Vert x\Vert _2(\Vert x+th\Vert _2+\Vert x\Vert _2)}\\&\le \frac{2t\Vert x\Vert _2}{\Vert x\Vert _2^2}=\frac{2t}{\Vert x\Vert _2}. \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} f(x,th)&=t\left( \sum \limits _{i\in S} \textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle -\alpha g(t)\right) \\&\ge t(\rho -\alpha g(t))\ge t\left( \rho - \alpha \frac{2t}{\Vert x\Vert _2}\right) >0, \end{aligned} \end{aligned}$$

when \(t<\textrm{min}\left\{ \frac{\rho \Vert x\Vert _2}{2\alpha },\textrm{min}_{i\in S}|x_i|\right\} \). \(\square \)
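As a numerical companion to this proof (our own hedged sketch, not part of the paper), the left-hand side of condition (9) can be evaluated directly; Theorem 1 characterizes recoverability of a fixed x by positivity of this quantity over all unit-norm \(h\in \textrm{ker} A\):

```python
import numpy as np

def lhs_condition_9(x, h, alpha):
    """sum_{i in S} sign(x_i) h_i + ||h_{S^c}||_1 - alpha * <x/||x||_2, h>
    for S = supp(x) (cf. condition (9))."""
    S = x != 0
    return (np.sign(x[S]) @ h[S]
            + np.linalg.norm(h[~S], 1)
            - alpha * (x @ h) / np.linalg.norm(x))
```

In practice one can only sample such h, so a positive minimum over sampled null-space directions is numerical evidence for the condition, not a proof.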

1.2 Proof of Lemma 1

Proof

\((i)\Rightarrow (ii)\). For all \(h \in \textrm{ker} A\) with \(\Vert h\Vert _2=1\) and all \(x\ne 0\) with \(\textrm{supp}(x)=S\), \(|S|\le k\), set

$$\begin{aligned} x^*_i= {\left\{ \begin{array}{ll} -\textrm{sign}(h_i)|x_i|&{} i\in S_1(x) \text { and }h_i\ne 0\\ \textrm{sign}(h_i)|x_i|&{} i\in S_2(x) \text { and }h_i\ne 0\\ x_i &{} i\in S \text { and }h_i=0\\ 0 &{} i\notin S \end{array}\right. }, \end{aligned}$$

then \(\textrm{supp}(x^*)=S\) and

$$\begin{aligned} \begin{aligned}&\Vert h_{\overline{S}}\Vert _1-\sum \limits _{i\in S_1(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|-\sum \limits _{i\in S_2(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|\\&=\Vert h_{\overline{S}}\Vert _1+\sum \limits _{i\in S}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) \textrm{sign}(x^*_i)h_i\\&=\sum \limits _{i\in S}\textrm{sign}(x^*_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x^*}{\Vert x^*\Vert _2},\alpha h\right\rangle \\&>0. \end{aligned} \end{aligned}$$

\((ii)\Rightarrow (i)\). Note that for all \(h \in \textrm{ker} A\) with \(\Vert h\Vert _2=1\) and all \(x\ne 0\) with \(\textrm{supp}(x)=S\), \(|S|\le k\),

$$\begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle \\&=\Vert h_{\overline{S}}\Vert _1+\sum \limits _{i\in S}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) \textrm{sign}(x_i)h_i\\&\ge \Vert h_{\overline{S}}\Vert _1-\sum \limits _{i\in S_1(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|-\sum \limits _{i\in S_2(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|\\&>0. \end{aligned}$$

\(\square \)
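The sign-flipping construction of \(x^*\) above can be written out explicitly. In the sketch below (ours), the split of S into \(S_1(x)\) and \(S_2(x)\) is assumed to be \(\alpha |x_i|/\Vert x\Vert _2\le 1\) versus \(>1\), as the coefficients in the proof suggest; the actual definition (10) is in the main text.

```python
import numpy as np

def worst_sign_flip(x, h, alpha):
    """Build the x* from the proof of Lemma 1: keep |x| and supp(x),
    but choose signs so each support term of f_{S,h} is smallest.
    Assumed split: i in S1(x) iff alpha*|x_i|/||x||_2 <= 1, else S2(x)."""
    xstar = np.zeros_like(x, dtype=float)
    r = alpha * np.abs(x) / np.linalg.norm(x)
    for i in np.flatnonzero(x):
        if h[i] == 0:
            xstar[i] = x[i]                        # sign irrelevant here
        elif r[i] <= 1:                            # i in S1(x)
            xstar[i] = -np.sign(h[i]) * abs(x[i])
        else:                                      # i in S2(x)
            xstar[i] = np.sign(h[i]) * abs(x[i])
    return xstar
```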

1.3 Proof of Lemma 2

Proof

We note that, for every \(x\ne 0\) satisfying \(\textrm{supp}(x)=S\), \(S_1(x)\ne \emptyset \) and \(S_2(x)\ne \emptyset \), and for any \(i\in S_1(x)\), we have

$$\begin{aligned} \frac{|x^*(x)_i|}{\Vert x^*(x)\Vert _2}&=\frac{t|x_i|}{\sqrt{\sum \limits _{j\in S_1(x)\cup S_2(x)}|x^*(x)_j|^2}}=\frac{t|x_i|}{\sqrt{\sum \limits _{j\in S_1(x)}t^2x_j^2+\sum \limits _{j\in S_2(x)}|x_j|^2}}\\&=\frac{|x_i|}{\sqrt{\sum \limits _{j\in S_1(x)}x_j^2+\sum \limits _{j\in S_2(x)}\frac{|x_j|^2}{t^2}}}<\frac{|x_i|}{\Vert x\Vert _2}, \end{aligned}$$

which means that when all the elements of x in \(S_1(x)\) are reduced in absolute value by the same proportion, \(\frac{|x_i|}{\Vert x\Vert _2}\) also becomes smaller for every \(i\in S_1(x)\). In addition, for any \(j\in S_2(x)\) and \(i\in S_1(x)\), it is easy to see that \(\frac{|x_j|}{\Vert x\Vert _2}\) increases as \(|x_i|\) decreases. Hence, \(S_1(x^*(x))=S_1(x)\), \(S_2(x^*(x))=S_2(x)\) and

$$\begin{aligned}&\sum \limits _{i\in S_1(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|\le \sum \limits _{i\in S_1(x^*(x))}\left( 1-\frac{\alpha |x^*(x)_i|}{\Vert x^*(x)\Vert _2}\right) |h_i|,\\&\sum \limits _{i\in S_2(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|\le \sum \limits _{i\in S_2(x^*(x))}\left( \frac{\alpha |x^*(x)_i|}{\Vert x^*(x)\Vert _2}-1\right) |h_i|. \end{aligned}$$

It follows that \(f_{S,h}(x)\ge f_{S,h}(x^*(x))\). Similarly,

$$\begin{aligned}&\sum \limits _{i\in S_1(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|\le \sum \limits _{i\in S_1(x^{**}(x))}\left( 1-\frac{\alpha |x^{**}(x)_i|}{\Vert x^{**}(x)\Vert _2}\right) |h_i|,\\&\sum \limits _{i\in S_2(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|\le \sum \limits _{i\in S_2(x^{**}(x))}\left( \frac{\alpha |x^{**}(x)_i|}{\Vert x^{**}(x)\Vert _2}-1\right) |h_i|. \end{aligned}$$

This yields \(f_{S,h}(x)\ge f_{S,h}(x^{**}(x))\). \(\square \)
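A small numerical illustration (our own, with hypothetical data) of the monotonicity used in this proof: shrinking the \(S_1\)-entries of x by a common factor lowers their ratios \(|x_i|/\Vert x\Vert _2\) and raises the ratios of the \(S_2\)-entries.

```python
import numpy as np

x = np.array([0.2, 0.3, 2.0])           # say S1 = {0, 1}, S2 = {2}
xs = x.copy(); xs[:2] *= 0.5            # shrink the S1-entries by t = 0.5

print(np.abs(x) / np.linalg.norm(x))    # ratios |x_i| / ||x||_2
print(np.abs(xs) / np.linalg.norm(xs))  # S1-ratios drop, the S2-ratio grows
```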

1.4 Proof of Lemma 3

Proof

Set \(j_0=\textrm{argmin}_{i\in S}|x_i|\). We gradually reduce \(|x_{j_0}|\), but not to 0, and keep the other components of x unchanged. Denote the changed vector by \(x^*\). It is easy to see that \(0<\frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\le \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\) for \(i\in S\backslash \{j_0\}\). Additionally, because \(1<\alpha \le 2\),

$$\begin{aligned} 0<\frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }=\min \limits _{i\in S}\frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\le \frac{1}{\sqrt{|S|}}-\frac{1}{\alpha }\le \frac{1}{\sqrt{2}}-\frac{1}{\alpha }< \frac{1}{2}. \end{aligned}$$

On the other hand, when \(|x^*_{j_0}|\rightarrow 0\), we have \(\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow 0\) and hence \(0\le \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow \frac{1}{\alpha }\). Since \(\frac{1}{\alpha }\in [\frac{1}{2},1)\), we have \( \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow \frac{1}{\alpha }\ge \frac{1}{2}>\frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\) as \(|x^*_{j_0}|\rightarrow 0\). Combining the above, we obtain

$$\begin{aligned}&g_{S,h}(x)-g_{S,h}(x^*)\\&=\sum \limits _{i\in S}\left( \frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_i|-\sum \limits _{i\in S_1(x^*)}\left( \frac{1}{\alpha }-\frac{|x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|-\sum \limits _{i\in S_2(x^*)}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_i|\\&=\sum \limits _{i\in S}\left( \frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_i|-\left( \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\right) |h_{j_0}|-\sum \limits _{i\in S\backslash \{j_0\}}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_i|\\&=\sum \limits _{i\in S\backslash \{j_0\}}\left( \frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_i|+\left( \frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_{j_0}|-\left( \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\right) |h_{j_0}|\\&\quad -\sum \limits _{i\in S\backslash \{j_0\}}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_i|\le 0. \end{aligned}$$
(16)

Therefore, the conclusion holds. \(\square \)

1.5 Proof of Lemma 4

Proof

Set \(j_0=\textrm{argmin}_{i\in S_2(x)}|x_i|\), \(j_1=\textrm{argmax}_{i\in S_2(x)}|x_i|\). Since \(j_0\in S_2(x)\), \(\frac{1}{\alpha }\in [\frac{1}{2},1)\), we have

$$\begin{aligned} 0<\frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }<\frac{|x_{j_0}|}{\Vert x_{S_2(x)}\Vert _2}-\frac{1}{\alpha }\le \frac{1}{\sqrt{|S_2(x)|}}-\frac{1}{\alpha }\le \frac{1}{\sqrt{2}}-\frac{1}{\alpha }<\frac{1}{2}\le \frac{1}{\alpha }. \end{aligned}$$

Now we reduce \(|x_{j_0}|\), but not to 0, increase \(|x_{j_1}|\), and keep the other components of x and \(\Vert x\Vert _2\) unchanged. Denote the changed vector by \(x^*\). Obviously, \(\frac{|x_{j_1}|}{\Vert x\Vert _2}<\frac{|x^*_{j_1}|}{\Vert x^*\Vert _2}\). In addition, when \(|x^*_{j_0}|\rightarrow 0\), we have \(\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow 0\), and furthermore \(0<\frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow \frac{1}{\alpha }\). So when \(|x^*_{j_0}|\) is small enough, we have \(\frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}>\frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\). Combining the above, we obtain

$$\begin{aligned} \begin{aligned}&g_{S,h}(x)-g_{S,h}(x^*)\\&=\sum \limits _{i\in S_1(x)}\left( \frac{1}{\alpha }-\frac{|x_i|}{\Vert x\Vert _2}\right) |h_i|+\sum \limits _{i\in S_2(x)}\left( \frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_i|\\&\quad -\sum \limits _{i\in S_1(x^*)}\left( \frac{1}{\alpha }-\frac{|x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|-\sum \limits _{i\in S_2(x^*)}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_i|\\&=\left( \frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_{j_0}|+\left( \frac{|x_{j_1}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\right) |h_{j_1}|\\&\quad -\left( \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\right) |h_{j_0}|-\left( \frac{|x^*_{j_1}|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_{j_1}| \le 0. \end{aligned} \end{aligned}$$
(17)

Now we have found an \(x^*\) satisfying \(\textrm{supp}(x^*)=\textrm{supp}(x)\), \(g_{S,h}(x^*)\ge g_{S,h}(x)\) and \(|S_2(x^*)|=|S_2(x)|-1\). If \(|S_2(x^*)|\ge 2\), we repeat the above operation to find an \(x^{**}\) satisfying \(\textrm{supp}(x^{**})=\textrm{supp}(x^*)=\textrm{supp}(x)\), \(g_{S,h}(x^{**})\ge g_{S,h}(x^*)\ge g_{S,h}(x)\) and \(|S_2(x^{**})|=|S_2(x^*)|-1=|S_2(x)|-2\). Continuing in this way, we finally find an \(\overline{x}\) satisfying \(\textrm{supp}(\overline{x})=\textrm{supp}(x)\), \(g_{S,h}(\overline{x})\ge g_{S,h}(x)\) and \(|S_2(\overline{x})|=1\). \(\square \)

1.6 Proof of Lemma 5

Proof

We prove it in two cases.

(i) \(0<\alpha \le 1\). First, \(S_1\subset S_2\) implies \(\Vert h_{\overline{S_1}}\Vert _1\ge \Vert h_{\overline{S_2}}\Vert _1\). Second, take

$$\begin{aligned} x^*_i= {\left\{ \begin{array}{ll} x_i &{} i\in S_1\\ 1&{} i\in S_2\backslash S_1\\ 0 &{} i\notin S_2 \end{array}\right. }, \end{aligned}$$

then \(\textrm{supp}(x^*)=S_2\). In addition, since \(0<\alpha \le 1\), we have \(S_{12}(x)=\emptyset \), \(S_{11}(x)=S_1\), \(S_{22}(x^*)=\emptyset \), \(S_{21}(x^*)=S_2\), where \(S_{11}(x)\), \(S_{12}(x)\), \(S_{21}(x^*)\), \(S_{22}(x^*)\) are defined in (10). So

$$\begin{aligned} \begin{aligned}&\sum \limits _{i\in S_{11}(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|+\sum \limits _{i\in S_{12}(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|=\sum \limits _{i\in S_{1}}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|\\&\le \sum \limits _{i \in S_{1}}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|\le \sum \limits _{i \in S_{1}}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|+\sum \limits _{i\in S_2\backslash S_1}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|\\&=\sum \limits _{i \in S_{2}}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|=\sum \limits _{i\in S_{21}(x^*)}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|+\sum \limits _{i\in S_{22}(x^*)}\left( \frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}-1\right) |h_i|. \end{aligned} \end{aligned}$$

Combining the above, it follows that \(f_{S_2,h}(x^*)\le f_{S_1,h}(x)\).

(ii) \(1<\alpha \le 2\). When \(S_{12}(x)=\emptyset \), the argument of case (i) applies and the result holds. When \(S_{12}(x)\ne \emptyset \), since \(|S_1|>1\), Lemma 4 shows that there exists a vector \(\overline{x}\) such that \(\textrm{supp}(\overline{x})=S_1\), \(S_{11}(\overline{x})\ne \emptyset \), \(|S_{12}(\overline{x})|=1\) and \(f_{S_1,h}(\overline{x})\le f_{S_1,h}(x)\). Taking \(l\in S_2\backslash S_1\) and letting \(j_0\in S_{12}(\overline{x})\), we define the vector \(\tilde{x}\) as follows:

$$\begin{aligned} \tilde{x}_i= {\left\{ \begin{array}{ll} \overline{x}_i &{} i\in S_{11}(\overline{x})\\ 1 &{} i=l\\ \lambda \overline{x}_i &{} i=j_0\\ 0 &{} \text {otherwise} \end{array}\right. }, \end{aligned}$$

where \(\lambda \ge 1\) is a constant such that \(\frac{\tilde{x}_{j_0}}{\Vert \tilde{x}\Vert _2}=\frac{\overline{x}_{j_0}}{\Vert \overline{x}\Vert _2}\). This implies that \(\textrm{supp}(\tilde{x})=S_1\cup \{l\}\). It follows that

$$\begin{aligned}&\sum \limits _{i\in S_{11}(\overline{x})}\left( 1-\frac{\alpha |\overline{x}_i|}{\Vert \overline{x}\Vert _2}\right) |h_i|+\sum \limits _{i\in S_{12}(\overline{x})}\left( \frac{\alpha |\overline{x}_i|}{\Vert \overline{x}\Vert _2}-1\right) |h_i|\\&\quad =\sum \limits _{i\in S_{1}\backslash \{j_0\}}\left( 1-\frac{\alpha |\overline{x}_i|}{\Vert \overline{x}\Vert _2}\right) |h_i|+\left( \frac{\alpha |\overline{x}_{j_0}|}{\Vert \overline{x}\Vert _2}-1\right) |h_{j_0}|\\&\quad \le \sum \limits _{i\in S_{1}\backslash \{j_0\}}\left( 1-\frac{\alpha |\tilde{x}_i|}{\Vert \tilde{x}\Vert _2}\right) |h_i|+\left( \frac{\alpha |\tilde{x}_{j_0}|}{\Vert \tilde{x}\Vert _2}-1\right) |h_{j_0}|\\&\quad \le \sum \limits _{i\in S_{1}\backslash \{j_0\}}\left( 1-\frac{\alpha |\tilde{x}_i|}{\Vert \tilde{x}\Vert _2}\right) |h_i|+\left( \frac{\alpha |\tilde{x}_{j_0}|}{\Vert \tilde{x}\Vert _2}-1\right) |h_{j_0}|+\left| \frac{\alpha |\tilde{x}_{l}|}{\Vert \tilde{x}\Vert _2}-1\right| |h_l|\\&\quad =\sum \limits _{i\in S_{11}(\tilde{x})}\left( 1-\frac{\alpha |\tilde{x}_i|}{\Vert \tilde{x}\Vert _2}\right) |h_i|+\sum \limits _{i\in S_{12}(\tilde{x})}\left( \frac{\alpha |\tilde{x}_i|}{\Vert \tilde{x}\Vert _2}-1\right) |h_i|, \end{aligned}$$

which means \(g_{S_1\cup \{l\},h}(\tilde{x})\ge g_{S_1,h}(\overline{x})\), i.e., \(f_{S_1\cup \{l\},h}(\tilde{x})\le f_{S_1,h}(\overline{x})\). If \(S_1\cup \{l\}=S_2\), we stop: \(x^*=\tilde{x}\) satisfies \(\textrm{supp}(x^*)=S_2\) and \(f_{S_2,h}(x^*)\le f_{S_1,h}(x)\). Otherwise, we repeat the previous operation and eventually find a vector \(x^*\) satisfying \(\textrm{supp}(x^*)=S_2\) and \(f_{S_2,h}(x^*)\le f_{S_1,h}(x)\). \(\square \)

1.7 Proof of Lemma 6

Proof

First, the choice of T implies \(\Vert h_{\overline{T}}\Vert _1\le \Vert h_{\overline{S}}\Vert _1\). Second, for every x satisfying \(\textrm{supp}(x)=S\), take \(x^*\) satisfying \(\textrm{supp}(x^*)=T\) and \(x^*_{\sigma _i(h_T)}=x_{\sigma _i(h_S)}\), where \(\sigma _i(h)\) denotes the index of the \(i\)-th largest component of |h|. By this choice of \(x^*\), we have

$$\begin{aligned} \begin{aligned}&\sum \limits _{i\in S_1(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|+\sum \limits _{i\in S_2(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|\\&\quad \le \sum \limits _{i \in T_1(x^*)}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|+\sum \limits _{i\in T_2(x^*)}\left( \frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}-1\right) |h_i|, \end{aligned} \end{aligned}$$

where \(S_1(x)\), \(S_2(x)\), \(T_1(x^*)\), \(T_2(x^*)\) are defined in (10). Combining the above, it follows that \(f_{T,h}(x^*)\le f_{S,h}(x)\). \(\square \)

1.8 Proof of Lemma 7

Proof

\((i)\Leftrightarrow (ii)\) is Lemma 1.

\((ii)\Rightarrow (iii)\). Taking \(S=T\) in (ii) immediately yields (iii).

\((iii)\Rightarrow (ii)\). We discuss this in terms of two cases.

\((*)\). \(|S|>1\). Lemmas 5 and 6 show that for an arbitrary index set S satisfying \(1<|S|\le k\) and an arbitrary k-sparse vector x with \(\textrm{supp}(x)=S\), there exists an \(x^*\) satisfying \(\textrm{supp}(x^*)=T\) such that \(f_{S,h}(x)\ge f_{T,h}(x^*)\). Hence, if (iii) holds, then (ii) holds too.

\((**)\) \(|S|=1\). If \(0<\alpha \le 1\), then by Remark 6, Lemma 5 holds; hence, arguing as in case \((*)\), the result holds. Now assume that \(1<\alpha \le 2\). In this case, \(S_{2}(x)=S:=\{l\}\), and

$$\begin{aligned} f_{S,h}(x)&=\Vert h_{\overline{S}}\Vert _1-\left( \frac{\alpha |x_l|}{\Vert x\Vert _2}-1\right) |h_l|=\Vert h_{\overline{S}}\Vert _1-(\alpha -1)|h_l|=\Vert h\Vert _1-\alpha |h_l|\\&\ge \Vert h\Vert _1-\alpha |h_{j_0}|, \end{aligned}$$

where \(j_0=\textrm{argmax}_i\{|h_i|\}\). Setting \(j_1=\textrm{argmax}_i\{|h_i|,i\ne j_0\}\), we may assume that \(|h_{j_1}|\ne 0\); this is no restriction, because \(|h_{j_1}|=0\) would force h to have a single nonzero component, which happens if and only if A contains a zero column, and such A is not allowed. We define the vector \(x^*\) as follows:

$$\begin{aligned} x^*_i= {\left\{ \begin{array}{ll} x_l &{} i=j_0\\ \lambda x_l&{} i=j_1\\ 0 &{} \text {otherwise} \end{array}\right. }, \end{aligned}$$
(18)

where \(\lambda \) is a sufficiently small positive number such that \(\frac{\lambda \alpha }{\sqrt{1+\lambda ^2}}\le 1\le \frac{\alpha }{\sqrt{1+\lambda ^2}}\). Set \(\overline{S}=\{j_0,j_1\}\); then \(\textrm{supp}(x^*)=\overline{S}\), \(|\overline{S}|>1\) and

$$\begin{aligned} \begin{aligned} f_{\overline{S},h}(x^*)&=\Vert h_{\overline{(\overline{S})}}\Vert _1-\left( 1-\frac{\alpha |x^*_{j_1}|}{\Vert x^*\Vert _2}\right) |h_{j_1}|-\left( \frac{\alpha |x^*_{j_0}|}{\Vert x^*\Vert _2}-1\right) |h_{j_0}|\\&=\Vert h\Vert _1-\left( 2-\frac{\alpha |x^*_{j_1}|}{\Vert x^*\Vert _2}\right) |h_{j_1}|-\frac{\alpha |x^*_{j_0}|}{\Vert x^*\Vert _2}|h_{j_0}|\\&=\Vert h\Vert _1-\alpha |h_{j_0}|+\alpha |h_{j_0}|-\left( 2-\frac{\alpha |x^*_{j_1}|}{\Vert x^*\Vert _2}\right) |h_{j_1}|-\frac{\alpha |x^*_{j_0}|}{\Vert x^*\Vert _2}|h_{j_0}|\\&=\Vert h\Vert _1-\alpha |h_{j_0}|-\left( 2-\frac{\alpha |x^*_{j_1}|}{\Vert x^*\Vert _2}\right) |h_{j_1}|-\left( \frac{\alpha |x^*_{j_0}|}{\Vert x^*\Vert _2}-\alpha \right) |h_{j_0}|\\&=\Vert h\Vert _1-\alpha |h_{j_0}|-\left( 2-\frac{\alpha \lambda }{\sqrt{1+\lambda ^2}}\right) |h_{j_1}|-\left( \frac{\alpha }{\sqrt{1+\lambda ^2}}-\alpha \right) |h_{j_0}|. \end{aligned} \end{aligned}$$

Since

$$\begin{aligned} \lim \limits _{\lambda \longrightarrow 0^+}\left( 2-\frac{\alpha \lambda }{\sqrt{1+\lambda ^2}}\right) |h_{j_1}|+\left( \frac{\alpha }{\sqrt{1+\lambda ^2}}-\alpha \right) |h_{j_0}|=2|h_{j_1}|>0, \end{aligned}$$

there exists \(r>0\) such that for \(0<\lambda <r\) we have \(\left( 2-\frac{\alpha \lambda }{\sqrt{1+\lambda ^2}}\right) |h_{j_1}|+\left( \frac{\alpha }{\sqrt{1+\lambda ^2}}-\alpha \right) |h_{j_0}|>0\). Taking \(\lambda \in (0,r)\) in (11) yields

$$\begin{aligned} f_{S,h}(x)\ge \Vert h\Vert _1-\alpha |h_{j_0}|\ge f_{\overline{S},h}(x^*). \end{aligned}$$

The rest of the proof is similar to case \((*)\), since \(|\overline{S}|>1\). \(\square \)

1.9 Proof of Lemma 8

Proof

Since \(|\langle x,y\rangle |\le \Vert x\Vert _2\Vert y\Vert _2=\Vert y\Vert _2\), the infimum of \(\langle x,y\rangle \) exists. In addition, \(\langle x,y\rangle =\Vert x\Vert _2\Vert y\Vert _2\cos \theta =\Vert y\Vert _2\cos \theta \), where \(\theta \) is the angle between x and y. Hence, the larger \(\theta \) is, the smaller \(\langle x,y\rangle \) is. Obviously, the closer x is to a coordinate axis, the larger \(\theta \) is, so the infimum of \(\langle x,y\rangle \) is attained when x lies on a coordinate axis. When x is the unit vector along a coordinate axis, \(\Vert y\Vert _2\cos \theta \) is the projection of y onto that axis, and the minimum value equals \(\min \limits _{1\le i\le n}y_i\). Therefore, the conclusion holds. \(\square \)
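Reading the lemma as it is used in Proposition 2, x ranges over nonnegative unit vectors and y has nonnegative entries; the following crude Monte-Carlo check (ours) illustrates that \(\langle x,y\rangle \) never drops below \(\min _i y_i\), the value attained at the standard basis vector of the minimizing coordinate:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.abs(rng.standard_normal(6))      # nonnegative data, as in the lemma's use

best = np.inf
for _ in range(10_000):
    x = np.abs(rng.standard_normal(6))  # random nonnegative direction
    x /= np.linalg.norm(x)              # unit l2 norm
    best = min(best, x @ y)

print(best >= y.min())                  # True: the infimum is min_i y_i
```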

1.10 Proof of Proposition 1

Proof

For any k-sparse vector x with \(\textrm{supp}(x)=S\), we have \(\Vert h_S\Vert _1\le \Vert h_T\Vert _1\) and \(\Vert h_{\overline{S}}\Vert _1\ge \Vert h_{\overline{T}}\Vert _1\). So, if \(S\subset T\), then \(\Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i|\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i|\). If \(S\not \subset T\),

$$\begin{aligned}&\Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i|\\&\quad = \Vert h_{\overline{S}}\Vert _1-\left( \Vert h_S\Vert _1-\min \limits _{i\in S}|h_i|\right) +(\alpha -1)\min \limits _{i\in S}|h_i|\\&\quad \ge \Vert h_{\overline{T}}\Vert _1-\left( \Vert h_T\Vert _1-\min \limits _{i\in T}|h_i|\right) +(\alpha -1)\min \limits _{i\in S}|h_i|\\&\quad \ge \Vert h_{\overline{T}}\Vert _1-\left( \Vert h_T\Vert _1-\min \limits _{i\in T}|h_i|\right) +(\alpha -1)\min \limits _{i\in T}|h_i|\\&\quad = \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i|. \end{aligned}$$

Combining the two cases above, we always have

$$\begin{aligned} \begin{aligned} \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i| \ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i|. \end{aligned} \end{aligned}$$
(19)

(13) and (19) yield

$$\begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle =\Vert h_{\overline{S}}\Vert _1+\sum \limits _{i\in S}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) \textrm{sign}(x_i)h_i\\&\ge \Vert h_{\overline{S}}\Vert _1-\sum \limits _{i\in S}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|=\Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\sum \limits _{i\in S}\frac{\alpha |x_i|}{\Vert x\Vert _2}|h_i|\\&\ge \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\sum \limits _{i\in S}\frac{\alpha |x_i|}{\Vert x\Vert _2}\min \limits _{i\in S}|h_i|= \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\frac{\alpha \Vert x\Vert _1}{\Vert x\Vert _2}\min \limits _{i\in S}|h_i|\\&\ge \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i|\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i| >0. \end{aligned}$$

From Theorem 1, we know that the k-sparse vector x can be recovered from Ax via \(\ell _1-\alpha \ell _2\) local minimization. \(\square \)
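Propositions 1 and 2 reduce uniform k-sparse recovery in this range of \(\alpha \) to the positivity of a single quantity per null-space direction. Below is a hedged sketch of that quantity (our code; T is taken as the indices of the k largest \(|h_i|\), matching \(h_{\textrm{max}(k)}\)):

```python
import numpy as np

def nsc_quantity(h, k, alpha):
    """||h_{T^c}||_1 - ||h_T||_1 + alpha * min_{i in T} |h_i|,
    where T indexes the k largest |h_i| (cf. Propositions 1 and 2)."""
    a = np.abs(h)
    T = np.argsort(a)[-k:]             # support of h_max(k)
    head = a[T].sum()
    return (a.sum() - head) - head + alpha * a[T].min()
```

Uniform recovery corresponds to this quantity being positive for every unit-norm \(h\in \textrm{ker} A\); sampling directions can falsify the condition but cannot certify it.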

1.11 Proof of Proposition 2

Proof

If an arbitrary k-sparse vector x can be recovered from Ax via \(\ell _1-\alpha \ell _2\) local minimization, then from Theorem 1, for every k-sparse vector x with \(\textrm{supp}(x)=S\), (9) holds for all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\). Combining Lemma 7, we get \(f_{T,h}(x^*)>0\) for arbitrary \(x^*\ne 0\) with \(\textrm{supp}(x^*)=T\). Since \(\alpha \le 1\), we have \(T_2(x^*)=\emptyset \) and \(T_1(x^*)=T\), so

$$\begin{aligned} f_{T,h}(x^*)=\Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T}\left( 1-\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|=\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\sum \limits _{i\in T}\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}|h_i|>0 \end{aligned}$$

for all \(x^*\) with \(\textrm{supp}(x^*)=T\). From Lemma 8,

$$\begin{aligned} \textrm{inf}\left( \sum \limits _{i\in T}\frac{\alpha |x^*_i|}{\Vert x^*\Vert _2}|h_i|\right) =\alpha \, \textrm{inf}\left( \sum \limits _{i\in T}\frac{|x^*_i|}{\Vert x^*\Vert _2}|h_i|\right) =\alpha \min \limits _{i\in T}|h_i|. \end{aligned}$$

Hence, the proposition holds. \(\square \)

1.12 Proof of Theorem 2

Proof

Necessity is Proposition 2. Now, we prove the sufficiency. Since \(\Vert x\Vert _0>1\), we have \(\frac{\Vert x\Vert _1}{\Vert x\Vert _2}>1\). If \(\min \limits _{i\in S}|h_i|>0\), then from the proof of Proposition 1 and (19), we have

$$\begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle \ge \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\sum \limits _{i\in S}\frac{\alpha |x_i|}{\Vert x\Vert _2}\min \limits _{i\in S}|h_i|\\&\ge \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\frac{\alpha \Vert x\Vert _1}{\Vert x\Vert _2}\min \limits _{i\in S}|h_i|>\Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i|\\&\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i|\ge 0. \end{aligned}$$

If \(\min \limits _{i\in S}|h_i|=0\) and \(\max \limits _{i\in S}|h_i|>0\), then

$$\begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle \ge \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\sum \limits _{i\in S}\frac{\alpha |x_i|}{\Vert x\Vert _2}|h_i|\\&> \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1= \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i|\\&\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i|\ge 0. \end{aligned}$$

If \(\max \limits _{i\in S}|h_i|=0\), then

$$\begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2},\alpha h\right\rangle \ge \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\sum \limits _{i\in S}\frac{\alpha |x_i|}{\Vert x\Vert _2}\min \limits _{i\in S}|h_i|\\&= \Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1=\Vert h_{\overline{S}}\Vert _1=\Vert h\Vert _1\ge \Vert h\Vert _2=1>0. \end{aligned}$$

According to Theorem 1, the conclusion of the theorem is true. \(\square \)

1.13 Proof of Theorem 3

Proof

Necessity is Proposition 2. Now, we prove sufficiency. We prove it in two cases.

\((*)\). \(\Vert x\Vert _0>1\). This case is similar to the proof of Theorem 2.

\((**)\). \(\Vert x\Vert _0=1\). Set \(\textrm{supp}(x)=S=\{j_0\}\). If \(h_{j_0}=0\), then

$$\begin{aligned} \sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2}, h\right\rangle =\Vert h_{\overline{S}}\Vert _1=\Vert h\Vert _1\ge \Vert h\Vert _2=1>0. \end{aligned}$$

If \(h_{j_0}\ne 0\), then

$$\begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2}, h\right\rangle \\&\quad =\textrm{sign}(x_{j_0})h_{j_0}+\Vert h_{\overline{S}}\Vert _1-\textrm{sign}(x_{j_0})h_{j_0}\\&\quad =\Vert h_{\overline{S}}\Vert _1=\Vert h_{[n]\backslash \{j_0\}}\Vert _1>0. \end{aligned}$$

The last inequality follows from the assumption \(\Vert h\Vert _0>1\), which entails no loss of generality: \(\Vert h\Vert _0=1\) holds if and only if A contains a zero column, and such A is not allowed. Theorem 1 then yields the conclusion of the theorem. \(\square \)

1.14 Proof of Lemma 9

Proof

This proof is similar to that of Lemma 2. However, it should be noted that there exists an \(i\in T\) such that \(|h_i|\ne 0\). Hence, at least one of the following two inequalities is strict:

$$\begin{aligned}&\sum \limits _{i\in T_1(x)}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|\le \sum \limits _{i\in T_1(x^*(x))}\left( 1-\frac{\alpha |x^*(x)_i|}{\Vert x^*(x)\Vert _2}\right) |h_i|,\\&\sum \limits _{i\in T_2(x)}\left( \frac{\alpha |x_i|}{\Vert x\Vert _2}-1\right) |h_i|\le \sum \limits _{i\in T_2(x^*(x))}\left( \frac{\alpha |x^*(x)_i|}{\Vert x^*(x)\Vert _2}-1\right) |h_i|. \end{aligned}$$

Therefore, the conclusion of the lemma holds with strict inequality. \(\square \)

1.15 Proof of Lemma 10

Proof

Since \(T_2(x)=\emptyset \), we know \(|T|>1\). Suppose \(T=\{i_1,i_2,\cdots ,i_k\}\) satisfies \(|h_{i_1}|\le |h_{i_2}|\le \cdots \le |h_{i_k}|\) and \(0\le \frac{1}{\alpha }-\frac{|x_{j_1}|}{\Vert x\Vert _2}\le \frac{1}{\alpha }-\frac{|x_{j_2}|}{\Vert x\Vert _2}\le \cdots \le \frac{1}{\alpha }-\frac{|x_{j_k}|}{\Vert x\Vert _2}\), where \(\{j_1,j_2,\cdots ,j_k\}\) is a rearrangement of the elements of T. We set \(\overline{x}_{i_p}=x_{j_p}\) for \(p\in [k]\), and \(\overline{x}_i=0\) otherwise. From the definition of \(\overline{x}\), we have \(\textrm{supp}(x)=\textrm{supp}(\overline{x})\), \(T_2(\overline{x})=\emptyset \), and the rearrangement inequality shows \(g_{T,h}(\overline{x})\ge g_{T,h}(x)\). Now, keep \(\overline{x}_{i_1}\) constant and let \(|\overline{x}_i|\) for \(i\in T\backslash \{i_1\}\) decrease, but not to 0. Denote the changed vector by \(x^*\). Obviously, for all \(i\in T\backslash \{i_1\}\), when \(|x^*_i|\rightarrow 0\) we have \(\frac{|x^*_i|}{\Vert x^*\Vert _2}\rightarrow 0\), \(0\le \frac{1}{\alpha }-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\le \frac{1}{\alpha }-\frac{|x^*_i|}{\Vert x^*\Vert _2}\) and \(\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}\rightarrow 1>\frac{1}{\alpha }\). This shows that, when \(|x^*_i|\) is sufficiently small for all \(i\in T\backslash \{i_1\}\), we have \(T_2(x^*)=\{i_1\}\), \(T_1(x^*)=T\backslash \{i_1\}\) and \(\frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\le 0\). Hence,

$$\begin{aligned} g_{T,h}(x)-g_{T,h}(x^*)&\le g_{T,h}(\overline{x})-g_{T,h}(x^*)\\&=\sum \limits _{i\in T_1(\overline{x})}\left( \frac{1}{\alpha }-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\right) |h_i|-\sum \limits _{i\in T_1(x^*)}\left( \frac{1}{\alpha }-\frac{|x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|\\&\quad -\sum \limits _{i\in T_2(x^*)}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_i|\\&=\sum \limits _{i\in T\backslash \{i_1\}}\left( \frac{1}{\alpha }-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\right) |h_i|+\left( \frac{1}{\alpha }-\frac{|\overline{x}_{i_1}|}{\Vert \overline{x}\Vert _2}\right) |h_{i_1}|\\&\quad -\sum \limits _{i\in T\backslash \{i_1\}}\left( \frac{1}{\alpha }-\frac{|x^*_i|}{\Vert x^*\Vert _2}\right) |h_i|-\left( \frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\right) |h_{i_1}|\\&=\sum \limits _{i\in T\backslash \{i_1\}}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\right) |h_i|+\left( \frac{2}{\alpha }-\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}-\frac{|\overline{x}_{i_1}|}{\Vert \overline{x}\Vert _2}\right) |h_{i_1}|\\&\le \sum \limits _{i\in T\backslash \{i_1\}}\left( \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\right) |h_{i_1}|+\left( \frac{2}{\alpha }-\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}-\frac{|\overline{x}_{i_1}|}{\Vert \overline{x}\Vert _2}\right) |h_{i_1}|\\&=\left( \frac{2}{\alpha }-\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}-\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}+\sum \limits _{i\in T\backslash \{i_1\}}\frac{|x_i^*|}{\Vert x^*\Vert _2}\right) |h_{i_1}|. \end{aligned}$$
(20)

In addition, since \(\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}>1\), \(\frac{2}{\alpha }\in [1,2)\), we have

$$\begin{aligned} \lim \limits _{x_i^*\rightarrow 0,\forall i\in T\backslash \{i_1\}}\left( \frac{2}{\alpha }-\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}-\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}+\sum \limits _{i\in T\backslash \{i_1\}}\frac{|x_i^*|}{\Vert x^*\Vert _2}\right) =\frac{2}{\alpha }-\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}-1+0<0. \end{aligned}$$

By the sign-preserving property of limits, there exists an \(x^*\) such that \(\frac{2}{\alpha }-\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}-\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}+\sum \limits _{i\in T\backslash \{i_1\}}\frac{|x_i^*|}{\Vert x^*\Vert _2}<0\). Combined with (20), this gives \(g_{T,h}(x^*)\ge g_{T,h}(x)\), that is, \(f_{T,h}(x^*)\le f_{T,h}(x)\). \(\square \)

1.16 Proof of Proposition 3

Proof

For an arbitrary k-sparse vector \(x\in R^n\), set \(S=\textrm{supp}(x)\). For all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\) and \(T=\textrm{supp}(h_{\textrm{max}(k)})\), Lemmas 5 and 6 together with the proof of case \((**)\) of Lemma 7 show that there exists an \(x^*\) with \(\textrm{supp}(x^*)=T\) and \(|T_2(x^*)|=1\) such that \(f_{S,h}(x)\ge f_{T,h}(x^*)\). Hence,

$$\begin{aligned} \begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2}, \alpha h\right\rangle \\&=\Vert h_{\overline{S}}\Vert _1+\sum \limits _{i\in S}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) \textrm{sign}(x_i)h_i\\&\ge f_{S,h}(x)\ge f_{T,h}(x^*). \end{aligned} \end{aligned}$$
(21)

In the case of \(k=1\), \(T_2(x^*)=T\) and \(|T_2(x^*)|=1\). Set \(T_2(x^*)=\{j_0\}\); then

$$\begin{aligned} f_{T,h}(x^*)&=\Vert h_{\overline{T}}\Vert _1-\left( \frac{\alpha |x^*_{j_0}|}{\Vert x^*\Vert _2}-1\right) |h_{j_0}|\\&=\Vert h_{\overline{T}}\Vert _1-(\alpha -1)|h_{j_0}|\\&=\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )|h_{j_0}|, \end{aligned}$$
(22)

where we use the fact that \(\Vert h_T\Vert _1=|h_{j_0}|\). Combining (15), (21) and (22), we get

$$\begin{aligned} \begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2}, \alpha h\right\rangle \\&\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )|h_{j_0}|\\&=\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )\min \limits _{i\in T}|h_i|>0. \end{aligned} \end{aligned}$$
(23)

From Theorem 1, the conclusion is true in the case of \(k=1\).

In the case of \(k>1\), we define the set \(\varOmega (x^*)=\{x\,|\,\textrm{supp}(x)=\textrm{supp}(x^*),T_2(x)=T_2(x^*)\}\). Setting \(T_2(x^*)=\{j_0\}\), from (15) and (21) we obtain

$$\begin{aligned} \begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2}, \alpha h\right\rangle \ge f_{T,h}(x^*)\ge \inf \limits _{x\in \varOmega (x^*)}\,f_{T,h}(x)\\&=\inf \limits _{x\in \varOmega (x^*)}\left\{ \Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T\backslash \{j_0\}}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|-\left( \frac{\alpha |x_{j_0}|}{\Vert x\Vert _2}-1\right) |h_{j_0}|\right\} \\&=\Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T\backslash \{j_0\}}|h_i|-(\alpha -1)|h_{j_0}|\\&=\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )|h_{j_0}|\\&\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )\min \limits _{i\in T}|h_i|>0. \end{aligned} \end{aligned}$$
(24)

Theorem 1 shows that the conclusion also holds for \(k>1\). This completes the proof. \(\square \)
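For \(1<\alpha \le 2\), Propositions 3 and 4 give the same kind of reduction with the constant \(\alpha \) in the min-term replaced by \(2-\alpha \); a one-function variant of the earlier sketch (again our code, with the same hypothetical choice of T):

```python
import numpy as np

def nsc_quantity_large_alpha(h, k, alpha):
    """||h_{T^c}||_1 - ||h_T||_1 + (2 - alpha) * min_{i in T} |h_i|,
    where T indexes the k largest |h_i| (cf. Propositions 3 and 4)."""
    a = np.abs(h)
    T = np.argsort(a)[-k:]            # indices of the k largest |h_i|
    head = a[T].sum()
    return (a.sum() - head) - head + (2 - alpha) * a[T].min()
```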

1.17 Proof of Proposition 4

Proof

From Theorem 1 and Lemma 7, if an arbitrary k-sparse vector \(x\in R^n\) can be recovered from Ax via \(\ell _1-\alpha \ell _2\) local minimization, then \(f_{T,h}(x^*)>0\) holds for all \( h\in \textrm{ker} A\), \(\Vert h\Vert _2=1\) with \(T=\textrm{supp}(h_{\textrm{max}(k)})\) and arbitrary \(x^*\ne 0\) satisfying \(\textrm{supp}(x^*)=T\). Setting \(j_0=\textrm{argmin}_{i\in T}|h_i|\), we define a vector x(h) as

$$\begin{aligned} x(h)_i= {\left\{ \begin{array}{ll} 1&{} i=j_0\\ \varepsilon &{} i\in T\backslash \{j_0\}\\ 0&{} i\in \overline{T}\\ \end{array}\right. }, \end{aligned}$$

where \(\varepsilon \) is sufficiently small so that \(T_1(x(h))= T\backslash \{j_0\}\) and \(T_2(x(h))= \{j_0\}\). We define the set \(\varOmega (x(h))=\{x\,|\,\textrm{supp}(x)=\textrm{supp}(x(h)),T_2(x)=T_2(x(h))\}\); then

$$\begin{aligned}&\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )\min \limits _{i\in T}|h_i|\\&=\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )|h_{j_0}|\\&=\Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T\backslash \{j_0\}}|h_i|-(\alpha -1)|h_{j_0}|\\&=\inf \limits _{x\in \varOmega (x(h))}\left\{ \Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T\backslash \{j_0\}}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|-\left( \frac{\alpha |x_{j_0}|}{\Vert x\Vert _2}-1\right) |h_{j_0}|\right\} \\&=\inf \limits _{x\in \varOmega (x(h))}\,f_{T,h}(x)\ge 0. \end{aligned}$$

The last inequality holds because \(f_{T,h}(x)>0\) for all \(x\ne 0\) with \(\textrm{supp}(x)=T\). \(\square \)

1.18 Proof of Theorem 4

Proof

Necessity is Proposition 4. Now, we prove sufficiency. First, as in Proposition 3, for an arbitrary k-sparse vector \(x\in R^n\) with \(S=\textrm{supp}(x)\), there exists an \(x^*\) with \(\textrm{supp}(x^*)=T\) and \(|T_2(x^*)|=1\) such that \(f_{S,h}(x)\ge f_{T,h}(x^*)\). Set \(T_2(x^*)=\{j_0\}\) and define a vector \(x^{**}\) as

$$\begin{aligned} x^{**}_i= {\left\{ \begin{array}{ll} 0.5x^*_i&{} i\ne j_0\\ x^*_i&{} i=j_0\\ \end{array}\right. }, \end{aligned}$$

We know that \(T_1(x^*)=T_1(x^{**})\) and \(T_2(x^*)=T_2(x^{**})\). Since \(\Vert x\Vert _0>1\) and \(|T_2(x^{**})|=1\), we know \(T_1(x^{**})\ne \emptyset \). From Lemma 9, we have \(f_{T,h}(x^{**})<f_{T,h}(x^*)\). Next, we define the set \(\varOmega (x^{**})=\{x\,|\,\textrm{supp}(x)=\textrm{supp}(x^{**}),T_2(x)=T_2(x^{**})\}\); then

$$\begin{aligned} \begin{aligned}&\sum \limits _{i\in S}\textrm{sign}(x_i)h_i+\Vert h_{\overline{S}}\Vert _1-\left\langle \frac{x}{\Vert x\Vert _2}, \alpha h\right\rangle \ge f_{T,h}(x^*)>f_{T,h}(x^{**})\ge \inf \limits _{x\in \varOmega (x^{**})}\,f_{T,h}(x)\\&\quad =\inf \limits _{x\in \varOmega (x^{**})}\left\{ \Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T\backslash \{j_0\}}\left( 1-\frac{\alpha |x_i|}{\Vert x\Vert _2}\right) |h_i|-\left( \frac{\alpha |x_{j_0}|}{\Vert x\Vert _2}-1\right) |h_{j_0}|\right\} \\&\quad =\Vert h_{\overline{T}}\Vert _1-\sum \limits _{i\in T\backslash \{j_0\}}|h_i|-(\alpha -1)|h_{j_0}|\\&\quad =\Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )|h_{j_0}|\\&\quad \ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+(2-\alpha )\min \limits _{i\in T}|h_i|\ge 0. \end{aligned} \end{aligned}$$

By Theorem 1, the conclusion of the theorem holds. \(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xie, S., Li, J. & Liang, K. k-Sparse Vector Recovery via \(\ell _1-\alpha \ell _2\) Local Minimization. J Optim Theory Appl 201, 75–102 (2024). https://doi.org/10.1007/s10957-024-02380-y

