Abstract
This paper studies the \(\ell _1-\alpha \ell _2\) local minimization model for \(\alpha \in (0,2]\); to the best of our knowledge, this is the first work to consider the case \(\alpha >1\). We obtain necessary and sufficient conditions for a fixed sparse signal to be recovered by this model. Building on this condition, we also obtain necessary and sufficient conditions for every k-sparse signal to be recovered by the \(\ell _1-\alpha \ell _2\) local minimization model in the cases \(0<\alpha <1\), \(\alpha =1\), and \(1<\alpha \le 2\). The experimental data show that the value of \(\alpha \) is positively correlated with the success rate of signal recovery.
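The recovery model discussed in the abstract is \(\min _x \Vert x\Vert _1-\alpha \Vert x\Vert _2\) subject to \(Ax=b\). As an illustrative sketch only (this is not the authors' experimental code), the following Python snippet minimizes this objective with the standard difference-of-convex (DCA) scheme from the \(\ell _1-\ell _2\) literature, recasting each convex subproblem as a linear program; the problem sizes, solver choice, and tolerance are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import linprog

def l1_alpha_l2_dca(A, b, alpha, iters=6):
    """Sketch of DCA for  min ||x||_1 - alpha*||x||_2  s.t.  Ax = b.

    Each iteration linearizes the concave part -alpha*||x||_2 at the
    current iterate x^k and solves the convex subproblem
        min ||x||_1 - alpha*<x^k/||x^k||_2, x>  s.t.  Ax = b
    as an LP in (x, t) with |x_i| <= t_i.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        nx = np.linalg.norm(x)
        g = np.zeros(n) if nx == 0 else x / nx  # subgradient of ||.||_2
        c = np.concatenate([-alpha * g, np.ones(n)])   # objective over (x, t)
        A_eq = np.hstack([A, np.zeros((m, n))])        # Ax = b
        I = np.eye(n)
        A_ub = np.vstack([np.hstack([I, -I]),          #  x - t <= 0
                          np.hstack([-I, -I])])        # -x - t <= 0
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                      A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n,
                      method="highs")
        if res.status != 0:
            break
        x = res.x[:n]
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 48, 5                          # illustrative sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true
x_hat = l1_alpha_l2_dca(A, b, alpha=1.0)
print(np.linalg.norm(x_hat - x_true))  # error should be tiny in the exact-recovery regime
```

Sweeping \(\alpha \) and the sparsity level over a grid and recording the fraction of successful recoveries is the kind of experiment that would exhibit the correlation reported in the abstract.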
Acknowledgements
Jia Li’s work is partially supported by the National Key R&D Program of China (No. 2021YFA1001300), the Guangdong-Hong Kong-Macau Applied Math Center under Grant No. 2020B1515310011, and the NSFC General Program under Grant No. 12171496. Kaihao Liang’s work is partially supported by the NSF of Guangdong under Grant No. 2018A0303130136 and the Science and Technology Planning Project of Guangdong under Grant Nos. 2015A070704059 and 2015A030402008. The authors have no relevant financial or non-financial interests to disclose. All the authors contributed to the conception and proofs in this study. The first draft of the manuscript was written by Shaohua Xie, and all the authors commented on previous versions of the manuscript. All the authors have read and approved the final manuscript. All data generated or analyzed during this study are included in this published article.
Communicated by Lam M. Nguyen.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A
1.1 Proof of Theorem 1
Proof
Necessity. From (7) and Remark 1,
Hence, from (8)
Sufficiency. Suppose that (9) holds for all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\). Then there exists an \(h^*\in \textrm{ker} A\) with \(\Vert h^*\Vert _2=1\) such that
On the other hand,
Hence,
when \(t<\textrm{min}\left\{ \frac{\rho \Vert x\Vert _2}{2\alpha },\textrm{min}_{i\in S}|x_i|\right\} \). \(\square \)
1.2 Proof of Lemma 1
Proof
\((i)\Rightarrow (ii)\). For all \(h \in \textrm{ker} A\) with \(\Vert h\Vert _2=1\), and all \(x\ne 0\) with \(\textrm{supp}(x)=S\), \(|S|\le k\), set
then \(\textrm{supp}(x^*)=S\) and
\((ii)\Rightarrow (i)\). Note that for all \(h \in \textrm{ker} A\) with \(\Vert h\Vert _2=1\), and all \(x\ne 0\) with \(\textrm{supp}(x)=S\), \(|S|\le k\),
\(\square \)
1.3 Proof of Lemma 2
Proof
We note that, for every \(x\ne 0\) satisfying \(\textrm{supp}(x)=S\), \(S_1(x)\ne \emptyset \), and \(S_2(x)\ne \emptyset \), for any \(i\in S_1(x)\) we have
which means that when all entries indexed by \(S_1(x)\) are reduced in absolute value by the same proportion, \(\frac{|x_i|}{\Vert x\Vert _2}\) also becomes smaller for every \(i\in S_1(x)\). In addition, for any \(j\in S_2(x)\) and \(i\in S_1(x)\), it is easy to see that \(\frac{|x_j|}{\Vert x\Vert _2}\) increases as \(|x_i|\) decreases. Hence, \(S_1(x^*(x))=S_1(x)\), \(S_2(x^*(x))=S_2(x)\), and
It follows that \(f_{S,h}(x)\ge f_{S,h}(x^*(x))\). Similarly,
This yields, \(f_{S,h}(x)\ge f_{S,h}(x^{**}(x))\). \(\square \)
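The scaling step in the proof above can be verified by a one-line derivative computation. Here \(A\) and \(B\) are shorthand introduced only for this check (they do not appear in the original text): scale the \(S_1\)-entries of \(x\) by a common factor \(t\in (0,1)\) and write \(A=\sum _{i\in S_1(x)}x_i^2\), \(B=\sum _{j\in S_2(x)}x_j^2>0\); then

```latex
\frac{d}{dt}\,\frac{t\,|x_i|}{\sqrt{t^2A+B}}
  \;=\; \frac{|x_i|\,B}{\big(t^2A+B\big)^{3/2}} \;>\; 0,
  \qquad i\in S_1(x),
```

so shrinking \(t\) decreases \(\frac{t|x_i|}{\Vert x(t)\Vert _2}\) for \(i\in S_1(x)\), while \(\frac{|x_j|}{\sqrt{t^2A+B}}\) increases as \(t\) decreases for \(j\in S_2(x)\).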
1.4 Proof of Lemma 3
Proof
Set \(j_0=\textrm{argmin}_{i\in S}|x_i|\). We gradually reduce \(|x_{j_0}|\), without letting it reach 0, and keep the other components of x unchanged. Denote the changed vector by \(x^*\). It is easy to see that \(0<\frac{|x_i|}{\Vert x\Vert _2}-\frac{1}{\alpha }\le \frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{1}{\alpha }\) for \(i\in S\backslash \{j_0\}\). Additionally, because \(1<\alpha \le 2\),
However, when \(|x^*_{j_0}|\rightarrow 0\), we have \(\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow 0\); hence \(0\le \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow \frac{1}{\alpha }\). Since \(\frac{1}{\alpha }\in [\frac{1}{2},1)\), we have \( \frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow \frac{1}{\alpha }\ge \frac{1}{2}>\frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\) as \(|x^*_{j_0}|\rightarrow 0\). Combining the above, we obtain
Therefore, the conclusion holds. \(\square \)
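The monotonicity used at the start of this proof, namely \(\frac{|x_i|}{\Vert x\Vert _2}\le \frac{|x^*_i|}{\Vert x^*\Vert _2}\) for \(i\in S\backslash \{j_0\}\), follows directly from the fact that only the \(j_0\)-th entry shrinks:

```latex
\|x^*\|_2 \le \|x\|_2
\quad\Longrightarrow\quad
\frac{|x^*_i|}{\|x^*\|_2} \;=\; \frac{|x_i|}{\|x^*\|_2} \;\ge\; \frac{|x_i|}{\|x\|_2},
\qquad i\in S\setminus\{j_0\}.
```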
1.5 Proof of Lemma 4
Proof
Set \(j_0=\textrm{argmin}_{i\in S_2(x)}|x_i|\), \(j_1=\textrm{argmax}_{i\in S_2(x)}|x_i|\). Since \(j_0\in S_2(x)\), \(\frac{1}{\alpha }\in [\frac{1}{2},1)\), we have
Now we reduce \(|x_{j_0}|\), without letting it reach 0, increase \(|x_{j_1}|\), and keep the other components of x and \(\Vert x\Vert _2\) unchanged. Denote the changed vector by \(x^*\). Obviously, \(\frac{|x_{j_1}|}{\Vert x\Vert _2}<\frac{|x^*_{j_1}|}{\Vert x^*\Vert _2}\). In addition, when \(|x^*_{j_0}|\rightarrow 0\), we have \(\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow 0\) and, furthermore, \(0<\frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}\rightarrow \frac{1}{\alpha }\). So when \(|x^*_{j_0}|\rightarrow 0\), we have \(\frac{1}{\alpha }-\frac{|x^*_{j_0}|}{\Vert x^*\Vert _2}>\frac{|x_{j_0}|}{\Vert x\Vert _2}-\frac{1}{\alpha }\). Combining the above, we obtain
We have thus found an \(x^*\) satisfying \(\textrm{supp}(x^*)=\textrm{supp}(x)\), \(g_{S,h}(x^*)\ge g_{S,h}(x)\) and \(|S_2(x^*)|=|S_2(x)|-1\). If \(|S_2(x^*)|\ge 2\), we repeat the above operation to find an \(x^{**}\) satisfying \(\textrm{supp}(x^{**})=\textrm{supp}(x^*)=\textrm{supp}(x)\), \(g_{S,h}(x^{**})\ge g_{S,h}(x^*)\ge g_{S,h}(x)\) and \(|S_2(x^{**})|=|S_2(x^*)|-1=|S_2(x)|-2\). Continuing in this way, we finally find an \(\overline{x}\) satisfying \(\textrm{supp}(\overline{x})=\textrm{supp}(x)\), \(g_{S,h}(\overline{x})\ge g_{S,h}(x)\) and \(|S_2(\overline{x})|=1\). \(\square \)
1.6 Proof of Lemma 5
Proof
We prove it in two cases.
(i) \(0<\alpha \le 1\). First, \(S_1\subset S_2\) implies \(\Vert h_{\overline{S_1}}\Vert _1\ge \Vert h_{\overline{S_2}}\Vert _1\). Second, take
then \(\textrm{supp}(x^*)=S_2\). In addition, since \(0<\alpha \le 1\), we have \(S_{12}(x)=\emptyset \), \(S_{11}(x)=S_1\), \(S_{22}(x^*)=\emptyset \), \(S_{21}(x^*)=S_2\) where \(S_{11}(x)\), \(S_{12}(x)\), \(S_{21}(x^*)\), \(S_{22}(x^*)\) are defined as (10). So
Combining the above, it is easy to see that \(f_{S_2,h}(x^*)\le f_{S_1,h}(x)\).
(ii) \(1<\alpha \le 2\). When \(S_{12}(x)=\emptyset \), the argument is similar to case (i) and the result holds. When \(S_{12}(x)\ne \emptyset \), since \(|S_1|>1\), Lemma 4 shows that there exists a vector \(\overline{x}\) such that \(\textrm{supp}(\overline{x})=S_1\), \(S_{11}(\overline{x})\ne \emptyset \), \(|S_{12}(\overline{x})|=1\), and \(f_{S_1,h}(\overline{x})\le f_{S_1,h}(x)\). Taking \(l\in S_2\backslash S_1\) and \(j_0\in S_{12}(\overline{x})\), we define the vector \(\tilde{x}\) as follows:
where \(\lambda \ge 1\) is a constant such that \(\frac{\tilde{x}_{j_0}}{\Vert \tilde{x}\Vert _2}=\frac{\overline{x}_{j_0}}{\Vert \overline{x}\Vert _2}\). This implies that \(\textrm{supp}(\tilde{x})=S_1\cup \{l\}\). It follows that
which means \(g_{S_1\cup \{l\},h}(\tilde{x})\ge g_{S_1,h}(\overline{x})\), i.e., \(f_{S_1\cup \{l\},h}(\tilde{x})\le f_{S_1,h}(\overline{x})\). If \(S_1\cup \{l\}=S_2\), we stop, having found a vector \(x^*=\tilde{x}\) satisfying \(\textrm{supp}(x^*)=S_2\) and \(f_{S_2,h}(x^*)\le f_{S_1,h}(x)\). If not, we repeat the previous operation and eventually find a vector \(x^*\) satisfying \(\textrm{supp}(x^*)=S_2\) and \(f_{S_2,h}(x^*)\le f_{S_1,h}(x)\). \(\square \)
1.7 Proof of Lemma 6
Proof
First, the choice of T indicates that \(\Vert h_{\overline{T}}\Vert _1\le \Vert h_{\overline{S}}\Vert _1\). Second, for every x satisfying \(\textrm{supp}(x)=S\), take \(x^*\) satisfying \(\textrm{supp}(x^*)=T\) and \(x^*_{\sigma _i(h_T)}=x_{\sigma _i(h_S)}\), where \(\sigma _i(h)\) denotes the index of the \(i\)-th largest component of |h|. From the construction of \(x^*\), we have
where \(S_1(x)\), \(S_2(x)\), \(T_1(x^*)\), \(T_2(x^*)\) are defined in (10). Combining the above, it is easy to see that \(f_{T,h}(x^*)\le f_{S,h}(x)\). \(\square \)
1.8 Proof of Lemma 7
Proof
\((i)\Leftrightarrow (ii)\) is Lemma 1.
\((ii)\Rightarrow (iii)\). Taking \(S=T\) in (ii), then (iii) holds obviously.
\((iii)\Rightarrow (ii)\). We discuss this in terms of two cases.
\((*)\). \(|S|>1\). Lemmas 5 and 6 show that for an arbitrary index set S satisfying \(1<|S|\le k\) and an arbitrary k-sparse vector x with \(\textrm{supp}(x)=S\), there exists an \(x^*\) satisfying \(\textrm{supp}(x^*)=T\) such that \(f_{S,h}(x)\ge f_{T,h}(x^*)\). Hence, if (iii) holds, then (ii) holds too.
\((**)\) \(|S|=1\). If \(0<\alpha \le 1\), then from Remark 6, Lemma 5 holds; hence, as in the proof of case \((*)\), the result holds. Now assume that \(1<\alpha \le 2\). In this case, \(S_{2}(x)=S:=\{l\}\), and
where \(j_0=\textrm{argmax}_i\{|h_i|\}\). Setting \(j_1=\textrm{argmax}_i\{|h_i|,i\ne j_0\}\), we may assume that \(|h_{j_1}|\ne 0\); this is no restriction, since \(|h_{j_1}|=0\) would force A to contain a zero column, which is not allowed. We define the vector \(x^*\) as follows:
where \(\lambda \) is a sufficiently small number such that \(\frac{\lambda \alpha }{\sqrt{1+\lambda ^2}}\le 1\le \frac{\alpha }{\sqrt{1+\lambda ^2}}\). Setting \(\overline{S}=\{j_0,j_1\}\), then \(\textrm{supp}(x^*)=\overline{S}\), \(|\overline{S}|>1\) and
Since
there exists \(r>0\) such that for \(0<\lambda <r\), we have \(\left( 2-\frac{\alpha \lambda }{\sqrt{1+\lambda ^2}}\right) |h_{j_1}|+\left( \frac{\alpha }{\sqrt{1+\lambda ^2}}-\alpha \right) |h_{j_0}|>0\). Taking \(\lambda \in (0,r)\) in (11) yields that
The next proof is similar to the case of \((*)\) due to \(|\overline{S}|>1\). \(\square \)
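The limit behind the choice of \(r\) above can be made explicit; this reconstruction uses only the terms displayed in the proof and the assumption \(|h_{j_1}|\ne 0\):

```latex
\lim_{\lambda\to 0^+}
\left[\Big(2-\frac{\alpha\lambda}{\sqrt{1+\lambda^2}}\Big)|h_{j_1}|
      +\Big(\frac{\alpha}{\sqrt{1+\lambda^2}}-\alpha\Big)|h_{j_0}|\right]
\;=\; 2\,|h_{j_1}| \;>\; 0,
```

so the expression stays positive on some interval \((0,r)\) by continuity.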
1.9 Proof of Lemma 8
Proof
Since \(|\langle x,y\rangle |\le \Vert x\Vert _2\Vert y\Vert _2=\Vert y\Vert _2\), the infimum of \(\langle x,y\rangle \) exists. In addition, \(\langle x,y\rangle =\Vert x\Vert _2\Vert y\Vert _2\cos \theta =\Vert y\Vert _2\cos \theta \), where \(\theta \) is the angle between x and y. Hence, the larger \(\theta \) is, the smaller \(\langle x,y\rangle \) is. Obviously, the closer x is to a coordinate axis, the larger \(\theta \) is. Thus, the infimum of \(\langle x,y\rangle \) is attained when x lies on a coordinate axis. When x is the unit vector along such an axis, \(\Vert y\Vert _2\cos \theta \) is the projection of y onto that axis, and the minimum value equals \(\min \limits _{1\le i\le n}y_i\). Therefore, the conclusion holds. \(\square \)
1.10 Proof of Proposition 1
Proof
For any k-sparse vector x, set \(\textrm{supp}(x)=S\); we have \(\Vert h_S\Vert _1\le \Vert h_T\Vert _1\) and \(\Vert h_{\overline{S}}\Vert _1\ge \Vert h_{\overline{T}}\Vert _1\). So, if \(S\subset T\), then \(\Vert h_{\overline{S}}\Vert _1-\Vert h_S\Vert _1+\alpha \min \limits _{i\in S}|h_i|\ge \Vert h_{\overline{T}}\Vert _1-\Vert h_T\Vert _1+\alpha \min \limits _{i\in T}|h_i|\). If \(S\not \subset T\),
Combining the above two cases, we always have
From Theorem 1, we know k-sparse vector x can be recovered from Ax via \(\ell _1-\alpha \ell _2\) local minimization. \(\square \)
1.11 Proof of Proposition 2
Proof
If an arbitrary k-sparse vector x can be recovered from Ax via \(\ell _1-\alpha \ell _2\) local minimization, then from Theorem 1, (9) holds for every k-sparse vector x with \(\textrm{supp}(x)=S\), for all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\). Combining this with Lemma 7, we get \(f_{T,h}(x^*)>0\) for arbitrary \(x^*\ne 0\) with \(\textrm{supp}(x^*)=T\). Since \(\alpha \le 1\), we have \(T_2(x^*)=\emptyset \) and \(T_1(x^*)=T\), so
for all \(x^*\) with \(\textrm{supp}(x^*)=T\). From Lemma 8,
Hence, the theorem holds. \(\square \)
1.12 Proof of Theorem 2
Proof
Necessity is Proposition 2. Now, we prove the sufficiency. Since \(\Vert x\Vert _0>1\), we have \(\frac{\Vert x\Vert _1}{\Vert x\Vert _2}>1\). If \(\min \limits _{i\in S}|h_i|>0\), then from the proof of Proposition 1 and (19), we have
If \(\min \limits _{i\in S}|h_i|=0\), \(\max \limits _{i\in S}|h_i|>0\), then
If \(\max \limits _{i\in S}|h_i|=0\), then
According to Theorem 1, the conclusion of the theorem is true. \(\square \)
1.13 Proof of Theorem 3
Proof
Necessity is Proposition 2. We prove the sufficiency in two cases.
\((*)\). \(\Vert x\Vert _0>1\). This case is similar to the proof of Theorem 2.
\((**)\). \(\Vert x\Vert _0=1\). Set \(\textrm{supp}(x)=S=\{j_0\}\). If \(h_{j_0}=0\), then
If \(h_{j_0}\ne 0\), then
The last inequality uses the assumption \(\Vert h\Vert _0>1\), which is no restriction, since \(\Vert h\Vert _0=1\) would force A to contain a zero column, which is not allowed. Theorem 1 demonstrates that the conclusion of this theorem is valid. \(\square \)
1.14 Proof of Lemma 9
Proof
This proof is similar to that of Lemma 2. However, it should be noted that there exists an \(i\in T\) such that \(|h_i|\ne 0\). Hence, at least one of the following two inequalities is strict:
Therefore, the conclusion of the lemma is a strict inequality. \(\square \)
1.15 Proof of Lemma 10
Proof
Since \(T_2(x)=\emptyset \), we know \(|T|>1\). Suppose \(T=\{i_1,i_2,\cdots ,i_k\}\) with \(|h_{i_1}|\le |h_{i_2}|\le \cdots \le |h_{i_k}|\) and \(0\le \frac{1}{\alpha }-\frac{|x_{j_1}|}{\Vert x\Vert _2}\le \frac{1}{\alpha }-\frac{|x_{j_2}|}{\Vert x\Vert _2}\le \cdots \le \frac{1}{\alpha }-\frac{|x_{j_k}|}{\Vert x\Vert _2}\), where \(\{j_1,j_2,\cdots ,j_k\}\) is a rearrangement of the elements of T. We set \(\overline{x}_{i_p}=x_{j_p}\) for \(p\in [k]\), and \(\overline{x}_i=0\) otherwise. From the definition of \(\overline{x}\), we have \(\textrm{supp}(x)=\textrm{supp}(\overline{x})\) and \(T_2(\overline{x})=\emptyset \), and the rearrangement inequality shows \(g_{T,h}(\overline{x})\ge g_{T,h}(x)\). Now keep \(\overline{x}_{i_1}\) fixed and let \(|\overline{x}_i|\) for \(i\in T\backslash \{i_1\}\) decrease without reaching 0. Denote the changed vector by \(x^*\). Obviously, for all \(i\in T\backslash \{i_1\}\), when \(|x^*_i|\rightarrow 0\) we have \(\frac{|x^*_i|}{\Vert x^*\Vert _2}\rightarrow 0\), \(0\le \frac{1}{\alpha }-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\le \frac{1}{\alpha }-\frac{|x^*_i|}{\Vert x^*\Vert _2}\), and \(\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}\rightarrow 1>\frac{1}{\alpha }\). This shows that, when \(|x^*_i|\) is sufficiently small for all \(i\in T\backslash \{i_1\}\), we have \(T_2(x^*)=\{i_1\}\), \(T_1(x^*)=T\backslash \{i_1\}\) and \(\frac{|x^*_i|}{\Vert x^*\Vert _2}-\frac{|\overline{x}_i|}{\Vert \overline{x}\Vert _2}\le 0\). Hence,
In addition, since \(\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}>1\), \(\frac{2}{\alpha }\in [1,2)\), we have
By the sign-preserving property of limits, there exists an \(x^*\) such that \(\frac{2}{\alpha }-\frac{\Vert \overline{x}\Vert _1}{\Vert \overline{x}\Vert _2}-\frac{|x^*_{i_1}|}{\Vert x^*\Vert _2}+\sum \limits _{i\in T\backslash \{i_1\}}\frac{|x_i^*|}{\Vert x^*\Vert _2}<0\). Combined with (20), \(g_{T,h}(x^*)\ge g_{T,h}(x)\), that is, \(f_{T,h}(x^*)\le f_{T,h}(x)\). \(\square \)
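The step \(g_{T,h}(\overline{x})\ge g_{T,h}(x)\) in the proof above rests on the classical rearrangement inequality: with \(a_p\) playing the role of \(|h_{i_p}|\) and \(b_p\) of \(\frac{1}{\alpha }-\frac{|x_{j_p}|}{\Vert x\Vert _2}\) (an interpretation of the construction, not stated explicitly in the text),

```latex
a_1\le a_2\le\cdots\le a_k,\quad b_1\le b_2\le\cdots\le b_k
\;\Longrightarrow\;
\sum_{p=1}^{k} a_p\,b_p \;\ge\; \sum_{p=1}^{k} a_p\,b_{\sigma(p)}
\quad\text{for every permutation }\sigma\text{ of }[k],
```

so sorting both sequences the same way maximizes the sum.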
1.16 Proof of Proposition 3
Proof
For an arbitrary k-sparse vector \(x\in R^n\), set \(S=\textrm{supp}(x)\). For all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\) and \(T=\textrm{supp}(h_{\textrm{max}(k)})\), Lemma 5, Lemma 6 and the proof of case \((**)\) in Lemma 7 show that there exists an \(x^*\) with \(\textrm{supp}(x^*)=T\) and \(|T_2(x^*)|=1\) such that \(f_{S,h}(x)\ge f_{T,h}(x^*)\). Hence,
In the case of \(k=1\), \(T_2(x^*)=T\) and \(|T_2(x^*)|=1\). Set \(T_2(x^*)=\{j_0\}\) then
where we use the fact \(\Vert h_T\Vert _1=|h_{j_0}|\). Combining (15), (21) and (22), we get
From Theorem 1, the conclusion is true in the case of \(k=1\).
In the case of \(k>1\), we define a set \(\Omega (x^*)=\{x|\textrm{supp}(x)=\textrm{supp}(x^*),T_2(x)=T_2(x^*)\}\). Setting \(T_2(x^*)=\{j_0\}\), by (15) and (21) we obtain
Theorem 1 shows that this conclusion holds for \(k>1\). Therefore, we finish the proof. \(\square \)
1.17 Proof of Proposition 4
Proof
From Theorem 1 and Lemma 7, if an arbitrary k-sparse vector \(x\in R^n\) can be recovered from Ax via \(\ell _1-\alpha \ell _2\) local minimization, then \(f_{T,h}(x^*)>0\) holds for all \(h\in \textrm{ker} A\) with \(\Vert h\Vert _2=1\), where \(T=\textrm{supp}(h_{\textrm{max}(k)})\), and for arbitrary \(x^*\ne 0\) satisfying \(\textrm{supp}(x^*)=T\). Setting \(j_0=\textrm{argmin}_{i\in T}|h_i|\), we define a vector x(h) as
where \(\varepsilon \) is sufficiently small so that \(S_1(x(h))= T\backslash \{j_0\}\) and \(S_2(x(h))= \{j_0\}\). We define a set \(\Omega (x(h))=\{x|\textrm{supp}(x)=\textrm{supp}(x(h)),T_2(x)=T_2(x(h))\}\), then
The last inequality holds because \(f_{T,h}(x)>0\) for all \(x\ne 0\) with \(\textrm{supp}(x)=T\). \(\square \)
1.18 Proof of Theorem 4
Proof
Necessity is Proposition 4. Now, we prove sufficiency. First, similar to Proposition 3, for an arbitrary k-sparse vector \(x\in R^n\) with \(S=\textrm{supp}(x)\), there exists an \(x^*\) with \(\textrm{supp}(x^*)=T\) and \(|T_2(x^*)|=1\) such that \(f_{S,h}(x)\ge f_{T,h}(x^*)\). Set \(T_2(x^*)=\{j_0\}\), and we define a vector \(x^{**}\) as
We know that \(T_1(x^*)=T_1(x^{**})\) and \(T_2(x^*)=T_2(x^{**})\). Since \(\Vert x\Vert _0>1\) and \(|T_2(x^{**})|=1\), we know \(T_1(x^{**})\ne \emptyset \). From Lemma 9, we have \(f_{T,h}(x^{**})<f_{T,h}(x^*)\). Next, we define a set \(\Omega (x^{**})=\{x|\textrm{supp}(x)=\textrm{supp}(x^{**}),T_2(x)=T_2(x^{**})\}\); then
Because of Theorem 1, the result of the theorem is true. \(\square \)
Cite this article
Xie, S., Li, J. & Liang, K. k-Sparse Vector Recovery via \(\ell _1-\alpha \ell _2\) Local Minimization. J Optim Theory Appl 201, 75–102 (2024). https://doi.org/10.1007/s10957-024-02380-y