
Relaxation Quadratic Approximation Greedy Pursuit Method Based on Sparse Learning

  • Shihai Li and Changfeng Ma

Abstract

High-performance sparse models are essential for processing high-dimensional data. Building on the quadratic approximation greedy pursuit (QAGP) method, we exploit the information contained in the quadratic lower bound of its approximate function to obtain the relaxation quadratic approximation greedy pursuit (RQAGP) method. RQAGP constructs two inexact quadratic approximation functions using the m-strong convexity and L-smoothness of the objective function, and then solves each approximation iteratively with the iterative hard thresholding (IHT) method to obtain a solution of the original problem. We provide a convergence analysis and evaluate the method on the sparse logistic regression model with synthetic data and real data sets. The results show that the RQAGP method is effective.
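
To make the iteration concrete, the following is a minimal Python sketch of the construction described above, assuming a user-supplied m-strongly convex and L-smooth objective f with gradient grad_f. It is not the authors' implementation: the function names (hard_threshold, iht_on_surrogate, rqagp_sketch) and the rule for combining the two surrogate solutions (keeping whichever candidate has the smaller objective value) are illustrative assumptions; only the use of the curvatures m and L and the IHT inner solver follow the description in the abstract.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    if k > 0:
        keep = np.argsort(np.abs(x))[-k:]
        out[keep] = x[keep]
    return out

def iht_on_surrogate(g, x_t, alpha, k, steps=20):
    """Approximately minimize the quadratic surrogate
        q(y) = f(x_t) + <g, y - x_t> + (alpha / 2) * ||y - x_t||^2
    subject to ||y||_0 <= k by iterative hard thresholding (IHT) with
    step size 1 / alpha.  (With this step size the first iteration already
    reaches the hard-thresholded minimizer of q; the loop is kept only to
    illustrate the iterative inner solve.)"""
    y = x_t.copy()
    for _ in range(steps):
        grad_q = g + alpha * (y - x_t)          # gradient of the surrogate at y
        y = hard_threshold(y - grad_q / alpha, k)
    return y

def rqagp_sketch(f, grad_f, x0, k, m, L, outer_iters=50):
    """Illustrative outer loop (an assumption, not the paper's algorithm):
    at each iterate build two inexact quadratic approximations of the
    m-strongly convex, L-smooth objective f (curvatures m and L, i.e. the
    lower- and upper-bound surrogates), solve each with IHT, and keep the
    candidate with the smaller objective value."""
    x = hard_threshold(x0, k)
    for _ in range(outer_iters):
        g = grad_f(x)
        candidates = [iht_on_surrogate(g, x, alpha, k) for alpha in (m, L)]
        x = min(candidates, key=f)
    return x
```

Taking the better of the two surrogate minimizers is only one plausible reading of how the two inexact approximations are combined; the paper's own relaxation rule may differ.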

MSC 2020: 68T05; 68Q32

Award Identifier / Grant number: 12371378

Award Identifier / Grant number: 2020J05034

Funding statement: This research is supported by the Natural Science Foundation of Fujian Province, China (Grant No. 2020J05034) and the National Natural Science Foundation of China (Grant No. 12371378).

References

[1] N. Agarwal, B. Bullins and E. Hazan, Second-order stochastic optimization in linear time, Stat 1050 (2016), Paper No. 15.

[2] Y. Arjevani, S. Shalev-Shwartz and O. Shamir, On lower and upper bounds in smooth and strongly convex optimization, J. Mach. Learn. Res. 17 (2016), 4303–4353.

[3] S. Bahmani, B. Raj and P. T. Boufounos, Greedy sparsity-constrained optimization, J. Mach. Learn. Res. 14 (2013), 807–841.

[4] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, 2006.

[5] T. Blumensath and M. E. Davies, Iterative hard thresholding for compressed sensing, Appl. Comput. Harmon. Anal. 27 (2009), no. 3, 265–274. doi: 10.1016/j.acha.2009.04.002.

[6] J.-H. Chen and Q.-Q. Gu, Fast Newton hard thresholding pursuit for sparsity constrained nonconvex optimization, Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, New York (2017), 757–766. doi: 10.1145/3097983.3098165.

[7] Y.-Y. Chen and Y. Xia, Iterative sparse and deep learning for accurate diagnosis of Alzheimer’s disease, Pattern Recognit. 116 (2021), Article ID 107944. doi: 10.1016/j.patcog.2021.107944.

[8] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), no. 4, 1289–1306. doi: 10.1109/TIT.2006.871582.

[9] S. Foucart, Hard thresholding pursuit: An algorithm for compressive sensing, SIAM J. Numer. Anal. 49 (2011), no. 6, 2543–2563. doi: 10.1137/100806278.

[10] F.-F. Ji, H. Shuai and X.-T. Yuan, Quadratic approximation greedy pursuit for cardinality-constrained sparse learning, Pattern Recognition and Computer Vision: Second Chinese Conference, Springer, Cham (2019), 337–348. doi: 10.1007/978-3-030-31654-9_29.

[11] F.-F. Ji, H. Shuai and X.-T. Yuan, A globally convergent approximate Newton method for non-convex sparse learning, Pattern Recognit. 126 (2022), Article ID 108560. doi: 10.1016/j.patcog.2022.108560.

[12] T. Joachims, Training linear SVMs in linear time, Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, New York (2006), 217–226. doi: 10.1145/1150402.1150429.

[13] R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, Advances in Neural Information Processing Systems, ACM, New York (2013), 315–323.

[14] D. D. Lewis, Y.-M. Yang, T. G. Rose and F. Li, RCV1: A new benchmark collection for text categorization research, J. Mach. Learn. Res. 5 (2004), 361–397.

[15] X.-G. Li, T. Zhao, R. Arora, H. Liu and J. Haupt, Stochastic variance reduced optimization for nonconvex sparse learning, Proceedings of the 33rd International Conference on Machine Learning, PMLR, New York (2016), 917–925.

[16] B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24 (1995), no. 2, 227–234. doi: 10.1137/S0097539792240406.

[17] D. Needell and J. A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal. 26 (2009), no. 3, 301–321. doi: 10.1016/j.acha.2008.07.002.

[18] J. Shen and P. Li, A tight bound of hard thresholding, J. Mach. Learn. Res. 18 (2017), 2807–2832.

[19] W.-C. Xie, X. Jia, L.-L. Shen and M. Yang, Sparse deep feature learning for facial expression recognition, Pattern Recognit. 96 (2019), Article ID 106966. doi: 10.1016/j.patcog.2019.106966.

[20] X.-T. Yuan, P. Li and T. Zhang, Gradient hard thresholding pursuit, J. Mach. Learn. Res. 18 (2017), 1–43.

[21] X.-T. Yuan, P. Li and T. Zhang, Gradient hard thresholding pursuit for sparsity-constrained optimization, International Conference on Machine Learning, PMLR, New York (2014), 127–135.

[22] X.-T. Yuan and Q.-S. Liu, Newton greedy pursuit: A quadratic approximation method for sparsity-constrained optimization, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE Press, Piscataway (2014), 4122–4129. doi: 10.1109/CVPR.2014.525.

[23] X.-T. Yuan and Q.-S. Liu, Newton-type greedy selection methods for ℓ0-constrained minimization, IEEE Trans. Pattern Anal. Mach. Intell. 39 (2017), no. 12, 2437–2450. doi: 10.1109/TPAMI.2017.2651813.

[24] T. Zhang, Adaptive forward-backward greedy algorithm for sparse learning with linear models, Proceedings of the 21st International Conference on Neural Information Processing Systems, ACM, New York (2008), 1921–1928.

Received: 2022-12-16
Revised: 2023-05-24
Accepted: 2023-08-13
Published Online: 2023-10-04

© 2023 Walter de Gruyter GmbH, Berlin/Boston
