
Enhancing image steganography security via universal adversarial perturbations

Published in: Multimedia Tools and Applications

Abstract

Image steganography embeds secret information into a cover image without introducing perceptible distortions. Nevertheless, steganalyzers can potentially reveal steganography by detecting these subtle modifications, especially since the introduction of deep learning into image steganalysis. Recent research shows that adversarial examples can greatly enhance the security of image steganography schemes. In this work, Universal Adversarial Perturbations (UAPs) are applied to further improve the security of image steganography. Specifically, we introduce a generator within the framework of generative adversarial networks (GANs) that learns to generate UAPs; a single UAP can be applied to arbitrary images, without the need to design a perturbation for each individual image. The UAP is added directly to the embedding probability map of the image, which makes the generated stego image harder for a steganalyzer to detect. Experimental results show that the proposed UAPs effectively improve the security of image steganography.
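The core operation described in the abstract — adding an image-agnostic perturbation to a per-image embedding probability map before message embedding — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `apply_uap`, the `strength` scaling factor, and the use of simple clipping to keep probabilities valid are all assumptions made for the example; in the paper the perturbation is produced by a learned GAN generator.

```python
import numpy as np

def apply_uap(prob_map, uap, strength=0.1):
    """Add a universal adversarial perturbation to an embedding
    probability map and re-clip to a valid probability range.

    prob_map : (H, W) array of per-pixel embedding probabilities in [0, 1]
    uap      : (H, W) perturbation shared across all cover images
    strength : scaling factor controlling perturbation magnitude (assumed)
    """
    adjusted = prob_map + strength * uap
    # Probabilities must stay in [0, 1] for the subsequent
    # coding/embedding step that consumes the map.
    return np.clip(adjusted, 0.0, 1.0)

# Toy usage: one fixed perturbation applied to two different images' maps,
# illustrating the "universal" (image-agnostic) property.
rng = np.random.default_rng(0)
uap = rng.standard_normal((4, 4))           # image-agnostic perturbation
for _ in range(2):
    p = rng.uniform(0.0, 1.0, size=(4, 4))  # per-image probability map
    p_adv = apply_uap(p, uap)
    assert p_adv.shape == p.shape
    assert p_adv.min() >= 0.0 and p_adv.max() <= 1.0
```

Because the same `uap` is reused for every cover image, no per-image optimization is needed at embedding time, which is the practical advantage of UAPs over image-specific adversarial perturbations.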


Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (61972143, 61972142).

Author information


Corresponding authors

Correspondence to Xin Liu or Gaobo Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.



About this article


Cite this article

Liu, L., Liu, X., Wang, D. et al. Enhancing image steganography security via universal adversarial perturbations. Multimed Tools Appl (2024). https://doi.org/10.1007/s11042-024-19122-x
