Abstract
Frame deletion detection plays an essential role in digital forensics. The existing literature suggests that detection is accomplished by revealing the continuity-attenuation traces that frame deletion leaves in the temporal direction of the video content. In this work, we propose a new network architecture, one module of which serves as a detector that captures the spatiotemporal continuity-attenuation features in forged videos. First, through a study of the statistical characteristics of the motion trajectories of moving objects, we reveal a new continuity-attenuation trace, on the basis of which the inter-frame residual feature is selected for continuity-attenuation tracking. Second, to capture the continuity-attenuation phenomenon, we design a network framework consisting of three components: a detector module, a reference module, and a decision module. The three modules work cooperatively under a contrastive learning strategy, making the detector more sensitive to the forensic trace. Experimental results show that the detection rate reaches 93.85%, indicating the effectiveness of the proposed deep learning-based detection strategy.
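The inter-frame residual feature mentioned above can be illustrated with a minimal sketch. This is not the authors' pipeline; the function names and the use of a lag-1 correlation statistic as the continuity signal are illustrative assumptions only:

```python
import numpy as np

def interframe_residuals(frames):
    """Absolute per-pixel residual between consecutive frames.

    frames: array of shape (T, H, W), grayscale intensities.
    Returns an array of shape (T-1, H, W).
    """
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0))

def residual_energy(frames):
    """Mean residual energy per frame pair: a 1-D temporal continuity signal."""
    return interframe_residuals(frames).mean(axis=(1, 2))

def lag1_correlation(signal):
    """Lag-1 Pearson correlation of the continuity signal.

    A frame deletion tends to introduce an outlier in the residual
    energy, weakening its temporal correlation; a low value is a
    coarse hint of attenuated continuity.
    """
    s = np.asarray(signal, dtype=np.float64)
    return float(np.corrcoef(s[:-1], s[1:])[0, 1])
```

In practice the paper's network learns such features rather than computing a single hand-crafted statistic; the sketch only shows what "inter-frame residual" means operationally.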
Data availability
The datasets used in our research are available.
Funding
No funding was received for this work.
Author information
Authors and Affiliations
Contributions
LS designed the network and wrote the main manuscript. HH guided the writing of the manuscript and revised its content.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Human and animal rights
This study did not involve human participants or animals.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
1.1 Proof of Eq. (4)
On the basis of the first-order Markovian hypothesis, the variance of motion intensity is constant. \({\text{D}} \left( {u_{t}^{x} } \right)\) is expanded as follows:
Since the random interference is assumed to be white noise, the following applies:
According to Eq. (30), \({\text{D}} \left[ {f_{x} \left( t \right)} \right]\) is obtained as follows:
Similarly, \({\text{COV}} \left[ {u_{t}^{x} ,u_{t + 1}^{x} } \right]\) can be calculated as follows:
which results in:
Further, the correlation coefficient \(\rho_{x}^{t}\) can be obtained as follows:
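The displayed equations of this proof are not reproduced in this version. As an illustrative sketch only, a first-order autoregressive model with a white-noise disturbance recovers a one-step correlation of this form; the symbols \(\rho\) and \(\sigma_\varepsilon^2\) are assumptions for exposition, not taken from the paper's Eq. (4):

```latex
% Illustrative AR(1) sketch; symbols are assumptions, not the paper's.
f_x(t+1) = \rho\, f_x(t) + \varepsilon_t, \qquad
\varepsilon_t \ \text{white noise},\ \ \mathrm{D}[\varepsilon_t] = \sigma_\varepsilon^2, \\
\mathrm{D}[f_x(t)] = \frac{\sigma_\varepsilon^2}{1-\rho^2}
\quad \text{(constant variance, consistent with the Markovian hypothesis)}, \\
\mathrm{COV}\!\left[f_x(t),\, f_x(t+1)\right] = \rho\, \mathrm{D}[f_x(t)]
\;\Longrightarrow\;
\rho_x^{t} = \frac{\mathrm{COV}\!\left[f_x(t),\, f_x(t+1)\right]}{\mathrm{D}[f_x(t)]} = \rho .
```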
1.2 Proof of Eq. (10)
According to Eq. (7), \(f_{x} \left( {t + k} \right)\) is expressed as follows:
Therefore, \({\text{COV}} \left[ {u_{t}^{x} ,u_{t + k}^{x} } \right]\) is calculated as follows:
According to Eq. (30), \({\text{COV}} \left[ {u_{t}^{x} ,u_{t + k}^{x} } \right]\) results in:
Then, the k-step correlation coefficient is
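As the displayed equations are not reproduced here, the following is an illustrative sketch under a first-order autoregressive assumption; the symbol \(\rho\) and the notation \(\rho_x^{t,k}\) are expository assumptions, not taken from the paper's Eq. (7) or Eq. (10). Iterating the one-step recursion gives the k-step correlation:

```latex
% Illustrative sketch; symbols are assumptions, not the paper's.
f_x(t+k) = \rho^{k} f_x(t) + \sum_{j=0}^{k-1} \rho^{j}\, \varepsilon_{t+k-1-j}, \\
\mathrm{COV}\!\left[f_x(t),\, f_x(t+k)\right] = \rho^{k}\, \mathrm{D}[f_x(t)]
\;\Longrightarrow\;
\rho_x^{t,k} = \rho^{k},
```

i.e., the correlation decays geometrically with the lag, which is what makes an abrupt break in continuity (such as a frame deletion) statistically visible.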
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Li, S., Huo, H. Continuity-attenuation captured network for frame deletion detection. SIViP 18, 3285–3297 (2024). https://doi.org/10.1007/s11760-023-02990-5
Received:
Revised:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s11760-023-02990-5