
ViT-PGC: vision transformer for pedestrian gender classification on small-size dataset

  • Short Paper
  • Published in: Pattern Analysis and Applications

Abstract

Pedestrian gender classification (PGC) is a key task in full-body pedestrian image analysis and has become important in applications such as content-based image retrieval, visual surveillance, smart cities, and demographic collection. Over the last decade, convolutional neural networks (CNNs) have shown great potential and become a reliable choice for vision tasks such as object classification, recognition, and detection. However, a CNN's limited local receptive field prevents it from learning global context. In contrast, a vision transformer (ViT) is an attractive alternative because its self-attention mechanism attends to different patches of an input image. In this work, a ViT model equipped with two generic and effective modules, locality self-attention (LSA) and shifted patch tokenization (SPT), is explored for the PGC task. With these modules, the ViT successfully learns from scratch even on small-size (SS) datasets, overcoming the lack of locality inductive bias. Through extensive experimentation, we found that the proposed ViT model produces better overall and mean accuracies, confirming that it outperforms state-of-the-art (SOTA) PGC methods.
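The two modules named above follow the small-dataset ViT recipe of Lee et al. (arXiv:2112.13492): SPT concatenates half-patch diagonal shifts of the image with the original before splitting into patch tokens, so each token embeds more local structure; LSA replaces the fixed sqrt(d) attention scaling with a learnable temperature and masks each token's self-affinity so attention sharpens over the remaining tokens. The following is a minimal NumPy sketch of both ideas, not the paper's implementation: `np.roll` stands in for the zero-padded shifted crops, and `temperature` is passed as a fixed value where the model would learn it.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def locality_self_attention(q, k, v, temperature):
    """LSA sketch: learnable temperature + diagonal masking.

    q, k, v: (num_tokens, dim) arrays for a single head.
    """
    scores = q @ k.T / temperature          # temperature replaces fixed sqrt(d)
    np.fill_diagonal(scores, -np.inf)       # mask self-token affinities
    return softmax(scores, axis=-1) @ v     # attention spreads over other tokens

def shifted_patch_tokenization(img, patch=4):
    """SPT sketch: stack four half-patch diagonal shifts with the image,
    then split the result into flattened patch tokens.

    img: (H, W, C) with H, W divisible by `patch`.
    """
    H, W, C = img.shape
    s = patch // 2
    shifts = [(s, s), (s, -s), (-s, s), (-s, -s)]
    feats = [img] + [np.roll(img, sh, axis=(0, 1)) for sh in shifts]
    x = np.concatenate(feats, axis=-1)      # (H, W, 5C)
    tokens = x.reshape(H // patch, patch, W // patch, patch, 5 * C)
    tokens = tokens.transpose(0, 2, 1, 3, 4)
    return tokens.reshape(-1, patch * patch * 5 * C)
```

Because each token now covers five views of its spatial neighborhood, the token embedding carries locality information that a plain ViT patch embedding lacks, which is what lets training from scratch work on small datasets.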


Data availability

This research utilizes publicly available standard datasets, namely PETA, MIT, and a cross-dataset benchmark.



Author information


Corresponding author

Correspondence to Farhat Abbas.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Abbas, F., Yasmin, M., Fayyaz, M. et al. ViT-PGC: vision transformer for pedestrian gender classification on small-size dataset. Pattern Anal Applic 26, 1805–1819 (2023). https://doi.org/10.1007/s10044-023-01196-2

