
3DEmo: For Portrait Emotion Recognition with New Dataset

Published: 23 February 2024

Abstract

Emotional Expression Recognition (EER) and Facial Expression Recognition (FER) are active research areas in affective computing, the field that studies human emotion, its recognition, and sentiment analysis. The main objective of this research is to develop algorithms that can accurately interpret and estimate human emotions from portrait images. The emotions depicted in a portrait can reflect various factors, such as psychological and physiological states, the artist’s emotional responses, social and environmental aspects, and the period in which the painting was created. This task is challenging because (i) portraits are often depicted in an artistic or stylized manner rather than realistically or naturally, (ii) the texture and color features extracted from natural faces and from paintings differ, which reduces the accuracy of existing emotion recognition algorithms, and (iii) this is a new research area for which practically no facial emotion estimation models or datasets for visual-art portraits exist.

To address these challenges, we need a new class of tools and a database specifically tailored to the analysis of portrait images. This study develops methods for emotion recognition in art portraits and introduces a new digital portrait dataset containing 927 images. The proposed model is based on (i) a 3-dimensional estimation of emotions learned by a deep neural network and (ii) a novel deep learning module (3DEmo) that can be easily integrated into existing FER models. To evaluate the effectiveness of the developed models, we also tested their robustness on a facial emotion recognition dataset. Extensive simulation results show that the presented approach outperforms established methods. We expect that this dataset and the new tools developed here will encourage further research on recognizing emotions in portrait paintings and on predicting artists’ emotions during the period in which a work was painted, based on the artwork itself.
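The abstract does not disclose implementation details, so the following is only a minimal sketch of what a pluggable 3-D emotion head might look like, assuming the three dimensions form a continuous valence-arousal-dominance-style space and that the module attaches to the pooled features of an existing FER backbone. All names here (Emo3DHead, feat_dim, the stand-in backbone) are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn

class Emo3DHead(nn.Module):
    """Hypothetical pluggable head mapping backbone features to a
    3-D continuous emotion estimate (e.g., valence/arousal/dominance).
    A sketch based on the abstract, not the published 3DEmo module."""

    def __init__(self, feat_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),  # one output per emotion dimension
            nn.Tanh(),             # bound each dimension to [-1, 1]
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)

# Example: attach the head to any backbone that yields pooled features.
backbone = nn.Sequential(          # stand-in for a pretrained FER backbone
    nn.Conv2d(3, 512, kernel_size=7, stride=4),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
model = nn.Sequential(backbone, Emo3DHead(feat_dim=512))
out = model(torch.randn(4, 3, 224, 224))  # -> shape (4, 3)
```

Trained with a regression loss (e.g., MSE against annotated 3-D emotion coordinates), such a head can be bolted onto an existing FER model with minimal changes, which is consistent with the abstract's claim that the module integrates easily into existing architectures.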



• Published in

  Journal on Computing and Cultural Heritage, Volume 17, Issue 2 (June 2024), 355 pages
  ISSN: 1556-4673  EISSN: 1556-4711
  DOI: 10.1145/3613557


        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 23 February 2024
        • Online AM: 2 November 2023
        • Accepted: 28 August 2023
        • Revised: 10 July 2023
        • Received: 9 April 2023
Published in JOCCH, Volume 17, Issue 2
