
Emotion-Aware Music Driven Movie Montage

Regular Paper · Journal of Computer Science and Technology

Abstract

In this paper, we present Emotion-Aware Music Driven Movie Montage, a novel paradigm for the challenging task of generating movie montages. Specifically, given a movie and a piece of music as guidance, our method aims to generate a montage from the movie that is emotionally consistent with the music. Unlike previous work such as video summarization, this task requires not only video content understanding but also emotion analysis of both the input movie and the music. To this end, we propose a two-stage framework consisting of a learning-based module that predicts emotion similarity and an optimization-based module that selects and composes candidate movie shots. The core of our method is to align music clips and movie shots in a multi-modal latent space via contrastive learning and to estimate their emotional similarity in that space. Montage generation is then modeled as a joint optimization of emotion similarity and additional constraints such as scene-level story completeness and shot-level rhythm synchronization. Qualitative and quantitative evaluations demonstrate that our method generates emotionally consistent montages and outperforms alternative baselines.
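
The first stage can be pictured as CLIP-style contrastive alignment across modalities. The sketch below is a minimal illustration under stated assumptions: the feature dimensions, the simple linear projection heads, and the `EmotionAligner` name are hypothetical and not the authors' implementation; it only demonstrates the symmetric InfoNCE objective that pulls matched (music clip, movie shot) pairs together in a shared latent space.

```python
# Minimal sketch of CLIP-style contrastive alignment between music clips
# and movie shots. All names, dimensions, and the linear projection heads
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionAligner(nn.Module):
    def __init__(self, audio_dim=1024, video_dim=2048, embed_dim=256):
        super().__init__()
        # Projection heads mapping each modality into the shared space.
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.video_proj = nn.Linear(video_dim, embed_dim)
        # Learnable temperature, initialized to ln(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, audio_feats, video_feats):
        # L2-normalize so the dot product is a cosine similarity.
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        logits = self.logit_scale.exp() * a @ v.t()  # (B, B) similarity matrix
        targets = torch.arange(a.size(0), device=a.device)
        # Symmetric InfoNCE: matched pairs lie on the diagonal.
        loss = (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
        return loss, logits
```

The symmetric loss treats both retrieval directions (music-to-shot and shot-to-music) equally, the usual choice when either modality may serve as the query at inference time.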
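
The second stage, as described, jointly optimizes emotion similarity with story-completeness and rhythm constraints. The toy sketch below uses a greedy stand-in for that joint optimization; the weights, the scene-continuity bonus, and the beat-duration penalty are illustrative assumptions rather than the paper's actual objective or solver.

```python
# Toy sketch of montage composition: score each candidate shot by emotion
# similarity, a scene-continuity bonus (story completeness), and a rhythm
# penalty (shot duration vs. beat-derived target duration). The greedy
# strategy and weights are assumptions, not the paper's formulation.
import numpy as np

def select_shots(sim, shot_scene, shot_len, beat_len, w_scene=0.3, w_rhythm=0.2):
    """sim: (num_music_clips, num_shots) emotion-similarity matrix.
    shot_scene: scene id per shot; shot_len: duration (s) per shot;
    beat_len: target duration (s) per music clip, from beat tracking."""
    chosen, prev_scene = [], None
    for i in range(sim.shape[0]):
        score = sim[i].astype(float).copy()
        if prev_scene is not None:
            # Scene-level completeness: mildly prefer staying in one scene.
            score += w_scene * (shot_scene == prev_scene)
        # Shot-level rhythm: penalize shot/beat duration mismatch.
        score -= w_rhythm * np.abs(shot_len - beat_len[i])
        # Do not reuse shots already placed in the montage.
        score[chosen] = -np.inf
        j = int(np.argmax(score))
        chosen.append(j)
        prev_scene = shot_scene[j]
    return chosen
```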



Author information


Corresponding author

Correspondence to Wei-Ming Dong.

Supplementary Information

ESM 1 (PDF 1130 kb)


About this article


Cite this article

Liu, WQ., Lin, MX., Huang, HB. et al. Emotion-Aware Music Driven Movie Montage. J. Comput. Sci. Technol. 38, 540–553 (2023). https://doi.org/10.1007/s11390-023-3064-6

