
Motion-Inspired Real-Time Garment Synthesis with Temporal-Consistency

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Synthesizing garment dynamics from body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of cloth kinetics, which is time-consuming, hard to implement, and difficult to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from that of the body. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics. Frame-level attention is employed to capture the dependency between garments and body motions. Moreover, a post-processing procedure performs penetration removal and auto-texturing, producing textured clothing animation that is collision-free and temporally consistent. We evaluate the proposed workflow quantitatively and qualitatively from different aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of physics-based simulation while running 1000 times faster. In addition, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be made publicly available.
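
As a concrete illustration of the mapping described above, the following is a minimal PyTorch sketch of a frame-level attention model that maps a body-motion sequence to per-frame garment vertex displacements. The module name MotionToGarmentNet, the pose and feature dimensions, and the output resolution are illustrative assumptions; this is a sketch of the general idea, not the authors' actual network, which additionally includes the penetration-removal and auto-texturing post-processing described above.

    # Minimal sketch (assumptions, not the authors' implementation): a transformer
    # encoder with frame-level self-attention mapping body-motion parameters to
    # per-frame garment vertex displacements. All dimensions are illustrative.
    import torch
    import torch.nn as nn


    class MotionToGarmentNet(nn.Module):  # hypothetical name
        def __init__(self, pose_dim=72, d_model=256, n_heads=8, n_layers=4,
                     num_garment_verts=5000):
            super().__init__()
            self.embed = nn.Linear(pose_dim, d_model)  # per-frame motion embedding
            enc_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
            self.decode = nn.Linear(d_model, num_garment_verts * 3)  # vertex offsets

        def forward(self, poses):
            # poses: (batch, frames, pose_dim) body-motion parameters (e.g., SMPL pose)
            h = self.embed(poses)
            h = self.encoder(h)   # frame-level attention across the motion sequence
            out = self.decode(h)  # (batch, frames, num_garment_verts * 3)
            return out.view(poses.shape[0], poses.shape[1], -1, 3)


    if __name__ == "__main__":
        net = MotionToGarmentNet()
        motion = torch.randn(1, 30, 72)  # 30 frames of pose parameters
        print(net(motion).shape)         # torch.Size([1, 30, 5000, 3])

Because attention is applied across the frames of the motion sequence, each predicted garment frame can depend on the surrounding motion context, which is what provides temporal coherence in the synthesized clothing.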



Author information

Corresponding author: Min Shi.

Supplementary Information

ESM 1 (PDF 249 kb)

About this article

Cite this article

Wei, YK., Shi, M., Feng, WK. et al. Motion-Inspired Real-Time Garment Synthesis with Temporal-Consistency. J. Comput. Sci. Technol. 38, 1356–1368 (2023). https://doi.org/10.1007/s11390-022-1887-1

