GSNet: Generating 3D garment animation via graph skinning network
Graphical Models (IF 1.7), Pub Date: 2023-09-06, DOI: 10.1016/j.gmod.2023.101197
Tao Peng, Jiewen Kuang, Jinxing Liang, Xinrong Hu, Jiazhe Miao, Ping Zhu, Lijun Li, Feng Yu, Minghua Jiang

The goal of digital garment animation is to produce clothed-body animations that are as realistic as possible. Methods that assume the garment shares the body's topology can produce realistic results, but they apply only to garments with that topology. Generalization-based approaches extend to different garment templates, yet their results remain far from realistic. We propose GSNet, a learning-based model that generates realistic garment animations and applies to garment types whose topology does not match the body's. We encode garment templates and body motions into a latent space and use graph convolution to transfer body-motion information to the garment template, driving the garment's motion. Our model accounts for temporal dependency and imposes reliable physical constraints that make the generated animations more realistic. Qualitative and quantitative experiments show that our approach achieves state-of-the-art 3D garment animation performance.
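The abstract's core mechanism is transferring body-motion features to garment vertices through graph convolution. The following is a minimal illustrative sketch of that idea, not the paper's implementation: a single mean-aggregation graph-convolution step over a hypothetical body-to-garment graph, where garment vertices aggregate motion features from the body vertices they are connected to. All names, the adjacency structure, and the mixing rule are assumptions made for illustration.

```python
# Hedged sketch: one graph-convolution step that propagates body motion
# features to garment vertices over an assumed body-to-garment graph.
# This is NOT GSNet's actual network, only an illustration of the idea.

def graph_conv(vertex_feats, edges, weight):
    """One mean-aggregation graph-convolution step.

    vertex_feats: {vertex_id: [float, ...]} per-vertex feature vectors
                  (e.g. body joints carry motion, garment vertices start empty)
    edges:        {vertex_id: [neighbor_ids]} incoming edges per vertex
    weight:       scalar in [0, 1] mixing self features with the neighbor mean
    """
    out = {}
    for v, feat in vertex_feats.items():
        nbrs = edges.get(v, [])
        if nbrs:
            # Mean of neighbor features, dimension by dimension.
            agg = [sum(vertex_feats[n][i] for n in nbrs) / len(nbrs)
                   for i in range(len(feat))]
        else:
            agg = [0.0] * len(feat)
        # Blend the vertex's own features with the aggregated neighbors'.
        out[v] = [(1 - weight) * f + weight * a for f, a in zip(feat, agg)]
    return out


# Toy usage: two body joints ("j0", "j1") carry 2-D motion features; one
# garment vertex ("g0") receives a blend of their mean.
feats = {"j0": [1.0, 0.0], "j1": [0.0, 2.0], "g0": [0.0, 0.0]}
edges = {"g0": ["j0", "j1"]}
result = graph_conv(feats, edges, weight=0.5)
# result["g0"] → [0.25, 0.5]: half of the neighbor mean [0.5, 1.0]
```

Stacking several such steps (with learned weights instead of a scalar) is the standard way a graph network diffuses pose information across a mesh whose connectivity differs from the body's, which is the topology-mismatch setting the abstract targets.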




Updated: 2023-09-06