Motion-Inspired Real-Time Garment Synthesis with Temporal-Consistency
Journal of Computer Science and Technology (IF 1.9). Pub Date: 2023-12-01. DOI: 10.1007/s11390-022-1887-1
Yu-Kun Wei , Min Shi , Wen-Ke Feng , Deng-Ming Zhu , Tian-Lu Mao

Abstract

Synthesizing garment dynamics from body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of cloth kinetics, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from that of the body. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network that learns the mapping from body motions to garment dynamics. Frame-level attention is employed to capture the dependency between garments and body motions. A post-processing procedure then performs penetration removal and auto-texturing, yielding textured clothing animation that is collision-free and temporally consistent. We evaluate the proposed workflow quantitatively and qualitatively from different aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of physics-based simulation while running 1,000 times faster. Moreover, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be made publicly available soon.
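The core idea of frame-level attention, as described in the abstract, is to let every frame of a motion clip attend to every other frame so that the predicted garment state reflects temporal context rather than a single pose. The paper does not publish implementation details here, so the following is only an illustrative NumPy sketch of scaled dot-product attention across frames; the feature dimensions, projection matrices `Wq`/`Wk`/`Wv`, and the idea of decoding the output into per-frame garment vertex offsets are all assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frame_level_attention(motion, Wq, Wk, Wv):
    """Scaled dot-product attention across the T frames of a motion clip.

    motion: (T, D) array of per-frame body-motion features (hypothetical
    encoding, e.g. joint rotations). Returns (T, Dk) per-frame context
    features that mix information from all frames; a decoder could then
    map these to garment dynamics.
    """
    Q, K, V = motion @ Wq, motion @ Wk, motion @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) frame-to-frame affinities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V

T, D, Dk = 8, 16, 8                          # toy sizes for illustration
motion = rng.normal(size=(T, D))
Wq, Wk, Wv = (rng.normal(size=(D, Dk)) for _ in range(3))
out = frame_level_attention(motion, Wq, Wk, Wv)
```

Because every output frame is a convex combination of all frames' value vectors, the prediction for one frame can depend on motion several frames away, which is what gives the synthesized garments their temporal coherence.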


