MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling

NeurIPS 2024

Weihao Yuan1, Weichao Shen1, Yisheng He1, Yuan Dong1, Xiaodong Gu1, Zilong Dong1, Liefeng Bo1, Qixing Huang2

1Alibaba Group
2The University of Texas at Austin

Abstract

Motion generation from discrete quantization offers many advantages over continuous regression, but at the cost of inevitable approximation errors. Previous methods usually quantize the entire body pose into one code, which not only makes it difficult to encode all joints within one vector but also loses the spatial relationships between different joints. In contrast, in this work we quantize each individual joint into one vector, which i) simplifies the quantization process, as the complexity associated with a single joint is markedly lower than that of the entire pose; ii) maintains a spatial-temporal structure that preserves both the spatial relationships among joints and the temporal movement patterns; iii) yields a 2D token map, which enables various 2D operations widely used on 2D images. Building on this 2D motion quantization, we construct a spatial-temporal modeling framework, in which a 2D joint VQ-VAE, a temporal-spatial 2D masking technique, and spatial-temporal 2D attention are proposed to exploit the spatial-temporal signals among the 2D tokens. Extensive experiments demonstrate that our method significantly outperforms previous methods across different datasets, with a 26.6% decrease in FID on HumanML3D and a 29.9% decrease on KIT-ML.
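To make the 2D token map concrete, the following is a minimal sketch of per-joint quantization: each (frame, joint) feature vector is mapped to its nearest codebook entry, producing a time-by-joint grid of code indices. All shapes and the codebook size here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Per-joint quantization into a 2D (time x joint) token map.
# T frames, J joints, D-dim features, K codebook entries (all assumed values).
rng = np.random.default_rng(0)
T, J, D, K = 8, 22, 4, 64

features = rng.normal(size=(T, J, D))   # per-joint latent features
codebook = rng.normal(size=(K, D))      # shared codebook of K vectors

# Nearest-neighbour lookup: each (frame, joint) feature -> one code index,
# yielding a 2D token map that keeps both temporal and spatial structure.
dists = np.linalg.norm(features[:, :, None, :] - codebook[None, None, :, :], axis=-1)
token_map = dists.argmin(axis=-1)       # shape (T, J)

print(token_map.shape)  # (8, 22)
```

Because the result is a regular 2D grid rather than a 1D sequence of pose codes, standard image-style operations (2D masking, 2D attention, 2D positional encodings) apply directly to it.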

Methodology

Framework overview. (a) In motion quantization, human motion is quantized into a spatial-temporal 2D token map by a joint VQ-VAE. (b) In motion generation, a temporal-spatial 2D masking is performed to obtain a masked map, and then a spatial-temporal 2D transformer is designed to infer the masked tokens.


Spatial-temporal 2D Joint Quantization of Motion:

The structure of our spatial-temporal 2D Joint VQ-VAE for motion quantization.
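As a hedged illustration of the vector-quantization step inside such a joint VQ-VAE, the sketch below elides the encoder/decoder and shows only the codebook lookup plus the two standard VQ losses (codebook and commitment). The shapes, codebook size, and the 0.25 commitment weight are common defaults assumed for illustration, not values taken from the paper.

```python
import numpy as np

# Quantization step of a joint VQ-VAE (encoder/decoder omitted).
rng = np.random.default_rng(1)

z_e = rng.normal(size=(8, 22, 4))      # encoder output: (frames, joints, dim)
codebook = rng.normal(size=(64, 4))    # K=64 codes of dim 4 (assumed sizes)

# Nearest code per (frame, joint) latent, then table lookup.
idx = np.linalg.norm(z_e[..., None, :] - codebook, axis=-1).argmin(-1)
z_q = codebook[idx]                    # quantized latents, same shape as z_e

codebook_loss = np.mean((z_q - z_e) ** 2)           # pulls codes toward encodings
commitment_loss = 0.25 * np.mean((z_e - z_q) ** 2)  # keeps encoder near its code

# During training, gradients flow through the quantizer via the
# straight-through estimator: z_q = z_e + stop_gradient(z_q - z_e).
print(z_q.shape, idx.shape)
```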


Spatial-temporal 2D Motion Generation:

The temporal-spatial masking strategy (a) and the spatial-temporal attention (b) for motion generation.
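A temporal-then-spatial masking pass over the 2D token map can be sketched as follows: first mask entire frames (rows of the time axis), then mask individual joints within the remaining frames. The mask ratios, sentinel value, and map size are illustrative assumptions.

```python
import numpy as np

# Temporal-spatial 2D masking on a (frames x joints) token map.
rng = np.random.default_rng(2)

T, J = 8, 22
token_map = rng.integers(0, 64, size=(T, J))
MASK = -1                                  # sentinel id for masked tokens

masked = token_map.copy()

# Temporal masking: drop whole frames (assumed 50% ratio).
t_idx = rng.choice(T, size=int(0.5 * T), replace=False)
masked[t_idx, :] = MASK

# Spatial masking: drop random joints in the surviving frames (assumed 20%).
keep = np.setdiff1d(np.arange(T), t_idx)
for t in keep:
    j_idx = rng.choice(J, size=int(0.2 * J), replace=False)
    masked[t, j_idx] = MASK

print((masked == MASK).mean())             # overall fraction of masked tokens
```

The transformer is then trained to infer the `MASK` positions from the visible tokens, attending along both the temporal and the spatial axis of the map.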

Experiments

Quantitative Results:


Qualitative Results:

BibTeX

@article{yuan2024mogents,
    title={MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling},
    author={Weihao Yuan and Weichao Shen and Yisheng He and Yuan Dong and Xiaodong Gu and Zilong Dong and Liefeng Bo and Qixing Huang},
    journal={Neural Information Processing Systems (NeurIPS)},
    year={2024},
}