HoloGest: Decoupled Diffusion and Motion Priors for Generating Holistically Expressive Co-speech Gestures

Tencent AI Lab

3DV 2025 😎

HoloGest introduces a decoupled architecture that separately models diffusion-based gesture semantics and physics-constrained motion priors, enabling holistic co-speech gesture generation that couples linguistic intent with biomechanical plausibility.

Abstract

Animating virtual characters with holistic co-speech gestures is a challenging but critical task. Previous systems have focused primarily on the weak correlation between audio and gestures, leading to physically unnatural results that degrade the user experience. To address this problem, we introduce HoloGest, a novel neural network framework based on decoupled diffusion and motion priors for automatically generating high-quality, expressive co-speech gestures. Our system leverages large-scale human motion datasets to learn a robust prior with low audio dependency and high motion reliance, enabling stable global motion and detailed finger movements. To improve the generation efficiency of diffusion-based models, we integrate implicit joint constraints with explicit geometric and conditional constraints, capturing the complex motion distributions between large sampling strides. This integration significantly increases generation speed while maintaining high-quality motion. Furthermore, we design a shared embedding space for gesture-transcript alignment, enabling the generation of semantically correct gesture actions. Extensive experiments and user feedback demonstrate the effectiveness and potential applications of our model: our method achieves a level of realism close to ground truth, providing an immersive user experience.
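As one illustration of the shared embedding space mentioned above, the following minimal PyTorch sketch aligns pooled transcript and gesture features with a CLIP-style symmetric contrastive loss. All module names, feature dimensions, and the specific loss are our assumptions for exposition, not the paper's released implementation.

```python
# Hypothetical sketch of a shared gesture-text embedding space.
# Module names, dimensions, and the InfoNCE loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbeddingAligner(nn.Module):
    def __init__(self, text_dim=768, gesture_dim=256, latent_dim=128):
        super().__init__()
        # Project both modalities into one latent space.
        self.text_proj = nn.Sequential(
            nn.Linear(text_dim, latent_dim), nn.GELU(), nn.Linear(latent_dim, latent_dim)
        )
        self.gesture_proj = nn.Sequential(
            nn.Linear(gesture_dim, latent_dim), nn.GELU(), nn.Linear(latent_dim, latent_dim)
        )
        # Learnable temperature, initialized to ~log(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.6593))

    def forward(self, text_feat, gesture_feat):
        # text_feat: (B, text_dim) pooled transcript features
        # gesture_feat: (B, gesture_dim) pooled gesture-sequence features
        z_t = F.normalize(self.text_proj(text_feat), dim=-1)
        z_g = F.normalize(self.gesture_proj(gesture_feat), dim=-1)
        logits = self.logit_scale.exp() * z_t @ z_g.t()   # (B, B) similarities
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: matched text/gesture pairs lie on the diagonal.
        loss = 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
        return loss, z_t, z_g
```

Matched transcript-gesture pairs sit on the diagonal of the similarity matrix, so minimizing the symmetric cross-entropy pulls paired embeddings together while pushing mismatched pairs apart.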

Method Overview

Our system comprises a semantic alignment module and two core components. (a) The semantic alignment module maps the transcribed text and the gesture sequence into a shared latent space, further abstracting the semantic latent variables and aligning them with the gesture latent variables in a higher-level abstract space, where they serve as independent guiding tokens. (b) The semi-implicit decoupled denoiser introduces GAN and semi-implicit constraints to model the complex denoising distribution between adjacent large strides, accelerating generation by reducing the number of sampling steps. (c) The motion prior optimization takes the denoised initial local gesture sequence as a condition and, together with the audio guiding signal, generates global motion and finger movements in a second pass. The system requires no additional input and imposes no duration constraints: from any pure audio file it generates vivid, natural, high-quality holistic co-speech gesture sequences. 'r2l' denotes converting the rotation representation to a joint-coordinate representation via the SMPL model.
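To make the two-stage design concrete, the sketch below walks through inference: a few large denoising strides produce local gesture rotations, 'r2l' converts them to joint coordinates via SMPL, and the motion prior then adds global motion and finger movements. The `denoiser`, `motion_prior`, and `smpl_forward` interfaces, tensor shapes, and step count are illustrative assumptions rather than the released code.

```python
# Hypothetical sketch of the two-stage inference pipeline in panels (b)/(c).
# All interfaces here are assumed for exposition, not the released code.
import torch

@torch.no_grad()
def generate_holistic_gestures(audio_feat, text_tokens, denoiser, motion_prior,
                               smpl_forward, num_steps=4, seq_len=120, dim=165):
    """audio_feat: (1, T, A) audio features; text_tokens: semantic guiding tokens."""
    # Stage 1: semi-implicit decoupled denoising with a few large strides.
    x = torch.randn(1, seq_len, dim)        # noisy local gesture rotations
    for t in reversed(range(num_steps)):
        # Each large stride spans many standard diffusion steps; the GAN and
        # semi-implicit constraints let the denoiser model this wide gap.
        x = denoiser(x, t, audio=audio_feat, tokens=text_tokens)

    # 'r2l': convert local joint rotations to joint coordinates via SMPL.
    local_joints = smpl_forward(x)          # (1, T, J, 3)

    # Stage 2: motion-prior optimization conditioned on the denoised local
    # gestures plus audio, producing global motion and finger movements.
    global_motion, fingers = motion_prior(local_joints, audio_feat)
    return x, global_motion, fingers
```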

Results (generated from three random seeds)


Qualitative Experiment


Applications

Technical Paper



Yongkang Cheng, Shaoli Huang
HoloGest: Decoupled Diffusion and Motion Priors for Generating Holistically Expressive Co-speech Gestures
arXiv:2310.12678, 2023.


Poster