DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech

1Tencent AI Lab, 2The University of Melbourne, 3Northwest A&F University, 4Mohamed bin Zayed University of Artificial Intelligence

AAAI 2025 😎
Comparison of four different sampling methods: DSG with DDPM, DSG with DDIM, our method with 1000-step sampling, and our method with 10-step sampling.

We used DIDiffGes, ChatGPT, and Blender to build a real-time human-avatar dialogue system.

Abstract

Diffusion models have demonstrated remarkable synthesis quality and diversity in generating co-speech gestures. However, the computationally intensive sampling steps associated with diffusion models hinder their practicality in real-world applications. Hence, we present DIDiffGes, a Decoupled Semi-Implicit Diffusion model-based framework that can synthesize high-quality, expressive gestures from speech using only a few sampling steps. Our approach leverages Generative Adversarial Networks (GANs) to enable large-step sampling for diffusion models. We decouple gesture data into body and hand distributions and further decompose each into marginal and conditional distributions. GANs model the marginal distributions implicitly, while an L2 reconstruction loss learns the conditional distributions explicitly. This strategy enhances GAN training stability and ensures the expressiveness of generated full-body gestures. Our framework also learns to denoise root noise conditioned on the local body representation, guaranteeing stability and realism. DIDiffGes can generate gestures from speech with just 10 sampling steps, without compromising quality and expressiveness, reducing the number of sampling steps by a factor of 100 compared to existing methods. Our user study reveals that our method outperforms state-of-the-art approaches in human likeness, appropriateness, and style correctness.
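To make the few-step sampling concrete, below is a minimal sketch of the "predict clean motion, then re-noise to the next timestep" loop the abstract describes. The `denoiser` call signature, the noise schedule, and the tensor shapes are all illustrative assumptions, not the released DIDiffGes code.

```python
# A minimal sketch of few-step sampling with an x0-predicting denoiser.
# `denoiser(x, t, audio_feats)` and its signature are assumptions for
# illustration; the released DIDiffGes code may organize this differently.
import torch

betas = torch.linspace(1e-4, 0.02, 1000)           # assumed linear schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    """Forward diffusion: noise a clean sample x0 up to timestep t."""
    a = alphas_cumprod.to(x0.device)[t]
    return a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)

@torch.no_grad()
def sample_gestures(denoiser, audio_feats, num_steps=10, seq_len=120, dim=256):
    """Denoise Gaussian noise into a gesture sequence in `num_steps` steps.

    Because the GAN-trained denoiser can jump across large stretches of the
    diffusion trajectory, ~10 steps replace the usual ~1000.
    """
    x = torch.randn(1, seq_len, dim, device=audio_feats.device)   # x_T ~ N(0, I)
    timesteps = torch.linspace(999, 0, num_steps).long()
    for i, t in enumerate(timesteps):
        x0_pred = denoiser(x, t, audio_feats)      # predict the clean motion
        if i < num_steps - 1:
            # Re-noise the prediction to the next (smaller) timestep and repeat.
            x = q_sample(x0_pred, timesteps[i + 1])
        else:
            x = x0_pred                            # final step returns x0
    return x
```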

Method Overview

Our learning framework combines a Sequential Diffusion Denoiser, built from two transformer encoders, with a Decoupled Semi-Implicit Objective. The first encoder denoises the local motion and provides a conditioning signal for the second encoder, which denoises the root noise. The combined result of local and root motion is re-noised to step t-1 via posterior sampling and then decoupled into body and hand noise. These noisy components are trained adversarially against noise sampled from the forward (prior) process, supervised by an Auxiliary Forward Diffusion Loss. For a detailed description of the network architecture, please refer to our supplementary materials.
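The sketch below illustrates one training step of this objective under stated assumptions: the module names (`local_enc`, `root_enc`, `disc_body`, `disc_hands`), the body/hand split index, the non-saturating GAN loss, and unit loss weights are all hypothetical choices for exposition, not the paper's actual implementation.

```python
# A minimal sketch of the decoupled semi-implicit training objective described
# above. All module names, loss weights, and the body/hand split index are
# illustrative assumptions, not the paper's released code.
import torch
import torch.nn.functional as F

BODY_DIM = 156          # assumed channel split between body and hand features

betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    """Forward diffusion: noise x0 up to timestep t."""
    a = alphas_cumprod.to(x0.device)[t].view(-1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)

def generator_step(local_enc, root_enc, disc_body, disc_hands, x0, audio, t):
    """One generator update: denoise, re-noise to t-1, decouple, and apply
    the adversarial (marginal) and L2 reconstruction (conditional) losses."""
    x_t = q_sample(x0, t)

    # Sequential denoising: local motion first, then root conditioned on it,
    # so the root trajectory stays consistent with the predicted body pose.
    local_pred = local_enc(x_t, t, audio)
    root_pred = root_enc(x_t, t, local_pred)
    x0_pred = torch.cat([local_pred, root_pred], dim=-1)

    # Posterior sampling: push the prediction back to step t-1, where the
    # discriminators compare it against genuinely re-noised ground truth.
    t_prev = (t - 1).clamp(min=0)
    fake_tm1 = q_sample(x0_pred, t_prev)

    # Decouple into body and hand noise for separate discriminators.
    fake_body, fake_hands = fake_tm1[..., :BODY_DIM], fake_tm1[..., BODY_DIM:]

    # GANs match the marginals implicitly (non-saturating loss shown);
    # the L2 term learns the conditionals explicitly and stabilizes training.
    adv = (F.softplus(-disc_body(fake_body, t_prev)).mean()
           + F.softplus(-disc_hands(fake_hands, t_prev)).mean())
    rec = F.mse_loss(x0_pred, x0)   # auxiliary reconstruction term
    return adv + rec
```

A corresponding discriminator step would re-noise the ground-truth motion to the same timestep and train `disc_body` / `disc_hands` to separate it from the generator's output; splitting the discriminators per body part is what lets the GAN fit each marginal without destabilizing full-body training.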

Qualitative Experiment

Technical Paper

Yongkang Cheng, Shaoli Huang, Xuelin Chen, Jifeng Ning, Mingming Gong
DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech
arXiv:2310.12678, 2023.