Hello, everyone. I’m Yongkang Cheng, an incoming Ph.D. student at MBZUAI supervised by Prof. Mingming Gong. I obtained my master’s and bachelor’s degrees from NWAFU and NJAU, respectively. Currently, I’m a researcher at an Embodied AI startup. Before that, I worked as a research intern at Tencent AI Lab (CV-Lab and the Digital Human Research Center), supervised by Dr. Shaoli Huang. My research interests lie in 3D motion generation, motion capture, and Embodied AI.
“Fortunately, my passion lies in the essence of artificial intelligence itself, rather than the mere trappings it brings!” — Yongkang
HoleGest: Decoupled Diffusion and Motion Priors for Generating Holisticly Expressive Co-speech Gestures (3DV 2025 & China3DV 2024 video presentation)
Yongkang Cheng, Shaoli Huang†
DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech (AAAI 2025)
Yongkang Cheng, Shaoli Huang†, Xuelin Chen, Jifeng Ning, Mingming Gong
BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation (Under Review)
Yongkang Cheng, Shaoli Huang, Jifeng Ning, Ying Shan
Conditional GAN for Enhancing Diffusion Models in Efficient and Authentic Global Gesture Generation from Audio (WACV 2025)
Yongkang Cheng, Mingjiang Liang, Shaoli Huang†, Gaoge Han, Jifeng Ning, Wei Liu
RopeTP: Global Human Motion Recovery via Integrating Robust Pose Estimation with Diffusion Trajectory Prior (WACV 2025)
Yongkang Cheng*, Mingjiang Liang*, Hualin Liang, Shaoli Huang†, Wei Liu
SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark (ECCV 2024)
Zhengdi Yu, Shaoli Huang†, Yongkang Cheng, Tolga Birdal
ExpGest: Expressive Full-Body Gesture Generation Using Diffusion Model and Hybrid Audio-Text Guidance (ICME 2024)
Yongkang Cheng, Mingjiang Liang, Shaoli Huang†, Gaoge Han, Wei Liu, Jifeng Ning†
Conference Reviewer: ICME 2024, 2025; ICPR 2024; ACM MM 2024; WACV 2025; ICCV 2025
Journal Reviewer: IJCV; TVCG; PR