
Alexandra (Yongkang) Cheng

Incoming Ph.D. Student & Scientific Researcher
MBZUAI & An Embodied AI Startup
yongkangcheng959@gmail.com


🤓 About Me

Hello, everyone! I’m Yongkang Cheng, an incoming Ph.D. student at MBZUAI, supervised by Prof. Mingming Gong. I obtained my master’s degree from NWAFU and my bachelor’s degree from NJAU. Currently, I’m a researcher at an embodied AI startup. Before this, I worked as a research intern at Tencent AI Lab (CV Lab and the Digital Human Research Center), supervised by Dr. Shaoli Huang. My research interests lie in 3D motion generation, motion capture, and embodied AI.

“Fortunately, my passion lies in the essence of artificial intelligence itself, rather than the mere trappings it brings!” — Yongkang

🎉 Research Interests

👣 Education Experience

👣 Research Experience

🔥 News

📝 Selected Publications (* indicates co-first authors; † indicates the corresponding author/project leader.)

HoleGest: Decoupled Diffusion and Motion Priors for Generating Holistically Expressive Co-speech Gestures (3DV 2025 & China3DV 2024 video presentation)

Yongkang Cheng, Shaoli Huang†

DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech (AAAI 2025)

Yongkang Cheng, Shaoli Huang†, Xuelin Chen, Jifeng Ning, Mingming Gong

BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation (Under Review)

arXiv | project | code

Yongkang Cheng, Shaoli Huang, Jifeng Ning, Ying Shan

Conditional GAN for Enhancing Diffusion Models in Efficient and Authentic Global Gesture Generation from Audio (WACV 2025)

Yongkang Cheng, Mingjiang Liang, Shaoli Huang†, Gaoge Han, Jifeng Ning, Wei Liu

RopeTP: Global Human Motion Recovery via Integrating Robust Pose Estimation with Diffusion Trajectory Prior (WACV 2025)

code

Yongkang Cheng *, Mingjiang Liang *, Hualin Liang, Shaoli Huang†, Wei Liu

SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark (ECCV 2024)

arXiv | paper | code | project

Zhengdi Yu, Shaoli Huang†, Yongkang Cheng, Tolga Birdal

ExpGest: Expressive Full-Body Gesture Generation Using Diffusion Model and Hybrid Audio-Text Guidance (ICME 2024)

arXiv | paper | code

Yongkang Cheng, Mingjiang Liang, Shaoli Huang†, Gaoge Han, Wei Liu, Jifeng Ning†

🎖 Honors and Awards

👀 Services

Conference Reviewer: ICME 2024, 2025; ICPR 2024; ACM MM 2024; WACV 2025; ICCV 2025

Journal Reviewer: IJCV; TVCG; PR