VividAnimator: An End-to-End Audio and Pose-driven Half-Body Human Animation Framework

Abstract

Existing audio- and pose-driven human animation methods often struggle with stiff head movements and blurry hands, primarily due to the weak correlation between audio and head movements and the structural complexity of hands. To address these issues, we propose VividAnimator, an end-to-end framework for generating high-quality, half-body human animations driven by audio and sparse hand pose conditions. Our framework introduces three key innovations. First, to overcome the instability and high cost of online codebook training, we pre-train a Hand Clarity Codebook (HCC) that encodes rich, high-fidelity hand texture priors, significantly mitigating hand degradation. Second, we design a Dual-Stream Audio-Aware Module (DSAA) to model lip synchronization and natural head pose dynamics separately while enabling interaction between the two. Third, we introduce a Pose Calibration Trick (PCT) that refines and aligns pose conditions by relaxing rigid constraints, ensuring smooth and natural gesture transitions. Extensive experiments demonstrate that VividAnimator achieves state-of-the-art performance, producing videos with superior hand detail, gesture realism, and identity consistency, validated by both quantitative metrics and qualitative evaluations.
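The dual-stream idea behind the DSAA can be illustrated with a minimal sketch: two parallel audio branches, one for lip synchronization and one for head pose dynamics, that exchange information through cross-attention. This is a hypothetical PyTorch illustration; the module name, layer choices, and dimensions are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class DualStreamAudioAware(nn.Module):
    """Sketch of a dual-stream audio module: separate lip-sync and
    head-pose streams that interact via cross-attention (assumed design)."""

    def __init__(self, audio_dim=128, hidden_dim=256, num_heads=4):
        super().__init__()
        self.lip_proj = nn.Linear(audio_dim, hidden_dim)   # lip-sync stream
        self.head_proj = nn.Linear(audio_dim, hidden_dim)  # head-pose stream
        # cross-attention lets each stream attend to the other
        self.lip_from_head = nn.MultiheadAttention(hidden_dim, num_heads,
                                                   batch_first=True)
        self.head_from_lip = nn.MultiheadAttention(hidden_dim, num_heads,
                                                   batch_first=True)

    def forward(self, audio_feats):
        # audio_feats: (batch, frames, audio_dim)
        lip = self.lip_proj(audio_feats)
        head = self.head_proj(audio_feats)
        # each stream queries the other, then adds the result residually
        lip_ctx, _ = self.lip_from_head(lip, head, head)
        head_ctx, _ = self.head_from_lip(head, lip, lip)
        return lip + lip_ctx, head + head_ctx


# quick shape check on dummy audio features
x = torch.randn(2, 16, 128)          # batch of 2, 16 frames, 128-dim audio
lip_out, head_out = DualStreamAudioAware()(x)
print(lip_out.shape, head_out.shape)
```

The residual additions keep each stream's own signal dominant while the cross-attention terms inject context from the other stream, one plausible way to model the streams "separately while enabling interaction."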

Type
Publication
In The IEEE/CVF Winter Conference on Applications of Computer Vision
Chi Wang 王驰
Distinguished Research Fellow

My research interests include AI-generated content, computer graphics, 3D vision, and digital twins.