EmoTalk3D: High-Fidelity Free-View Synthesis of Emotional 3D Talking Head
We present a novel approach for synthesizing 3D talking heads with controllable emotion, improved lip synchronization, and higher rendering quality. To address the issues of multi-view consistency and limited emotional expressiveness, we propose a ‘Speech-to-Geometry-to-Appearance’ mapping framework trained on our EmoTalk3D dataset, enabling controllable emotion, wide-range view rendering, and fine facial details.
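The two-stage mapping above can be sketched as a pipeline that first predicts per-frame 3D geometry from audio features, then predicts appearance from that geometry. The sketch below is purely illustrative: the dimensions, the vertex count, and the linear placeholder maps are assumptions standing in for the paper's actual networks.

```python
import numpy as np

# Illustrative stand-in for the 'Speech-to-Geometry-to-Appearance' pipeline.
# All dimensions and the linear maps are hypothetical placeholders, not the
# paper's actual architecture.

AUDIO_DIM = 128          # assumed size of a per-frame audio feature
GEOM_DIM = 3 * 468       # assumed flattened 3D vertex positions (468 vertices)
APPEAR_DIM = 64          # assumed appearance latent size

rng = np.random.default_rng(0)
W_s2g = rng.standard_normal((GEOM_DIM, AUDIO_DIM)) * 0.01   # placeholder for the Speech-to-Geometry net
W_g2a = rng.standard_normal((APPEAR_DIM, GEOM_DIM)) * 0.01  # placeholder for the Geometry-to-Appearance net

def speech_to_geometry(audio_feat: np.ndarray) -> np.ndarray:
    """Predict per-frame 3D geometry (flattened vertices) from audio features."""
    return W_s2g @ audio_feat

def geometry_to_appearance(geometry: np.ndarray) -> np.ndarray:
    """Predict an appearance code from the predicted geometry."""
    return W_g2a @ geometry

audio = rng.standard_normal(AUDIO_DIM)     # one frame of audio features
geom = speech_to_geometry(audio)           # shape (1404,)
appearance = geometry_to_appearance(geom)  # shape (64,)
print(geom.shape, appearance.shape)
```

Factoring the mapping through an explicit geometry stage, rather than mapping speech directly to images, is what lets the geometry stay consistent across rendering viewpoints.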
ECCV 2024