Google’s LipSync3D Offers Improved ‘Deepfaked’ Mouth Movement Synchronization

A collaboration between Google AI researchers and the Indian Institute of Technology Kharagpur offers a new framework for synthesizing talking heads from audio content. The project aims to provide an optimized, modestly resourced way to create ‘talking head’ video from audio, for the purposes of syncing lip movements to dubbed or machine-translated speech, and for use in avatars, interactive applications, and other real-time environments.

The machine learning models trained in the process – called LipSync3D – require only a single video of the target face identity as input data. The data preparation pipeline decouples the extraction of facial geometry from the evaluation of lighting and other appearance factors in the input video, allowing more economical and focused training.
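The paper's actual pipeline is considerably more involved, but the core idea of separating geometry from illumination before training can be sketched roughly. In the minimal Python illustration below, every function name is a hypothetical stand-in, and the blur-based shading estimate is only a crude proxy for the paper's lighting normalization:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illumination(frame: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """Crude stand-in for lighting estimation: treat a heavily blurred
    frame as a low-frequency shading map. (Illustrative only.)"""
    return gaussian_filter(frame, sigma=(sigma, sigma, 0)) + 1e-6

def normalize_lighting(frame: np.ndarray) -> np.ndarray:
    """Divide out the estimated shading so the texture that reaches
    training is approximately lighting-invariant."""
    shading = estimate_illumination(frame)
    albedo = frame / shading
    return albedo / albedo.max()  # rescale to [0, 1]

def prepare_training_sample(frame: np.ndarray):
    """Separate the two signals the pipeline trains on: facial geometry
    (here just a placeholder vertex array) and a lighting-normalized
    texture. Real systems fit a 3D face model per frame."""
    mesh = np.zeros((468, 3))  # placeholder mesh; vertex count is arbitrary
    texture = normalize_lighting(frame.astype(np.float32))
    return mesh, texture

if __name__ == "__main__":
    fake_frame = np.random.rand(256, 256, 3).astype(np.float32)
    mesh, texture = prepare_training_sample(fake_frame)
    print(mesh.shape, texture.shape)
```

The point of the decomposition is that the model never has to learn lighting variation and mouth shape jointly; each training signal is cleaner, which is what makes training from a single video tractable.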

The two-stage workflow of LipSync3D. Above, the generation of a dynamically textured 3D face from the ‘target’ audio; below, the insertion of the generated mesh into a target video.
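As a rough illustration of that two-stage split, the hypothetical sketch below maps an audio feature window to mesh vertex offsets and a dynamic texture (stage one), then composites the result back into a target frame (stage two). All names, shapes, and the linear ‘model’ here are invented for illustration; LipSync3D's real networks and mesh renderer are far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned weights; in the real system these come from
# training on a single video of the target identity.
W_mesh = rng.normal(size=(128, 468 * 3)) * 0.01
W_tex = rng.normal(size=(128, 64 * 64 * 3)) * 0.01

def stage1_audio_to_face(audio_feat: np.ndarray):
    """Stage 1 (sketch): map an audio feature window to per-frame mesh
    vertex offsets and a dynamic texture atlas."""
    mesh_offsets = (audio_feat @ W_mesh).reshape(468, 3)
    texture = (audio_feat @ W_tex).reshape(64, 64, 3)
    return mesh_offsets, texture

def stage2_composite(target_frame: np.ndarray,
                     rendered_mouth: np.ndarray,
                     top_left=(96, 96)) -> np.ndarray:
    """Stage 2 (sketch): blend the rendered mouth region back into the
    target video frame. A real pipeline renders the textured mesh;
    here we simply paste a dummy crop as a stand-in."""
    out = target_frame.copy()
    y, x = top_left
    h, w, _ = rendered_mouth.shape
    out[y:y + h, x:x + w] = rendered_mouth
    return out

if __name__ == "__main__":
    audio_feat = rng.normal(size=128)
    frame = rng.random((256, 256, 3))
    offsets, tex = stage1_audio_to_face(audio_feat)
    result = stage2_composite(frame, np.clip(tex, 0.0, 1.0))
    print(offsets.shape, result.shape)
```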
