FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

Submitted by Style Pass, 2023-03-17 03:30:03

Diffusion-based generative models have achieved remarkable success in text-based image generation. However, because the generation process involves substantial randomness, applying such models to real-world visual content editing remains challenging, especially for videos. In this paper, we propose FateZero, a zero-shot text-based editing method for real-world videos that requires neither per-prompt training nor user-specific masks. To edit videos consistently, we propose several techniques built on pre-trained models. First, in contrast to the straightforward DDIM inversion technique, our approach captures intermediate attention maps during inversion with the source prompt, which effectively retain both structural and motion information. These maps are fused directly into the editing process rather than regenerated during denoising. Second, to further minimize semantic leakage from the source video, we fuse the self-attentions with a blending mask obtained from the cross-attention features of the source prompt. Furthermore, we reform the self-attention mechanism in the denoising UNet by introducing spatial-temporal attention to ensure frame consistency. Despite its simplicity, our method is the first to demonstrate zero-shot text-driven video style and local attribute editing from a trained text-to-image model, and it also achieves better zero-shot shape-aware editing when built on a one-shot text-to-video model. Extensive experiments demonstrate superior temporal consistency and editing capability compared with previous works.
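
To make the attention-capture idea concrete, here is a minimal PyTorch sketch: a toy single-head attention layer records its softmax maps at every DDIM inversion step and splices them back in during the editing-stage denoising. The class name, the mode flag, and the dense flattening of all frames into one spatial-temporal token sequence are illustrative assumptions for this sketch, not FateZero's actual implementation (the paper's spatial-temporal attention may use a sparser pattern across frames).

```python
import torch


class StoringAttention(torch.nn.Module):
    """Toy single-head attention that captures its maps during DDIM
    inversion and reuses the stored maps during editing/denoising."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(dim, dim, bias=False)
        self.to_v = torch.nn.Linear(dim, dim, bias=False)
        self.stored = {}          # inversion step -> attention map
        self.mode = "inversion"   # "inversion" or "editing"

    def forward(self, x, step):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = torch.softmax(
            q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1
        )
        if self.mode == "inversion":
            self.stored[step] = attn.detach()   # capture the source map
        else:
            attn = self.stored[step]            # fuse: reuse the source map
        return attn @ v


layer = StoringAttention(dim=64)
frames = torch.randn(8, 256, 64)          # 8 frames, 16x16 latent tokens each
# Spatial-temporal attention (dense variant): one token sequence over
# all frames, so every position attends across space and time.
tokens = frames.reshape(1, 8 * 256, 64)
for t in range(50):                        # DDIM inversion with the source prompt
    _ = layer(tokens, step=t)
layer.mode = "editing"                     # denoising now reuses the stored maps
out = layer(tokens, step=49)
```

In a full pipeline such a layer would be hooked into every attention block of the UNet; the point of the sketch is only that the maps are stored at inversion time and substituted, rather than recomputed, at editing time.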

Left, Pipeline: we store all the attention maps during the DDIM inversion. At the editing stage of DDIM denoising, we fuse the current attention maps with the stored ones using the proposed Attention Blending Block. Right, Attention Blending Block: first, we replace the cross-attention maps of unedited words (e.g., "road" and "countryside") with their source maps. In addition, we blend the self-attention maps from inversion and editing with an adaptive spatial mask obtained from cross-attention, which marks the areas the user wants to edit.
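
The two fusion rules in this block can be sketched as below. This is a hypothetical simplification: the tensor shapes, the function names, and the threshold `tau` used to binarize the cross-attention into an edit mask are assumptions for illustration, not values from the paper.

```python
import torch


def blend_cross_attention(src_maps, edit_maps, edited_token_ids):
    """Keep the newly generated maps only for edited words; copy the
    source maps for every unedited word (e.g., 'road', 'countryside')."""
    fused = src_maps.clone()                         # (tokens, positions)
    fused[edited_token_ids] = edit_maps[edited_token_ids]
    return fused


def blend_self_attention(src_self, edit_self, edited_cross_map, tau=0.3):
    """Blend inversion-time and editing-time self-attention using an
    adaptive spatial mask thresholded from the cross-attention of the
    word(s) the user wants to edit."""
    mask = (edited_cross_map > tau).float().unsqueeze(-1)  # (positions, 1)
    return mask * edit_self + (1.0 - mask) * src_self


# Toy usage: 77 prompt tokens over a 16x16 (=256 position) latent grid.
src = torch.rand(77, 256)
edit = torch.rand(77, 256)
fused = blend_cross_attention(src, edit, torch.tensor([5]))  # token 5 edited

self_src = torch.rand(256, 256)    # self-attention: positions x positions
self_edit = torch.rand(256, 256)
blended = blend_self_attention(self_src, self_edit, edit[5])
```

Inside the edit region the new self-attention is trusted, while outside it the stored source attention preserves the original structure and motion, which is how the block limits semantic leakage.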
