Flow-Guided Transformer for Video Inpainting
Authors: Kaidong Zhang, Jingjing Fu, Dong Liu
Published in the proceedings of the European Conference on Computer Vision (ECCV), 2022
Recommended citation: Kaidong Zhang, Jingjing Fu, Dong Liu, "Flow-Guided Transformer for Video Inpainting." In Proceedings of the European Conference on Computer Vision (ECCV), 2022.
Demo Video
Abstract
We propose a flow-guided transformer, which innovatively leverages the motion discrepancy exposed by optical flows to instruct attention retrieval in the transformer for high-fidelity video inpainting. More specifically, we design a novel flow completion network to complete the corrupted flows by exploiting the relevant flow features in a local temporal window. With the completed flows, we propagate content across video frames and adopt the flow-guided transformer to synthesize the remaining corrupted regions. We decouple the transformers along the temporal and spatial dimensions, so that we can easily integrate the locally relevant completed flows to instruct spatial attention only. Furthermore, we design a flow-reweight module to precisely control the impact of the completed flows on each spatial transformer. For the sake of efficiency, we introduce a window partition strategy to both the spatial and temporal transformers. In the spatial transformer in particular, we design a dual perspective spatial MHSA, which integrates global tokens into the window-based attention. Extensive experiments demonstrate the effectiveness of the proposed method both qualitatively and quantitatively.
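To make the dual perspective attention and flow-reweight ideas concrete, below is a minimal PyTorch sketch of window-based spatial MHSA whose keys and values include pooled global tokens, with a flow-driven gate on the input features. The module name, the average pooling used to form global tokens, and the sigmoid convolutional gate are illustrative assumptions, not the paper's exact implementation; see the released code for the authors' version.

import torch
import torch.nn as nn


class DualPerspectiveWindowAttention(nn.Module):
    """Window-based MHSA whose keys/values include pooled global tokens (sketch)."""

    def __init__(self, dim, num_heads=4, window=8, num_global=16):
        super().__init__()
        self.heads, self.window = num_heads, window
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Global tokens: average-pool the frame to a small grid (assumption).
        self.pool = nn.AdaptiveAvgPool2d(int(num_global ** 0.5))
        # Flow-reweight gate (assumption): completed flow (2 channels) scales features.
        self.flow_gate = nn.Sequential(
            nn.Conv2d(2, dim, kernel_size=3, padding=1), nn.Sigmoid()
        )

    def forward(self, x, flow):
        # x: (B, C, H, W) frame features; flow: (B, 2, H, W) completed flow.
        B, C, H, W = x.shape
        x = x * self.flow_gate(flow)  # modulate features by the flow gate

        g = self.pool(x).flatten(2).transpose(1, 2)  # (B, G, C) global tokens
        w = self.window
        # Partition into non-overlapping w x w windows: (B * n_win, w*w, C).
        xw = x.view(B, C, H // w, w, W // w, w)
        xw = xw.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        n_win = xw.shape[0] // B
        gw = g.repeat_interleave(n_win, dim=0)  # share globals across windows

        # Local tokens query both the local window and the global tokens.
        q, k, v = self.qkv(torch.cat([xw, gw], dim=1)).chunk(3, dim=-1)
        q = q[:, : w * w]  # only local tokens emit queries

        def heads(t):  # (N, L, C) -> (N, heads, L, C // heads)
            return t.view(t.shape[0], t.shape[1], self.heads, -1).transpose(1, 2)

        q, k, v = heads(q), heads(k), heads(v)
        attn = (q @ k.transpose(-2, -1)) * (C // self.heads) ** -0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(-1, w * w, C)
        out = self.proj(out)
        # Undo the window partition back to (B, C, H, W).
        out = out.view(B, H // w, W // w, w, w, C)
        return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)


# Toy usage: H and W must be divisible by the window size.
attn = DualPerspectiveWindowAttention(dim=64)
y = attn(torch.randn(2, 64, 32, 32), torch.randn(2, 2, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])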
Links
Paper / Supplementary / Codes / Keynotes / Poster / Talk
Bibtex
@misc{zhang2022flowguided,
  title={Flow-Guided Transformer for Video Inpainting},
  author={Kaidong Zhang and Jingjing Fu and Dong Liu},
  year={2022},
  eprint={2208.06768},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}