fdd-video-edit.github.io - Video Editing via Factorized Diffusion Distillation

Description: Project webpage for fdd-video-edit

Tags: generative AI, diffusion models, video-to-video


Paper · arXiv · Benchmark

* Equal Contribution

Example edit prompts: "In an aquarium" · "Sitting on a red bench" · "Set it in winter wonderland" · "Replace with a panda" · "In Minecraft style" · "Paint it pink and blue"

Abstract: We introduce Emu Video Edit (EVE), a model that establishes a new state-of-the-art in video editing without relying on any supervised video editing data. To develop EVE, we separately train an image editing adapter and a video generation adapter, and attach both to the same text-to-image model. Then, to align the adapters towards […]
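The adapter setup described in the abstract — two separately trained adapters attached to one shared, frozen text-to-image model — can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: all class names, the residual-adapter form, and the numeric values are assumptions made for clarity.

```python
# Hypothetical sketch of the EVE adapter composition (illustrative only;
# class names and the residual form are assumptions, not the paper's code).

class TextToImageBackbone:
    """Stands in for the shared, frozen text-to-image model."""
    def forward(self, latent, prompt):
        # Placeholder denoising step; a real model would run a U-Net pass
        # conditioned on the prompt.
        return [x + 0.1 for x in latent]

class Adapter:
    """A lightweight module attached on top of the shared backbone."""
    def __init__(self, name, scale):
        self.name = name
        self.scale = scale
    def forward(self, features):
        # Residual correction; a real adapter would be learned layers.
        return [f + self.scale for f in features]

class EditingModel:
    """Shared backbone with an image-editing and a video adapter attached."""
    def __init__(self):
        self.backbone = TextToImageBackbone()
        self.edit_adapter = Adapter("image_edit", 0.01)   # instruction-following edits
        self.video_adapter = Adapter("video", 0.02)       # temporal generation

    def forward(self, latent, prompt):
        h = self.backbone.forward(latent, prompt)
        h = self.edit_adapter.forward(h)
        h = self.video_adapter.forward(h)
        return h

model = EditingModel()
out = model.forward([0.0, 0.0], "Replace with a panda")
```

The key design point the abstract highlights is that the backbone is shared: each adapter is trained on its own task (image editing, video generation), and a separate alignment procedure is then needed so the two compose correctly for video editing.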

Figure: A comparison of our model with the previous state-of-the-art, InstructVid2Vid, on TGVE+.

We extend our gratitude to the following people for their contributions (alphabetical order): Andrew Brown, Bichen Wu, Ishan Misra, Saketh Rambhatla, Xiaoliang Dai, Zijian He.
