Training-Free Motion-Guided Video Generation with Enhanced Temporal Consistency
Using Motion Consistency Loss


1University of Adelaide, 2The University of New South Wales
 
Teaser comparison: Reference Video | No Inversion Noise Initialization | No Consistency Guidance | Ours


Abstract

In this paper, we address the challenge of generating temporally consistent videos with motion guidance. While many existing methods depend on additional control modules or inference-time fine-tuning, recent studies suggest that effective motion guidance is achievable without altering the model architecture or requiring extra training. Such approaches offer promising compatibility with various video generation foundation models. However, existing training-free methods often struggle to maintain consistent temporal coherence across frames or to follow guided motion accurately. In this work, we propose a simple yet effective solution that combines an initial-noise-based approach with a novel motion consistency loss, the latter being our key innovation. Specifically, we capture the inter-frame feature correlation patterns of intermediate features from a video diffusion model to represent the motion pattern of the reference video. We then design a motion consistency loss to maintain similar feature correlation patterns in the generated video, using the gradient of this loss in the latent space to guide the generation process for precise motion control. This approach improves temporal consistency across various motion control tasks while preserving the benefits of a training-free setup. Extensive experiments show that our method sets a new standard for efficient, temporally coherent video generation.
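As a rough illustration of the core idea, the sketch below builds an inter-frame feature-correlation pattern from intermediate diffusion features and penalizes the generated video's deviation from the reference pattern. The cosine-similarity formulation, the MSE penalty, and all tensor shapes are our illustrative assumptions, not the paper's exact Eq. (4).

```python
import torch
import torch.nn.functional as F

def correlation_pattern(features: torch.Tensor) -> torch.Tensor:
    """Inter-frame feature correlation pattern.

    features: (F, C, H, W) intermediate feature maps, one per frame,
    taken from a chosen layer of the video diffusion model (the layer
    choice and cosine-similarity form are illustrative assumptions).
    Returns (F-1, H*W, H*W) similarity maps between consecutive frames.
    """
    f, c, h, w = features.shape
    flat = features.reshape(f, c, h * w)            # (F, C, HW)
    flat = F.normalize(flat, dim=1)                 # unit norm over channels
    # Cosine similarity between every spatial location of frame t and t+1.
    return torch.einsum("tcm,tcn->tmn", flat[:-1], flat[1:])

def motion_consistency_loss(ref_pattern: torch.Tensor,
                            gen_features: torch.Tensor) -> torch.Tensor:
    """MSE stand-in for Eq. (4): keep the generated video's correlation
    pattern close to the reference video's pattern."""
    return F.mse_loss(correlation_pattern(gen_features), ref_pattern)
```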


Overview of our method. We first perform (a) inversion noise initialization on the reference video to obtain the initial noise zT (Section 3.2). We then (b) extract the motion pattern M from the reference video for each tracked point p (Section 3.3). During the (c) denoising process, the proposed frame-to-frame motion consistency loss Lc, computed with Eq. (4) from M and the motion pattern M' newly extracted from the noisy latent zt, serves as motion guidance for the noise estimation (Section 3.4).
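To make step (c) concrete, here is a minimal sketch of one guided denoising step: the gradient of Lc with respect to the noisy latent zt nudges the sample toward the reference motion before the standard scheduler update. The unet feature hook, the diffusers-style scheduler.step interface, and the guidance scale are assumptions for illustration, not the authors' implementation.

```python
import torch

# motion_consistency_loss is defined in the previous sketch.

@torch.enable_grad()
def guided_denoise_step(unet, scheduler, z_t, t, ref_pattern,
                        guidance_scale=50.0):
    """One denoising step with motion guidance (hypothetical interfaces).

    Assumes `unet(z_t, t)` returns (noise_pred, features), with `features`
    exposed via a forward hook, and that `scheduler.step` follows the
    diffusers-style interface; the guidance scale is an arbitrary choice.
    """
    z_t = z_t.detach().requires_grad_(True)
    noise_pred, features = unet(z_t, t)

    # Frame-to-frame motion consistency loss L_c against the reference.
    loss = motion_consistency_loss(ref_pattern, features)

    # Gradient of L_c in latent space steers the sample toward the
    # reference motion; the model weights are never updated.
    grad = torch.autograd.grad(loss, z_t)[0]
    z_t_guided = z_t - guidance_scale * grad

    # Standard scheduler update using the guided latent.
    return scheduler.step(noise_pred, t, z_t_guided).prev_sample
```

Because only the latent is nudged at inference time, the backbone stays frozen, which is what keeps the approach training-free and compatible with different video diffusion foundation models.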



Our method vs. reference-video-based methods


Columns: Reference Videos | MotionDirector | Ours
Lifting weights
Riding a bicycle
Playing golf
Skateboarding

Our method better maintains content consistency across frames and more faithfully simulates the motion of the reference videos.



Our method vs. trajectory-based methods


Columns: Trajectory | Direct Reference | Peekaboo | FreeTraj | Ours

A kangaroo jumping in the Australian outback.

A man in gray clothes running in the summer.

A panda surfing in the universe.

A rabbit burrowing downwards into its warren.

Our method better follows the guided trajectory and maintains content consistency across frames.



Gesture simulation


Columns: Input Video | Generated Videos

A rabbit is moving its ear.

A snake is moving forward.

We used an iPhone 15 Pro Max to record hand movements. Our method successfully simulates the motion of the fingers.



BibTeX