In recent years, large-scale pre-trained diffusion transformer models have made significant progress in video generation. While current DiT models can produce high-definition, high-frame-rate, and highly diverse videos, they offer little fine-grained control over video content. Controlling the motion of subjects using prompts alone is challenging, especially for complex movements. Moreover, existing methods fail to control motion in image-to-video generation, because the subject in the reference image often differs from the subject in the reference video in initial position, size, and shape. To address this, we propose the Leveraging Motion Prior (LMP) framework for zero-shot video generation. Our framework harnesses the powerful generative capabilities of pre-trained diffusion transformers so that the motion in generated videos can follow a user-provided motion video, in both text-to-video and image-to-video generation. To this end, we first introduce a foreground-background disentangle module that distinguishes the moving subject from the background in the reference video, preventing interference in target video generation. A reweighted motion transfer module then allows the target video to draw its motion from the reference video. To avoid interference from the reference subject's appearance, we further propose an appearance separation module that suppresses the reference subject's appearance in the target video. We annotate the DAVIS dataset with detailed prompts for our experiments and design evaluation metrics to validate the effectiveness of our method. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in generation quality, prompt-video consistency, and control capability.
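The reweighted motion transfer idea can be illustrated with a toy cross-attention computation. The sketch below is an assumption-laden illustration, not the paper's actual module: it shows one generic way attention logits over reference-video tokens could be reweighted so that foreground (moving-subject) tokens, as identified by a disentangle mask, dominate the transfer while background tokens are de-emphasized. All names (`reweighted_motion_transfer`, `fg_weight`) and the log-space boosting scheme are hypothetical.

```python
import numpy as np

def reweighted_motion_transfer(q_tgt, k_ref, v_ref, fg_mask, fg_weight=2.0):
    """Toy sketch of attention-based motion transfer (illustrative only).

    q_tgt:   (n_tgt, d) queries from the target video stream.
    k_ref:   (n_ref, d) keys from the reference video stream.
    v_ref:   (n_ref, d) values from the reference video stream.
    fg_mask: (n_ref,) boolean mask marking foreground (moving-subject)
             tokens from a foreground-background separation step.
    Foreground tokens are up-weighted before the softmax so the target
    attends to the reference motion rather than its background.
    """
    d = q_tgt.shape[-1]
    logits = q_tgt @ k_ref.T / np.sqrt(d)           # scaled dot-product
    # Reweight in log-space: adding log(fg_weight) multiplies the
    # post-softmax attention mass of foreground tokens by fg_weight.
    logits = logits + np.where(fg_mask, np.log(fg_weight), 0.0)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn = w / w.sum(axis=-1, keepdims=True)        # rows sum to 1
    return attn @ v_ref
```

With uniform (zero) queries and `fg_weight=2.0`, a single foreground token among four receives attention 2/5 instead of the uniform 1/4, pulling the output toward the reference subject's features.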
[Qualitative results gallery: each example pairs a reference video ("Ref video") with generations from the Baseline, DMT, and our method (Ours). Example prompts include "A red flamingo...", "A monkey...in a bustling city park...", "A white rally motorcycle...in an urban setting...", "An astronaut...on the moon...", "A rabbit in the forest...autumn...", "A cyclist...snow...winter...", "A Bengal tiger...forest...", "A black and orange cat...on a sandy beach with waves...", "A man walks down...a serene coastal area...", and "A panda with thick fur...".]
@article{chen2025lmp,
title={LMP: Leveraging Motion Prior in Zero-Shot Video Generation with Diffusion Transformer},
author={Chen, Changgu and Yang, Xiaoyan and Shu, Junwei and Wang, Changbo and Li, Yang},
journal={arXiv preprint arXiv:2505.14167},
year={2025}
}