Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation

School of Computer Science and Technology
East China Normal University
AAAI 2025

Our Motion-Zero framework endows any pre-trained video diffusion model with the ability to control object trajectories directly, without any additional training. By specifying the target object in the input prompt and providing a sequence of bounding boxes, users can intuitively direct the motion path of the object in the generated video.


Abstract

Recent large-scale pre-trained diffusion models have demonstrated a powerful generative ability to produce high-quality videos from detailed text descriptions. However, exerting control over the motion of objects in videos generated by any video diffusion model is a challenging problem.

In this paper, we propose Motion-Zero, a novel zero-shot moving object trajectory control framework that enables bounding-box-trajectory control for text-to-video diffusion models. To this end, an initial noise prior module is designed to provide a position-based prior that improves the stability of the moving object's appearance and the accuracy of its position. In addition, spatial constraints based on the attention maps of the U-Net are applied directly to the denoising process of the diffusion model, which further ensures the positional and spatial consistency of moving objects during inference.

Furthermore, temporal consistency is ensured by a proposed shift temporal attention mechanism. Our method can be flexibly applied to various state-of-the-art video diffusion models without any training. Extensive experiments demonstrate that our proposed method can control the motion trajectories of objects and generate high-quality videos.

Core Method



Overview of our Motion-Zero. The overall pipeline is shown in (a). Given the box condition $\mathcal{B}$ and the prompt condition, we generate the prior latents $\mathbf{z}_T$ with our Initial Noise Prior Module (INPM), shown in (b). At timestep $t$, $\mathbf{z}_t$ is first optimized to $\mathbf{z}_t'$ by the Spatial Constraints (SC). Subsequently, $\mathbf{z}_t'$ is passed to the U-Net with the Shift Temporal Attention Module (STAM), shown in (c). All parameters of the video diffusion model are frozen. $T_1$ denotes the number of timesteps during which SC and STAM are applied, and $T_2$ denotes the number of timesteps where the original video diffusion process is used.
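To make the pipeline concrete, below is a minimal sketch (not the authors' code) of the controlled denoising loop. It assumes hypothetical stand-ins: `unet(z, t, c, ...)` returns the noise prediction (and, when asked, the cross-attention map of the target token), `scheduler` is a DDIM-style sampler exposing `timesteps` and `step`, and `boxes[f]` gives the bounding box of frame `f` in latent coordinates; the `use_stam` and `return_attn` flags are illustrative, not a real API.

```python
import torch

def box_mask(boxes, shape, device):
    """Binary mask over the latent grid: 1 inside each frame's box, 0 outside."""
    F, H, W = shape
    m = torch.zeros(F, 1, H, W, device=device)
    for f, (x0, y0, x1, y1) in enumerate(boxes):
        m[f, :, y0:y1, x0:x1] = 1.0
    return m

def spatial_constraint_loss(attn, mask):
    """Penalize cross-attention mass of the target token outside the boxes.
    `attn` is assumed to have shape (F, heads, H, W)."""
    attn = attn / (attn.sum(dim=(-2, -1), keepdim=True) + 1e-8)
    inside = (attn * mask).sum(dim=(-2, -1))
    return (1.0 - inside).mean()

@torch.enable_grad()
def motion_zero_sample(z_T, unet, scheduler, cond, boxes, T1, lr=0.1):
    """Apply SC and STAM for the first T1 timesteps, then run the original diffusion."""
    z_t = z_T                                   # prior latents from INPM
    F, _, H, W = z_t.shape
    mask = box_mask(boxes, (F, H, W), z_t.device)
    for i, t in enumerate(scheduler.timesteps):
        use_control = i < T1
        if use_control:
            # Spatial Constraints: optimize z_t -> z_t' through the attention map.
            z_opt = z_t.detach().requires_grad_(True)
            _, attn = unet(z_opt, t, cond, return_attn=True, use_stam=True)
            loss = spatial_constraint_loss(attn, mask)
            grad = torch.autograd.grad(loss, z_opt)[0]
            z_t = (z_opt - lr * grad).detach()
        with torch.no_grad():
            eps = unet(z_t, t, cond, use_stam=use_control)
            z_t = scheduler.step(eps, t, z_t).prev_sample
    return z_t
```

The split into $T_1$ controlled steps and $T_2$ vanilla steps mirrors the figure: the early, layout-determining timesteps receive SC and STAM, while the remaining steps are left to the frozen model.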



Initial Noise Prior Module (INPM)


The Initial Noise Prior Module (INPM) is designed to leverage the strong influence of the initial noise on the generated video, providing a strong prior for the position of the moving object. Several steps are involved in integrating a moving object into a sequence of frames with a coherent prior, as shown in the video above. First, a meta video $V_{meta}$ is sampled using the original video diffusion model with spatial constraints, taking $\mathbf{z}^*\sim\mathcal{N}(0,\mathbf{I})$ as the latent input, conditioned on a given prompt $\mathbf{c}$ and the first-frame box $\mathcal{B}^0$. Then, a video latent $\mathbf{z}_{meta}$ is obtained from $V_{meta}$ through the encoder $\mathcal{E}$. Once $\mathbf{z}_{meta}$ is prepared, we perform DDIM inversion to obtain the corresponding noise latent representation $\mathbf{z}_I$. We crop the latent representation within the box $\mathcal{B}^0$ for each frame, creating a sequence of latent patches. Subsequently, we use a local mixup operation to mix the latent patches with the initial noise $\mathbf{z}^*$ within the range of $\mathcal{B}^f$, frame by frame. This places a coherent prior at the object's target position in the initial noise.
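The sketch below follows the steps just described, under the assumption that the heavy components are supplied as callables: `sample_meta_video` (spatially constrained sampling of $V_{meta}$), `vae_encode` (the encoder $\mathcal{E}$), and `ddim_invert` (DDIM inversion) are hypothetical stand-ins, and boxes are given in latent coordinates as `(x0, y0, x1, y1)`.

```python
import torch
import torch.nn.functional as nnf

def local_mixup(noise, patch, box, alpha=0.5):
    """Blend a latent patch into `noise` inside box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    out = noise.clone()
    out[..., y0:y1, x0:x1] = alpha * patch + (1 - alpha) * noise[..., y0:y1, x0:x1]
    return out

def initial_noise_prior(sample_meta_video, vae_encode, ddim_invert,
                        prompt, boxes, noise_shape, device, alpha=0.5):
    """Build the prior latents z_T by placing an inverted object patch along the trajectory."""
    z_star = torch.randn(noise_shape, device=device)        # z* ~ N(0, I), shape (F, C, H, W)
    v_meta = sample_meta_video(z_star, prompt, boxes[0])     # meta video conditioned on B^0
    z_meta = vae_encode(v_meta)                              # video latent from encoder E
    z_inv = ddim_invert(z_meta, prompt)                      # noise latent z_I
    x0, y0, x1, y1 = boxes[0]
    z_T = z_star.clone()
    for f, (bx0, by0, bx1, by1) in enumerate(boxes):
        patch = z_inv[f, :, y0:y1, x0:x1]                    # crop within B^0, frame f
        # Resize if the target box B^f differs in size from B^0 (illustrative choice).
        patch = nnf.interpolate(patch[None], size=(by1 - by0, bx1 - bx0), mode="nearest")[0]
        z_T[f] = local_mixup(z_T[f], patch, (bx0, by0, bx1, by1), alpha)
    return z_T
```

The fixed mixing weight `alpha` and the nearest-neighbour resize are illustrative; the point is that each frame's initial noise already carries the object's appearance at its box position.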

Shift Temporal Attention Module (STAM)


The Shift Temporal Attention Module (STAM) is proposed to improve the dynamics of the moving object across frames. Specifically, we exchange the elements of $\mathbf{z}^f$ inside the $\mathcal{B}^f$ range with the elements inside the $\mathcal{B}^0$ range, as shown in the video above, so that the box region of each subsequent frame is aligned with the box region of the first frame.
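Read literally, the operation amounts to exchanging box contents frame by frame so that temporal attention always sees the object at the first frame's location. Below is a minimal sketch under that reading, assuming equal-sized, non-overlapping boxes in latent coordinates; `stam_align` is a hypothetical helper, not the authors' implementation.

```python
import torch

def stam_align(z, boxes):
    """Swap, for each frame f, the latent content inside B^f with the content
    inside B^0, so the moving object is aligned with the first frame's box.
    Because the swap is its own inverse (for disjoint, equal-sized boxes),
    applying it again after temporal attention moves the attended content
    back to each frame's original box.
    z: (F, C, H, W); boxes[f] = (x0, y0, x1, y1), all of equal size."""
    x0, y0, x1, y1 = boxes[0]                        # reference box B^0
    out = z.clone()
    for f, (bx0, by0, bx1, by1) in enumerate(boxes):
        obj = z[f, :, by0:by1, bx0:bx1].clone()      # object content at B^f
        ref = z[f, :, y0:y1, x0:x1].clone()          # content at B^0
        out[f, :, y0:y1, x0:x1] = obj
        out[f, :, by0:by1, bx0:bx1] = ref
    return out
```

In use, one would call `stam_align` on the latents entering the U-Net's temporal attention layers and once more on their output, so the attention operates on box-aligned features while the resulting latents keep each frame's box at its specified position.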

Experiments

Generation Results

Comparison

Ablation

Conclusion

Our contributions include:

  • We propose Motion-Zero, a zero-shot framework capable of controlling the motion trajectories of arbitrary objects within a pre-trained video generation diffusion model. Motion-Zero is plug-and-play and requires no additional training.
  • Recognizing the influence of the initial noise on video generation, we design an initial noise prior module that establishes favorable starting conditions for controllable video diffusion.
  • To achieve more accurate control over the generated video, we propose spatial constraints and a novel shift temporal attention mechanism that ensure the positional accuracy and the temporal continuity and consistency of the target, respectively.

Citation

If you find our work helpful, please cite us.

BibTeX:

@article{chen2025motionzero,
  title={Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation},
  author={Chen, Changgu and Shu, Junwei and He, Gaoqi and Wang, Changbo and Li, Yang},
  journal={AAAI},
  year={2025}
}