arxiv:2603.10408

Motion Forcing: A Decoupled Framework for Robust Video Generation in Motion Dynamics

Published on Mar 11

AI-generated summary

A video generation framework that stabilizes the balance between visual quality, physical consistency, and controllability through hierarchical point-shape-appearance decomposition and masked point recovery for physical-law learning.

Abstract

The ultimate goal of video generation is to satisfy a fundamental trilemma: achieving high visual quality, maintaining rigorous physical consistency, and enabling precise controllability. While recent models can maintain this balance in simple, isolated scenarios, we observe that this equilibrium is fragile and often breaks down as scene complexity increases (e.g., involving collisions or dense traffic). To address this, we introduce Motion Forcing, a framework designed to stabilize this trilemma even in complex generative tasks. Our key insight is to explicitly decouple physical reasoning from visual synthesis via a hierarchical "Point-Shape-Appearance" paradigm. This approach decomposes generation into verifiable stages: modeling complex dynamics as sparse geometric anchors (Point), expanding them into dynamic depth maps that explicitly resolve 3D geometry (Shape), and finally rendering high-fidelity textures (Appearance). Furthermore, to foster robust physical understanding, we employ a Masked Point Recovery strategy. By randomly masking input anchors during training and enforcing the reconstruction of complete dynamic depth, the model is compelled to move beyond passive pattern matching and learn latent physical laws (e.g., inertia) to infer missing trajectories. Extensive experiments on autonomous driving benchmarks show that Motion Forcing significantly outperforms state-of-the-art baselines, maintaining trilemma stability across complex scenes. Evaluations on physics and robotics further confirm our framework's generality.
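The abstract describes two concrete mechanisms: the Point-to-Shape stage that expands sparse geometric anchors into dynamic depth maps, and the Masked Point Recovery objective that drops anchors during training while still supervising the full depth reconstruction. The sketch below is a minimal, hypothetical PyTorch illustration of how such an objective could be wired up; the module `PointToShape`, the function `masked_point_recovery_loss`, the zero-masking scheme, and all tensor shapes are assumptions for illustration, not the paper's implementation, and the final Appearance (texture rendering) stage is omitted entirely.

```python
# Hypothetical sketch of a Masked Point Recovery objective as described in the
# abstract: sparse point anchors are randomly masked, and the model must still
# reconstruct the complete dynamic depth sequence. Names and shapes are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class PointToShape(nn.Module):
    """Toy stand-in for the Point -> Shape stage: maps sparse point anchors
    of shape (B, T, N, 3) to per-frame depth maps of shape (B, T, H, W)."""

    def __init__(self, num_points: int = 32, height: int = 64, width: int = 64):
        super().__init__()
        self.height, self.width = height, width
        self.net = nn.Sequential(
            nn.Linear(num_points * 3, 256),
            nn.ReLU(),
            nn.Linear(256, height * width),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        b, t, n, _ = points.shape
        flat = points.reshape(b, t, n * 3)
        return self.net(flat).reshape(b, t, self.height, self.width)


def masked_point_recovery_loss(model, points, target_depth, mask_ratio=0.5):
    """Randomly drop a fraction of the point anchors (zeroed out here, purely
    for illustration) and supervise reconstruction of the *full* dynamic
    depth, so trajectories of masked anchors must be inferred."""
    b, t, n, _ = points.shape
    keep = (torch.rand(b, t, n, 1, device=points.device) > mask_ratio).float()
    masked_points = points * keep
    pred_depth = model(masked_points)
    return nn.functional.l1_loss(pred_depth, target_depth)


if __name__ == "__main__":
    model = PointToShape()
    points = torch.randn(2, 8, 32, 3)        # (batch, frames, anchors, xyz)
    target_depth = torch.rand(2, 8, 64, 64)  # ground-truth dynamic depth
    loss = masked_point_recovery_loss(model, points, target_depth)
    loss.backward()
    print(f"masked point recovery loss: {loss.item():.4f}")
```

In this reading, the masking ratio controls how much of the trajectory the model must infer rather than copy, which is what pushes it toward learning latent dynamics (e.g., inertia) instead of passive pattern matching.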
