Bridging Semantic and Kinematic Conditions with Diffusion-based Discrete Motion Tokenizer
Abstract
A three-stage motion generation framework combines discrete token-based planning with diffusion-based synthesis to improve controllability and fidelity while reducing token usage and computational requirements.
Prior motion generation largely follows two paradigms: continuous diffusion models that excel at kinematic control, and discrete token-based generators that are effective for semantic conditioning. To combine their strengths, we propose a three-stage framework comprising condition feature extraction (Perception), discrete token generation (Planning), and diffusion-based motion synthesis (Control). Central to this framework is MoTok, a diffusion-based discrete motion tokenizer that decouples semantic abstraction from fine-grained reconstruction by delegating motion recovery to a diffusion decoder, enabling compact single-layer tokens while preserving motion fidelity. For kinematic conditions, coarse constraints guide token generation during planning, while fine-grained constraints are enforced during control through diffusion-based optimization. This design prevents kinematic details from disrupting semantic token planning. On HumanML3D, our method significantly improves controllability and fidelity over MaskControl while using only one-sixth of the tokens, reducing trajectory error from 0.72 cm to 0.08 cm and FID from 0.083 to 0.029. Unlike prior methods that degrade under stronger kinematic constraints, ours improves fidelity, reducing FID from 0.033 to 0.014.
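To make the three-stage split concrete, here is a minimal PyTorch sketch of the pipeline. Everything below — class names, dimensions, the token-planner head, and the toy sampling loop standing in for a real noise scheduler — is an illustrative assumption, not the paper's released code; it only mirrors the Perception → Planning → Control flow and the guidance-style enforcement of fine-grained kinematic constraints during control.

```python
import torch
import torch.nn as nn

class Perception(nn.Module):
    """Stage 1: fuse semantic (text) and coarse kinematic condition features."""
    def __init__(self, text_dim=512, kin_dim=66, feat_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, feat_dim)
        self.kin_proj = nn.Linear(kin_dim, feat_dim)

    def forward(self, text_emb, coarse_kin):
        return self.text_proj(text_emb) + self.kin_proj(coarse_kin)

class Planner(nn.Module):
    """Stage 2: map the fused condition to a short single-layer token sequence."""
    def __init__(self, feat_dim=256, vocab=512, n_tokens=8):
        super().__init__()
        self.n_tokens, self.vocab = n_tokens, vocab
        self.head = nn.Linear(feat_dim, n_tokens * vocab)

    def forward(self, cond_feat):
        logits = self.head(cond_feat).view(-1, self.n_tokens, self.vocab)
        return logits.argmax(-1)  # (B, n_tokens) discrete motion tokens

class Control(nn.Module):
    """Stage 3: diffusion-style decoder that recovers motion from the tokens."""
    def __init__(self, vocab=512, feat_dim=256, motion_dim=263, frames=196):
        super().__init__()
        self.token_emb = nn.Embedding(vocab, feat_dim)
        self.denoiser = nn.Sequential(
            nn.Linear(motion_dim + feat_dim, 512), nn.SiLU(),
            nn.Linear(512, motion_dim))
        self.motion_dim, self.frames = motion_dim, frames

    def denoise(self, x_t, tokens):
        cond = self.token_emb(tokens).mean(1, keepdim=True)  # pool token embeddings
        cond = cond.expand(-1, x_t.size(1), -1)              # broadcast over frames
        return self.denoiser(torch.cat([x_t, cond], dim=-1))

    def sample(self, tokens, fine_kin=None, steps=50, guide_scale=0.1):
        x = torch.randn(tokens.size(0), self.frames, self.motion_dim)
        for _ in range(steps):
            with torch.no_grad():
                pred = self.denoise(x, tokens)
            if fine_kin is not None:
                # Fine-grained constraint enforced by gradient guidance:
                # nudge the sample toward a target root trajectory (first 3 dims).
                x_g = x.detach().requires_grad_(True)
                loss = ((x_g[..., :3] - fine_kin) ** 2).mean()
                pred = pred - guide_scale * torch.autograd.grad(loss, x_g)[0]
            x = 0.9 * x + 0.1 * pred  # toy update in place of a real scheduler
        return x

# Usage: semantic + coarse conditions drive planning; fine constraints guide control.
perc, plan, ctrl = Perception(), Planner(), Control()
tokens = plan(perc(torch.randn(1, 512), torch.randn(1, 66)))
motion = ctrl.sample(tokens, fine_kin=torch.zeros(1, 196, 3))
```

Note the division of labor this sketch illustrates: only coarse constraints touch the discrete planning stage, so kinematic detail cannot disrupt semantic token selection; fine-grained targets enter later as a guidance term inside the diffusion sampling loop.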
Community
The most interesting bit for me is MoTok's idea of decoupling semantic abstraction from fine-grained motion by letting a diffusion decoder recover the details from compact tokens. That design lets you push kinematic constraints into planning while relegating reconstruction to diffusion, which explains why far fewer tokens can still hit high fidelity. One ablation I'd love to see: push the token granularity coarser or finer and find where the fidelity/controllability trade-off shifts, especially under sudden/high-frequency motion or occlusions. By the way, the arxivlens breakdown helped me parse the method details; there's a solid walkthrough covering Section 3 here: https://arxivlens.com/PaperView/Details/bridging-semantic-and-kinematic-conditions-with-diffusion-based-discrete-motion-tokenizer-7804-5f444359
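For concreteness, here is a rough sketch of what that tokenizer-side decoupling could look like: a single-layer vector quantizer emits the compact tokens, and fine-grained reconstruction is left to a diffusion decoder (stubbed out here, as in the Control stage sketched above). All names and shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleLayerTokenizer(nn.Module):
    """Hypothetical MoTok-style tokenizer: one codebook layer and no deterministic
    decoder; fine-grained recovery is delegated to a diffusion decoder instead."""
    def __init__(self, codebook_size=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z):  # z: (B, T, dim) encoder features
        # Nearest-neighbor lookup in the codebook.
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1))
        ids = dists.argmin(-1)                     # (B, T) compact discrete tokens
        z_q = self.codebook(ids)
        # Straight-through estimator so gradients reach the encoder.
        z_q_st = z + (z_q - z).detach()
        commit_loss = F.mse_loss(z, z_q.detach())  # commitment term
        return ids, z_q_st, commit_loss
```

Because the tokens only need to carry semantic structure (the diffusion decoder handles detail recovery), a single codebook layer can suffice where deterministic-decoder tokenizers typically need residual or multi-layer quantization.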
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- DiMo: Discrete Diffusion Modeling for Motion Generation and Understanding (2026)
- Language-Guided Transformer Tokenizer for Human Motion Generation (2026)
- Temporal consistency-aware text-to-motion generation (2026)
- ActionPlan: Future-Aware Streaming Motion Synthesis via Frame-Level Action Planning (2026)
- Causal Motion Diffusion Models for Autoregressive Motion Generation (2026)
- Reconstruction-Anchored Diffusion Model for Text-to-Motion Generation (2026)
- Planning in 8 Tokens: A Compact Discrete Tokenizer for Latent World Model (2026)