Efficient Camera-Controlled Video Generation of Static Scenes via Sparse Diffusion and 3D Rendering
Abstract
Diffusion-based video generation is made more efficient by generating only a sparse set of keyframes and synthesizing the remaining frames through 3D reconstruction and rendering, enabling faster synthesis while maintaining visual quality.
Modern diffusion-based video generative models can produce highly realistic clips, but they are computationally inefficient, often requiring minutes of GPU time for just a few seconds of video. This inefficiency is a critical barrier to deploying generative video in applications that require real-time interaction, such as embodied AI and VR/AR. This paper explores a new strategy for camera-conditioned video generation of static scenes: using a diffusion-based generative model to synthesize a sparse set of keyframes, and then producing the full video through 3D reconstruction and rendering. By lifting keyframes into a 3D representation and rendering intermediate views, our approach amortizes the generation cost across hundreds of frames while enforcing geometric consistency. We further introduce a model that predicts the optimal number of keyframes for a given camera trajectory, allowing the system to adaptively allocate computation. Our final method, SRENDER, uses very sparse keyframes for simple trajectories and denser ones for complex camera motion. The result is video generation that is more than 40 times faster than the diffusion-based baseline when generating 20 seconds of video, while maintaining high visual fidelity and temporal stability, offering a practical path toward efficient and controllable video synthesis.
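The abstract describes a three-stage pipeline: predict a keyframe budget from the camera trajectory, denoise only those keyframes with the diffusion model, then lift them into 3D and render every frame of the path. The paper's code is not included here, so the Python sketch below is a hypothetical illustration of that control flow only: `diffusion_keyframes`, `reconstruct_3d`, `render_view`, and the curvature heuristic inside `keyframe_budget` are illustrative placeholders, not SRENDER's actual components.

```python
import numpy as np

# --- Hypothetical stand-ins for the (unreleased) SRENDER components ---
def diffusion_keyframes(prompt, key_poses):
    """Placeholder for the diffusion model: one expensive denoising run per keyframe."""
    return [np.zeros((64, 64, 3)) for _ in key_poses]  # dummy RGB frames

def reconstruct_3d(keyframes, key_poses):
    """Placeholder for lifting keyframes into a shared 3D scene representation."""
    return {"keyframes": keyframes, "poses": key_poses}

def render_view(scene, pose):
    """Placeholder for rendering one novel view from the 3D scene."""
    return scene["keyframes"][0]  # dummy render

def keyframe_budget(positions: np.ndarray, k_min: int = 4, k_max: int = 32) -> int:
    """Toy proxy for the paper's learned keyframe predictor: allocate more
    keyframes when the camera path bends more. `positions` is (N, 3)."""
    velocity = np.diff(positions, axis=0)   # per-step camera motion
    turning = np.diff(velocity, axis=0)     # change of motion (curvature proxy)
    complexity = np.linalg.norm(turning, axis=1).sum()
    return int(np.clip(k_min + 10.0 * complexity, k_min, k_max))  # illustrative scaling

def generate_video(prompt: str, positions: np.ndarray, n_frames: int):
    """Amortize diffusion cost: denoise only k << n_frames keyframes, then
    render every frame of the trajectory from the 3D reconstruction."""
    k = keyframe_budget(positions)
    key_idx = np.round(np.linspace(0, n_frames - 1, k)).astype(int)
    keyframes = diffusion_keyframes(prompt, positions[key_idx])  # slow: k diffusion calls
    scene = reconstruct_3d(keyframes, positions[key_idx])        # lift keyframes to 3D
    return [render_view(scene, positions[i]) for i in range(n_frames)]  # fast: pure rendering

# Example: a 500-frame video along a gentle arc yields a small keyframe budget.
t = np.linspace(0, 1, 500)
path = np.stack([t, 0.1 * np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
frames = generate_video("a sunlit library interior", path, n_frames=len(path))
print(f"{keyframe_budget(path)} keyframes -> {len(frames)} rendered frames")
```

Under this reading, the 40x speedup comes from replacing per-frame diffusion calls with cheap rendering: the diffusion cost scales with the keyframe count k rather than the frame count, and k itself adapts to trajectory complexity.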
Community
Proposes SRENDER: generate a sparse set of diffusion keyframes for a static scene, then render intermediate 3D views to produce long videos quickly and consistently.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Pixel-to-4D: Camera-Controlled Image-to-Video Generation with Dynamic 3D Gaussians (2026)
- GeoVideo: Introducing Geometric Regularization into Video Generation Model (2025)
- Joint 3D Geometry Reconstruction and Motion Generation for 4D Synthesis from a Single Image (2025)
- ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation (2025)
- Light-X: Generative 4D Video Rendering with Camera and Illumination Control (2025)
- WorldReel: 4D Video Generation with Consistent Geometry and Motion Modeling (2025)
- WorldWarp: Propagating 3D Geometry with Asynchronous Video Diffusion (2025)