ShapeR: Robust Conditional 3D Shape Generation from Casual Captures
Abstract
ShapeR generates high-fidelity metric 3D shapes from casually captured image sequences by conditioning a rectified flow transformer on sparse SLAM points, posed multi-view images, and machine-generated captions obtained from off-the-shelf visual-inertial SLAM, 3D detection, and vision-language models.
Recent advances in 3D shape generation have achieved impressive results, but most existing methods rely on clean, unoccluded, and well-segmented inputs. Such conditions are rarely met in real-world scenarios. We present ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Given an image sequence, we leverage off-the-shelf visual-inertial SLAM, 3D detection algorithms, and vision-language models to extract, for each object, a set of sparse SLAM points, posed multi-view images, and machine-generated captions. A rectified flow transformer trained to effectively condition on these modalities then generates high-fidelity metric 3D shapes. To ensure robustness to the challenges of casually captured data, we employ a range of techniques, including on-the-fly compositional augmentations, a curriculum training scheme spanning object- and scene-level datasets, and strategies for handling background clutter. Additionally, we introduce a new evaluation benchmark comprising 178 in-the-wild objects across 7 real-world scenes with geometry annotations. Experiments show that ShapeR significantly outperforms existing approaches in this challenging setting, achieving a 2.7x improvement in Chamfer distance over the state of the art.
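The headline 2.7x improvement is reported in Chamfer distance, the standard metric for comparing a generated shape against ground-truth geometry. As a minimal sketch (the paper may use squared distances, normalization, or a different point-sampling scheme), the symmetric Chamfer distance between two point clouds can be computed as:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3).

    For each point in one cloud, find the nearest point in the other cloud,
    then average the two directed distances. This is one common formulation;
    variants (squared distances, sums instead of means) also appear in the
    literature, and the paper's exact convention is not specified here.
    """
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distance a -> b plus directed distance b -> a.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Example: a cloud compared with itself scores 0; any perturbation scores > 0.
pts = np.random.default_rng(0).random((256, 3))
print(chamfer_distance(pts, pts))         # 0.0
print(chamfer_distance(pts, pts + 0.05))  # small positive value
```

The brute-force (N, M) distance matrix is fine for benchmark-sized clouds; for dense meshes, a KD-tree nearest-neighbor query would be used instead.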
Community
Similar papers recommended by the Semantic Scholar API:
- AmodalGen3D: Generative Amodal 3D Object Reconstruction from Sparse Unposed Views (2025)
- Gen3R: 3D Scene Generation Meets Feed-Forward Reconstruction (2026)
- Affostruction: 3D Affordance Grounding with Generative Reconstruction (2026)
- 3AM: Segment Anything with Geometric Consistency in Videos (2026)
- LabelAny3D: Label Any Object 3D in the Wild (2026)
- 3D-RE-GEN: 3D Reconstruction of Indoor Scenes with a Generative Framework (2025)
- SpatialMosaic: A Multiview VLM Dataset for Partial Visibility (2025)
Models citing this paper: 1
Datasets citing this paper: 1
Spaces citing this paper: 0