---
license: cc-by-4.0
task_categories:
- robotics
language:
- en
size_categories:
- 10B<n<100B
---

# EmbodiedSplat Dataset & Checkpoints

[Project Page](https://gchhablani.github.io/embodied-splat/)

High-fidelity 3D scene reconstructions, navigation episodes, and policy checkpoints released with the **EmbodiedSplat: Personalized Real-to-Sim-to-Real Navigation with Gaussian Splats from a Mobile Device** (ICCV 2025) paper.

This dataset supports research on sim-to-real robot navigation using 3D Gaussian Splatting (DN-Splatter) and low-effort iPhone LiDAR captures.

---

## Repository Structure

```text
embodied-splat/
├── ckpts/
├── datasets/
│   └── pointnav/
│       ├── pointnav_dn_splatter/
│       ├── pointnav_dn_splatter_2/
│       ├── pointnav_hm3d_stretch/
│       ├── pointnav_hssd/
│       ├── pointnav_mushroom_dn_splatter/
│       ├── pointnav_polycam_mesh/
│       └── pointnav_polycam_mesh_2/
├── fine_tune_ckpts/
│   ├── hm3d_fine_tuned_ckpts/
│   └── hssd_fine_tuned_ckpts/
├── grad_lounge/
└── scene_datasets/
    ├── mushroom/
    │   └── dn_splatter/
    └── polycam_data/
        ├── dn_splatter/
        ├── dn_splatter_2/
        ├── polycam_mesh/
        └── polycam_mesh_2/
```
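
To fetch the data, a minimal `huggingface_hub` sketch is shown below; the `repo_id` is a placeholder (this card does not state the exact Hub path), so substitute the repository you are viewing and narrow `allow_patterns` to the folders you actually need.

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id; replace with the actual Hub path of this dataset.
local_path = snapshot_download(
    repo_id="<org-or-user>/embodied-splat",
    repo_type="dataset",
    allow_patterns=["ckpts/*", "datasets/pointnav/pointnav_hm3d_stretch/**"],
    local_dir="embodied-splat",
)
print(local_path)
```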

### `ckpts/`

Pre-trained **PointNav** policy checkpoints:

* **hm3d_ckpt_204.pth** – trained on the HM3D dataset
* **hssd_ckpt_332.pth** – trained on the HSSD dataset

Use these as starting points for fine-tuning on custom scenes.
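
As a minimal sketch (assuming the checkpoints follow the usual habitat-baselines layout, i.e. a plain dict with the policy weights under a `state_dict` key next to the training config), you can inspect one before wiring it into a trainer:

```python
import torch

# Path assumes the repository layout above has been downloaded locally.
ckpt = torch.load("ckpts/hm3d_ckpt_204.pth", map_location="cpu")

# Print the top-level keys to confirm the layout; fall back to treating the
# whole object as the state dict if it is not the assumed nested format.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
    state_dict = ckpt.get("state_dict", ckpt)
    print(f"{len(state_dict)} entries in the policy state dict")
```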

---

### `datasets/pointnav/`

Habitat-Sim **PointNav episode datasets** used in the paper:

* **HM3D** episodes
* **HSSD** episodes
* **MuSHRoom** dataset episodes
* Episodes for our **custom DN-Splatter** and **Polycam** meshes
  * `castleberry` → **conf_b** in the paper
  * `clough_classroom` → **classroom** in the paper
  * `grad_lounge` → **lounge** in the paper
  * `piedmont` → **conf_a** in the paper
  * `polycam_mesh_2/` and `dn_splatter_2/` → **coda_conference_room** (**conf_c** in the supplementary material)

These provide navigation tasks (start poses, goal positions, etc.) for training and evaluation.
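
Each episode file is a standard Habitat `*.json.gz` archive; the sketch below peeks at one episode (the split and file names are assumptions, so adjust the path to whatever you downloaded):

```python
import gzip
import json

# Hypothetical path; check the actual split/file names in the downloaded folder.
path = "datasets/pointnav/pointnav_hm3d_stretch/train/train.json.gz"

with gzip.open(path, "rt") as f:
    data = json.load(f)

# Habitat PointNav episodes typically record the scene, start pose, and goal(s).
episode = data["episodes"][0]
print(len(data["episodes"]), "episodes")
print(episode["scene_id"], episode["start_position"], episode["goals"][0]["position"])
```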

---

### `fine_tune_ckpts/`

Fine-tuned policy checkpoints corresponding exactly to the evaluation results reported in the paper:

* **hm3d_fine_tuned_ckpts/**
  * Policies fine-tuned **from the HM3D pre-trained model** on each of the **5 scenes** released in the paper + supplementary material.
* **hssd_fine_tuned_ckpts/**
  * Policies fine-tuned **from the HSSD pre-trained model** on each of the **4 scenes** released in the paper.

These checkpoints directly reproduce the quantitative results shown in the main and supplementary text.

---

### `grad_lounge/`

Checkpoints used for **real-world robot navigation** in the **“lounge” scene** described in the paper’s real-robot experiments.

---

### `scene_datasets/`

3D scene reconstructions and meshes.

* **mushroom/** – DN-Splatter reconstructions of the [MuSHRoom dataset](https://xuqianren.github.io/publications/MuSHRoom/).
* **polycam_data/** – Our own Polycam captures and corresponding DN-Splatter reconstructions:
  * `castleberry` → **conf_b** in the paper
  * `clough_classroom` → **classroom** in the paper
  * `grad_lounge` → **lounge** in the paper
  * `piedmont` → **conf_a** in the paper
  * `polycam_mesh_2/` and `dn_splatter_2/` → **coda_conference_room** (**conf_c** in the supplementary material)

Each subfolder contains:

* **Polycam exported meshes** (`.glb`)
* **DN-Splatter reconstructed meshes** (`.glb`)
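
As a quick sanity check on a downloaded mesh, a `trimesh`-based sketch (the library and file name here are illustrative choices, not part of the release) might look like:

```python
import trimesh

# Placeholder file name; list the scene folder to find the actual .glb files.
scene = trimesh.load("scene_datasets/polycam_data/dn_splatter/grad_lounge.glb")

# .glb files usually load as a Scene; flatten it to a single mesh to inspect
# its bounding box and triangle count.
mesh = scene.dump(concatenate=True) if isinstance(scene, trimesh.Scene) else scene
print(mesh.bounds)
print(len(mesh.faces), "triangles")
```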

---

## Usage

These resources can be used to:

* Train navigation agents in [Habitat-Sim](https://aihabitat.org/) or similar simulators (see the sketch after this list).
* Reproduce all **sim-to-real** experiments from the paper.
* Fine-tune your own navigation policies on high-fidelity reconstructions.
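
For the first bullet, a minimal Habitat-Sim sketch is given below. It assumes `habitat-sim` is installed and that one of the reconstructed `.glb` meshes is available at the placeholder path; the paper's actual training configs are not reproduced here.

```python
import habitat_sim

# Point the simulator at one of the reconstructed meshes (placeholder path).
# Recent habitat-sim versions use scene_id; older releases used sim_cfg.scene.id.
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "scene_datasets/polycam_data/dn_splatter/grad_lounge.glb"

# A default agent with no sensors is enough to check that the scene loads.
agent_cfg = habitat_sim.agent.AgentConfiguration()
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

sim.reset()
print(sim.get_agent(0).get_state().position)
sim.close()
```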

**Please cite our paper if you use this dataset or checkpoints**:

```bibtex
@inproceedings{chhablani2025embodiedsplat,
  title={EmbodiedSplat: Personalized Real-to-Sim-to-Real Navigation with Gaussian Splats from a Mobile Device},
  author={Gunjan Chhablani and Xiaomeng Ye and Muhammad Zubair Irshad and Zsolt Kira},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}
```

---

## License

All files are released under the **CC-BY-4.0** license. You are free to use and modify the data with appropriate attribution.

---

## Contact

For questions or collaboration inquiries, please reach out through the project page:
➡️ [https://gchhablani.github.io/embodied-splat/](https://gchhablani.github.io/embodied-splat/)