---
license: mit
---
<img src="https://github.com/pi3det/toolkit/blob/main/images/pi3det.gif" width="12.5%"
align="left">
# Perspective-Invariant 3D Object Detection
<p align="center">
<a href="https://alanliang.vercel.app/" target="_blank">Ao Liang</a><sup>*,1,2,3,4</sup>
<a href="https://ldkong.com/" target="_blank">Lingdong Kong</a><sup>*,1</sup>
<a href="https://dylanorange.github.io/" target="_blank">Dongyue Lu</a><sup>*,1</sup>
<a href="" target="_blank">Youquan Liu</a><sup>5</sup>
<a href="" target="_blank">Jian Fang</a><sup>4</sup>
<a href="" target="_blank">Huaici Zhao</a><sup>4</sup>
<a href="https://www.comp.nus.edu.sg/~ooiwt/" target="_blank">Wei Tsang Ooi</a><sup>1</sup>
<br />
<sup>1</sup>National University of Singapore
<sup>2</sup>University of Chinese Academy of Sciences
<br />
<sup>3</sup>Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences
<br />
<sup>4</sup>Shenyang Institute of Automation, Chinese Academy of Sciences
<sup>5</sup>Fudan University
<br />
<sup>*</sup>Equal contribution
</p>
<p align="center">
<a href="" target='_blank'>
<img src="https://img.shields.io/badge/Paper-%F0%9F%93%96-darkred">
</a>
<a href="http://pi3det.github.io/" target='_blank'>
<img src="https://img.shields.io/badge/Project-%F0%9F%94%97-orange">
</a>
<a href="" target='_blank'>
<img src="https://visitor-badge.laobi.icu/badge?page_id=pi3det.Pi3EDT">
</a>
</p>
<img src="https://robosense2025.github.io/images/track5/teaser.png" alt="Teaser" width="100%">
## Updates
- **[July 2025]**: Project page released.
- **[June 2025]**: **Pi3DET** has been extended to <strong>Track 5: Cross-Platform 3D Object Detection</strong> of the <a href="https://robosense2025.github.io/" target="_blank" rel="noopener noreferrer"><strong><u>RoboSense Challenge</u></strong></a> at <a href="https://www.iros25.org/" target="_blank" rel="noopener noreferrer"><strong><u>IROS 2025</u></strong></a>. See the <a href="https://robosense2025.github.io/track5" target="_blank" rel="noopener noreferrer"><strong><u>track homepage</u></strong></a> and <a href="https://github.com/robosense2025/track5" target="_blank" rel="noopener noreferrer"><strong><u>GitHub repo</u></strong></a> for more details.
## Todo
> Since the Pi3DET dataset is being used for **Track 5: Cross-Platform 3D Object Detection** of the [**_RoboSense Challenge_**](https://robosense2025.github.io/) at [**_IROS 2025_**](https://www.iros25.org/), in the interest of fairness we are temporarily withholding the full data and annotations. A subset of the data and code has been open-sourced; please refer to the track details for more information.
- [x] Release the <strong>Phase 1</strong> dataset of the IROS track, in KITTI-like single-frame format.
- [ ] Release the <strong>Phase 2</strong> dataset of the IROS track, in KITTI-like single-frame format.
- [ ] Release the full Pi3DET dataset, including temporal information.
## Download
The Track 5 dataset follows the KITTI format. Each sample consists of:
- A front-view RGB image
- A LiDAR point cloud covering the camera’s field of view
- Calibration parameters
- 3D bounding-box annotations (for training)
> Calibration and annotations are packaged together in `.pkl` files.
We use the **same training set** (vehicle platform) for both phases, but **different validation sets**. The full dataset is hosted on Hugging Face:
[robosense/track5-cross-platform-3d-object-detection](https://huggingface.co/datasets/robosense/datasets/tree/main/track5-cross-platform-3d-object-detection)
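As an alternative to step 1 below, the files can also be fetched directly with `huggingface_hub`. This is a minimal sketch, assuming the dataset repo and subfolder path shown above; the repo's own `tools/load_dataset.py` remains the supported route:

```python
# Sketch: direct download via huggingface_hub (assumes the repo layout above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="robosense/datasets",
    repo_type="dataset",
    allow_patterns=["track5-cross-platform-3d-object-detection/*"],
    local_dir="/path/to/output",  # your $USER_DEFINE_OUTPUT_PATH
)
```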
1. **Download the dataset**
```bash
python tools/load_dataset.py $USER_DEFINE_OUTPUT_PATH
```
2. **Link data into the project**
```bash
# Create the target directory
mkdir -p data/pi3det

# Link the training split
ln -s $USER_DEFINE_OUTPUT_PATH/track5-cross-platform-3d-object-detection/phase12_vehicle_training/training \
  data/pi3det/training

# Link the validation split for Phase 1 (Drone)
ln -s $USER_DEFINE_OUTPUT_PATH/track5-cross-platform-3d-object-detection/phase1_drone_validation/validation \
  data/pi3det/validation

# Link the .pkl info files
ln -s $USER_DEFINE_OUTPUT_PATH/track5-cross-platform-3d-object-detection/phase12_vehicle_training/training/pi3det_infos_train.pkl \
  data/pi3det/pi3det_infos_train.pkl
ln -s $USER_DEFINE_OUTPUT_PATH/track5-cross-platform-3d-object-detection/phase1_drone_validation/validation/pi3det_infos_val.pkl \
  data/pi3det/pi3det_infos_val.pkl
```
3. **Verify your directory structure**
After linking, your `data/` folder should look like this:
```bash
data/
└── pi3det/
├── training/
│ ├── image/
│ │ ├── 0000000.jpg
│ │ └── 0000001.jpg
│ └── point_cloud/
│ ├── 0000000.bin
│ └── 0000001.bin
├── validation/
│ ├── image/
│ │ ├── 0000000.jpg
│ │ └── 0000001.jpg
│ └── point_cloud/
│ ├── 0000000.bin
│ └── 0000001.bin
├── pi3det_infos_train.pkl
└── pi3det_infos_val.pkl
```
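Once the structure matches, a sample can be inspected directly. Below is a minimal loading sketch, assuming KITTI-style `float32` point clouds with four channels `(x, y, z, intensity)` in the `.bin` files; the exact contents of the `.pkl` info files are not documented here, so print them to check:

```python
# Sketch: inspect one Pi3DET sample (assumes KITTI-style .bin point clouds).
import pickle
import numpy as np

# Load the info file; calibration and annotations live here (the exact
# keys are an assumption -- print the structure to see what is inside).
with open("data/pi3det/pi3det_infos_train.pkl", "rb") as f:
    infos = pickle.load(f)
print(type(infos), len(infos))

# KITTI-style point clouds are flat float32 arrays of (x, y, z, intensity).
points = np.fromfile("data/pi3det/training/point_cloud/0000000.bin",
                     dtype=np.float32).reshape(-1, 4)
print(points.shape)
```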
## Pi3DET Dataset
### Detailed statistics
| Platform | Condition | Sequence | # of Frames | # of Points (M) | # of Vehicles | # of Pedestrians |
|-----------------------------|----------------|------------------------|------------:|----------------:|--------------:|-----------------:|
| **Vehicle (8)** | **Daytime (4)**| city_hall | 2,982 | 26.61 | 19,489 | 12,199 |
| | | penno_big_loop | 3,151 | 33.29 | 17,240 | 1,886 |
| | | rittenhouse | 3,899 | 49.36 | 11,056 | 12,003 |
| | | ucity_small_loop | 6,746 | 67.49 | 34,049 | 34,346 |
| | **Nighttime (4)**| city_hall | 2,856 | 26.16 | 12,655 | 5,492 |
| | | penno_big_loop | 3,291 | 38.04 | 8,068 | 106 |
| | | rittenhouse | 4,135 | 52.68 | 11,103 | 14,315 |
| | | ucity_small_loop | 5,133 | 53.32 | 18,251 | 8,639 |
| | | **Summary (Vehicle)** | 32,193 | 346.95 | 131,911 | 88,986 |
| **Drone (7)** | **Daytime (4)**| penno_parking_1 | 1,125 | 8.69 | 6,075 | 115 |
| | | penno_parking_2 | 1,086 | 8.55 | 5,896 | 340 |
| | | penno_plaza | 678 | 5.60 | 721 | 65 |
| | | penno_trees | 1,319 | 11.58 | 657 | 160 |
| | **Nighttime (3)**| high_beams | 674 | 5.51 | 578 | 211 |
| | | penno_parking_1 | 1,030 | 9.42 | 524 | 151 |
| | | penno_parking_2 | 1,140 | 10.12 | 83 | 230 |
| | | **Summary (Drone)** | 7,052 | 59.47 | 14,534 | 1,272 |
| **Quadruped (10)** | **Daytime (8)**| art_plaza_loop | 1,446 | 14.90 | 0 | 3,579 |
| | | penno_short_loop | 1,176 | 14.68 | 3,532 | 89 |
| | | rocky_steps | 1,535 | 14.42 | 0 | 5,739 |
| | | skatepark_1 | 661 | 12.21 | 0 | 893 |
| | | skatepark_2 | 921 | 8.47 | 0 | 916 |
| | | srt_green_loop | 639 | 9.23 | 1,349 | 285 |
| | | srt_under_bridge_1 | 2,033 | 28.95 | 0 | 1,432 |
| | | srt_under_bridge_2 | 1,813 | 25.85 | 0 | 1,463 |
| | **Nighttime (2)**| penno_plaza_lights | 755 | 11.25 | 197 | 52 |
| | | penno_short_loop | 1,321 | 16.79 | 904 | 103 |
| | | **Summary (Quadruped)**| 12,300 | 156.75 | 5,982 | 14,551 |
| **All Three Platforms (25)**| | **Summary (All)** | 51,545 | 563.17 | 152,427 | 104,809 |
### Examples
<img src="https://robosense2025.github.io/images/track5/data_example1.png" alt="Teaser" width="100%">
<img src="https://robosense2025.github.io/images/track5/data_example2.png" alt="Teaser" width="100%">
<img src="https://robosense2025.github.io/images/track5/data_example3.png" alt="Teaser" width="100%">