---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
pretty_name: Spatial457
tags:
- spatial-reasoning
- multimodal
---

<div align="center">
<span style="color:red; font-size:26px">This is the 20k-image version of the Spatial457 benchmark. It can be used for model training.</span>
</div>

---

<div align="center">
<img src="https://xingruiwang.github.io/projects/Spatial457/static/images/icon_name.png" alt="Spatial457 Logo" width="240"/>
</div>
<h1 align="center">
<a href="https://arxiv.org/abs/2502.08636">
Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models
</a>
</h1>

<p align="center">
<a href="https://xingruiwang.github.io/">Xingrui Wang</a><sup>1</sup>,
<a href="#">Wufei Ma</a><sup>1</sup>,
<a href="#">Tiezheng Zhang</a><sup>1</sup>,
<a href="#">Celso M. de Melo</a><sup>2</sup>,
<a href="#">Jieneng Chen</a><sup>1</sup>,
<a href="#">Alan Yuille</a><sup>1</sup>
</p>

<p align="center">
<sup>1</sup> Johns Hopkins University &nbsp;&nbsp;&nbsp;&nbsp;
<sup>2</sup> DEVCOM Army Research Laboratory
</p>

<p align="center">
<a href="https://xingruiwang.github.io/projects/Spatial457/">🌐 Project Page</a> •
<a href="https://arxiv.org/abs/2502.08636">📄 Paper</a> •
<a href="https://huggingface.co/datasets/RyanWW/Spatial457">🤗 Dataset</a> •
<a href="https://github.com/XingruiWang/Spatial457">💻 Code</a>
</p>

<p align="center">
<img src="https://xingruiwang.github.io/projects/Spatial457/static/images/teaser.png" alt="Spatial457 Teaser" width="80%"/>
</p>

---

## 🧠 Introduction

**Spatial457** is a diagnostic benchmark designed to evaluate **6D spatial reasoning** in large multimodal models (LMMs). It systematically introduces four core spatial capabilities:

- 🧱 Multi-object understanding
- 🧭 2D spatial localization
- 📦 3D spatial localization
- 🔄 3D orientation estimation

These are assessed across **five difficulty levels** and **seven diverse question types**, ranging from simple object queries to complex reasoning about physical interactions.

---

## 📂 Dataset Structure

The dataset is organized as follows:

```
Spatial457/
├── images/          # RGB images used in VQA tasks
├── questions/       # One JSON file per subtask
│   ├── L1_single.json
│   ├── L2_objects.json
│   ├── L3_2d_spatial.json
│   ├── L4_occ.json
│   └── ...
├── Spatial457.py    # Hugging Face dataset loader script
└── README.md        # Documentation
```

Each JSON file contains a list of VQA examples, where each item includes:

- `"image_filename"`: image file name used in the question
- `"question"`: natural-language question
- `"answer"`: boolean, string, or number
- `"program"`: symbolic program (optional)
- `"question_index"`: unique identifier

This modular structure supports scalable multi-task evaluation across levels and reasoning types.
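For offline analysis you can also read the question files directly with the standard `json` module. A minimal sketch, assuming the `questions/` layout above; `load_questions` and `answer_type` are illustrative helpers (not part of the dataset loader), and the sample record below mirrors the documented schema rather than an actual dataset entry:

```python
import json

# A record shaped like the documented schema (values are illustrative,
# not taken from the actual dataset files).
record = {
    "image_filename": "superCLEVR_new_000001.png",
    "question": "Is the large red object in front of the yellow car?",
    "answer": "True",
    "program": [],
    "question_index": 100001,
}

def load_questions(path):
    """Read one subtask JSON (e.g. questions/L1_single.json) into a list of records."""
    with open(path) as f:
        return json.load(f)

def answer_type(record):
    """Classify a record's answer as 'boolean', 'number', or 'string'."""
    ans = record["answer"]
    if isinstance(ans, bool) or ans in ("True", "False"):
        return "boolean"
    if isinstance(ans, (int, float)):
        return "number"
    return "string"

print(answer_type(record))  # boolean
```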

---

## 🛠️ Dataset Usage

You can load the dataset directly using the Hugging Face 🤗 `datasets` library:

### 🔹 Load a specific subtask (e.g., L5_6d_spatial)

```python
from datasets import load_dataset

dataset = load_dataset("RyanWW/Spatial457", name="L5_6d_spatial", split="validation", data_dir=".")
```

Each example is a dictionary like:

```python
{
    'image': <PIL.Image.Image>,
    'image_filename': 'superCLEVR_new_000001.png',
    'question': 'Is the large red object in front of the yellow car?',
    'answer': 'True',
    'program': [...],
    'question_index': 100001
}
```

### 🔹 Other available configurations

```python
[
    "L1_single", "L2_objects", "L3_2d_spatial",
    "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision"
]
```

You can swap `name="..."` in `load_dataset(...)` to evaluate different spatial reasoning capabilities.
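If you want to group results by difficulty level, the config names themselves encode it (`L1`–`L5`). The sketch below assumes only that naming convention; `parse_config` is a hypothetical convenience helper, not part of the loader:

```python
# Hypothetical helper: split a config name like "L5_6d_spatial" into
# its difficulty level (int) and task name (str).
CONFIGS = [
    "L1_single", "L2_objects", "L3_2d_spatial",
    "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision",
]

def parse_config(name):
    level, _, task = name.partition("_")
    return int(level[1:]), task

# Group task names by difficulty level.
levels = {}
for cfg in CONFIGS:
    level, task = parse_config(cfg)
    levels.setdefault(level, []).append(task)

print(levels[5])  # ['6d_spatial', 'collision']
```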

## 📊 Benchmark

We benchmarked a wide range of state-of-the-art models, including GPT-4o, Gemini, Claude, and several open-source LMMs, across all subsets. The results below were updated after rerunning the evaluation; they show minor variance from the numbers in the published paper, but the conclusions remain unchanged.

The inference script supports [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and is run by setting the dataset to `Spatial457`. You can find the detailed inference scripts [here](https://github.com/XingruiWang/VLMEvalKit).

### Spatial457 Evaluation Results

| Model                    | L1_single | L2_objects | L3_2d_spatial | L4_occ | L4_pose | L5_6d_spatial | L5_collision |
|--------------------------|-----------|------------|---------------|--------|---------|---------------|--------------|
| **GPT-4o**               | 72.39     | 64.54      | 58.04         | 48.87  | 43.62   | 43.06         | 44.54        |
| **GeminiPro-1.5**        | 69.40     | 66.73      | 55.12         | 51.41  | 44.50   | 43.11         | 44.73        |
| **Claude 3.5 Sonnet**    | 61.04     | 59.20      | 55.20         | 40.49  | 41.38   | 38.81         | 46.27        |
| **Qwen2-VL-7B-Instruct** | 62.84     | 58.90      | 53.73         | 26.85  | 26.83   | 36.20         | 34.84        |
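For a rough sense of how such per-subtask numbers are produced, here is a minimal accuracy sketch. It assumes predictions are compared to gold answers as normalized strings; this is an illustration, not the official VLMEvalKit scoring code:

```python
def accuracy(predictions, answers):
    """Percentage of predictions matching the gold answers,
    compared case-insensitively after stripping whitespace."""
    assert len(predictions) == len(answers)
    correct = sum(
        str(p).strip().lower() == str(a).strip().lower()
        for p, a in zip(predictions, answers)
    )
    return 100.0 * correct / len(answers)

print(round(accuracy(["True", "false", "2"], ["True", "False", "3"]), 2))  # 66.67
```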

---

## 📚 Citation

```bibtex
@inproceedings{wang2025spatial457,
  title     = {Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models},
  author    = {Wang, Xingrui and Ma, Wufei and Zhang, Tiezheng and de Melo, Celso M and Chen, Jieneng and Yuille, Alan},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2025},
  url       = {https://arxiv.org/abs/2502.08636}
}
```