ShawnRu committed on
Commit ccfcf1c · verified · 1 Parent(s): a1d2372

Update README.md

Files changed (1)
  1. README.md +114 -98
README.md CHANGED
@@ -1,50 +1,46 @@
1
- ---
2
- language:
3
- - en
4
- license: mit
5
- task_categories:
6
- - robotics
7
- tags:
8
- - agent
9
- - robotics
10
- - benchmark
11
- - environment
12
- - underwater
13
- - multi-modal
14
- - mllm
15
- - large-language-models
16
- ---
 
 
17
 
18
  <h1 align="center"> 🌊 OceanGym 🦾 </h1>
19
  <h3 align="center"> A Benchmark Environment for Underwater Embodied Agents </h3>
20
 
21
  <p align="center">
22
  🌐 <a href="https://oceangpt.github.io/OceanGym" target="_blank">Home Page</a>
23
- πŸ“„ <a href="https://huggingface.co/papers/2509.26536" target="_blank">Paper</a>
24
- πŸ’» <a href="https://github.com/OceanGPT/OceanGym" target="_blank">Code</a>
25
  πŸ€— <a href="https://huggingface.co/datasets/zjunlp/OceanGym" target="_blank">Hugging Face</a>
26
- ☁️ <a href="https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih" target="_blank">Google Drive</a>
27
- ☁️ <a href="https://pan.baidu.com/s/19c-BeIpAG1EjMjXZHCAqPA?pwd=sgjs" target="_blank">Baidu Drive</a>
28
  </p>
29
 
30
- <img src="asset/img/o1.png" align=center>
31
 
32
  **OceanGym** is a high-fidelity embodied underwater environment that simulates a realistic ocean setting with diverse scenes. As illustrated in the figure, OceanGym establishes a robust benchmark for evaluating autonomous agents through a series of challenging tasks, encompassing various perception analyses and decision-making navigation. The platform facilitates these evaluations by supporting multi-modal perception and providing action spaces for continuous control.
33
 
34
- We introduce OceanGym, the first comprehensive benchmark for ocean underwater embodied agents, designed to advance AI in one of the most demanding real-world environments. Unlike terrestrial or aerial domains, underwater settings present extreme perceptual and decision-making challenges, including low visibility, dynamic ocean currents, making effective agent deployment exceptionally difficult. OceanGym encompasses eight realistic task domains and a unified agent framework driven by Multi-modal Large Language Models (MLLMs), which integrates perception, memory, and sequential decision-making. Agents are required to comprehend optical and sonar data, autonomously explore complex environments, and accomplish long-horizon objectives under these harsh conditions. Extensive experiments reveal substantial gaps between state-of-the-art MLLM-driven agents and human experts, highlighting the persistent difficulty of perception, planning, and adaptability in ocean underwater environments. By providing a high-fidelity, rigorously designed platform, OceanGym establishes a testbed for developing robust embodied AI and transferring these capabilities to real-world autonomous ocean underwater vehicles, marking a decisive step toward intelligent agents capable of operating in one of Earth's last unexplored frontiers. The code and data are available at this https URL .
35
-
36
  # πŸ’ Acknowledgement
37
 
38
- OceanGym environment is based on Unreal Engine (UE) 5.3.
39
-
40
- Partial functions of OceanGym is developed on [HoloOcean](https://github.com/byu-holoocean).
41
 
42
- Thanks for their great contributions!
43
 
44
  # πŸ”” News
45
 
46
- - 09-2025, we launched the OceanGym project.
47
- - 08-2025, we finshed the OceanGym environment.
48
 
49
  ---
50
 
@@ -153,9 +149,11 @@ pip install -r requirements.txt
153
 
154
  ## Perception Task
155
 
 
 
156
  **Step 1: Prepare the dataset**
157
 
158
- After downloading from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym/tree/main/data/perception), and put it into the `data/perception` folder.
159
 
160
  **Step 2: Select model parameters**
161
 
@@ -214,42 +212,25 @@ This project is based on the HoloOcean environment. πŸ’
214
  > We have placed a simplified version here. If you encounter any detailed issues, please refer to the [original installation document](https://byu-holoocean.github.io/holoocean-docs/v2.1.0/usage/installation.html).
215
 
216
 
217
- ## Clone HoloOcean
218
-
219
- Make sure your GitHub account is linked to an **Epic Games** account, please Follow the steps [here](https://www.unrealengine.com/en-US/ue-on-github) and remember to accept the email invitation from Epic Games.
220
-
221
- After that clone HoloOcean:
222
-
223
- ```bash
224
- git clone git@github.com:byu-holoocean/HoloOcean.git holoocean
225
- ```
226
227
  ## Packaged Installation
228
 
229
- 1. Additional Requirements
230
-
231
- For the build-essential package for Linux, you can run the following console command:
232
-
233
- ```bash
234
- sudo apt install build-essential
235
- ```
236
-
237
- 2. Python Library
238
 
239
  From the cloned repository, install the Python package by doing the following:
240
 
241
  ```bash
242
- cd holoocean/client
243
  pip install .
244
  ```
245
 
246
- 3. Worlds Packages
247
-
248
- To install the most recent version of the Ocean worlds package, open a Python shell by typing the following and hit enter:
249
-
250
- ```bash
251
- python
252
- ```
253
 
254
  Install the package by running the following Python commands:
255
 
@@ -279,28 +260,43 @@ C:\Users\Windows\AppData\Local\holoocean\2.0.0\worlds\Ocean
279
  **1. If you're using it for the first time, you have to compile it**
280
 
281
  1-1. find the Holodeck.uproject in **engine** folder
282
-
283
- <img src="asset/img/pic1.png" style="width: 60%; height: auto;" align="center">
284
 
285
  1-2. Right-click and select **Generate Visual Studio project files**
286
-
287
- <img src="asset/img/pic2.png" style="width: 60%; height: auto;" align="center">
288
 
289
  1-3. If the version is not 5.3.2, please choose **Switch Unreal Engine Version**
290
-
291
- <img src="asset/img/pic3.png" style="width: 60%; height: auto;" align="center">
292
 
293
  1-4. Then open the project
294
 
295
- <img src="asset/img/pic4.png" style="width: 60%; height: auto;" align="center">
296
 
297
  **2. Then find the `HAIDI` map in `demo` directory**
298
 
299
- <img src="asset/img/pic5.png" style="width: 60%; height: auto;" align="center">
300
 
301
  **3. Run the project**
302
 
303
- <img src="asset/img/pic6.png" style="width: 60%; height: auto;" align="center">
304
 
305
  # 🧠 Decision Task
306
 
@@ -310,7 +306,7 @@ The decision experiment can be run with reference to the [Quick Start](#-quick-s
310
 
311
  ## Target Object Locations
312
 
313
- We have provided eight tasks. For specific task descriptions, please refer to the [paper](https://huggingface.co/papers/2509.26536).
314
 
315
  The following are the coordinates for each target object in the environment (in meters):
316
 
@@ -356,7 +352,7 @@ The following are the coordinates for each target object in the environment (in
356
 
357
  ### Import Data
358
 
359
- First, you need download our data from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym).
360
 
361
  And then create a new `data` folder in the project root directory:
362
 
@@ -609,24 +605,24 @@ python perception/task/init_map_with_sonar.py \
609
 
610
  ## Decision Task
611
 
612
- <img src="asset/img/t1.png" align=center>
613
 
614
  - This table is the performance in decision tasks requiring autonomous completion by MLLM-driven agents.
615
 
616
  ## Perception Task
617
 
618
- <img src="asset/img/t2.png" align=center>
619
 
620
  - This table is the performance of perception tasks across different models and conditions.
621
  - Values represent accuracy percentages.
622
  - Adding sonar means using both RGB and sonar images.
623
 
624
- # πŸ“š Datasets
625
  **The link to the dataset is as follows**\
626
  ☁️ <a href="https://drive.google.com/drive/folders/1VhrvhvbWvnaS4EyeyaV1fmTQ6gPo8GCN?usp=drive_link" target="_blank">Google Drive</a>
627
  - Decision Task
628
 
629
- ```python
630
  decision_dataset
631
  β”œβ”€β”€ main
632
  β”‚ β”œβ”€β”€ gpt4omini
@@ -652,21 +648,67 @@ decision_dataset
652
  β”œβ”€β”€ qwen
653
  └── gpt4omini
654
  ```
 
655
 
 
 
 
656
 
657
  - Perception Task
658
 
659
- ```python
660
  perception_dataset
661
  β”œβ”€β”€ data
662
  β”‚ β”œβ”€β”€ highLight
663
  β”‚ β”œβ”€β”€ highLightContext
664
  β”‚ β”œβ”€β”€ lowLight
665
  β”‚ β”œβ”€β”€ lowLightContext
 
666
  β”‚
667
  └── result
668
669
  ```
670
 
671
  # 🚩 Citation
672
 
@@ -684,30 +726,4 @@ If this OceanGym paper or benchmark is helpful, please kindly cite as this:
684
  }
685
  ```
686
 
687
- General HoloOcean use:
688
-
689
- ```bibtex
690
- @inproceedings{Potokar22icra,
691
- author = {E. Potokar and S. Ashford and M. Kaess and J. Mangelson},
692
- title = {Holo{O}cean: An Underwater Robotics Simulator},
693
- booktitle = {Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA},
694
- address = {Philadelphia, PA, USA},
695
- month = may,
696
- year = {2022}
697
- }
698
- ```
699
-
700
- Simulation of Sonar (Imaging, Profiling, Sidescan) sensors:
701
-
702
- ```bibtex
703
- @inproceedings{Potokar22iros,
704
- author = {E. Potokar and K. Lay and K. Norman and D. Benham and T. Neilsen and M. Kaess and J. Mangelson},
705
- title = {Holo{O}cean: Realistic Sonar Simulation},
706
- booktitle = {Proc. IEEE/RSJ Intl. Conf. Intelligent Robots and Systems, IROS},
707
- address = {Kyoto, Japan},
708
- month = {Oct},
709
- year = {2022}
710
- }
711
- ```
712
-
713
  πŸ’ Thanks again!
 
1
+ ---
2
+ language:
3
+ - en
4
+ license: mit
5
+ task_categories:
6
+ - robotics
7
+ tags:
8
+ - agent
9
+ - robotics
10
+ - benchmark
11
+ - environment
12
+ - underwater
13
+ - multi-modal
14
+ - mllm
15
+ - large-language-models
16
+ size_categories:
17
+ - n<1K
18
+ ---
19
 
20
  <h1 align="center"> 🌊 OceanGym 🦾 </h1>
21
  <h3 align="center"> A Benchmark Environment for Underwater Embodied Agents </h3>
22
 
23
  <p align="center">
24
  🌐 <a href="https://oceangpt.github.io/OceanGym" target="_blank">Home Page</a>
25
+ 📄 <a href="https://arxiv.org/abs/2509.26536" target="_blank">ArXiv Paper</a>

26
  🤗 <a href="https://huggingface.co/datasets/zjunlp/OceanGym" target="_blank">Hugging Face</a>
27
+ ☁️ <a href="https://drive.google.com/file/d/1EfKHeiyQD5eoJ6-EsiJHuIdBRM5Ope5A/view?usp=drive_link" target="_blank">Google Drive</a>
28
+ ☁️ <a href="https://pan.baidu.com/s/16h86huHLeFGAKatRWvLrFQ?pwd=wput" target="_blank">Baidu Drive</a>
29
  </p>
30
 
31
+ <img src="asset/img/o1.png" align=center>
32
 
33
  **OceanGym** is a high-fidelity embodied underwater environment that simulates a realistic ocean setting with diverse scenes. As illustrated in the figure, OceanGym establishes a robust benchmark for evaluating autonomous agents through a series of challenging tasks, encompassing various perception analyses and decision-making navigation. The platform facilitates these evaluations by supporting multi-modal perception and providing action spaces for continuous control.
34
 
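The control interface follows the HoloOcean pattern the environment builds on: each step returns multi-modal sensor readings and takes a continuous command vector. A rough sketch of that loop (the scenario name and command size are placeholders, not the packaged configuration):

```python
# Rough sketch of the sense-act loop; scenario name and command size are
# placeholders -- check the packaged OceanGym scenarios for the real values.
import holoocean
import numpy as np

env = holoocean.make("OceanGym-Perception")  # placeholder scenario name
command = np.zeros(8)                        # e.g. 8 thruster forces for a hovering AUV

for _ in range(100):
    state = env.step(command)                # dict of sensor readings (camera, sonar, pose, ...)
```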
 
 
35
  # πŸ’ Acknowledgement
36
 
37
+ The OceanGym environment is built on Unreal Engine (UE) 5.3, with certain components inspired by and partially based on [HoloOcean](https://github.com/byu-holoocean). We sincerely acknowledge their valuable contribution.
 
 
38
 
 
39
 
40
  # πŸ”” News
41
 
42
+ - 10-2025, we released the initial version of OceanGym along with the accompanying [paper](https://arxiv.org/abs/2509.26536).
43
+ - 04-2025, we launched the OceanGym project.
44
 
45
  ---
46
 
 
149
 
150
  ## Perception Task
151
 
152
+ > All commands are written for **Linux**; if you are using **Windows**, adjust the path representation accordingly (especially the slashes).
153
+
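For scripts, one way to sidestep the slash difference entirely is to build paths with `pathlib`, which renders the separator appropriate to the host OS; a minimal sketch:

```python
# Minimal sketch: pathlib uses the separator of the current OS,
# so the same snippet works on Linux and Windows.
from pathlib import Path

dataset_dir = Path("data") / "perception"
print(dataset_dir)   # data/perception on Linux, data\perception on Windows
```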
154
  **Step 1: Prepare the dataset**
155
 
156
+ After downloading the data from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym/tree/main/data/perception) or [Google Drive](https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih), place it in the `data/perception` folder.
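If you prefer to script the download, a minimal sketch using the `huggingface_hub` client (assuming it is installed; adjust `local_dir` to your layout) looks like this:

```python
# Sketch: pull only the perception split of zjunlp/OceanGym from the Hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="zjunlp/OceanGym",
    repo_type="dataset",
    allow_patterns="data/perception/*",
    local_dir=".",   # files then land under ./data/perception
)
```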
157
 
158
  **Step 2: Select model parameters**
159
 
 
212
  > We have placed a simplified version here. If you encounter any detailed issues, please refer to the [original installation document](https://byu-holoocean.github.io/holoocean-docs/v2.1.0/usage/installation.html).
213
 
214
 
215
+ ## Install OceanGym_large.zip
216
 
217
+ From
218
+ ☁️ <a href="https://drive.google.com/file/d/1EfKHeiyQD5eoJ6-EsiJHuIdBRM5Ope5A/view?usp=drive_link" target="_blank">Google Drive</a>
219
+ ☁️ <a href="https://pan.baidu.com/s/16h86huHLeFGAKatRWvLrFQ?pwd=wput" target="_blank">Baidu Drive</a>
220
+ download **OceanGym_large.zip** and extract it to a folder of your choice.
221
+
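If you want to script the extraction, a minimal Python sketch is below (the archive and target paths are placeholders; point them at wherever you saved the download):

```python
# Sketch: extract the downloaded archive (paths are placeholders).
import zipfile
from pathlib import Path

archive = Path("OceanGym_large.zip")   # wherever you saved the download
target = Path("OceanGym_large")        # any folder you like

target.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)
```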
222
  ## Packaged Installation
223
 
224
+ 1. Python Library
225
 
226
  From the extracted folder, install the Python package as follows:
227
 
228
  ```bash
229
+ cd OceanGym_large/client
230
  pip install .
231
  ```
232
 
233
+ 2. Worlds Packages
234
 
235
  Install the package by running the following Python commands:
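Based on HoloOcean's documented install flow (the old README installs the `Ocean` worlds package from a Python shell), the commands are presumably along these lines:

```python
# Presumed worlds-package install, following HoloOcean's documented flow;
# the old README installs the "Ocean" worlds package from a Python shell.
import holoocean

holoocean.install("Ocean")
```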
236
 
 
260
  **1. If you're using it for the first time, you have to compile it**
261
 
262
  1-1. Find the `Holodeck.uproject` file in the **engine** folder
263
+
264
+ <img src="asset/img/pic1.png" style="width: 60%; height: auto;" align="center">
265
 
266
  1-2. Right-click and select **Generate Visual Studio project files**
267
+
268
+ <img src="asset/img/pic2.png" style="width: 60%; height: auto;" align="center">
269
 
270
  1-3. If the version is not 5.3.2, please choose **Switch Unreal Engine Version**
271
+
272
+ <img src="asset/img/pic3.png" style="width: 60%; height: auto;" align="center">
273
 
274
  1-4. Then open the project
275
 
276
+ <img src="asset/img/pic4.png" style="width: 60%; height: auto;" align="center">
277
 
278
  **2. Then find the `HAIDI` map in the `demo` directory**
279
 
280
+ <img src="asset/img/pic5.png" style="width: 60%; height: auto;" align="center">
281
 
282
  **3. Run the project**
283
 
284
+ <img src="asset/img/pic6.png" style="width: 60%; height: auto;" align="center">
285
+
286
+ **4. Run the code**
287
+
288
+ When the UE editor output shows the following message, **"LogD3D12RHI: Cannot end block when stack is empty"**, it indicates that the scene has been loaded.
289
+
290
+ <img src="asset/img/pic7.png" style="width: 60%; height: auto;" align="center">
291
+
292
+ Then you can start the code, either directly from VS Code
293
+
294
+ <img src="asset/img/pic8.png" style="width: 60%; height: auto;" align="center">
295
+ or by entering the following command on the command line:
296
+
297
+ ```bash
298
+ python decision\tasks\task4.py
299
+ ```
300
 
301
  # 🧠 Decision Task
302
 
 
306
 
307
  ## Target Object Locations
308
 
309
+ We have provided eight tasks. For specific task descriptions, please refer to the [paper](https://arxiv.org/abs/2509.26536).
310
 
311
  The following are the coordinates for each target object in the environment (in meters):
312
 
 
352
 
353
  ### Import Data
354
 
355
+ First, download our data from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym) or [Google Drive](https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih).
356
 
357
  And then create a new `data` folder in the project root directory:
358
 
 
605
 
606
  ## Decision Task
607
 
608
+ <img src="asset/img/t1.png" align=center>
609
 
610
  - This table reports performance on decision tasks that require autonomous completion by MLLM-driven agents.
611
 
612
  ## Perception Task
613
 
614
+ <img src="asset/img/t2.png" align=center>
615
 
616
  - This table reports performance on perception tasks across different models and conditions.
617
  - Values represent accuracy percentages.
618
  - Adding sonar means using both RGB and sonar images.
619
 
620
+ # πŸ“š DataSets
621
  **The link to the dataset is as follows**\
622
  ☁️ <a href="https://drive.google.com/drive/folders/1VhrvhvbWvnaS4EyeyaV1fmTQ6gPo8GCN?usp=drive_link" target="_blank">Google Drive</a>
623
  - Decision Task
624
 
625
+ ```
626
  decision_dataset
627
  ├── main
627
  │ ├── gpt4omini

648
  ├── qwen
649
  └── gpt4omini
650
  ```
651
+ ### **How to use this dataset**
652
 
653
+ In the `main` folder, the three subfolders contain the data generated by the three models. Within each model folder there are task folders `task1`-`task12`, and within each task folder there are `point1`-`point3` folders, holding the results generated from different starting points. `point1` and `point2` are **fixed starting points**, located at [144, -114, -63] and [350, -118, -7] respectively, and `point3` is a **random point**.\
654
+ In the scale experiment, `point1`-`point4` represent different task durations: `point1` is **1 hour**, `point2` **1.5 hours**, `point3` **2 hours**, and `point4` **3 hours**. Note that the actual duration may vary to some extent due to large-model call latency, network fluctuations, and other factors.\
655
+ If you want to evaluate files you generated yourself, place the corresponding **memory_{time_stamp}.json** and **important_memory_{time_stamp}.json** files in the corresponding folders.
656
 
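As a rough sketch of how that layout can be traversed (folder names as above; the JSON schema inside the memory files is not documented here, so only file-level handling is shown):

```python
# Sketch: walk decision_dataset/main/<model>/<task>/<point>/ and load the memory files.
# The structure inside each JSON file is an assumption; adjust once you inspect one.
import json
from pathlib import Path

root = Path("decision_dataset/main")

for memory_file in sorted(root.glob("*/task*/point*/memory_*.json")):
    model, task, point = memory_file.parts[-4:-1]
    with open(memory_file, encoding="utf-8") as f:
        memory = json.load(f)
    print(f"{model}/{task}/{point}: {len(memory)} entries in {memory_file.name}")
```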
657
  - Perception Task
658
 
659
+ ```
660
  perception_dataset
661
  ├── data
662
  │ ├── highLight
663
  │ ├── highLightContext
664
  │ ├── lowLight
665
  │ ├── lowLightContext
666
+ │ └── ... (label files)
667
  │
668
  └── result
669
+     └── ... (detailed result files)
670
+ ```
671
+
672
+ ### **How to use this dataset**
673
+
674
+ In the main folder, `data` holds the test data for the perception task, and `result` holds the detailed results behind this [table](#perception-task-1).
675
+
676
+ Under the `data` folder there are 4 subfolders and 4 JSON files. Each subfolder contains the test data for one perception task setting, and each JSON file is the label file for its corresponding subfolder.
677
+
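A small loading sketch, under the (unverified) assumption that each label JSON is named after its subfolder; adjust the root path and naming to match your download:

```python
# Sketch: pair each perception subfolder with its label JSON and count its files.
# Assumes labels are named after their folder (e.g. lowLight.json) -- verify locally.
import json
from pathlib import Path

data_dir = Path("perception_dataset/data")

for condition_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    label_file = data_dir / f"{condition_dir.name}.json"
    if not label_file.exists():
        continue  # naming assumption did not hold for this folder
    labels = json.loads(label_file.read_text(encoding="utf-8"))
    samples = [p for p in condition_dir.iterdir() if p.is_file()]
    print(f"{condition_dir.name}: {len(samples)} files, {len(labels)} labels")
```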
678
+ # πŸ”§ Develop OceanGym
679
+ OceanGym supports custom scenarios. You can freely exert yourself in the scenarios we provide!\
680
+ You can find the assets you need in the **ue5 fab Mall** and add them to OceanGym to test the exploration ability of the robot!\
681
+ Or modify parameters such as **terrain and lighting** to simulate the weather in different scenarios!
682
+
683
+ ### Modify lighting
684
+
685
+ Step 1:
686
+ Find the **DirectionalLight** in the **Outliner**
687
+
688
+ Step 2:
689
+ Open the **Details** panel of the **DirectionalLight**
690
+
691
+ Step 3:
692
+ Modify the **Light** settings as per your requirements
693
+
694
+ <img src="asset/img/pic9.png" style="width: 60%; height: auto;" align="center">
695
 
696
+ **Notice**\
697
+ In our paper, we simulate low-light and high-light environments: the light Intensity is **10.0 lux** in the **high-light** environment
698
+ and **1.5 lux** in the **low-light** environment.
699
+
700
+ ### Modify start position
701
+ Step 1:
702
+ Find the initial config file **OceanGym.json** in
703
+ ```
704
+ C:\Users\Windows\AppData\Local\holoocean\2.0.0\worlds\Ocean
705
  ```
706
+ Step 2:
707
+ Modify the **location** values as per your requirements
708
+
709
+ <img src="asset/img/pic10.png" style="width: 60%; height: auto;" align="center">
710
+
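If you prefer to script the change, a minimal sketch is below; it assumes `OceanGym.json` follows HoloOcean's scenario format, where each entry in `agents` carries a `location` field, so verify against your copy of the file first:

```python
# Sketch: edit the start position in OceanGym.json programmatically.
# Assumes the HoloOcean scenario layout: {"agents": [{"location": [x, y, z], ...}, ...]}.
import json
from pathlib import Path

config_path = Path(r"C:\Users\Windows\AppData\Local\holoocean\2.0.0\worlds\Ocean\OceanGym.json")

config = json.loads(config_path.read_text(encoding="utf-8"))
config["agents"][0]["location"] = [144, -114, -63]  # e.g. fixed starting point 1 from the dataset notes
config_path.write_text(json.dumps(config, indent=4), encoding="utf-8")
```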
711
+ If you want to develop more functionality, you can visit [the official HoloOcean documentation](https://byu-holoocean.github.io/holoocean-docs/v2.0.1/develop/develop.html)
712
 
713
  # 🚩 Citation
714
 
 
726
  }
727
  ```
728
 
729
  πŸ’ Thanks again!