---
license: mit
language:
- en
tags:
- agent
---
<h1 align="center"> 🌊 OceanGym 🦾 </h1>
<h3 align="center"> A Benchmark Environment for Underwater Embodied Agents </h3>

<p align="center">
🌐 <a href="https://oceangpt.github.io/OceanGym" target="_blank">Home Page</a>
πŸ“„ <a href="https://arxiv.org/abs/123" target="_blank">ArXiv Paper</a>
πŸ€— <a href="https://huggingface.co/datasets/zjunlp/OceanGym" target="_blank">Hugging Face</a>
☁️ <a href="https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih?usp=sharing" target="_blank">Google Drive</a>
</p>

<img src="asset\img\o1.png" align=center>

**OceanGym** is a high-fidelity embodied underwater environment that simulates a realistic ocean setting with diverse scenes. As illustrated in the figure, OceanGym establishes a robust benchmark for evaluating autonomous agents through a series of challenging tasks spanning perception analysis and decision-making navigation. The platform supports multi-modal perception and provides action spaces for continuous control.

# πŸ’ Acknowledgement

The OceanGym environment is built on Unreal Engine (UE) 5.3.

Parts of OceanGym are developed on top of [HoloOcean](https://github.com/byu-holoocean).

Thanks for their great contributions!

# πŸ”” News

- 09-2025, we launched the OceanGym project.
- 08-2025, we finished the OceanGym environment.

---

**Contents:**
- [πŸ’ Acknowledgement](#-acknowledgement)
- [πŸ”” News](#-news)
- [πŸ“Ί Quick Start](#-quick-start)
  - [Decision Task](#decision-task)
  - [Perception Task](#perception-task)
- [βš™οΈ Set up Environment](#️-set-up-environment)
  - [Clone HoloOcean](#clone-holoocean)
  - [Packaged Installation](#packaged-installation)
  - [Add World Files](#add-world-files)
  - [Open the World](#open-the-world)
- [🧠 Decision Task](#-decision-task)
  - [Target Object Locations](#target-object-locations)
  - [Evaluation Criteria](#evaluation-criteria)
- [πŸ‘€ Perception Task](#-perception-task)
  - [Using the Bench to Eval](#using-the-bench-to-eval)
    - [Import Data](#import-data)
    - [Set your Model Parameters](#set-your-model-parameters)
    - [Simple Multi-views](#simple-multi-views)
    - [Multi-views with Sonar](#multi-views-with-sonar)
    - [Multi-views add Sonar Examples](#multi-views-add-sonar-examples)
  - [Collecting Image Data](#collecting-image-data)
    - [Modify Configuration File](#modify-configuration-file)
    - [Collect Camera Images Only](#collect-camera-images-only)
    - [Collect Camera and Sonar Images](#collect-camera-and-sonar-images)
- [⏱️ Results](#️-results)
  - [Decision Task](#decision-task-1)
  - [Perception Task](#perception-task-1)
- [🚩 Citation](#-citation)

# πŸ“Ί Quick Start

Install the experimental code environment using pip:

```bash
pip install -r requirements.txt
```

## Decision Task

> The environment must be set up first! Build it by following [here](#️-set-up-environment).

**Step 1: Run a Task Script**

For example, to run task 4:

```bash
python decision\tasks\task4.py
```

Follow the keyboard instructions or switch to LLM mode for automatic decision-making.

**Step 2: Keyboard Control Guide**

| Key | Action |
|-------------|------------------------------|
| W | Move Forward |
| S | Move Backward |
| A | Move Left |
| D | Move Right |
| J | Turn Left |
| L | Turn Right |
| I | Move Up |
| K | Move Down |
| M | Switch to LLM Mode |
| Q | Exit |

> You can use WASD for movement, J/L for turning, and I/K for up/down.
> Press `M` to switch to large language model mode (this may cause temporary lag).
> Press `Q` to exit.
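
The key handling in the task scripts can be sketched as a simple lookup table; the action names below are illustrative placeholders, not the actual code in `decision/tasks/`:

```python
# Illustrative sketch of the keyboard dispatch used by the task scripts.
# The real command handling lives in decision/tasks/*.py; these names are placeholders.
KEY_ACTIONS = {
    "w": "move_forward", "s": "move_backward",
    "a": "move_left",    "d": "move_right",
    "j": "turn_left",    "l": "turn_right",
    "i": "move_up",      "k": "move_down",
    "m": "switch_to_llm_mode",
    "q": "exit",
}

def interpret_key(key):
    """Map a pressed key to an action name, or None if the key is unbound."""
    return KEY_ACTIONS.get(key.lower())
```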

**Step 3: View Results**

Logs and memory files are automatically saved in the `log/` and `memory/` directories.

**Step 4: Evaluate the Results**

Place the generated `memory` and `important_memory` files into the corresponding `point` folders.
Then, set the evaluation paths in the `evaluate.py` file.

We provide 6 experimental evaluation paths. In `evaluate.py`, you can configure them as follows:

```python
eval_roots = [
    os.path.join(eval_root, "main", "gpt4omini"),
    os.path.join(eval_root, "main", "gemini"),
    os.path.join(eval_root, "main", "qwen"),
    os.path.join(eval_root, "migration", "gpt4o"),
    os.path.join(eval_root, "migration", "qwen"),
    os.path.join(eval_root, "scale", "qwen"),
]
```

To run the evaluation:

```bash
python decision\utils\evaluate.py
```

The generated results will be saved under the `\eval\decision` folder.

## Perception Task

**Step 1: Prepare the dataset**

Download the dataset from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym/tree/main/data/perception) and put it into the `data/perception` folder.

**Step 2: Select model parameters**

| parameter | function |
| ---| --- |
| model_template | The large language model message queue template you selected. |
| model_name_or_path | If it is an API model, the model name; if it is a local model, the path. |
| api_key | If it is an API model, enter your key. |
| base_url | If it is an API model, enter its base URL. |

Currently we only support OpenAI, Google Gemma, Qwen and OpenBMB.

```bash
MODELS_TEMPLATE="Yours"
MODEL_NAME_OR_PATH="Yours"
API_KEY="Yours"
BASE_URL="Yours"
```

**Step 3: Run the experiments**

| parameter | function |
| ---| --- |
| exp_name | Customize the name of the experiment to save the results. |
| exp_idx | Select the experiment number, or enter "all" to select all. |
| exp_json | JSON file containing the experiment label data. |
| images_dir | The folder where the experimental image data is stored. |

For the experimental types, we designed (1) a multi-view perception task and (2) a context-based perception task.

For the lighting conditions, we designed (1) high illumination and (2) low illumination.

For the auxiliary sonar, we designed (1) no sonar image, (2) a zero-shot sonar image, and (3) a sonar image with a few sonar examples.

For example, this command is used to evaluate the **multi-view** perception task under **high** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

For more patterns of perception tasks, please read [this](#-perception-task) part carefully.

# βš™οΈ Set up Environment

This project is based on the HoloOcean environment. πŸ’

> We have placed a simplified guide here. If you encounter any detailed issues, please refer to the [original installation document](https://byu-holoocean.github.io/holoocean-docs/v2.1.0/usage/installation.html).

## Clone HoloOcean

Make sure your GitHub account is linked to an **Epic Games** account: follow the steps [here](https://www.unrealengine.com/en-US/ue-on-github) and remember to accept the email invitation from Epic Games.

After that, clone HoloOcean:

```bash
git clone git@github.com:byu-holoocean/HoloOcean.git holoocean
```

## Packaged Installation

1. Additional Requirements

To install the build-essential package on Linux, run the following console command:

```bash
sudo apt install build-essential
```

2. Python Library

From the cloned repository, install the Python package:

```bash
cd holoocean/client
pip install .
```

3. Worlds Packages

To install the most recent version of the Ocean worlds package, open a Python shell:

```bash
python
```

Then install the package by running the following Python commands:

```python
import holoocean
holoocean.install("Ocean")
```

To do these steps in a single console command, use:

```bash
python -c "import holoocean; holoocean.install('Ocean')"
```

## Add World Files

Place the JSON config file from `asset/decision/map_config` or `asset/perception/map_config` into the HoloOcean worlds directory, for example:

(Windows)

```
C:\Users\Windows\AppData\Local\holoocean\2.0.0\worlds\Ocean
```

## Open the World

**1. If you are using it for the first time, you have to compile it**

1-1. Find the Holodeck.uproject in the **engine** folder \
<img src="asset\img\pic1.png" style="width: 60%; height: auto;" align="center">

1-2. Right-click and select: Generate Visual Studio project files \
<img src="asset\img\pic2.png" style="width: 60%; height: auto;" align="center">

1-3. If the version is not 5.3.2, please choose Switch Unreal Engine Version \
<img src="asset\img\pic3.png" style="width: 60%; height: auto;" align="center">

1-4. Then open the project \
<img src="asset\img\pic4.png" style="width: 60%; height: auto;" align="center">

**2. Then find the `HAIDI` map in the `demo` directory** \
<img src="asset\img\pic5.png" style="width: 60%; height: auto;" align="center">

**3. Run the project** \
<img src="asset\img\pic6.png" style="width: 60%; height: auto;" align="center">

# 🧠 Decision Task

> All commands apply to **Windows** only, because they require full support from the `UE5 Engine`.

The decision experiment can be run with reference to the [Quick Start](#️-quick-start).

## Target Object Locations

We have provided eight tasks. For specific task descriptions, please refer to the paper.

The following are the coordinates of each target object in the environment (in meters):

- **MINING ROBOT**:
  (-71, 149, -61), (325, -47, -83)
- **OIL PIPELINE**:
  (345, -165, -32), (539, -233, -42), (207, -30, -66)
- **OIL DRUM**:
  (447, -203, -98)
- **SUNKEN SHIP**:
  (429, -151, -69), (78, -11, -47)
- **ELECTRICAL BOX**:
  (168, 168, -65)
- **WIND POWER STATION**:
  (207, -30, -66)
- **AIRCRAFT WRECKAGE**:
  (40, -9, -54), (296, 78, -70), (292, -186, -67)
- **H-MARKED LANDING PLATFORM**:
  (267, 33, -80)
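
Because the coordinates are in meters, the distance from a vehicle position to a target is a plain 3-D Euclidean distance. A minimal sketch (the dictionary below just restates a few of the coordinates above):

```python
import math

# A few of the target coordinates listed above (meters).
TARGETS = {
    "MINING ROBOT": [(-71, 149, -61), (325, -47, -83)],
    "OIL DRUM": [(447, -203, -98)],
    "H-MARKED LANDING PLATFORM": [(267, 33, -80)],
}

def nearest_distance(position, target_name):
    """Euclidean distance (m) from position to the closest instance of the target."""
    return min(math.dist(position, point) for point in TARGETS[target_name])
```

For example, `nearest_distance((447, -203, -68), "OIL DRUM")` returns `30.0`, i.e. a vehicle that stopped 30 m directly above the drum.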

---

## Evaluation Criteria

1. If the target is not found, the final stopping position is used for evaluation.
2. If the target is found, the closest distance to any target point is used.
3. For found targets:
   - Minimum distance ≀ 30: full score
   - 30 < distance < 100: score decreases proportionally
   - Distance β‰₯ 100: score is 0
4. Score composition:
   - One point: 100
   - Two points: 60 / 40
   - Three points: 60 / 20 / 20
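
The criteria above can be written as a small scoring helper. The official implementation is `decision/utils/evaluate.py`; this sketch assumes "decreases proportionally" means a linear fall-off from full score at 30 m down to 0 at 100 m:

```python
def point_score(distance, full_score=100):
    """Score one target point from its minimum distance (m), per the criteria above."""
    if distance <= 30:
        return full_score          # within 30 m: full credit
    if distance >= 100:
        return 0                   # 100 m or more: no credit
    # between 30 m and 100 m: linear fall-off (our reading of "proportionally")
    return full_score * (100 - distance) / 70
```

With the two-point split (60 / 40), a task score would then be `point_score(d1, 60) + point_score(d2, 40)`.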

# πŸ‘€ Perception Task

## Using the Bench to Eval

> All commands are written for **Linux**, so if you are using **Windows**, you need to change the corresponding path representation (especially the slashes).
>
> Currently we only support OpenAI, Google Gemma, Qwen and OpenBMB. If you need to customize the model, please contact the authors.

### Import Data

First, you need to download our data from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym).

Then create a new `data` folder in the project root directory:

```bash
mkdir -p data/perception
```

Finally, put the downloaded data into the corresponding folder.

### Set your Model Parameters

Open a terminal in the root directory and set them directly.

| parameter | function |
| ---| --- |
| model_template | The large language model message queue template you selected. |
| model_name_or_path | If it is an API model, the model name; if it is a local model, the path. |
| api_key | If it is an API model, enter your key. |
| base_url | If it is an API model, enter its base URL. |

```bash
MODELS_TEMPLATE="Yours"
MODEL_NAME_OR_PATH="Yours"
API_KEY="Yours"
BASE_URL="Yours"
```

### Simple Multi-views

All of these scripts evaluate the perception task, and the parameters are as follows:

| parameter | function |
| ---| --- |
| exp_name | Customize the name of the experiment to save the results. |
| exp_idx | Select the experiment number, or enter "all" to select all. |
| exp_json | JSON file containing the experiment label data. |
| images_dir | The folder where the experimental image data is stored. |

This command is used to evaluate the **multi-view** perception task under **high** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **high** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_highLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLightContext.json" \
    --images_dir "/data/perception/highLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **multi-view** perception task under **low** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_lowLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLight.json" \
    --images_dir "/data/perception/lowLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **low** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_lowLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLightContext.json" \
    --images_dir "/data/perception/lowLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

### Multi-views with Sonar

This command is used to evaluate the **multi-view** perception task under **high** illumination with a **sonar** image:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **high** illumination with a **sonar** image:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_highLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLightContext.json" \
    --images_dir "/data/perception/highLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **multi-view** perception task under **low** illumination with a **sonar** image:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_lowLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLight.json" \
    --images_dir "/data/perception/lowLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **low** illumination with a **sonar** image:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_lowLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLightContext.json" \
    --images_dir "/data/perception/lowLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

### Multi-views add Sonar Examples

This command is used to evaluate the **multi-view** perception task under **high** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **high** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_highLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLightContext.json" \
    --images_dir "/data/perception/highLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **multi-view** perception task under **low** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_lowLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLight.json" \
    --images_dir "/data/perception/lowLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **low** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_lowLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLightContext.json" \
    --images_dir "/data/perception/lowLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

## Collecting Image Data

> This part is optional. Use it only when you need to collect pictures yourself.

### Modify Configuration File

The sample configuration files can be found in `asset/perception/map_config`. Copy one and paste it into your HoloOcean project's configuration.
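
For orientation, a HoloOcean scenario configuration is a single JSON file; the fragment below only illustrates its general shape (the names and values here are illustrative, and the files shipped in `asset/perception/map_config` are the authoritative versions):

```json
{
  "name": "ExampleScenario",
  "world": "Ocean",
  "main_agent": "auv0",
  "agents": [
    {
      "agent_name": "auv0",
      "agent_type": "HoveringAUV",
      "sensors": [
        { "sensor_type": "RGBCamera", "sensor_name": "FrontCamera" }
      ],
      "control_scheme": 0,
      "location": [0, 0, -10]
    }
  ]
}
```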

### Collect Camera Images Only

This command is used to collect **camera** images only, and the parameters are as follows:

| parameter | function |
| ---| --- |
| scenario | The name of the JSON configuration file you want to use. |
| task_name | Customize the name of the experiment to save the results. |
| rgbcamera | The camera directions you can choose. To select all, enter "all". |

```bash
python perception/task/init_map.py \
    --scenario without_sonar \
    --task_name "Exp_Camera_Only" \
    --rgbcamera "all"
```

### Collect Camera and Sonar Images

This command is used to collect both **camera** and **sonar** images at the same time:

```bash
python perception/task/init_map_with_sonar.py \
    --scenario with_sonar \
    --task_name "Exp_Add_Sonar" \
    --rgbcamera "FrontCamera"
```

# ⏱️ Results

## Decision Task

<img src="asset\img\t1.png" align=center>

- This table shows performance on decision tasks requiring autonomous completion by MLLM-driven agents.

## Perception Task

<img src="asset\img\t2.png" align=center>

- This table shows performance on perception tasks across different models and conditions.
- Values represent accuracy percentages.
- Adding sonar means using both RGB and sonar images.

# 🚩 Citation

If the OceanGym paper or benchmark is helpful, please kindly cite it as:

```bibtex
@inproceedings{xxx,
  title={OceanGym: A Benchmark Environment for Underwater Embodied Agents},
  ...
}
```

For general HoloOcean use:

```bibtex
@inproceedings{Potokar22icra,
  author = {E. Potokar and S. Ashford and M. Kaess and J. Mangelson},
  title = {Holo{O}cean: An Underwater Robotics Simulator},
  booktitle = {Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA},
  address = {Philadelphia, PA, USA},
  month = may,
  year = {2022}
}
```

For the simulation of sonar (imaging, profiling, sidescan) sensors:

```bibtex
@inproceedings{Potokar22iros,
  author = {E. Potokar and K. Lay and K. Norman and D. Benham and T. Neilsen and M. Kaess and J. Mangelson},
  title = {Holo{O}cean: Realistic Sonar Simulation},
  booktitle = {Proc. IEEE/RSJ Intl. Conf. Intelligent Robots and Systems, IROS},
  address = {Kyoto, Japan},
  month = {Oct},
  year = {2022}
}
```

πŸ’ Thanks again!
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: mit
3
+ language:
4
+ - en
5
+ tags:
6
+ - agent
7
+ ---
8
+ <h1 align="center"> 🌊 OceanGym 🦾 </h1>
9
+ <h3 align="center"> A Benchmark Environment for Underwater Embodied Agents </h3>
10
+
11
+ <p align="center">
12
+ 🌐 <a href="https://oceangpt.github.io/OceanGym" target="_blank">Home Page</a>
13
+ πŸ“„ <a href="https://arxiv.org/abs/123" target="_blank">ArXiv Paper</a>
14
+ πŸ€— <a href="https://huggingface.co/datasets/zjunlp/OceanGym" target="_blank">Hugging Face</a>
15
+ ☁️ <a href="https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih?usp=sharing" target="_blank">Google Drive</a>
16
+ </p>
17
+
18
+ <img src="asset\img\o1.png" align=center>
19
+
20
+ **OceanGym** is a high-fidelity embodied underwater environment that simulates a realistic ocean setting with diverse scenes. As illustrated in figure, OceanGym establishes a robust benchmark for evaluating autonomous agents through a series of challenging tasks, encompassing various perception analyses and decision-making navigation. The platform facilitates these evaluations by supporting multi-modal perception and providing action spaces for continuous control.
21
+
22
+ # πŸ’ Acknowledgement
23
+
24
+ OceanGym environment is based on Unreal Engine (UE) 5.3.
25
+
26
+ Partial functions of OceanGym is developed on [HoloOcean](https://github.com/byu-holoocean).
27
+
28
+ Thanks for their great contributions!
29
+
30
+ # πŸ”” News
31
+
32
+ - 09-2025, we launched the OceanGym project.
33
+ - 08-2025, we finshed the OceanGym environment.
34
+
35
+ ---
36
+
37
+ **Contents:**
38
+ - [πŸ’ Acknowledgement](#-acknowledgement)
39
+ - [πŸ”” News](#-news)
40
+ - [πŸ“Ί Quick Start](#-quick-start)
41
+ - [Decision Task](#decision-task)
42
+ - [Perception Task](#perception-task)
43
+ - [βš™οΈ Set up Environment](#️-set-up-environment)
44
+ - [Clone HoloOcean](#clone-holoocean)
45
+ - [Packaged Installation](#packaged-installation)
46
+ - [Add World Files](#add-world-files)
47
+ - [Open the World](#open-the-world)
48
+ - [🧠 Decision Task](#-decision-task)
49
+ - [Target Object Locations](#target-object-locations)
50
+ - [Evaluation Criteria](#evaluation-criteria)
51
+ - [πŸ‘€ Perception Task](#-perception-task)
52
+ - [Using the Bench to Eval](#using-the-bench-to-eval)
53
+ - [Import Data](#import-data)
54
+ - [Set your Model Parameters](#set-your-model-parameters)
55
+ - [Simple Multi-views](#simple-multi-views)
56
+ - [Multi-views with Sonar](#multi-views-with-sonar)
57
+ - [Multi-views add Sonar Examples](#multi-views-add-sonar-examples)
58
+ - [Collecting Image Data](#collecting-image-data)
59
+ - [Modify Configuration File](#modify-configuration-file)
60
+ - [Collect Camera Images Only](#collect-camera-images-only)
61
+ - [Collect Camera and Sonar Images](#collect-camera-and-sonar-images)
62
+ - [⏱️ Results](#️-results)
63
+ - [Decision Task](#decision-task-1)
64
+ - [Perception Task](#perception-task-1)
65
+ - [🚩 Citation](#-citation)
66
+
67
+ # πŸ“Ί Quick Start
68
+
69
+ Install the experimental code environment using pip:
70
+
71
+ ```bash
72
+ pip install -r requirements.txt
73
+ ```
74
+
75
+ ## Decision Task
76
+
77
+ > Only the environment is ready! Build the environment based on [here](#️-set-up-environment).
78
+
79
+ **Step 1: Run a Task Script**
80
+
81
+ For example, to run task 4:
82
+
83
+ ```bash
84
+ python decision\tasks\task4.py
85
+ ```
86
+
87
+ Follow the keyboard instructions or switch to LLM mode for automatic decision-making.
88
+
89
+
90
+ **Step 2: Keyboard Control Guide**
91
+
92
+ | Key | Action |
93
+ |-------------|------------------------------|
94
+ | W | Move Forward |
95
+ | S | Move Backward |
96
+ | A | Move Left |
97
+ | D | Move Right |
98
+ | J | Turn Left |
99
+ | L | Turn Right |
100
+ | I | Move Up |
101
+ | K | Move Down |
102
+ | M | Switch to LLM Mode |
103
+ | Q | Exit |
104
+
105
+ > You can use WASD for movement, J/L for turning, I/K for up/down.
106
+ > Press `M` to switch to large language model mode (may cause temporary lag).
107
+ > Press `Q` to exit.
108
+
109
+ **Step 3: View Results**
110
+
111
+ Logs and memory files are automatically saved in the `log/` and `memory/` directories.
112
+
113
+ **Step 4: Evaluate the results**
114
+
115
+ Place the generated `memory` and `important_memory` files into the corresponding `point` folders.
116
+ Then, set the evaluation paths in the `evaluate.py` file.
117
+
118
+ We provide 6 experimental evaluation paths. In `evaluate.py`, you can configure them as follows:
119
+
120
+ ```python
121
+ eval_roots = [
122
+ os.path.join(eval_root, "main", "gpt4omini"),
123
+ os.path.join(eval_root, "main", "gemini"),
124
+ os.path.join(eval_root, "main", "qwen"),
125
+ os.path.join(eval_root, "migration", "gpt4o"),
126
+ os.path.join(eval_root, "migration", "qwen"),
127
+ os.path.join(eval_root, "scale", "qwen"),
128
+ ]
129
+ ```
130
+
131
+ To run the evaluation:
132
+
133
+ ```bash
134
+ python decision\utils\evaluate.py
135
+ ```
136
+
137
+ The generated results will be saved under the `\eval\decision` folder.
138
+
139
+ ## Perception Task
140
+
141
+ **Step 1: Prepare the dataset**
142
+
143
+ After downloading from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym/tree/main/data/perception), and put it into the `data/perception` folder.
144
+
145
+ **Step 2: Select model parameters**
146
+
147
+ | parameter | function |
148
+ | ---| --- |
149
+ | model_template | The large language model message queue template you selected. |
150
+ | model_name_or_path | If it is an API model, it is the model name; if it is a local model, it is the path. |
151
+ | api_key | If it is an API model, enter your key. |
152
+ | base_url | If it is an API model, enter its baseful URL. |
153
+
154
+ Now we only support OpenAI, Google Gemma, Qwen and OpenBMB.
155
+
156
+ ```bash
157
+ MODELS_TEMPLATE="Yours"
158
+ MODEL_NAME_OR_PATH="Yours"
159
+ API_KEY="Yours"
160
+ BASE_URL="Yours"
161
+ ```
162
+
163
+ **Step 3: Run the experiments**
164
+
165
+ | parameter | function |
166
+ | ---| --- |
167
+ | exp_name | Customize the name of the experiment to save the results. |
168
+ | exp_idx | Select the experiment number, or enter "all" to select all. |
169
+ | exp_json | JSON file containing the experiment label data. |
170
+ | images_dir | The folder where the experimental image data is stored. |
171
+
172
+ For the experimental types, We designed (1) multi-view perception task and (2) context-based perception task.
173
+
174
+ For the lighting conditions, We designed (1) high illumination and (2) low illumination.
175
+
176
+ For the auxiliary sonar, We designed (1) without sonar image (2) zero-shot sonar image and (3) sonar image with few sonar example.
177
+
178
+ Such as this command is used to evaluate the **multi-view** perception task under **high** illumination:
179
+
180
+
181
+ ```bash
182
+ python perception/eval/mv.py \
183
+ --exp_name Result_MV_highLight_00 \
184
+ --exp_idx "all" \
185
+ --exp_json "/data/perception/highLight.json" \
186
+ --images_dir "/data/perception/highLight" \
187
+ --model_template $MODELS_TEMPLATE \
188
+ --model_name_or_path $MODEL_NAME_OR_PATH \
189
+ --api_key $API_KEY \
190
+ --base_url $BASE_URL
191
+ ```
192
+
193
+ For more patterns about perception tasks, please read [this](#-perception-task) part carefully.
194
+
195
+ # βš™οΈ Set up Environment
196
+
197
+ This project is based on the HoloOcean environment. πŸ’
198
+
199
+ > We have placed a simplified version here. If you encounter any detailed issues, please refer to the [original installation document](https://byu-holoocean.github.io/holoocean-docs/v2.1.0/usage/installation.html).
200
+
201
+
202
+ ## Clone HoloOcean
203
+
204
+ Make sure your GitHub account is linked to an **Epic Games** account, please Follow the steps [here](https://www.unrealengine.com/en-US/ue-on-github) and remember to accept the email invitation from Epic Games.
205
+
206
+ After that clone HoloOcean:
207
+
208
+ ```bash
209
+ git clone git@github.com:byu-holoocean/HoloOcean.git holoocean
210
+ ```

## Packaged Installation

1. Additional Requirements

For the `build-essential` package on Linux, you can run the following console command:

```bash
sudo apt install build-essential
```

2. Python Library

From the cloned repository, install the Python package:

```bash
cd holoocean/client
pip install .
```

3. Worlds Packages

To install the most recent version of the Ocean worlds package, open a Python shell by typing the following and hitting enter:

```bash
python
```

Then install the package by running the following Python commands:

```python
import holoocean
holoocean.install("Ocean")
```

To do these steps in a single console command, use:

```bash
python -c "import holoocean; holoocean.install('Ocean')"
```

## Add World Files

Place the JSON config file from `asset/decision/map_config` or `asset/perception/map_config` into a location such as:

(Windows)

```
C:\Users\Windows\AppData\Local\holoocean\2.0.0\worlds\Ocean
```

## Open the World

**1. If you are using it for the first time, you have to compile it**

1-1. Find the Holodeck.uproject in the **engine** folder \
<img src="asset\img\pic1.png" style="width: 60%; height: auto;" align="center">

1-2. Right-click it and select: Generate Visual Studio project files \
<img src="asset\img\pic2.png" style="width: 60%; height: auto;" align="center">

1-3. If the version is not 5.3.2, please choose Switch Unreal Engine Version \
<img src="asset\img\pic3.png" style="width: 60%; height: auto;" align="center">

1-4. Then open the project \
<img src="asset\img\pic4.png" style="width: 60%; height: auto;" align="center">

**2. Then find the `HAIDI` map in the `demo` directory** \
<img src="asset\img\pic5.png" style="width: 60%; height: auto;" align="center">

**3. Run the project** \
<img src="asset\img\pic6.png" style="width: 60%; height: auto;" align="center">

# 🧠 Decision Task

> All commands are applicable to **Windows** only, because this task requires full support from the `UE5 Engine`.

The decision experiment can be run by following the [Quick Start](#️-quick-start).

## Target Object Locations

We provide eight tasks; for the detailed task descriptions, please refer to the paper.

The coordinates of each target object in the environment (in meters) are:

- **MINING ROBOT**:
  (-71, 149, -61), (325, -47, -83)
- **OIL PIPELINE**:
  (345, -165, -32), (539, -233, -42), (207, -30, -66)
- **OIL DRUM**:
  (447, -203, -98)
- **SUNKEN SHIP**:
  (429, -151, -69), (78, -11, -47)
- **ELECTRICAL BOX**:
  (168, 168, -65)
- **WIND POWER STATION**:
  (207, -30, -66)
- **AIRCRAFT WRECKAGE**:
  (40, -9, -54), (296, 78, -70), (292, -186, -67)
- **H-MARKED LANDING PLATFORM**:
  (267, 33, -80)
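
As a minimal sketch (not the repository's evaluation code), the closest distance from an agent's stopping position to a task's target points can be computed like this, using the `AIRCRAFT WRECKAGE` points from the list above:

```python
import math

# Target points for AIRCRAFT WRECKAGE, taken from the list above (meters).
targets = [(40, -9, -54), (296, 78, -70), (292, -186, -67)]

def closest_distance(position, targets):
    """Euclidean distance from a stopping position to the nearest target point."""
    return min(math.dist(position, t) for t in targets)

print(closest_distance((50, -9, -54), targets))  # β†’ 10.0
```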

---

## Evaluation Criteria

1. If the target is not found, use the final stopping position for evaluation.
2. If the target is found, use the closest distance to any target point.
3. For found targets:
   - Minimum distance ≀ 30: full score
   - 30 < distance < 100: score decreases proportionally
   - Distance β‰₯ 100: score is 0
4. Score composition:
   - One point: 100
   - Two points: 60 / 40
   - Three points: 60 / 20 / 20
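
Under one natural reading of "decreases proportionally" (linear interpolation between 30 m and 100 m), the per-point rule above can be sketched as follows; this is an illustration of the stated rules, not the repository's official scoring script:

```python
def point_score(min_distance, full_score=100):
    """Score one target point from the closest achieved distance (meters)."""
    if min_distance <= 30:      # close enough: full credit
        return full_score
    if min_distance >= 100:     # too far: no credit
        return 0
    # linearly interpolate between full credit at 30 m and zero at 100 m
    return full_score * (100 - min_distance) / 70

print(point_score(20), point_score(65), point_score(120))  # β†’ 100 50.0 0
```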

# πŸ‘€ Perception Task

## Using the Bench to Eval

> All commands are for **Linux**; if you are using **Windows**, you need to change the corresponding path representation (especially the slashes).
>
> Currently we only support OpenAI, Google Gemma, Qwen and OpenBMB models. If you need to customize the model, please contact the author.

### Import Data

First, download our data from [Hugging Face](https://huggingface.co/datasets/zjunlp/OceanGym).

Then create a new `data` folder in the project root directory:

```bash
mkdir -p data/perception
```

Finally, put the downloaded data into the corresponding folder.

### Set your Model Parameters

Open a terminal in the root directory and set them directly:

| parameter | function |
| --- | --- |
| model_template | The message-queue template of the large language model you selected. |
| model_name_or_path | For an API model, the model name; for a local model, the path. |
| api_key | For an API model, enter your key. |
| base_url | For an API model, enter its base URL. |

```bash
MODELS_TEMPLATE="Yours"
MODEL_NAME_OR_PATH="Yours"
API_KEY="Yours"
BASE_URL="Yours"
```
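
To illustrate how these four settings split between API and local models, here is a hypothetical helper (an assumption for clarity, not code from this repository):

```python
def build_model_kwargs(model_template, model_name_or_path, api_key=None, base_url=None):
    """Collect the four settings; api_key/base_url only apply to API models."""
    kwargs = {"template": model_template, "model": model_name_or_path}
    if api_key and base_url:  # API model: add endpoint credentials
        kwargs.update(api_key=api_key, base_url=base_url)
    return kwargs

# Local model: just pass the checkpoint path, no key or URL needed.
print(build_model_kwargs("qwen", "/models/qwen2-vl"))
```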

### Simple Multi-views

All of these scripts evaluate the perception task, and the parameters are as follows:

| parameter | function |
| --- | --- |
| exp_name | Customize the name of the experiment to save the results. |
| exp_idx | Select the experiment number, or enter "all" to select all. |
| exp_json | JSON file containing the experiment label data. |
| images_dir | The folder where the experimental image data is stored. |

This command is used to evaluate the **multi-view** perception task under **high** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **high** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_highLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLightContext.json" \
    --images_dir "/data/perception/highLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **multi-view** perception task under **low** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_lowLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLight.json" \
    --images_dir "/data/perception/lowLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **low** illumination:

```bash
python perception/eval/mv.py \
    --exp_name Result_MV_lowLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLightContext.json" \
    --images_dir "/data/perception/lowLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```
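
The four invocations above differ only in the condition name, so you can drive them from a single loop. A dry-run sketch (the leading `echo` only prints each command; remove it to actually launch the runs):

```bash
for cond in highLight highLightContext lowLight lowLightContext; do
    echo python perception/eval/mv.py \
        --exp_name "Result_MV_${cond}_00" \
        --exp_idx "all" \
        --exp_json "/data/perception/${cond}.json" \
        --images_dir "/data/perception/${cond}" \
        --model_template "$MODELS_TEMPLATE" \
        --model_name_or_path "$MODEL_NAME_OR_PATH" \
        --api_key "$API_KEY" \
        --base_url "$BASE_URL"
done
```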

### Multi-views with Sonar

This command is used to evaluate the **multi-view** perception task under **high** illumination with **sonar** images:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **high** illumination with **sonar** images:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_highLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLightContext.json" \
    --images_dir "/data/perception/highLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **multi-view** perception task under **low** illumination with **sonar** images:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_lowLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLight.json" \
    --images_dir "/data/perception/lowLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **low** illumination with **sonar** images:

```bash
python perception/eval/mvs.py \
    --exp_name Result_MVwS_lowLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLightContext.json" \
    --images_dir "/data/perception/lowLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

### Multi-views with Sonar Examples

This command is used to evaluate the **multi-view** perception task under **high** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_highLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLight.json" \
    --images_dir "/data/perception/highLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **high** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_highLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/highLightContext.json" \
    --images_dir "/data/perception/highLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **multi-view** perception task under **low** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_lowLight_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLight.json" \
    --images_dir "/data/perception/lowLight" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

This command is used to evaluate the **context-based** perception task under **low** illumination with **sonar** image **examples**:

```bash
python perception/eval/mvsex.py \
    --exp_name Result_MVwSss_lowLightContext_00 \
    --exp_idx "all" \
    --exp_json "/data/perception/lowLightContext.json" \
    --images_dir "/data/perception/lowLightContext" \
    --model_template $MODELS_TEMPLATE \
    --model_name_or_path $MODEL_NAME_OR_PATH \
    --api_key $API_KEY \
    --base_url $BASE_URL
```

## Collecting Image Data

> This part is optional. Use it only when you need to collect pictures yourself.

### Modify Configuration File

The sample configuration files can be found in `asset/perception/map_config`. You need to copy them into your HoloOcean project's configuration.

### Collect Camera Images Only

This command is used to collect **camera** images only, and the parameters are as follows:

| parameter | function |
| --- | --- |
| scenario | The name of the JSON configuration file you want to replace. |
| task_name | Customize the name of the experiment to save the results. |
| rgbcamera | The camera directions you can choose. To select all of them, enter "all". |

```bash
python perception/task/init_map.py \
    --scenario without_sonar \
    --task_name "Exp_Camera_Only" \
    --rgbcamera "all"
```

### Collect Camera and Sonar Images

This command is used to collect both **camera** images and **sonar** images at the same time:

```bash
python perception/task/init_map_with_sonar.py \
    --scenario with_sonar \
    --task_name "Exp_Add_Sonar" \
    --rgbcamera "FrontCamera"
```

# ⏱️ Results

## Decision Task

<img src="asset\img\t1.png" align=center>

- This table shows performance on decision tasks that MLLM-driven agents must complete autonomously.

## Perception Task

<img src="asset\img\t2.png" align=center>

- This table shows performance on perception tasks across different models and conditions.
- Values represent accuracy percentages.
- Adding sonar means using both RGB and sonar images.

# 🚩 Citation

If the OceanGym paper or benchmark is helpful, please kindly cite it as follows:

```bibtex
@inproceedings{xxx,
  title={OceanGym: A Benchmark Environment for Underwater Embodied Agents},
  ...
}
```

General HoloOcean use:

```bibtex
@inproceedings{Potokar22icra,
  author = {E. Potokar and S. Ashford and M. Kaess and J. Mangelson},
  title = {Holo{O}cean: An Underwater Robotics Simulator},
  booktitle = {Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA},
  address = {Philadelphia, PA, USA},
  month = may,
  year = {2022}
}
```

Simulation of Sonar (Imaging, Profiling, Sidescan) sensors:

```bibtex
@inproceedings{Potokar22iros,
  author = {E. Potokar and K. Lay and K. Norman and D. Benham and T. Neilsen and M. Kaess and J. Mangelson},
  title = {Holo{O}cean: Realistic Sonar Simulation},
  booktitle = {Proc. IEEE/RSJ Intl. Conf. Intelligent Robots and Systems, IROS},
  address = {Kyoto, Japan},
  month = {Oct},
  year = {2022}
}
```

πŸ’ Thanks again!