Datasets

Formats: parquet
Languages: English
Size: 1K - 10K
Tags: embodied-ai, embodied-navigation, urban-airspace, drone-navigation, multimodal-reasoning, spatial-reasoning
License: cc-by-4.0
Add navigation data and Dataset Viewer table
Upload the canonical navi_data.pkl file, a Hugging Face Dataset Viewer Parquet table split, preview JSON, media assets, and dataset card updates.
- README.md +214 -3
- airsim_utils/__init__.py +0 -0
- airsim_utils/coord_transformation.py +10 -0
- data/train-00000-of-00001.parquet +3 -0
- dataset/navi_data.pkl +3 -0
- dataset/navi_data_preview.json +167 -0
- embodied_vln.py +504 -0
- image/QuantitativeResults.png +3 -0
- image/statistics.png +3 -0
- video/1.gif +3 -0
- video/1.mp4 +3 -0
- video/2.gif +3 -0
- video/2.mp4 +3 -0
- video/3.gif +3 -0
- video/3.mp4 +3 -0
README.md
CHANGED
@@ -1,3 +1,214 @@
- ---
- license: cc-by-4.0
- ---
---
license: cc-by-4.0
pretty_name: EmbodiedNav-Bench
language:
- en
task_categories:
- visual-question-answering
- reinforcement-learning
tags:
- embodied-ai
- embodied-navigation
- urban-airspace
- drone-navigation
- multimodal-reasoning
- spatial-reasoning
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-00000-of-00001.parquet
---

# How Far Are Large Multimodal Models from Human-Level Spatial Action? A Benchmark for Goal-Oriented Embodied Navigation in Urban Airspace

## Abstract

Large multimodal models (LMMs) show strong visual-linguistic reasoning, but their capacity for spatial decision-making and action remains unclear. In this work, we investigate whether LMMs can achieve embodied spatial action like humans through a challenging scenario: goal-oriented navigation in urban 3D spaces. We first spend over 500 hours constructing a dataset comprising 5,037 high-quality goal-oriented navigation samples, with an emphasis on 3D vertical actions and rich urban semantic information. We then comprehensively assess 17 representative models, including non-reasoning LMMs, reasoning LMMs, agent-based methods, and vision-language-action models. Experiments show that current LMMs exhibit emerging action capabilities, yet remain far from human-level performance. Furthermore, we reveal an intriguing phenomenon: navigation errors do not accumulate linearly but instead diverge rapidly from the destination after a critical decision bifurcation. We investigate the limitations of LMMs by analyzing their behavior at these critical decision bifurcations. Finally, we experimentally explore four promising directions for improvement: geometric perception, cross-view understanding, spatial imagination, and long-term memory.

---

## Dataset Overview

EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating how large multimodal models act in urban 3D airspace. The released sample set contains 300 human-collected trajectories with natural-language goals, drone start poses, target positions, and ground-truth 3D paths. The original evaluation data is provided as `dataset/navi_data.pkl`, and a Parquet conversion is provided at `data/train-00000-of-00001.parquet` for the Hugging Face Dataset Viewer table.

### Navigation Example

| Example 1 | Example 2 | Example 3 |
| :---: | :---: | :---: |
| *Goal: Nearby bus stop* | *Goal: The fresh food shop in the building below* | *Goal: The balcony on the 20th floor of the building on the right* |
| <a href="video/1.mp4"><img src="video/1.gif" width="300"></a> | <a href="video/2.mp4"><img src="video/2.gif" width="300"></a> | <a href="video/3.mp4"><img src="video/3.gif" width="300"></a> |

> **Note**: The videos above demonstrate goal-oriented embodied navigation in urban airspace. Given a linguistic instruction, the task evaluates the ability to act step by step on continuous embodied observations to approach the goal location.

### Dataset Statistics

**Key Statistics:**

- **Total Trajectories**: 5,037 high-quality goal-oriented navigation trajectories
- **Data Collection**: Over 500 hours of human-controlled data collection
- **Average Trajectory Length**: ~203.4 meters
- **Annotators**: 10 volunteers (5 for case creation, 5 experienced drone pilots with 100+ hours of flight experience)
- **Action Types**:
  - Horizontal movement (move-forth, move-left, move-right, move-back)
  - Vertical movement (move-up, move-down)
  - Rotation/view change (turn-left, turn-right, adjust-camera-gimbal-upwards, adjust-camera-gimbal-downwards)
- **Trajectory Distribution**: Emphasizes vertical movement

**Dataset Construction and Statistical Visualization:**

![Dataset Statistics](image/statistics.png)

*Figure: a. Dataset construction pipeline. b. Length distribution of navigation trajectories. c. Proportion of action types. d. Relative position of trajectories to the origin. e. Word cloud of goal instructions.*

---

## Environment Setup and Simulator Deployment

This project uses the urban simulation environment from [EmbodiedCity](https://github.com/tsinghua-fib-lab/EmbodiedCity).

### 1. Download the simulator

- Official offline simulator download: [EmbodiedCity-Simulator on HuggingFace](https://huggingface.co/datasets/EmbodiedCity/EmbodiedCity-Simulator)
- Download and extract the simulator package, then launch the provided executable (`.exe`) and keep it running during evaluation.

### 2. Create the Python environment

Use one of the following approaches:

```bash
conda create -n EmbodiedCity python=3.10 -y
conda activate EmbodiedCity
pip install airsim openai opencv-python numpy pandas
```

If you are using the simulator package's built-in environment files:

```bash
conda env create -n EmbodiedCity -f environment.yml
conda activate EmbodiedCity
```

### 3. Dataset release

All paths below are **relative to the project root**.

We are currently open-sourcing 300 trajectories as public examples:

- `dataset/navi_data.pkl`
- `dataset/navi_data_preview.json` (human-readable JSON preview)
- `data/train-00000-of-00001.parquet` (Hugging Face Dataset Viewer table split)

`dataset/navi_data.pkl` is the canonical dataset file for evaluation.

#### 3.1 `navi_data.pkl` field schema

Each sample in `dataset/navi_data.pkl` is a Python `dict` with the following fields:

| Field | Type | Description |
| :-- | :-- | :-- |
| `folder` | `str` | Scene folder identifier |
| `start_pos` | `float[3]` | Initial drone world position `(x, y, z)` |
| `start_rot` | `float[3]` | Initial drone orientation `(roll, pitch, yaw)` in radians |
| `start_ang` | `float` | Initial camera gimbal angle (degrees) |
| `task_desc` | `str` | Natural-language navigation instruction |
| `target_pos` | `float[3]` | Target world position `(x, y, z)` |
| `gt_traj` | `float[N,3]` | Ground-truth trajectory points |
| `gt_traj_len` | `float` | Ground-truth trajectory length (meters) |
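As a quick sanity check on the schema, `gt_traj_len` is (up to floating-point error) the summed Euclidean distance between consecutive `gt_traj` points. A minimal sketch using the first five preview points of sample 0, which step 10 m apart along the -x axis:

```python
import numpy as np

# First five gt_traj points of sample 0 (from navi_data_preview.json);
# consecutive points are 10 m apart along -x.
traj = np.array([
    [6589.18164, -4162.23877, -36.2995872],
    [6579.18164, -4162.23877, -36.2995872],
    [6569.18164, -4162.23877, -36.2995872],
    [6559.18164, -4162.23877, -36.2995872],
    [6549.18164, -4162.23877, -36.2995872],
])


def traj_length(points: np.ndarray) -> float:
    """Sum of Euclidean distances between consecutive trajectory points."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())


print(traj_length(traj))  # 40.0 for these five points
```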

#### 3.2 Example view for humans

To make inspection easier without loading the PKL directly, we provide:

- `dataset/navi_data_preview.json`

This JSON contains:

- field descriptions
- the total sample count
- a preview of the first few samples (including partial `gt_traj` points)

Example item (simplified):

```json
{
  "sample_index": 0,
  "folder": "0",
  "task_desc": "the entrance of the red building on the left front",
  "start_pos": [6589.18164, -4162.23877, -36.2995872],
  "start_rot": [0.0, 0.0, 3.14159251],
  "start_ang": 0.0,
  "target_pos": [6390.7041, -4154.58545, -6.29958725],
  "gt_traj_len": 229.99981973603806,
  "gt_traj_num_points": 28,
  "gt_traj_preview_first5": [
    [6589.18164, -4162.23877, -36.2995872],
    [6579.18164, -4162.23877, -36.2995872],
    [6569.18164, -4162.23877, -36.2995872],
    [6559.18164, -4162.23877, -36.2995872],
    [6549.18164, -4162.23877, -36.2995872]
  ]
}
```

#### 3.3 Hugging Face Dataset Viewer table

The `train` split is stored as `data/train-00000-of-00001.parquet` so the dataset can be inspected directly in the Hugging Face Table view. Each table row corresponds to one navigation trajectory and includes flattened coordinate columns (`start_x`, `target_x`, etc.) together with the original structured fields (`start_pos`, `start_rot`, `target_pos`, and `gt_traj`).
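The flattening described above can be reproduced from the structured fields with pandas. The sketch below uses a toy row built from preview sample 0; only `start_x` and `target_x` are named in this card, so any further flattened column names in the real Parquet file are assumptions following the same pattern:

```python
import pandas as pd

# One toy row mirroring the Viewer table schema (values from preview sample 0).
rows = [{
    "folder": "0",
    "task_desc": "the entrance of the red building on the left front",
    "start_pos": [6589.18164, -4162.23877, -36.2995872],
    "target_pos": [6390.7041, -4154.58545, -6.29958725],
}]
df = pd.DataFrame(rows)

# Flatten the x component of each position vector into a scalar column,
# matching the `start_x` / `target_x` naming used by the table.
df["start_x"] = df["start_pos"].map(lambda p: p[0])
df["target_x"] = df["target_pos"].map(lambda p: p[0])

print(df[["folder", "start_x", "target_x"]])
```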

### 4. How to test your own model

To evaluate your model, modify the agent logic in [`embodied_vln.py`](./embodied_vln.py), mainly the `ActionGen` class:

- `ActionGen.query(...)`: replace the prompt design, model API call, and decision logic.
- Keep the output command format compatible with `parse_llm_action(...)` (one command per step).
- Supported commands: `move_forth`, `move_back`, `move_left`, `move_right`, `move_up`, `move_down`, `turn_left`, `turn_right`, `angle_up`, `angle_down`.

Then run:

```bash
python embodied_vln.py
```

**Example: connecting other API models**

Use the API placeholder pattern in `embodied_vln.py` as a template for plugging in your own model service.

The placeholder environment variables read by `embodied_vln.py` are:

- `AZURE_OPENAI_MODEL`
- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_API_VERSION` (optional, default: `2024-07-01-preview`)

PowerShell example:

```powershell
$env:AZURE_OPENAI_MODEL="your-deployment-name"
$env:AZURE_OPENAI_API_KEY="your-api-key"
$env:AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"
$env:AZURE_OPENAI_API_VERSION="2024-07-01-preview"
```
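On Linux or macOS, the equivalent bash setup (same variable names, illustrative placeholder values) is:

```bash
export AZURE_OPENAI_MODEL="your-deployment-name"
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"
export AZURE_OPENAI_API_VERSION="2024-07-01-preview"
```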

If you use a non-Azure model API, keep this contract unchanged:

- `ActionGen.query(...)` must return one text command per step.
- The returned command must remain compatible with `parse_llm_action(...)`.

Minimal expected return format:

```text
Thinking: <your model reasoning>
Command: move_forth
```
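A minimal sketch of a drop-in replacement for `ActionGen.query(...)` that satisfies this contract. `call_my_model` is a hypothetical stand-in for your own API call, and the hard-coded reply here only illustrates the required return shape:

```python
def parse_command(llm_output: str) -> str:
    # Mirrors how parse_llm_action reads the text: everything after the last ':'.
    return llm_output.split(":")[-1].strip().lower()


def query_stub(task_desc: str, camera_angle: float) -> str:
    """Stand-in for ActionGen.query: plug your own model service in here."""
    # reply = call_my_model(prompt, image)  # hypothetical API call
    reply = "Thinking: the goal is ahead, keep moving.\nCommand: move_forth"
    return reply


out = query_stub("the bus stop behind", 0.0)
print(parse_command(out))  # move_forth
```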

---

## Experimental Results

### Quantitative Results

We evaluate 17 representative models across five categories: Basic Baselines, Non-Reasoning LMMs, Reasoning LMMs, Agent-Based Approaches, and Vision-Language-Action Models.

> **Note**: The Short, Middle, and Long groups correspond to ground-truth trajectories of <118.2 m, 118.2-223.6 m, and >223.6 m, respectively. SR = Success Rate, SPL = Success weighted by Path Length, DTG = Distance to Goal.
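For reference, the SPL metric in the note follows the standard definition (success weighted by the ratio of ground-truth to actual path length). A minimal sketch of that formula, not the benchmark's own implementation:

```python
def spl(successes, gt_lengths, path_lengths):
    """Success weighted by Path Length (standard definition):
    SPL = (1/N) * sum(S_i * l_i / max(p_i, l_i)),
    where S_i is success (0/1), l_i the ground-truth (shortest) trajectory
    length, and p_i the agent's actual path length."""
    total = 0.0
    for s, l, p in zip(successes, gt_lengths, path_lengths):
        total += s * l / max(p, l)
    return total / len(successes)


# Two episodes: one success with a 1.5x-longer path, one failure.
print(spl([1, 0], [100.0, 200.0], [150.0, 90.0]))  # (100/150 + 0) / 2 = 0.333...
```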
airsim_utils/__init__.py
ADDED
File without changes
airsim_utils/coord_transformation.py
ADDED
@@ -0,0 +1,10 @@
import airsim
import numpy as np


# Convert an xyzw quaternion to (roll, pitch, yaw) in radians.
def quaternion2eularian_angles(quat):
    pry = airsim.to_eularian_angles(quat)  # AirSim returns (pitch, roll, yaw)
    return np.array([pry[1], pry[0], pry[2]])
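The reordering above matters because `airsim.to_eularian_angles` returns angles as (pitch, roll, yaw) while the rest of the code expects (roll, pitch, yaw). A self-contained sketch of the same swap, using standard ZYX quaternion-to-Euler formulas as a stand-in for the AirSim call (so it runs without AirSim installed):

```python
import math


def to_eularian_angles_pry(x, y, z, w):
    """Stand-in for airsim.to_eularian_angles: returns (pitch, roll, yaw).
    Standard ZYX quaternion-to-Euler formulas; AirSim itself is not needed."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return (pitch, roll, yaw)


def quaternion2rpy(x, y, z, w):
    """Reorder to (roll, pitch, yaw), as the wrapper above does."""
    p, r, yw = to_eularian_angles_pry(x, y, z, w)
    return (r, p, yw)


# 90-degree yaw rotation about z: q = (0, 0, sin(45 deg), cos(45 deg)).
s = math.sin(math.pi / 4)
c = math.cos(math.pi / 4)
print(quaternion2rpy(0.0, 0.0, s, c))  # ~ (0.0, 0.0, 1.5708)
```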
data/train-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c62ee76d41c6477c0e69faa15ca34a7ecc4b6385ea82fdc98d30d0d9909e5f3
size 131261
dataset/navi_data.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe3c45c943fba18d8bf1ff2a29674d013fff70fe019337df7bad70d3f3f39a4a
size 307603
dataset/navi_data_preview.json
ADDED
@@ -0,0 +1,167 @@
{
  "source": "dataset/navi_data.pkl",
  "num_samples": 300,
  "note": "This JSON is a human-readable preview. Ground-truth data is stored in the PKL file.",
  "fields": {
    "folder": "str, scene folder identifier",
    "start_pos": "float[3], initial drone world position (x, y, z)",
    "start_rot": "float[3], initial drone orientation (roll, pitch, yaw in radians)",
    "start_ang": "float, initial camera gimbal angle in degrees",
    "task_desc": "str, natural-language navigation goal description",
    "target_pos": "float[3], target world position (x, y, z)",
    "gt_traj": "float[N,3], ground-truth trajectory points",
    "gt_traj_len": "float, ground-truth trajectory length"
  },
  "preview_samples": [
    {
      "sample_index": 0,
      "folder": "0",
      "task_desc": "the entrance of the red building on the left front",
      "start_pos": [6589.18164, -4162.23877, -36.2995872],
      "start_rot": [0.0, 0.0, 3.14159251],
      "start_ang": 0.0,
      "target_pos": [6390.7041, -4154.58545, -6.29958725],
      "gt_traj_len": 229.99981973603806,
      "gt_traj_num_points": 28,
      "gt_traj_preview_first5": [
        [6589.18164, -4162.23877, -36.2995872],
        [6579.18164, -4162.23877, -36.2995872],
        [6569.18164, -4162.23877, -36.2995872],
        [6559.18164, -4162.23877, -36.2995872],
        [6549.18164, -4162.23877, -36.2995872]
      ]
    },
    {
      "sample_index": 1,
      "folder": "1",
      "task_desc": "A coffee shop after turning right at the intersection ahead",
      "start_pos": [6466.28223, -4321.81348, -6.29958725],
      "start_rot": [0.0, 0.0, 1.17809743],
      "start_ang": 0.0,
      "target_pos": [6385.86914, -4204.44824, -6.29958725],
      "gt_traj_len": 230.00124558891707,
      "gt_traj_num_points": 35,
      "gt_traj_preview_first5": [
        [6466.28223, -4321.81348, -6.29958725],
        [6470.1084, -4312.57471, -6.29958725],
        [6473.93506, -4303.33594, -6.29958725],
        [6473.93506, -4303.33594, -6.29958725],
        [6473.93506, -4293.33594, -6.29958725]
      ]
    },
    {
      "sample_index": 2,
      "folder": "2",
      "task_desc": "The bus stop behind",
      "start_pos": [6385.86914, -4204.44824, -6.29958725],
      "start_rot": [0.0, 0.0, -2.42603484e-08],
      "start_ang": 0.0,
      "target_pos": [6345.55225, -4199.54443, -6.29958725],
      "gt_traj_len": 60.000701152788444,
      "gt_traj_num_points": 24,
      "gt_traj_preview_first5": [
        [6385.86914, -4204.44824, -6.29958725],
        [6385.86914, -4204.44824, -6.29958725],
        [6385.86914, -4204.44824, -6.29958725],
        [6385.86914, -4204.44824, -6.29958725],
        [6385.86914, -4204.44824, -6.29958725]
      ]
    }
  ]
}
embodied_vln.py
ADDED
|
@@ -0,0 +1,504 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
import copy
|
| 2 |
+
import os
|
| 3 |
+
import time
|
| 4 |
+
import base64
|
| 5 |
+
import pickle
|
| 6 |
+
from collections import deque
|
| 7 |
+
|
| 8 |
+
import airsim
|
| 9 |
+
import cv2
|
| 10 |
+
import numpy as np
|
| 11 |
+
import pandas as pd
|
| 12 |
+
from openai import AzureOpenAI
|
| 13 |
+
|
| 14 |
+
from airsim_utils.coord_transformation import quaternion2eularian_angles
|
| 15 |
+
|
| 16 |
+
|
| 17 |
+
def parse_llm_action(llm_output: str) -> int:
|
| 18 |
+
"""
|
| 19 |
+
Parse one action command from the LLM output text.
|
| 20 |
+
|
| 21 |
+
Expected output format is usually:
|
| 22 |
+
"Thinking: ...\nCommand: <action_name>"
|
| 23 |
+
|
| 24 |
+
Returns:
|
| 25 |
+
int: action enum used by `perform_act`. Returns -1 if parsing fails.
|
| 26 |
+
"""
|
| 27 |
+
command_str = llm_output.split(":")[-1]
|
| 28 |
+
command_str = command_str.strip(" ")
|
| 29 |
+
command_str = command_str.lower()
|
| 30 |
+
|
| 31 |
+
if "forth" in command_str:
|
| 32 |
+
return 6
|
| 33 |
+
elif "back" in command_str:
|
| 34 |
+
return 7
|
| 35 |
+
elif "turn_left" in command_str:
|
| 36 |
+
return 2
|
| 37 |
+
elif "turn_right" in command_str:
|
| 38 |
+
return 3
|
| 39 |
+
elif "angle_up" in command_str:
|
| 40 |
+
return 4
|
| 41 |
+
elif "angle_down" in command_str:
|
| 42 |
+
return 5
|
| 43 |
+
elif "left" in command_str:
|
| 44 |
+
return 8
|
| 45 |
+
elif "right" in command_str:
|
| 46 |
+
return 9
|
| 47 |
+
elif "up" in command_str:
|
| 48 |
+
return 10
|
| 49 |
+
elif "down" in command_str:
|
| 50 |
+
return 11
|
| 51 |
+
else:
|
| 52 |
+
return -1
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
class ActionGen:
|
| 56 |
+
"""
|
| 57 |
+
Agent logic for one-step action generation.
|
| 58 |
+
|
| 59 |
+
The class keeps short conversation history and sends the current
|
| 60 |
+
first-person image + textual task context to the LLM at each step.
|
| 61 |
+
"""
|
| 62 |
+
|
| 63 |
+
def __init__(self, model, client, airsim_client, task_desc):
|
| 64 |
+
"""
|
| 65 |
+
Args:
|
| 66 |
+
model: model/deployment name used by the LLM endpoint.
|
| 67 |
+
client: initialized LLM client.
|
| 68 |
+
airsim_client: AirSim wrapper with control/perception methods.
|
| 69 |
+
task_desc: text description of the navigation target.
|
| 70 |
+
"""
|
| 71 |
+
self.model = model
|
| 72 |
+
self.model_class = model.split("-")[0]
|
| 73 |
+
self.llm_client = client
|
| 74 |
+
self.queue = deque()
|
| 75 |
+
self.messages = [] # Conversation history forwarded to the model.
|
| 76 |
+
self.airsim_client = airsim_client
|
| 77 |
+
self.task_desc = task_desc
|
| 78 |
+
|
| 79 |
+
def query(self, camera_angle):
|
| 80 |
+
"""
|
| 81 |
+
Run one decision step and return raw LLM output text.
|
| 82 |
+
|
| 83 |
+
Args:
|
| 84 |
+
camera_angle: current gimbal angle in degrees.
|
| 85 |
+
|
| 86 |
+
Returns:
|
| 87 |
+
str: LLM output string that should contain "Command: ...".
|
| 88 |
+
"""
|
| 89 |
+
# Capture front camera RGB observation.
|
| 90 |
+
img1 = self.airsim_client.get_image()
|
| 91 |
+
|
| 92 |
+
# Encode image to base64 so it can be attached to multimodal API input.
|
| 93 |
+
_, buffer = cv2.imencode(".jpg", img1)
|
| 94 |
+
base64_image1 = base64.b64encode(buffer).decode("utf-8")
|
| 95 |
+
|
| 96 |
+
# Use a longer system-style instruction for the first round only.
|
| 97 |
+
if len(self.messages) == 0:
|
| 98 |
+
user_content = (
|
| 99 |
+
f"Please follow the instructions provided to control the camera gimbal angle and drone to gradually "
|
| 100 |
+
f"move to the customer's designated location. Assuming the angle range of the camera gimbal is -90 "
|
| 101 |
+
f"degrees to 90 degrees, where -90 degrees represents vertical downward view, 0 degrees represents "
|
| 102 |
+
f"horizontal view, and 90 degrees represents vertical upward view.\n"
|
| 103 |
+
f"\n"
|
| 104 |
+
f"Camera angle commands:\n"
|
| 105 |
+
f"angle_down, angle_up\n"
|
| 106 |
+
f"\n"
|
| 107 |
+
f"Drone movement commands:\n"
|
| 108 |
+
f"move_forth, move_back, move_left, move_right, move_up, move_down, turn_left, turn_right\n"
|
| 109 |
+
f"\n"
|
| 110 |
+
f"Example:\n"
|
| 111 |
+
f"The navigation goal is: main entrance of the building directly below. "
|
| 112 |
+
f"The current angle of the camera gimbal is {camera_angle}.\n"
|
| 113 |
+
f"Thinking: Should first lower the altitude and then search.\n"
|
| 114 |
+
f"Command: move_forth\n"
|
| 115 |
+
f"\n"
|
| 116 |
+
f"Rule: put reasoning after 'Thinking'. After 'Command:', output only one executable command with no "
|
| 117 |
+
f"extra text.\n"
|
| 118 |
+
f"\n"
|
| 119 |
+
f"The navigation goal is: {self.task_desc}. "
|
| 120 |
+
f"The current angle of the camera gimbal is {camera_angle}.\n"
|
| 121 |
+
f"Note: avoid spinning in place repeatedly.\n"
|
| 122 |
+
f"\n"
|
| 123 |
+
f"Thinking:\n"
|
| 124 |
+
f"Command:"
|
| 125 |
+
)
|
| 126 |
+
else:
|
| 127 |
+
user_content = (
|
| 128 |
+
f"The navigation goal is: {self.task_desc}. "
|
| 129 |
+
f"The current angle of the camera gimbal is {camera_angle}.\n"
|
| 130 |
+
f"Continue to output the thinking and command to approach the destination.\n"
|
| 131 |
+
f"Thinking:\n"
|
| 132 |
+
f"Command:"
|
| 133 |
+
)
|
| 134 |
+
|
| 135 |
+
# Call the OpenAI-compatible chat completion API.
|
| 136 |
+
self.messages.append(
|
| 137 |
+
{
|
| 138 |
+
"role": "user",
|
| 139 |
+
"content": [
|
| 140 |
+
{"type": "text", "text": user_content},
|
| 141 |
+
{
|
| 142 |
+
"type": "image_url",
|
| 143 |
+
"image_url": {
|
| 144 |
+
"url": f"data:image/jpeg;base64,{copy.deepcopy(base64_image1)}"
|
| 145 |
+
},
|
| 146 |
+
},
|
| 147 |
+
],
|
| 148 |
+
}
|
| 149 |
+
)
|
| 150 |
+
|
| 151 |
+
try:
|
| 152 |
+
chat_response = self.llm_client.chat.completions.create(
|
| 153 |
+
model=self.model,
|
| 154 |
+
messages=self.messages,
|
| 155 |
+
)
|
| 156 |
+
answer = chat_response.choices[0].message.content
|
| 157 |
+
print(f"GPT: {answer}")
|
| 158 |
+
except Exception as e:
|
| 159 |
+
print(f"Error: LM response - {e}")
|
| 160 |
+
answer = "Error"
|
| 161 |
+
|
| 162 |
+
self.messages.append({"role": "assistant", "content": answer})
|
| 163 |
+
return answer
|
| 164 |
+
|
| 165 |
+
|
| 166 |
+
class AirsimClient:
|
| 167 |
+
"""Minimal AirSim wrapper for this benchmark script."""
|
| 168 |
+
|
| 169 |
+
def __init__(self, vehicle_name=""):
|
| 170 |
+
_ = vehicle_name # Reserved for future multi-vehicle extension.
|
| 171 |
+
airsim_client = airsim.VehicleClient()
|
| 172 |
+
airsim_client.confirmConnection()
|
| 173 |
+
self.client = airsim_client
|
| 174 |
+
|
| 175 |
+
def set_vehicle_pose(self, position, orientation):
|
| 176 |
+
"""
|
| 177 |
+
Teleport the vehicle to the target pose.
|
| 178 |
+
|
| 179 |
+
Args:
|
| 180 |
+
position: xyz array in world coordinates.
|
| 181 |
+
orientation: roll/pitch/yaw array in radians.
|
| 182 |
+
"""
|
| 183 |
+
client = self.client
|
| 184 |
+
pose = airsim.Pose(airsim.Vector3r(*position), airsim.to_quaternion(*orientation))
|
| 185 |
+
client.simSetVehiclePose(pose, True)
|
| 186 |
+
|
| 187 |
+
def set_camera_angle(self, angle):
|
| 188 |
+
"""
|
| 189 |
+
Set camera gimbal pitch angle (degrees).
|
| 190 |
+
"""
|
| 191 |
+
client = self.client
|
| 192 |
+
camera_pose = airsim.Pose(
|
| 193 |
+
airsim.Vector3r(0, 0, 0),
|
| 194 |
+
airsim.to_quaternion(angle * np.pi / 180, 0, 0),
|
| 195 |
+
)
|
| 196 |
+
client.simSetCameraPose("0", camera_pose)
|
| 197 |
+
|
| 198 |
+
def move_relative(self, dx, dy, dz):
|
| 199 |
+
"""
|
| 200 |
+
Move relative to the drone local coordinate system.
|
| 201 |
+
|
| 202 |
+
Args:
|
| 203 |
+
dx: forward/backward displacement.
|
| 204 |
+
dy: right/left displacement.
|
| 205 |
+
dz: up/down displacement.
|
| 206 |
+
"""
|
| 207 |
+
client = self.client
|
| 208 |
+
pose = client.simGetVehiclePose()
|
| 209 |
+
orientation = airsim.to_eularian_angles(pose.orientation)
|
| 210 |
+
yaw = orientation[2]
|
| 211 |
+
|
| 212 |
+
# Convert local displacement into world-frame displacement.
|
| 213 |
+
forward = np.array([np.cos(yaw), np.sin(yaw), 0])
|
| 214 |
+
right = np.array([-np.sin(yaw), np.cos(yaw), 0])
|
| 215 |
+
up = np.array([0, 0, 1])
|
| 216 |
+
move_vector = dx * forward + dy * right + dz * up
|
| 217 |
+
new_position = np.array(
|
| 218 |
+
[pose.position.x_val, pose.position.y_val, pose.position.z_val]
|
| 219 |
+
) + move_vector
|
| 220 |
+
|
| 221 |
+
self.set_vehicle_pose(new_position, orientation)
|
| 222 |
+
|
| 223 |
+
    def get_current_state(self):
        """
        Get current pose from AirSim.

        Returns:
            tuple[np.ndarray, np.ndarray]: position and euler orientation.
        """
        client = self.client
        state = client.simGetGroundTruthKinematics()
        pos = state.position.to_numpy_array()
        ori = quaternion2eularian_angles(state.orientation)
        return pos, ori

    def get_image(self):
        """
        Get RGB observation from the front camera.
        """
        response = self.client.simGetImages(
            [airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)]
        )
        img1d = np.frombuffer(response[0].image_data_uint8, dtype=np.uint8)
        if img1d.size == (response[0].height * response[0].width * 3):
            img_rgb = img1d.reshape(response[0].height, response[0].width, 3)
            return img_rgb
        return None

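The raw-buffer decode in `get_image` can be exercised without a simulator by feeding synthetic bytes of the expected size (a sketch; the 4x6 dimensions are arbitrary stand-ins for the camera resolution):

```python
import numpy as np

h, w = 4, 6
raw = bytes(range(h * w * 3))  # 72 synthetic bytes standing in for image_data_uint8
img1d = np.frombuffer(raw, dtype=np.uint8)
if img1d.size == h * w * 3:
    img = img1d.reshape(h, w, 3)
    print(img.shape)  # (4, 6, 3)
```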
class VLN_evaluator:
    """
    Evaluation pipeline for vision-language navigation.
    """

    def __init__(self, root_dir, eval_model, llm_client, agent_method):
        """
        Args:
            root_dir: dataset root directory.
            eval_model: model/deployment name.
            llm_client: initialized LLM client.
            agent_method: label used in the output result directory.
        """
        self.root_dir = root_dir
        self.eval_model = eval_model
        self.airsim_client = AirsimClient()
        self.agent_method = agent_method
        self.llm_client = llm_client
        self.load_navi_task()

    def load_navi_task(self):
        """Load navigation tasks from `navi_data.pkl`."""
        with open(os.path.join(self.root_dir, "navi_data.pkl"), "rb") as f:
            self.navi_data = pickle.load(f)

    def evaluation(self):
        """
        Evaluate navigation performance and print SR/SPL/DTG.
        """
        navi_data = self.navi_data
        navi_data_pd = pd.DataFrame(navi_data)

        # Split samples into short/middle/long groups by trajectory-length quantiles.
        short_len = navi_data_pd["gt_traj_len"].quantile(1 / 3)
        middle_len = navi_data_pd["gt_traj_len"].quantile(2 / 3)
        sr_count_sets = np.zeros((3,))
        num_sets = np.zeros((3,))
        ne_count_sets = np.zeros((3,))
        spl_sets = np.zeros((3,))

        # Aggregate metrics over all samples.
        sr_count = 0.0
        spl = 0.0
        ne_count = 0.0

        # Evaluate each navigation sample independently.
        for idx in range(len(navi_data)):
            navi_task = navi_data[idx]
            start_pos = navi_task["start_pos"]
            start_rot = navi_task["start_rot"]
            gt_traj = navi_task["gt_traj"]
            target_pos = navi_task["target_pos"]
            gt_traj_len = navi_task["gt_traj_len"]
            task_desc = navi_task["task_desc"]
            _ = gt_traj  # Reserved for future path-level metrics.

            # Length group of this sample: 0 = short, 1 = middle, 2 = long.
            if gt_traj_len < short_len:
                group = 0
            elif gt_traj_len < middle_len:
                group = 1
            else:
                group = 2

            # Initialize agent for this sample.
            agent = ActionGen(self.eval_model, self.llm_client, self.airsim_client, task_desc)

            # Reset drone pose and camera angle.
            self.airsim_client.set_vehicle_pose(start_pos, start_rot)
            self.camera_angle = 0
            self.airsim_client.set_camera_angle(self.camera_angle)
            print(f"Current navigation goal: {task_desc}")

            # Print current state.
            cur_pos, cur_rot = self.airsim_client.get_current_state()
            print(f"pos: {cur_pos}, rot: {cur_rot}")

            # Log the full executed trajectory for this sample.
            traj_df = pd.DataFrame(columns=["pos", "rot", "camera_angle"])
            traj_df.loc[traj_df.shape[0]] = [start_pos, start_rot, self.camera_angle]

            traj_len = 0.0
            step = 0
            max_steps = 50
            threshold = 20

            # Step-by-step control loop.
            while step < max_steps:
                # Query one action from the agent.
                answer = agent.query(self.camera_angle)

                # Parse command text into an internal action enum.
                act = parse_llm_action(answer)
                print("action: ", act)

                # Execute the action in the simulator.
                self.perform_act(act)
                time.sleep(0.1)

                former_pos = cur_pos
                cur_pos, cur_rot = self.airsim_client.get_current_state()
                traj_df.loc[traj_df.shape[0]] = [cur_pos, cur_rot, self.camera_angle]
                traj_len += np.linalg.norm(cur_pos - former_pos)
                step += 1

                # Distance to goal after this step.
                dist = np.linalg.norm(cur_pos - target_pos)
                print(f"Task idx: {idx}, current step: {step}, current dist: {dist}")

                # Stop on success or if the drone has diverged too far.
                if dist < threshold or dist > 300:
                    break

            # Final distance for this sample.
            print(f"Max step count reached or target reached. step: {step}")
            dist = np.linalg.norm(cur_pos - target_pos)

            # Save the predicted trajectory.
            save_folder_path = os.path.join("results", self.agent_method, self.eval_model)
            os.makedirs(save_folder_path, exist_ok=True)
            traj_df.to_csv(os.path.join(save_folder_path, "%d.csv" % idx), index=False)

            # Update group-level DTG accumulators.
            num_sets[group] += 1
            ne_count_sets[group] += dist

            # Update SR/SPL on success.
            if dist < threshold:
                spl_count = gt_traj_len / max(gt_traj_len, traj_len)
                sr_count += 1
                spl += spl_count
                sr_count_sets[group] += 1
                spl_sets[group] += spl_count

            ne_count += dist
            print(f"####### SR count: {sr_count}, SPL: {spl}, NE: {ne_count}")
            print("Group SR:", sr_count_sets / num_sets)
            print("Group SPL:", spl_sets / num_sets)
            print("Group DTG:", ne_count_sets / num_sets)
            print("Group sample counts:", num_sets)

        # Final overall metrics, each averaged over all samples.
        sr = sr_count / len(navi_data)
        spl = spl / len(navi_data)
        ne = ne_count / len(navi_data)
        print(f"SR: {sr}, SPL: {spl}, NE: {ne}")
        np.set_printoptions(precision=3)
        print("Group SR:", sr_count_sets / num_sets)
        print("Group SPL:", spl_sets / num_sets)
        print("Group DTG:", ne_count_sets / num_sets)

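The per-episode success term accumulated into `spl` above follows the standard SPL definition: l / max(l, p) for a successful episode of ground-truth length l and executed length p, and 0 otherwise. A standalone sketch (`spl_term` is an illustrative helper, not part of this file):

```python
def spl_term(success, gt_len, path_len):
    # Success weighted by normalized inverse Path Length, for one episode.
    if not success:
        return 0.0
    return gt_len / max(gt_len, path_len)

# A success that flew 1.5x the ground-truth length scores 2/3;
# a path no longer than the ground truth scores 1.
print(spl_term(True, 100.0, 150.0))
print(spl_term(True, 100.0, 80.0))
```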
    def perform_act(self, act_enum):
        """
        Execute one parsed action enum in AirSim.
        """
        # Action table: enum -> (name, value)
        # - tuple value: relative translation (dx, dy, dz) in the NED body frame
        # - scalar value: yaw rotation or camera-pitch delta, in degrees
        commands_map = {
            6: ("move_forth", (10, 0, 0)),
            7: ("move_back", (-10, 0, 0)),
            8: ("move_left", (0, -10, 0)),
            9: ("move_right", (0, 10, 0)),
            10: ("move_up", (0, 0, -10)),
            11: ("move_down", (0, 0, 10)),
            2: ("turn_left", -22.5),
            3: ("turn_right", 22.5),
            4: ("angle_up", 45),
            5: ("angle_down", -45),
        }

        if act_enum not in commands_map:
            print(f"Unknown action {act_enum}, keep still.")
            return
        command, value = commands_map[act_enum]

        if command in ("angle_up", "angle_down"):
            # Clamp gimbal angle to the valid range [-90, 90].
            self.camera_angle += value
            self.camera_angle = max(-90, min(90, self.camera_angle))
            self.airsim_client.set_camera_angle(self.camera_angle)
        elif isinstance(value, tuple):
            # Relative translation.
            dx, dy, dz = value
            self.airsim_client.move_relative(dx, dy, dz)
        else:
            # Yaw rotation in place.
            yaw_change = value
            pose = self.airsim_client.client.simGetVehiclePose()
            current_orientation = airsim.to_eularian_angles(pose.orientation)
            new_orientation = [
                current_orientation[0],
                current_orientation[1],
                current_orientation[2] + np.radians(yaw_change),
            ]
            self.airsim_client.set_vehicle_pose(
                [pose.position.x_val, pose.position.y_val, pose.position.z_val],
                new_orientation,
            )

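The gimbal update in `perform_act` (apply the delta, then clamp to [-90, 90]) can be isolated as a pure helper; `clamp_camera` below is a sketch, not part of the original file:

```python
def clamp_camera(angle, delta):
    # Apply a pitch delta and clamp to the valid gimbal range in degrees.
    return max(-90, min(90, angle + delta))

print(clamp_camera(-60, -45))  # -90: a second "angle_down" saturates at the limit
print(clamp_camera(0, 45))     # 45
```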
if __name__ == "__main__":
    # Configure your model deployment and credentials before running this file.
    #
    # Recommended setup:
    # 1) Fill values via environment variables:
    #      AZURE_OPENAI_MODEL
    #      AZURE_OPENAI_API_KEY
    #      AZURE_OPENAI_ENDPOINT
    #      AZURE_OPENAI_API_VERSION (optional, defaults to 2024-07-01-preview)
    # 2) Or directly replace the placeholder strings below.
    model = os.getenv("AZURE_OPENAI_MODEL", "YOUR_AZURE_OPENAI_DEPLOYMENT")
    api_key = os.getenv("AZURE_OPENAI_API_KEY", "YOUR_AZURE_OPENAI_API_KEY")
    azure_endpoint = os.getenv(
        "AZURE_OPENAI_ENDPOINT",
        "https://YOUR-RESOURCE-NAME.openai.azure.com/",
    )
    api_version = os.getenv("AZURE_OPENAI_API_VERSION", "2024-07-01-preview")

    if (
        model == "YOUR_AZURE_OPENAI_DEPLOYMENT"
        or api_key == "YOUR_AZURE_OPENAI_API_KEY"
        or azure_endpoint == "https://YOUR-RESOURCE-NAME.openai.azure.com/"
    ):
        raise ValueError(
            "Azure OpenAI is not configured.\n"
            "Set environment variables (AZURE_OPENAI_MODEL, AZURE_OPENAI_API_KEY, "
            "AZURE_OPENAI_ENDPOINT, optional AZURE_OPENAI_API_VERSION) or replace "
            "the placeholder values in `embodied_vln.py` before running."
        )

    llm_client = AzureOpenAI(
        api_key=api_key,
        api_version=api_version,
        azure_endpoint=azure_endpoint,
    )

    # Name used in the output directory: results/<agent_method>/<model>/
    agent_method = "action_generation"

    # Initialize the evaluator and run all tasks in dataset/navi_data.pkl.
    vln_eval = VLN_evaluator("dataset", model, llm_client, agent_method)
    vln_eval.evaluation()
image/QuantitativeResults.png ADDED (Git LFS)

image/statistics.png ADDED (Git LFS)

video/1.gif ADDED (Git LFS)

video/1.mp4 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4fdddc7bf7760989c63f1edf7f105f285d32988c8100ae0297ccc2f35df49173
size 7559547

video/2.gif ADDED (Git LFS)

video/2.mp4 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d399799233be67396b0c26d8fa28573c6c8f6116582c2968604c03e6709d59ba
size 18715886

video/3.gif ADDED (Git LFS)

video/3.mp4 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5cf50aab2e5a5ad31947e15f20412411d6f9a267192652308266f9b0d9788a3
size 11274226