Formats: parquet
Languages: English
Size: 1K - 10K
Tags: embodied-ai, embodied-navigation, urban-airspace, drone-navigation, multimodal-reasoning, spatial-reasoning
Clean dataset card and remove redundant project files
Keep the Hugging Face repository focused on dataset artifacts only: canonical PKL, JSON preview, Parquet table, and a dedicated dataset card. Project code and media remain in the GitHub repository.
- README.md +19 -166
- airsim_utils/__init__.py +0 -0
- airsim_utils/coord_transformation.py +0 -10
- embodied_vln.py +0 -504
- image/QuantitativeResults.png +0 -3
- image/statistics.png +0 -3
- video/1.gif +0 -3
- video/1.mp4 +0 -3
- video/2.gif +0 -3
- video/2.mp4 +0 -3
- video/3.gif +0 -3
- video/3.mp4 +0 -3
README.md
CHANGED
@@ -22,193 +22,46 @@ configs:

        path: data/train-00000-of-00001.parquet
---

Removed:

# EmbodiedNav-Bench

EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating how large multimodal models act in urban 3D airspace. The released sample set contains 300 human-collected trajectories with natural-language goals, drone start poses, target positions, and ground-truth 3D paths. The original evaluation data is provided as `dataset/navi_data.pkl`, and a Parquet conversion is provided at `data/train-00000-of-00001.parquet` for the Hugging Face Dataset Viewer table.

### Navigation Example

| Example 1 | Example 2 | Example 3 |
| :---: | :---: | :---: |
| *Goal: Nearby bus stop* | *Goal: The fresh food shop in the building below* | *Goal: The balcony on the 20th floor of the building on the right* |
| <a href="video/1.mp4"><img src="video/1.gif" width="300"></a> | <a href="video/2.mp4"><img src="video/2.gif" width="300"></a> | <a href="video/3.mp4"><img src="video/3.gif" width="300"></a> |

> **Note**: The videos above demonstrate goal-oriented embodied navigation examples in urban airspace. Given linguistic instructions, the task evaluates the ability to progressively act based on continuous embodied observations to approach the goal location.

### Dataset Statistics

**Key Statistics:**

- **Total Trajectories**: 5,037 high-quality goal-oriented navigation trajectories
- **Data Collection**: Over 500 hours of human-controlled data collection
- **Average Trajectory Length**: ~203.4 meters
- **Annotators**: 10 volunteers (5 for case creation, 5 experienced drone pilots with 100+ hours of flight experience)
- **Action Types**:
  - Horizontal movement (move-forth, move-left, move-right, move-back)
  - Vertical movement (move-up, move-down)
  - Rotation/view change (turn-left, turn-right, adjust-camera-gimbal-upwards, adjust-camera-gimbal-downwards)
- **Trajectory Distribution**: Trajectories emphasize vertical movement

**Dataset Construction and Statistical Visualization:**

![Dataset Statistics](image/statistics.png)

*Figure: a. Dataset construction pipeline. b. Length distribution of navigation trajectories. c. Proportion of various types of actions. d. Relative position of trajectories to the origin. e. Word cloud of goal instructions.*

---

## Environment Setup and Simulator Deployment

This project references [EmbodiedCity](https://github.com/tsinghua-fib-lab/EmbodiedCity) for the urban simulation environment.

### 1. Download the simulator

- Offline simulator download (official): [EmbodiedCity-Simulator on HuggingFace](https://huggingface.co/datasets/EmbodiedCity/EmbodiedCity-Simulator)
- Download and extract the simulator package, then launch the provided executable (`.exe`) and keep it running before evaluation.

### 2. Create the Python environment

Use one of the following approaches:

```bash
conda create -n EmbodiedCity python=3.10 -y
conda activate EmbodiedCity
pip install airsim openai opencv-python numpy pandas
```

If you are using the simulator package's built-in environment files:

```bash
conda env create -n EmbodiedCity -f environment.yml
conda activate EmbodiedCity
```

### 3. Dataset release

All paths below are **relative to the project root**.

We are currently open-sourcing 300 trajectories as public examples:

- `dataset/navi_data_preview.json` (human-readable JSON preview)
- `data/train-00000-of-00001.parquet` (Hugging Face Dataset Viewer table split)

#### 3.1 `navi_data.pkl` field schema

Each sample in `dataset/navi_data.pkl` is a Python `dict` with the following fields:

| Field | Type | Description |
| :-- | :-- | :-- |
| `folder` | `str` | Scene folder identifier |
| `start_pos` | `float[3]` | Initial drone world position `(x, y, z)` |
| `start_rot` | `float[3]` | Initial drone orientation `(roll, pitch, yaw)` in radians |
| `start_ang` | `float` | Initial camera gimbal angle |
| `task_desc` | `str` | Natural-language navigation instruction |
| `target_pos` | `float[3]` | Target world position `(x, y, z)` |
| `gt_traj` | `float[N,3]` | Ground-truth trajectory points |
| `gt_traj_len` | `float` | Ground-truth trajectory length |

To make inspection easier without loading the PKL directly, we provide:

- `dataset/navi_data_preview.json`

This JSON contains:

- field descriptions
- total sample count
- a preview of the first few samples (including partial `gt_traj` points)

Example item (simplified):

```json
{
  "sample_index": 0,
  "folder": "0",
  "task_desc": "the entrance of the red building on the left front",
  "start_pos": [6589.18164, -4162.23877, -36.2995872],
  "start_rot": [0.0, 0.0, 3.14159251],
  "start_ang": 0.0,
  "target_pos": [6390.7041, -4154.58545, -6.29958725],
  "gt_traj_len": 229.99981973603806,
  "gt_traj_num_points": 28,
  "gt_traj_preview_first5": [
    [6589.18164, -4162.23877, -36.2995872],
    [6579.18164, -4162.23877, -36.2995872],
    [6569.18164, -4162.23877, -36.2995872],
    [6559.18164, -4162.23877, -36.2995872],
    [6549.18164, -4162.23877, -36.2995872]
  ]
}
```

#### 3.3 Hugging Face Dataset Viewer table

The `train` split is stored as `data/train-00000-of-00001.parquet` so the dataset can be inspected directly in the Hugging Face Table view. Each table row corresponds to one navigation trajectory and includes flattened coordinate columns (`start_x`, `target_x`, etc.) together with the original structured fields (`start_pos`, `start_rot`, `target_pos`, and `gt_traj`).

### 4. How to test your own model

- Supported commands include: `move_forth`, `move_back`, `move_left`, `move_right`, `move_up`, `move_down`, `turn_left`, `turn_right`, `angle_up`, `angle_down`.

Then run:

```bash
python embodied_vln.py
```

Use the API placeholder pattern in `embodied_vln.py` as a template for plugging in your own model service.

Current placeholders (in `embodied_vln.py`) are:

- `AZURE_OPENAI_MODEL`
- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_API_VERSION` (optional, default: `2024-07-01-preview`)

PowerShell example:

```powershell
$env:AZURE_OPENAI_MODEL="your-deployment-name"
$env:AZURE_OPENAI_API_KEY="your-api-key"
$env:AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"
$env:AZURE_OPENAI_API_VERSION="2024-07-01-preview"
```

If you use a non-Azure model API, keep this contract unchanged:

- `ActionGen.query(...)` must return one text command each step.
- The returned command must remain compatible with `parse_llm_action(...)`.

Minimal expected return format:

```text
Thinking: <your model reasoning>
Command: move_forth
```
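The return-format contract above can be illustrated with a minimal parser sketch. This is a hypothetical helper for illustration, not the benchmark's own `parse_llm_action`; it assumes the command is the last `:`-delimited token, as in the format shown:

```python
def extract_command(llm_output: str) -> str:
    """Pull the single command token after the final 'Command:' marker."""
    # Contract: reasoning after "Thinking:", exactly one command after "Command:".
    return llm_output.split(":")[-1].strip().lower()


reply = "Thinking: the goal is below, descend first.\nCommand: move_down"
print(extract_command(reply))  # move_down
```

A parser this permissive also tolerates minor casing or whitespace drift in model output, which is why the contract only requires the command to be the final token.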
---

## Experimental Results

Added:
# EmbodiedNav-Bench

EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating spatial action in urban 3D airspace. This Hugging Face dataset repository hosts the released navigation sample data and a Dataset Viewer-compatible table. Code, simulator instructions, examples, and evaluation scripts are maintained in the GitHub project repository: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

## Files

- `dataset/navi_data.pkl`: canonical PKL file for evaluation.
- `dataset/navi_data_preview.json`: human-readable preview of the PKL content.
- `data/train-00000-of-00001.parquet`: Parquet conversion for the Hugging Face Dataset Viewer table.

## Dataset Contents

The current release contains 300 public example trajectories. Each row/sample corresponds to one navigation trajectory with a natural-language goal, initial drone pose, target position, and ground-truth 3D trajectory.

| Field | Type | Description |
| :-- | :-- | :-- |
| `folder` | `str` | Scene folder identifier |
| `start_pos` | `float[3]` | Initial drone world position `(x, y, z)` |
| `start_rot` | `float[3]` | Initial drone orientation `(roll, pitch, yaw)` in radians |
| `start_ang` | `float` | Initial camera gimbal angle in degrees |
| `task_desc` | `str` | Natural-language navigation instruction |
| `target_pos` | `float[3]` | Target world position `(x, y, z)` |
| `gt_traj` | `float[N,3]` | Ground-truth trajectory points |
| `gt_traj_len` | `float` | Ground-truth trajectory length |

The Parquet table additionally includes convenience columns such as `sample_index`, `start_x`, `start_y`, `start_z`, `target_x`, `target_y`, `target_z`, and `gt_traj_num_points` to make browsing and filtering easier.
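As a sketch of how these fields relate, the trajectory length can be recomputed from the points. This assumes `gt_traj_len` is the polyline length of `gt_traj` (consistent with the JSON preview sample, whose points are spaced 10 m apart); the three points below are stand-ins taken from that preview:

```python
import numpy as np

# Stand-in trajectory mirroring the schema above (first three preview points,
# 10 m steps along x). Assumption: gt_traj_len is the polyline length of gt_traj.
gt_traj = np.array([
    [6589.18164, -4162.23877, -36.2995872],
    [6579.18164, -4162.23877, -36.2995872],
    [6569.18164, -4162.23877, -36.2995872],
])

# Polyline length: sum of Euclidean distances between consecutive points.
segments = np.diff(gt_traj, axis=0)
traj_len = float(np.linalg.norm(segments, axis=1).sum())
print(traj_len)  # ~20.0 (two 10 m segments)
```

The same computation over all 28 preview points would reproduce the recorded `gt_traj_len` of ~230 m if the assumption holds.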
## Loading

```python
from datasets import load_dataset

ds = load_dataset("EmbodiedCity/EmbodiedNav-Bench")
print(ds["train"][0])
```

For evaluation, use `dataset/navi_data.pkl` from this repository, or follow the release instructions in the GitHub project repository.
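Loading the canonical PKL needs only the standard library. The sketch below round-trips a stand-in sample in memory so it is self-contained; with the real file, use the commented `open(...)` form on the downloaded path:

```python
import pickle

# With the real file (path as documented above), loading is:
#     with open("dataset/navi_data.pkl", "rb") as f:
#         navi_data = pickle.load(f)
# Here we round-trip a stand-in sample so the snippet runs without the file.
sample = [{
    "folder": "0",
    "task_desc": "the entrance of the red building on the left front",
    "gt_traj_len": 229.99981973603806,
}]

navi_data = pickle.loads(pickle.dumps(sample))
print(len(navi_data), navi_data[0]["task_desc"])
```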
## Notes

This is the dataset hosting repository. The GitHub project repository contains the project README, simulator setup, media examples, and evaluation code: https://github.com/serenditipy-AC/Embodied-Navigation-Bench

Hugging Face Dataset Viewer support for private dataset repositories depends on the account or organization plan. The Parquet table is included so the Table view can render when Dataset Viewer indexing is available.
airsim_utils/__init__.py
DELETED
File without changes
airsim_utils/coord_transformation.py
DELETED
@@ -1,10 +0,0 @@

```python
import airsim
import numpy as np


# xyzw quaternion to (roll, pitch, yaw)
def quaternion2eularian_angles(quat):
    pry = airsim.to_eularian_angles(quat)  # p, r, y
    return np.array([pry[1], pry[0], pry[2]])
```
embodied_vln.py
DELETED
|
@@ -1,504 +0,0 @@
|
|
| 1 |
-
import copy
|
| 2 |
-
import os
|
| 3 |
-
import time
|
| 4 |
-
import base64
|
| 5 |
-
import pickle
|
| 6 |
-
from collections import deque
|
| 7 |
-
|
| 8 |
-
import airsim
|
| 9 |
-
import cv2
|
| 10 |
-
import numpy as np
|
| 11 |
-
import pandas as pd
|
| 12 |
-
from openai import AzureOpenAI
|
| 13 |
-
|
| 14 |
-
from airsim_utils.coord_transformation import quaternion2eularian_angles
|
| 15 |
-
|
| 16 |
-
|
| 17 |
-
def parse_llm_action(llm_output: str) -> int:
|
| 18 |
-
"""
|
| 19 |
-
Parse one action command from the LLM output text.
|
| 20 |
-
|
| 21 |
-
Expected output format is usually:
|
| 22 |
-
"Thinking: ...\nCommand: <action_name>"
|
| 23 |
-
|
| 24 |
-
Returns:
|
| 25 |
-
int: action enum used by `perform_act`. Returns -1 if parsing fails.
|
| 26 |
-
"""
|
| 27 |
-
command_str = llm_output.split(":")[-1]
|
| 28 |
-
command_str = command_str.strip(" ")
|
| 29 |
-
command_str = command_str.lower()
|
| 30 |
-
|
| 31 |
-
if "forth" in command_str:
|
| 32 |
-
return 6
|
| 33 |
-
elif "back" in command_str:
|
| 34 |
-
return 7
|
| 35 |
-
elif "turn_left" in command_str:
|
| 36 |
-
return 2
|
| 37 |
-
elif "turn_right" in command_str:
|
| 38 |
-
return 3
|
| 39 |
-
elif "angle_up" in command_str:
|
| 40 |
-
return 4
|
| 41 |
-
elif "angle_down" in command_str:
|
| 42 |
-
return 5
|
| 43 |
-
elif "left" in command_str:
|
| 44 |
-
return 8
|
| 45 |
-
elif "right" in command_str:
|
| 46 |
-
return 9
|
| 47 |
-
elif "up" in command_str:
|
| 48 |
-
return 10
|
| 49 |
-
elif "down" in command_str:
|
| 50 |
-
return 11
|
| 51 |
-
else:
|
| 52 |
-
return -1
|
| 53 |
-
|
| 54 |
-
|
| 55 |
-
class ActionGen:
|
| 56 |
-
"""
|
| 57 |
-
Agent logic for one-step action generation.
|
| 58 |
-
|
| 59 |
-
The class keeps short conversation history and sends the current
|
| 60 |
-
first-person image + textual task context to the LLM at each step.
|
| 61 |
-
"""
|
| 62 |
-
|
| 63 |
-
def __init__(self, model, client, airsim_client, task_desc):
|
| 64 |
-
"""
|
| 65 |
-
Args:
|
| 66 |
-
model: model/deployment name used by the LLM endpoint.
|
| 67 |
-
client: initialized LLM client.
|
| 68 |
-
airsim_client: AirSim wrapper with control/perception methods.
|
| 69 |
-
task_desc: text description of the navigation target.
|
| 70 |
-
"""
|
| 71 |
-
self.model = model
|
| 72 |
-
self.model_class = model.split("-")[0]
|
| 73 |
-
self.llm_client = client
|
| 74 |
-
self.queue = deque()
|
| 75 |
-
self.messages = [] # Conversation history forwarded to the model.
|
| 76 |
-
self.airsim_client = airsim_client
|
| 77 |
-
self.task_desc = task_desc
|
| 78 |
-
|
| 79 |
-
def query(self, camera_angle):
|
| 80 |
-
"""
|
| 81 |
-
Run one decision step and return raw LLM output text.
|
| 82 |
-
|
| 83 |
-
Args:
|
| 84 |
-
camera_angle: current gimbal angle in degrees.
|
| 85 |
-
|
| 86 |
-
Returns:
|
| 87 |
-
str: LLM output string that should contain "Command: ...".
|
| 88 |
-
"""
|
| 89 |
-
# Capture front camera RGB observation.
|
| 90 |
-
img1 = self.airsim_client.get_image()
|
| 91 |
-
|
| 92 |
-
# Encode image to base64 so it can be attached to multimodal API input.
|
| 93 |
-
_, buffer = cv2.imencode(".jpg", img1)
|
| 94 |
-
base64_image1 = base64.b64encode(buffer).decode("utf-8")
|
| 95 |
-
|
| 96 |
-
# Use a longer system-style instruction for the first round only.
|
| 97 |
-
if len(self.messages) == 0:
|
| 98 |
-
user_content = (
|
| 99 |
-
f"Please follow the instructions provided to control the camera gimbal angle and drone to gradually "
|
| 100 |
-
f"move to the customer's designated location. Assuming the angle range of the camera gimbal is -90 "
|
| 101 |
-
f"degrees to 90 degrees, where -90 degrees represents vertical downward view, 0 degrees represents "
|
| 102 |
-
f"horizontal view, and 90 degrees represents vertical upward view.\n"
|
| 103 |
-
f"\n"
|
| 104 |
-
f"Camera angle commands:\n"
|
| 105 |
-
f"angle_down, angle_up\n"
|
| 106 |
-
f"\n"
|
| 107 |
-
f"Drone movement commands:\n"
|
| 108 |
-
f"move_forth, move_back, move_left, move_right, move_up, move_down, turn_left, turn_right\n"
|
| 109 |
-
f"\n"
|
| 110 |
-
f"Example:\n"
|
| 111 |
-
f"The navigation goal is: main entrance of the building directly below. "
|
| 112 |
-
f"The current angle of the camera gimbal is {camera_angle}.\n"
|
| 113 |
-
f"Thinking: Should first lower the altitude and then search.\n"
|
| 114 |
-
f"Command: move_forth\n"
|
| 115 |
-
f"\n"
|
| 116 |
-
f"Rule: put reasoning after 'Thinking'. After 'Command:', output only one executable command with no "
|
| 117 |
-
f"extra text.\n"
|
| 118 |
-
f"\n"
|
| 119 |
-
f"The navigation goal is: {self.task_desc}. "
|
| 120 |
-
f"The current angle of the camera gimbal is {camera_angle}.\n"
|
| 121 |
-
f"Note: avoid spinning in place repeatedly.\n"
|
| 122 |
-
f"\n"
|
| 123 |
-
f"Thinking:\n"
|
| 124 |
-
f"Command:"
|
| 125 |
-
)
|
| 126 |
-
else:
|
| 127 |
-
user_content = (
|
| 128 |
-
f"The navigation goal is: {self.task_desc}. "
|
| 129 |
-
f"The current angle of the camera gimbal is {camera_angle}.\n"
|
| 130 |
-
f"Continue to output the thinking and command to approach the destination.\n"
|
| 131 |
-
f"Thinking:\n"
|
| 132 |
-
f"Command:"
|
| 133 |
-
)
|
| 134 |
-
|
| 135 |
-
# Call the OpenAI-compatible chat completion API.
|
| 136 |
-
self.messages.append(
|
| 137 |
-
{
|
| 138 |
-
"role": "user",
|
| 139 |
-
"content": [
|
| 140 |
-
{"type": "text", "text": user_content},
|
| 141 |
-
{
|
| 142 |
-
"type": "image_url",
|
| 143 |
-
"image_url": {
|
| 144 |
-
"url": f"data:image/jpeg;base64,{copy.deepcopy(base64_image1)}"
|
| 145 |
-
},
|
| 146 |
-
},
|
| 147 |
-
],
|
| 148 |
-
}
|
| 149 |
-
)
|
| 150 |
-
|
| 151 |
-
try:
|
| 152 |
-
chat_response = self.llm_client.chat.completions.create(
|
| 153 |
-
model=self.model,
|
| 154 |
-
messages=self.messages,
|
| 155 |
-
)
|
| 156 |
-
answer = chat_response.choices[0].message.content
|
| 157 |
-
print(f"GPT: {answer}")
|
| 158 |
-
except Exception as e:
|
| 159 |
-
print(f"Error: LM response - {e}")
|
| 160 |
-
answer = "Error"
|
| 161 |
-
|
| 162 |
-
self.messages.append({"role": "assistant", "content": answer})
|
| 163 |
-
return answer
|
| 164 |
-
|
| 165 |
-
|
| 166 |
-
class AirsimClient:
|
| 167 |
-
"""Minimal AirSim wrapper for this benchmark script."""
|
| 168 |
-
|
| 169 |
-
def __init__(self, vehicle_name=""):
|
| 170 |
-
_ = vehicle_name # Reserved for future multi-vehicle extension.
|
| 171 |
-
airsim_client = airsim.VehicleClient()
|
| 172 |
-
airsim_client.confirmConnection()
|
| 173 |
-
self.client = airsim_client
|
| 174 |
-
|
| 175 |
-
def set_vehicle_pose(self, position, orientation):
|
| 176 |
-
"""
|
| 177 |
-
Teleport the vehicle to the target pose.
|
| 178 |
-
|
| 179 |
-
Args:
|
| 180 |
-
position: xyz array in world coordinates.
|
| 181 |
-
orientation: roll/pitch/yaw array in radians.
|
| 182 |
-
"""
|
| 183 |
-
client = self.client
|
| 184 |
-
pose = airsim.Pose(airsim.Vector3r(*position), airsim.to_quaternion(*orientation))
|
| 185 |
-
client.simSetVehiclePose(pose, True)
|
| 186 |
-
|
| 187 |
-
def set_camera_angle(self, angle):
|
| 188 |
-
"""
|
| 189 |
-
Set camera gimbal pitch angle (degrees).
|
| 190 |
-
"""
|
| 191 |
-
client = self.client
|
| 192 |
-
camera_pose = airsim.Pose(
|
| 193 |
-
airsim.Vector3r(0, 0, 0),
|
| 194 |
-
airsim.to_quaternion(angle * np.pi / 180, 0, 0),
|
| 195 |
-
)
|
| 196 |
-
client.simSetCameraPose("0", camera_pose)
|
| 197 |
-
|
| 198 |
-
def move_relative(self, dx, dy, dz):
|
| 199 |
-
"""
|
| 200 |
-
Move relative to the drone local coordinate system.
|
| 201 |
-
|
| 202 |
-
Args:
|
| 203 |
-
dx: forward/backward displacement.
|
| 204 |
-
dy: right/left displacement.
|
| 205 |
-
dz: up/down displacement.
|
| 206 |
-
"""
|
| 207 |
-
client = self.client
|
| 208 |
-
pose = client.simGetVehiclePose()
|
| 209 |
-
orientation = airsim.to_eularian_angles(pose.orientation)
|
| 210 |
-
yaw = orientation[2]
|
| 211 |
-
|
| 212 |
-
# Convert local displacement into world-frame displacement.
|
| 213 |
-
forward = np.array([np.cos(yaw), np.sin(yaw), 0])
|
| 214 |
-
right = np.array([-np.sin(yaw), np.cos(yaw), 0])
|
| 215 |
-
up = np.array([0, 0, 1])
|
| 216 |
-
move_vector = dx * forward + dy * right + dz * up
|
| 217 |
-
new_position = np.array(
|
| 218 |
-
[pose.position.x_val, pose.position.y_val, pose.position.z_val]
|
| 219 |
-
) + move_vector
|
| 220 |
-
|
| 221 |
-
self.set_vehicle_pose(new_position, orientation)
|
| 222 |
-
|
| 223 |
-
def get_current_state(self):
|
| 224 |
-
"""
|
| 225 |
-
Get current pose from AirSim.
|
| 226 |
-
|
| 227 |
-
Returns:
|
| 228 |
-
tuple[np.ndarray, np.ndarray]: position and euler orientation.
|
| 229 |
-
"""
|
| 230 |
-
client = self.client
|
| 231 |
-
state = client.simGetGroundTruthKinematics()
|
| 232 |
-
pos = state.position.to_numpy_array()
|
| 233 |
-
ori = quaternion2eularian_angles(state.orientation)
|
| 234 |
-
return pos, ori
|
| 235 |
-
|
| 236 |
-
def get_image(self):
|
| 237 |
-
"""
|
| 238 |
-
Get RGB observation from the front camera.
|
| 239 |
-
"""
|
| 240 |
-
response = self.client.simGetImages(
|
| 241 |
-
[airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)]
|
| 242 |
-
)
|
| 243 |
-
img1d = np.frombuffer(response[0].image_data_uint8, dtype=np.uint8)
|
| 244 |
-
if img1d.size == (response[0].height * response[0].width * 3):
|
| 245 |
-
img_rgb = img1d.reshape(response[0].height, response[0].width, 3)
|
| 246 |
-
return img_rgb
|
| 247 |
-
return None
|
| 248 |
-
|
| 249 |
-
|
| 250 |
-
class VLN_evaluator:
|
| 251 |
-
"""
|
| 252 |
-
Evaluation pipeline for vision-language navigation.
|
| 253 |
-
"""
|
| 254 |
-
|
| 255 |
-
def __init__(self, root_dir, eval_model, llm_client, agent_method):
|
| 256 |
-
"""
|
| 257 |
-
Args:
|
| 258 |
-
root_dir: dataset root directory.
|
| 259 |
-
eval_model: model/deployment name.
|
| 260 |
-
llm_client: initialized LLM client.
|
| 261 |
-
agent_method: label used in output result directory.
|
| 262 |
-
"""
|
| 263 |
-
self.root_dir = root_dir
|
| 264 |
-
self.eval_model = eval_model
|
| 265 |
-
self.airsim_client = AirsimClient()
|
| 266 |
-
self.agent_method = agent_method
|
| 267 |
-
self.llm_client = llm_client
|
| 268 |
-
self.load_navi_task()
|
| 269 |
-
|
| 270 |
-
def load_navi_task(self):
|
| 271 |
-
"""Load navigation tasks from `navi_data.pkl`."""
|
| 272 |
-
with open(os.path.join(self.root_dir, "navi_data.pkl"), "rb") as f:
|
| 273 |
-
self.navi_data = pickle.load(f)
|
| 274 |
-
|
| 275 |
-
def evaluation(self):
|
| 276 |
-
"""
|
| 277 |
-
Evaluate navigation performance and print SR/SPL/DTG.
|
| 278 |
-
"""
|
| 279 |
-
navi_data = self.navi_data
|
| 280 |
-
navi_data_pd = pd.DataFrame(navi_data)
|
| 281 |
-
|
| 282 |
-
# Split samples into short/middle/long groups by trajectory length quantiles.
|
| 283 |
-
short_len = navi_data_pd["gt_traj_len"].quantile(1 / 3)
|
| 284 |
-
middle_len = navi_data_pd["gt_traj_len"].quantile(2 / 3)
|
| 285 |
-
sr_count_sets = np.zeros((3,))
|
| 286 |
-
num_sets = np.zeros((3,))
|
| 287 |
-
ne_count_sets = np.zeros((3,))
|
| 288 |
-
spl_sets = np.zeros((3,))
|
| 289 |
-
|
| 290 |
-
# Aggregate metrics over all samples.
|
| 291 |
-
sr_count = 0.0
|
| 292 |
-
spl = 0.0
|
| 293 |
-
ne_count = 0.0
|
| 294 |
-
|
| 295 |
-
# Evaluate each navigation sample independently.
|
| 296 |
-
for idx in range(len(navi_data)):
|
| 297 |
-
navi_task = navi_data[idx]
|
| 298 |
-
start_pos = navi_task["start_pos"]
|
| 299 |
-
start_rot = navi_task["start_rot"]
|
| 300 |
-
gt_traj = navi_task["gt_traj"]
|
| 301 |
-
target_pos = navi_task["target_pos"]
|
| 302 |
-
gt_traj_len = navi_task["gt_traj_len"]
|
| 303 |
-
task_desc = navi_task["task_desc"]
|
| 304 |
-
_ = gt_traj # Reserved for future path-level metrics.
|
| 305 |
-
|
| 306 |
-
# Initialize agent for this sample.
|
| 307 |
-
agent = ActionGen(self.eval_model, self.llm_client, self.airsim_client, task_desc)
|
| 308 |
-
|
| 309 |
-
# Reset drone pose and camera angle.
|
| 310 |
-
self.airsim_client.set_vehicle_pose(start_pos, start_rot)
|
| 311 |
-
self.camera_angle = 0
|
| 312 |
-
self.airsim_client.set_camera_angle(self.camera_angle)
|
| 313 |
-
print(f"Current navigation goal: {task_desc}")
|
| 314 |
-
|
| 315 |
-
# Print current state.
|
| 316 |
-
cur_pos, cur_rot = self.airsim_client.get_current_state()
|
| 317 |
-
print(f"pos: {cur_pos}, rot: {cur_rot}")
|
| 318 |
-
|
| 319 |
-
# Log full executed trajectory for this sample.
|
| 320 |
-
traj_df = pd.DataFrame(columns=["pos", "rot", "camera_angle"])
|
| 321 |
-
traj_df.loc[traj_df.shape[0]] = [start_pos, start_rot, self.camera_angle]
|
| 322 |
-
|
| 323 |
-
traj_len = 0.0
|
| 324 |
-
step = 0
|
| 325 |
-
max_steps = 50
|
| 326 |
-
threshold = 20
|
| 327 |
-
|
| 328 |
-
# Step-by-step control loop.
|
| 329 |
-
while step < max_steps:
|
| 330 |
-
# Query one action from the agent.
|
| 331 |
-
answer = agent.query(self.camera_angle)
|
| 332 |
-
|
| 333 |
-
# Parse command text into an internal action enum.
|
| 334 |
-
act = parse_llm_action(answer)
|
| 335 |
-
print("action: ", act)
|
| 336 |
-
|
| 337 |
-
# Execute action in simulator.
|
| 338 |
-
self.perform_act(act)
|
| 339 |
-
time.sleep(0.1)
|
| 340 |
-
|
| 341 |
-
former_pos = cur_pos
|
| 342 |
-
cur_pos, cur_rot = self.airsim_client.get_current_state()
|
| 343 |
-
traj_df.loc[traj_df.shape[0]] = [cur_pos, cur_rot, self.camera_angle]
|
| 344 |
-
traj_len += np.linalg.norm(cur_pos - former_pos)
|
| 345 |
-
step += 1
|
| 346 |
-
|
| 347 |
-
# Distance to goal after this step.
|
| 348 |
-
dist = np.linalg.norm(cur_pos - target_pos)
|
| 349 |
-
print(f"Task idx: {idx}, current step size: {step}, current dist: {dist}")
|
| 350 |
-
|
| 351 |
-
# Stop on success or if the drone has diverged too far.
|
| 352 |
-
if dist < threshold:
|
| 353 |
-
break
|
| 354 |
-
elif dist > 300:
|
| 355 |
-
break
|
| 356 |
-
|
| 357 |
-
# Final distance for this sample.
|
| 358 |
-
print(f"Max step size reached or target reached. step: {step}")
|
| 359 |
-
dist = np.linalg.norm(cur_pos - target_pos)
|
| 360 |
-
|
| 361 |
-
# Save predicted trajectory.
|
| 362 |
-
save_folder_path = "results/%s/%s" % (self.agent_method, self.eval_model)
|
| 363 |
-
if not os.path.exists(save_folder_path):
|
| 364 |
-
os.makedirs(save_folder_path)
|
| 365 |
-
traj_df.to_csv(os.path.join(save_folder_path, "%d.csv" % idx), index=False)
|
| 366 |
-
|
| 367 |
-
# Update group-level DTG accumulators.
|
| 368 |
-
if gt_traj_len < short_len:
|
| 369 |
-
num_sets[0] += 1
|
| 370 |
-
ne_count_sets[0] += dist
|
| 371 |
-
elif gt_traj_len < middle_len:
|
| 372 |
-
num_sets[1] += 1
|
| 373 |
-
ne_count_sets[1] += dist
|
| 374 |
-
else:
|
| 375 |
-
num_sets[2] += 1
|
| 376 |
-
ne_count_sets[2] += dist
|
| 377 |
-
|
| 378 |
-
# Update SR/SPL if success.
|
| 379 |
-
if dist < threshold:
|
| 380 |
-
sr_count += 1
|
| 381 |
-
spl_count = gt_traj_len / max(gt_traj_len, traj_len)
|
| 382 |
-
spl += spl_count
|
| 383 |
-
|
| 384 |
-
if gt_traj_len < short_len:
|
| 385 |
-
sr_count_sets[0] += 1
|
| 386 |
-
spl_sets[0] += gt_traj_len / max(gt_traj_len, traj_len)
|
| 387 |
-
elif gt_traj_len < middle_len:
|
| 388 |
-
sr_count_sets[1] += 1
|
| 389 |
-
spl_sets[1] += gt_traj_len / max(gt_traj_len, traj_len)
|
| 390 |
-
else:
|
| 391 |
-
sr_count_sets[2] += 1
|
| 392 |
-
spl_sets[2] += gt_traj_len / max(gt_traj_len, traj_len)
|
| 393 |
-
|
| 394 |
-
ne_count += dist
|
| 395 |
-
print(f"####### SR count: {sr_count}, SPL: {spl}, NE: {ne_count}")
|
| 396 |
-
print("Group SR:", sr_count_sets / num_sets)
|
| 397 |
-
print("Group SPL:", spl_sets / num_sets)
|
| 398 |
-
print("Group DTG:", ne_count_sets / num_sets)
|
| 399 |
-
print("Group sample counts:", num_sets)
|
| 400 |
-
|
| 401 |
-
# Final overall metrics.
|
| 402 |
-
sr = sr_count / len(navi_data)
|
| 403 |
-
ne = ne_count / len(navi_data)
|
| 404 |
-
print(f"SR: {sr}, SPL: {spl}, NE: {ne}")
|
| 405 |
-
np.set_printoptions(precision=3)
|
| 406 |
-
print("Group SR:", sr_count_sets / num_sets)
|
| 407 |
-
print("Group SPL:", spl_sets / num_sets)
|
| 408 |
-
print("Group DTG:", ne_count_sets / num_sets)
|
| 409 |
-
|
| 410 |
-
def perform_act(self, act_enum):
|
| 411 |
-
"""
|
| 412 |
-
        Execute one parsed action enum in AirSim.
        """
        # Action table: enum -> (name, value)
        # - tuple value: relative translation (dx, dy, dz)
        # - scalar value: rotation in degrees or camera angle delta in degrees
        commands_map = {
            6: ("move_forth", (10, 0, 0)),
            7: ("move_back", (-10, 0, 0)),
            8: ("move_left", (0, -10, 0)),
            9: ("move_right", (0, 10, 0)),
            10: ("move_up", (0, 0, -10)),
            11: ("move_down", (0, 0, 10)),
            2: ("turn_left", -22.5),
            3: ("turn_right", 22.5),
            4: ("angle_up", 45),
            5: ("angle_down", -45),
        }

        try:
            command, value = commands_map[act_enum]
            if command in ("angle_up", "angle_down"):
                # Clamp the gimbal angle to the valid range [-90, 90].
                self.camera_angle = max(-90, min(90, self.camera_angle + value))
                self.airsim_client.set_camera_angle(self.camera_angle)
            elif isinstance(value, tuple):
                # Relative translation (NED convention: negative z is up).
                dx, dy, dz = value
                self.airsim_client.move_relative(dx, dy, dz)
            else:
                # Yaw rotation: keep pitch and roll, add the yaw delta in radians.
                yaw_change = value
                pose = self.airsim_client.client.simGetVehiclePose()
                current_orientation = airsim.to_eularian_angles(pose.orientation)
                new_orientation = [
                    current_orientation[0],
                    current_orientation[1],
                    current_orientation[2] + np.radians(yaw_change),
                ]
                self.airsim_client.set_vehicle_pose(
                    [pose.position.x_val, pose.position.y_val, pose.position.z_val],
                    new_orientation,
                )
        except KeyError:
            # An enum outside commands_map is a no-op rather than a crash.
            print(f"Unknown action {act_enum}, keep still.")


if __name__ == "__main__":
    # Configure your model deployment and credentials before running this file.
    #
    # Recommended setup:
    # 1) Fill values via environment variables:
    #      AZURE_OPENAI_MODEL
    #      AZURE_OPENAI_API_KEY
    #      AZURE_OPENAI_ENDPOINT
    #      AZURE_OPENAI_API_VERSION (optional, defaults to 2024-07-01-preview)
    # 2) Or directly replace the placeholder strings below.
    model = os.getenv("AZURE_OPENAI_MODEL", "YOUR_AZURE_OPENAI_DEPLOYMENT")
    api_key = os.getenv("AZURE_OPENAI_API_KEY", "YOUR_AZURE_OPENAI_API_KEY")
    azure_endpoint = os.getenv(
        "AZURE_OPENAI_ENDPOINT",
        "https://YOUR-RESOURCE-NAME.openai.azure.com/",
    )
    api_version = os.getenv("AZURE_OPENAI_API_VERSION", "2024-07-01-preview")

    if (
        model == "YOUR_AZURE_OPENAI_DEPLOYMENT"
        or api_key == "YOUR_AZURE_OPENAI_API_KEY"
        or azure_endpoint == "https://YOUR-RESOURCE-NAME.openai.azure.com/"
    ):
        raise ValueError(
            "Azure OpenAI is not configured.\n"
            "Set environment variables (AZURE_OPENAI_MODEL, AZURE_OPENAI_API_KEY, "
            "AZURE_OPENAI_ENDPOINT, optional AZURE_OPENAI_API_VERSION) or replace "
            "the placeholder values in `embodied_vln.py` before running."
        )

    llm_client = AzureOpenAI(
        api_key=api_key,
        api_version=api_version,
        azure_endpoint=azure_endpoint,
    )

    # Name used in the output directory: results/<agent_method>/<model>/
    agent_method = "action_generation"

    # Initialize the evaluator and run all tasks in dataset/navi_data.pkl.
    vln_eval = VLN_evaluator("dataset", model, llm_client, agent_method)
    vln_eval.evaluation()
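The enum-to-action dispatch can be exercised without a simulator. Below is a minimal sketch: `COMMANDS` and `apply_action` are hypothetical stand-ins for `commands_map` and the AirSim client calls, updating a plain state dict instead of a vehicle pose.

```python
import math

# Hypothetical mirror of commands_map: tuple values are relative
# translations (dx, dy, dz); scalar values are angle deltas in degrees.
COMMANDS = {
    6: ("move_forth", (10, 0, 0)),
    2: ("turn_left", -22.5),
    4: ("angle_up", 45),
}

def apply_action(act_enum, state):
    """Apply one action enum to a plain state dict (no simulator needed)."""
    action = COMMANDS.get(act_enum)
    if action is None:
        return state  # unknown action: keep still
    command, value = action
    if command in ("angle_up", "angle_down"):
        # Same clamp as the evaluator: gimbal angle stays in [-90, 90].
        state["camera_angle"] = max(-90, min(90, state["camera_angle"] + value))
    elif isinstance(value, tuple):
        dx, dy, dz = value
        state["pos"] = tuple(p + d for p, d in zip(state["pos"], (dx, dy, dz)))
    else:
        state["yaw"] += math.radians(value)
    return state

state = {"pos": (0, 0, 0), "yaw": 0.0, "camera_angle": 60}
apply_action(6, state)  # forward 10 m
apply_action(4, state)  # gimbal up; 60 + 45 clamps to 90
print(state["pos"], state["camera_angle"])  # (10, 0, 0) 90
```

The same tuple-vs-scalar convention distinguishes translations from rotations, which is why the real handler branches on `isinstance(value, tuple)`.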
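The `__main__` block resolves each setting from an environment variable and falls back to a placeholder or default. A small sketch of that resolution order (`resolve_config` and the deployment name are hypothetical, introduced only for illustration):

```python
def resolve_config(env):
    """Mirror the fallback logic: env value if set, placeholder/default otherwise."""
    return {
        "model": env.get("AZURE_OPENAI_MODEL", "YOUR_AZURE_OPENAI_DEPLOYMENT"),
        "api_version": env.get("AZURE_OPENAI_API_VERSION", "2024-07-01-preview"),
    }

# With only the model set, the API version falls back to its default.
cfg = resolve_config({"AZURE_OPENAI_MODEL": "my-deployment"})  # hypothetical name
print(cfg["model"], cfg["api_version"])  # my-deployment 2024-07-01-preview
```

Because the placeholders are the defaults, the guard that compares against them catches both an unset environment and unedited source.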
image/QuantitativeResults.png
DELETED (Git LFS)

image/statistics.png
DELETED (Git LFS)

video/1.gif
DELETED (Git LFS)

video/1.mp4
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4fdddc7bf7760989c63f1edf7f105f285d32988c8100ae0297ccc2f35df49173
-size 7559547

video/2.gif
DELETED (Git LFS)

video/2.mp4
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d399799233be67396b0c26d8fa28573c6c8f6116582c2968604c03e6709d59ba
-size 18715886

video/3.gif
DELETED (Git LFS)

video/3.mp4
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e5cf50aab2e5a5ad31947e15f20412411d6f9a267192652308266f9b0d9788a3
-size 11274226