---
license: cc-by-4.0
pretty_name: EmbodiedNav-Bench
language:
  - en
task_categories:
  - visual-question-answering
  - reinforcement-learning
tags:
  - embodied-ai
  - embodied-navigation
  - urban-airspace
  - drone-navigation
  - multimodal-reasoning
  - spatial-reasoning
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-00000-of-00001.parquet
---

How Far Are Large Multimodal Models from Human-Level Spatial Action? A Benchmark for Goal-Oriented Embodied Navigation in Urban Airspace

Abstract

Large multimodal models (LMMs) show strong visual-linguistic reasoning, but their capacity for spatial decision-making and action remains unclear. In this work, we investigate whether LMMs can achieve human-like embodied spatial action through a challenging scenario: goal-oriented navigation in urban 3D spaces. We first spend over 500 hours constructing a dataset comprising 5,037 high-quality goal-oriented navigation samples, with an emphasis on 3D vertical actions and rich urban semantic information. We then comprehensively assess 17 representative models, including non-reasoning LMMs, reasoning LMMs, agent-based methods, and vision-language-action models. Experiments show that current LMMs exhibit emerging action capabilities yet remain far from human-level performance. Furthermore, we reveal an intriguing phenomenon: navigation errors do not accumulate linearly but instead diverge rapidly from the destination after a critical decision bifurcation. We investigate the limitations of LMMs by analyzing their behavior at these critical decision bifurcations. Finally, we experimentally explore four promising directions for improvement: geometric perception, cross-view understanding, spatial imagination, and long-term memory.


Dataset Overview

EmbodiedNav-Bench is a goal-oriented embodied navigation benchmark for evaluating how large multimodal models act in urban 3D airspace. The released sample set contains 300 human-collected trajectories with natural-language goals, drone start poses, target positions, and ground-truth 3D paths. The original evaluation data is provided as dataset/navi_data.pkl, and a Parquet conversion is provided at data/train-00000-of-00001.parquet for the Hugging Face Dataset Viewer table.

Navigation Example

  • Example 1. Goal: Nearby bus stop
  • Example 2. Goal: The fresh food shop in the building below
  • Example 3. Goal: The balcony on the 20th floor of the building on the right

Note: The videos above demonstrate goal-oriented embodied navigation examples in urban airspace. Given a linguistic instruction, the task evaluates the ability to act progressively on continuous embodied observations in order to approach the goal location.

Dataset Statistics

Key Statistics:

  • Total Trajectories: 5,037 high-quality goal-oriented navigation trajectories
  • Data Collection: Over 500 hours of human-controlled data collection
  • Average Trajectory Length: ~203.4 meters
  • Annotators: 10 volunteers (5 for case creation, 5 experienced drone pilots with 100+ hours flight experience)
  • Action Types:
    • Horizontal movement (move-forth, move-left, move-right, move-back)
    • Vertical movement (move-up, move-down)
    • Rotation/view change (turn-left, turn-right, adjust-camera-gimbal-upwards, adjust-camera-gimbal-downwards)
  • Trajectory Distribution: trajectories place particular emphasis on vertical movement

Dataset Construction and Statistical Visualization:

Dataset Statistics

Figure: a. Dataset Construction Pipeline. b. The length distribution of navigation trajectories. c. Proportion of various types of actions. d. The relative position of trajectories to the origin. e. Word cloud of goal instructions.


Environment Setup and Simulator Deployment

This project builds on EmbodiedCity for the urban simulation environment.

1. Download the simulator

  • Offline simulator download (official): EmbodiedCity-Simulator on HuggingFace
  • Download and extract the simulator package, then launch the provided executable (.exe) and keep it running throughout evaluation.

2. Create the Python environment

Use one of the following options:

conda create -n EmbodiedCity python=3.10 -y
conda activate EmbodiedCity
pip install airsim openai opencv-python numpy pandas

If you are using the simulator package's built-in environment files:

conda env create -n EmbodiedCity -f environment.yml
conda activate EmbodiedCity

3. Dataset release

All paths below are relative to the project root.

We are currently open-sourcing 300 trajectories as public examples:

  • dataset/navi_data.pkl
  • dataset/navi_data_preview.json (human-readable JSON preview)
  • data/train-00000-of-00001.parquet (Hugging Face Dataset Viewer table split)

dataset/navi_data.pkl is the canonical dataset file for evaluation.

3.1 navi_data.pkl field schema

Each sample in dataset/navi_data.pkl is a Python dict with the following fields:

| Field | Type | Description |
|---|---|---|
| folder | str | Scene folder identifier |
| start_pos | float[3] | Initial drone world position (x, y, z) |
| start_rot | float[3] | Initial drone orientation (roll, pitch, yaw) in radians |
| start_ang | float | Initial camera gimbal angle (degrees) |
| task_desc | str | Natural-language navigation instruction |
| target_pos | float[3] | Target world position (x, y, z) |
| gt_traj | float[N,3] | Ground-truth trajectory points |
| gt_traj_len | float | Ground-truth trajectory length |
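The gt_traj_len field is redundant with gt_traj: it can be recomputed as the sum of Euclidean distances between consecutive trajectory points. A minimal sketch using only the standard library (the sample points mimic the released data, which steps 10 m at a time along one axis):

```python
import math

def trajectory_length(points):
    """Sum of Euclidean distances between consecutive 3D points."""
    return sum(
        math.dist(p, q)  # Euclidean distance (Python 3.8+)
        for p, q in zip(points, points[1:])
    )

# Five consecutive points spaced 10 m apart along x
traj = [[6589.18, -4162.24, -36.30],
        [6579.18, -4162.24, -36.30],
        [6569.18, -4162.24, -36.30],
        [6559.18, -4162.24, -36.30],
        [6549.18, -4162.24, -36.30]]
print(trajectory_length(traj))  # ~40.0 (4 segments x 10 m)
```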

3.2 Example view for humans

To make inspection easier without loading the PKL file directly, we provide:

  • dataset/navi_data_preview.json

This JSON contains:

  • field descriptions
  • total sample count
  • preview of the first few samples (including gt_traj partial points)

Example item (simplified):

{
  "sample_index": 0,
  "folder": "0",
  "task_desc": "the entrance of the red building on the left front",
  "start_pos": [6589.18164, -4162.23877, -36.2995872],
  "start_rot": [0.0, 0.0, 3.14159251],
  "start_ang": 0.0,
  "target_pos": [6390.7041, -4154.58545, -6.29958725],
  "gt_traj_len": 229.99981973603806,
  "gt_traj_num_points": 28,
  "gt_traj_preview_first5": [
    [6589.18164, -4162.23877, -36.2995872],
    [6579.18164, -4162.23877, -36.2995872],
    [6569.18164, -4162.23877, -36.2995872],
    [6559.18164, -4162.23877, -36.2995872],
    [6549.18164, -4162.23877, -36.2995872]
  ]
}
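From the sample above, the straight-line distance between start_pos and target_pos (a lower bound on the required flight distance, and the quantity underlying the DTG metric reported later) can be checked with the standard library:

```python
import math

# Coordinates copied from the preview sample above
start_pos  = [6589.18164, -4162.23877, -36.2995872]
target_pos = [6390.7041, -4154.58545, -6.29958725]

# Euclidean start-to-goal distance; compare with gt_traj_len ~ 230.0 m
dist_to_goal = math.dist(start_pos, target_pos)
print(round(dist_to_goal, 1))  # 200.9
```

The gap between the ~200.9 m straight-line distance and the ~230.0 m ground-truth path reflects that real trajectories must route around buildings rather than fly directly.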

3.3 Hugging Face Dataset Viewer table

The train split is stored as data/train-00000-of-00001.parquet so the dataset can be inspected directly in the Hugging Face Table view. Each table row corresponds to one navigation trajectory and includes flattened coordinate columns (start_x, target_x, etc.) together with the original structured fields (start_pos, start_rot, target_pos, and gt_traj).
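The flattening from structured fields to scalar columns can be reproduced with pandas. A toy sketch (the input row is made up, and the exact Parquet layout may differ in detail):

```python
import pandas as pd

# One toy row mimicking the structured trajectory fields
rows = [
    {"folder": "0",
     "start_pos": [6589.2, -4162.2, -36.3],
     "target_pos": [6390.7, -4154.6, -6.3]},
]
df = pd.DataFrame(rows)

# Derive flattened scalar columns (start_x, target_x, ...) from the lists
for prefix, col in [("start", "start_pos"), ("target", "target_pos")]:
    for i, axis in enumerate("xyz"):
        df[f"{prefix}_{axis}"] = df[col].map(lambda p, i=i: p[i])

print(df[["start_x", "start_y", "start_z", "target_x"]])
```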

4. How to test your own model

To evaluate your model, modify the Agent logic in embodied_vln.py, mainly in the ActionGen class:

  • ActionGen.query(...): replace prompt design / model API call / decision logic.
  • Keep output command format compatible with parse_llm_action(...) (one command per step).
  • Supported commands include: move_forth, move_back, move_left, move_right, move_up, move_down, turn_left, turn_right, angle_up, angle_down.
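A model reply therefore only needs to end in a parsable command line. The expected contract can be sketched with a hypothetical helper (this is an illustration, not the actual parse_llm_action implementation):

```python
import re

VALID_COMMANDS = {
    "move_forth", "move_back", "move_left", "move_right",
    "move_up", "move_down", "turn_left", "turn_right",
    "angle_up", "angle_down",
}

def extract_command(reply: str) -> str:
    """Pull the last 'Command: <name>' line out of a model reply."""
    matches = re.findall(r"Command:\s*([a-z_]+)", reply)
    if not matches or matches[-1] not in VALID_COMMANDS:
        raise ValueError(f"no valid command in reply: {reply!r}")
    return matches[-1]

reply = "Thinking: the target building is ahead and above.\nCommand: move_up"
print(extract_command(reply))  # move_up
```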

Then run:

python embodied_vln.py

Example: connect other API models

Use the API placeholder pattern in embodied_vln.py as a template for plugging in your own model service.

Current placeholders (in embodied_vln.py) are:

  • AZURE_OPENAI_MODEL
  • AZURE_OPENAI_API_KEY
  • AZURE_OPENAI_ENDPOINT
  • AZURE_OPENAI_API_VERSION (optional, default: 2024-07-01-preview)

PowerShell example:

$env:AZURE_OPENAI_MODEL="your-deployment-name"
$env:AZURE_OPENAI_API_KEY="your-api-key"
$env:AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"
$env:AZURE_OPENAI_API_VERSION="2024-07-01-preview"

If you use a non-Azure model API, keep this contract unchanged:

  • ActionGen.query(...) must return one text command each step.
  • Returned command should still be compatible with parse_llm_action(...).

Minimal expected return format:

Thinking: <your model reasoning>
Command: move_forth
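A non-Azure backend only needs to honor this contract. A hypothetical stand-in for ActionGen.query (the function signature and variable names are illustrative, not the script's actual interface):

```python
def query(goal: str, observation_summary: str) -> str:
    """Hypothetical stand-in for ActionGen.query: call your own model
    service here and return exactly one command in the expected format."""
    # ... replace with your model API call using the goal and observation ...
    reasoning = f"Heading toward '{goal}' given: {observation_summary}"
    command = "move_forth"  # placeholder decision
    return f"Thinking: {reasoning}\nCommand: {command}"

print(query("nearby bus stop", "street ahead, building on the right"))
```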

Experimental Results

Quantitative Results

We evaluate 17 representative models across five categories: Basic Baselines, Non-Reasoning LMMs, Reasoning LMMs, Agent-Based Approaches, and Vision-Language-Action Models.

Note: the Short, Middle, and Long groups correspond to ground-truth trajectories of <118.2 m, 118.2-223.6 m, and >223.6 m, respectively. SR = Success Rate, SPL = Success weighted by Path Length, DTG = Distance to Goal.
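For reference, these metrics are commonly computed as follows in embodied navigation: SR is the fraction of episodes ending within a success radius of the goal, SPL weights each success by l/max(p, l) where l is the ground-truth length and p the agent's path length, and DTG is the mean final distance to goal. A sketch using the standard definitions (the success radius is an assumption, not the benchmark's published threshold):

```python
def evaluate(episodes, success_radius=20.0):
    """episodes: dicts with final_dist_to_goal, gt_len, agent_len.
    Returns (SR, SPL, DTG) under standard VLN-style definitions;
    success_radius is an assumed threshold."""
    n = len(episodes)
    sr = sum(e["final_dist_to_goal"] <= success_radius for e in episodes) / n
    spl = sum(
        (e["final_dist_to_goal"] <= success_radius)
        * e["gt_len"] / max(e["agent_len"], e["gt_len"])
        for e in episodes
    ) / n
    dtg = sum(e["final_dist_to_goal"] for e in episodes) / n
    return sr, spl, dtg

eps = [
    {"final_dist_to_goal": 5.0,  "gt_len": 100.0, "agent_len": 200.0},  # success, inefficient path
    {"final_dist_to_goal": 80.0, "gt_len": 150.0, "agent_len": 150.0},  # failure
]
print(evaluate(eps))  # (0.5, 0.25, 42.5)
```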