---
library_name: pytorch
tags:
  - robotics
  - visual-navigation
  - imitation-learning
  - indoor-navigation
  - generalist-policy
model_name: 'RING: Robotic Indoor Navigation Generalist'
papers:
  - https://arxiv.org/pdf/2412.14401
repo: AinazEftekhar/OneRING
---

## Model Overview

RING (Robotic Indoor Navigation Generalist) is a generalist policy for indoor visual navigation, trained entirely in simulation on roughly one million diverse, randomly initialized embodiments. Despite never seeing a real robot during training, RING transfers robustly to unseen real-world platforms (RB‑Y1, Stretch RE‑1, LoCoBot, Unitree Go1).

## What this model does

- Maps egocentric visual observations and a natural-language goal to a discrete navigation action.
- Trained for robust generalization across embodiments and environments.
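To make the input/output contract above concrete, here is a minimal sketch of the observation-to-action mapping. The function and action names are hypothetical placeholders, not the repository's actual API; the stub only validates inputs and returns a fixed action where the real RING policy would run its network.

```python
import numpy as np

# Hypothetical action vocabulary; the real RING action space may differ.
ACTIONS = ["move_ahead", "rotate_left", "rotate_right", "done"]

def select_action(rgb: np.ndarray, goal: str) -> str:
    """Map an egocentric RGB frame and a language goal to a discrete action.

    Stub standing in for the real policy: it checks the expected input
    shapes and returns a fixed action instead of running a network.
    """
    assert rgb.ndim == 3 and rgb.shape[2] == 3, "expected an HxWx3 RGB image"
    assert isinstance(goal, str) and goal, "expected a non-empty language goal"
    return ACTIONS[0]

frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder egocentric observation
action = select_action(frame, "go to the kitchen")
```

The real policy would replace the stub body with visual encoding, goal conditioning, and an action head, but the signature captures the mapping the bullets describe.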

## Intended use

- Research in embodied AI, navigation policy learning, sim2real transfer, and generalist robot policies.
- Evaluation and benchmarking for indoor visual navigation tasks.

## Checkpoints

- Default checkpoint: `ring_model_step_40356421.ckpt`

## Quick Start

### Python (download checkpoint)

```python
from huggingface_hub import hf_hub_download

# Download the default checkpoint from the Hugging Face Hub
ckpt_path = hf_hub_download(
    repo_id="AinazEftekhar/OneRING",
    filename="ring_model_step_40356421.ckpt",
)
print(ckpt_path)  # local path to the checkpoint file
```
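After downloading, the checkpoint can be inspected with PyTorch. The sketch below stays self-contained by saving a dummy `.ckpt` first; in practice you would pass the path returned by `hf_hub_download`. The top-level layout assumed here (weights under a `"state_dict"` key alongside a `"step"` counter) is an assumption, not a documented fact about the RING checkpoint.

```python
import torch

# Assumed layout for illustration only; the real RING checkpoint may differ.
dummy = {"state_dict": {"encoder.weight": torch.zeros(4, 4)}, "step": 40356421}
torch.save(dummy, "dummy.ckpt")

# map_location="cpu" lets you inspect the file without a GPU.
ckpt = torch.load("dummy.ckpt", map_location="cpu")
print(sorted(ckpt.keys()))
print({k: tuple(v.shape) for k, v in ckpt["state_dict"].items()})
```

Printing the keys and parameter shapes this way is a quick sanity check that the file downloaded intact before wiring it into a model.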