---
license: cc-by-4.0
task_categories:
- other
arxiv: 2601.05573
datasets:
- Viglong/Hunyuan3D-FLUX-Gen
space: Viglong/Orient-Anything-V2
model: Viglong/OriAnyV2_ckpt
---
<div align="center">
<h1>[NeurIPS 2025 Spotlight]<br>
Orient Anything V2: Unifying Orientation and Rotation Understanding</h1>
[**Zehan Wang**](https://scholar.google.com/citations?user=euXK0lkAAAAJ)<sup>1*</sup> · [**Ziang Zhang**](https://scholar.google.com/citations?hl=zh-CN&user=DptGMnYAAAAJ)<sup>1*</sup> · [**Jialei Wang**](https://scholar.google.com/citations?hl=en&user=OIuFz1gAAAAJ)<sup>1</sup> · [**Jiayang Xu**](https://github.com/1339354001)<sup>1</sup> · [**Tianyu Pang**](https://scholar.google.com/citations?hl=zh-CN&user=wYDbtFsAAAAJ)<sup>2</sup> · [**Chao Du**](https://scholar.google.com/citations?hl=zh-CN&user=QOp7xW0AAAAJ)<sup>2</sup> · [**Hengshuang Zhao**](https://scholar.google.com/citations?user=4uE10I0AAAAJ&hl&oi=ao)<sup>3</sup> · [**Zhou Zhao**](https://scholar.google.com/citations?user=IIoFY90AAAAJ&hl&oi=ao)<sup>1</sup>
<sup>1</sup>Zhejiang University    <sup>2</sup>SEA AI Lab    <sup>3</sup>HKU
*Equal Contribution
<a href='https://huggingface.co/papers/2601.05573'><img src='https://img.shields.io/badge/arXiv-PDF-red' alt='Paper PDF'></a>
<a href='https://orient-anythingv2.github.io'><img src='https://img.shields.io/badge/Project_Page-OriAnyV2-green' alt='Project Page'></a>
<a href='https://github.com/SpatialVision/Orient-Anything-V2'><img src='https://img.shields.io/badge/GitHub-Code-black' alt='GitHub Code'></a>
<a href='https://huggingface.co/spaces/Viglong/Orient-Anything-V2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
<a href='https://huggingface.co/datasets/Viglong/OriAnyV2_Train_Render'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Train Data-orange'></a>
<a href='https://huggingface.co/datasets/Viglong/OriAnyV2_Inference'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Test Data-orange'></a>
<a href='https://huggingface.co/papers/2601.05573'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Paper-yellow'></a>
</div>
**Orient Anything V2** is a unified spatial vision model for understanding object orientation, symmetry, and relative rotation. It achieves state-of-the-art (SOTA) performance across 14 datasets.
## News
* **2025-10-24:** 🔥[Paper](https://huggingface.co/papers/2601.05573), [Project Page](https://orient-anythingv2.github.io), [Code](https://github.com/SpatialVision/Orient-Anything-V2), [Model Checkpoint](https://huggingface.co/Viglong/OriAnyV2_ckpt/blob/main/demo_ckpts/rotmod_realrotaug_best.pt), and [Demo](https://huggingface.co/spaces/Viglong/Orient-Anything-V2) have been released!
* **2025-09-18:** 🔥Orient Anything V2 has been accepted as a Spotlight @ NeurIPS 2025!
## Pre-trained Model Weights
We provide pre-trained model weights and are continuously iterating on them to support more inference scenarios:
| Model | Params | Checkpoint |
|:-|-:|:-:|
| Orient-Anything-V2 | 5.05 GB | [Download](https://huggingface.co/Viglong/OriAnyV2_ckpt/blob/main/demo_ckpts/rotmod_realrotaug_best.pt) |
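If you prefer to fetch the checkpoint ahead of time (e.g., for offline use), it can also be downloaded with `huggingface_hub`. Below is a minimal sketch using the repo and file name from the table above; `./checkpoints` is an arbitrary target directory:
```python
# Download the released checkpoint from the Hub into ./checkpoints.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="Viglong/OriAnyV2_ckpt",
    filename="demo_ckpts/rotmod_realrotaug_best.pt",
    local_dir="./checkpoints",
)
print("Checkpoint saved to:", ckpt_path)
```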
## Quick Start
### 1 Dependency Installation
```shell
conda create -n orianyv2 python=3.11
conda activate orianyv2
pip install -r requirements.txt
```
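As a quick sanity check that PyTorch was installed and can see your GPU (the demo falls back to CPU if no CUDA device is found):
```python
# Verify the environment: print the PyTorch version and whether a CUDA GPU is visible.
import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```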
### 2 Gradio App
Start the Gradio app by running the following script:
```bash
python app.py
```
Then open the GUI page (default: http://127.0.0.1:7860) in a web browser.
Alternatively, you can try it in our [Hugging Face Space](https://huggingface.co/spaces/Viglong/Orient-Anything-V2).
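If the default port is already in use, Gradio lets you pick another host/port at launch time. How `app.py` launches its interface is up to the repository; the snippet below is only a minimal, self-contained illustration of the relevant `launch()` arguments, not code from this repo:
```python
# Minimal standalone Gradio example (not from app.py): bind to a custom host/port.
import gradio as gr

def echo(text: str) -> str:
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
demo.launch(server_name="127.0.0.1", server_port=7861)  # default port is 7860
```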
### 3 Python Scripts
```python
import os
import torch
from PIL import Image

from paths import *
from vision_tower import VGGT_OriAny_Ref
from inference import *
from app_utils import *

# Use bfloat16 on GPUs with compute capability >= 8 (Ampere or newer), float16 otherwise.
if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8:
    mark_dtype = torch.bfloat16
else:
    mark_dtype = torch.float16
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the checkpoint from a local path if available, otherwise download it from the Hub.
if os.path.exists(LOCAL_CKPT_PATH):
    ckpt_path = LOCAL_CKPT_PATH
else:
    from huggingface_hub import hf_hub_download
    ckpt_path = hf_hub_download(repo_id="Viglong/Orient-Anything-V2", filename=HF_CKPT_PATH,
                                repo_type="model", cache_dir='./', resume_download=True)

model = VGGT_OriAny_Ref(out_dim=900, dtype=mark_dtype, nopretrain=True)
model.load_state_dict(torch.load(ckpt_path, map_location='cpu'))
model.eval()
model = model.to(device)
print('Model loaded.')

@torch.no_grad()
def run_inference(pil_ref, pil_tgt=None, do_rm_bkg=True):
    # Optionally remove the background of the reference (and target) image.
    if do_rm_bkg:
        pil_ref = background_preprocess(pil_ref, True)
        if pil_tgt is not None:
            pil_tgt = background_preprocess(pil_tgt, True)

    try:
        ans_dict = inf_single_case(model, pil_ref, pil_tgt)
    except Exception as e:
        raise RuntimeError(f"Inference failed: {e}") from e

    def safe_float(val, default=0.0):
        try:
            return float(val)
        except (TypeError, ValueError):
            return float(default)

    # Absolute orientation of the reference image.
    az = safe_float(ans_dict.get('ref_az_pred', 0))
    el = safe_float(ans_dict.get('ref_el_pred', 0))
    ro = safe_float(ans_dict.get('ref_ro_pred', 0))
    alpha = int(ans_dict.get('ref_alpha_pred', 1))
    print("Absolute Pose: Azi", az, "Ele", el, "Rot", ro, "Alpha", alpha)

    # Relative rotation between the reference and target views.
    if pil_tgt is not None:
        rel_az = safe_float(ans_dict.get('rel_az_pred', 0))
        rel_el = safe_float(ans_dict.get('rel_el_pred', 0))
        rel_ro = safe_float(ans_dict.get('rel_ro_pred', 0))
        print("Relative Pose: Azi", rel_az, "Ele", rel_el, "Rot", rel_ro)
    return ans_dict

image_ref_path = 'assets/examples/F35-0.jpg'
image_tgt_path = 'assets/examples/F35-1.jpg'  # optional target view
image_ref = Image.open(image_ref_path).convert('RGB')
image_tgt = Image.open(image_tgt_path).convert('RGB')
run_inference(image_ref, image_tgt, True)
```
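Passing only a reference image predicts its absolute orientation; adding a second image additionally predicts the relative rotation between the two views. Continuing from the script above:
```python
# Absolute orientation only: omit the target image.
run_inference(image_ref, None, True)
```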
## Evaluate Orient-Anything-V2
### Data Preparation
Download the absolute orientation, relative rotation, and symm-orientation test datasets from [Huggingface Dataset](https://huggingface.co/datasets/Viglong/OriAnyV2_Inference).
```shell
# set mirror endpoint to accelerate
# export HF_ENDPOINT='https://hf-mirror.com'
huggingface-cli download --repo-type dataset Viglong/OriAnyV2_Inference --local-dir OriAnyV2_Inference
```
Use the following command to extract the dataset:
```shell
cd OriAnyV2_Inference
for f in *.tar.gz; do
    tar -xzf "$f"
done
```
Modify `DATA_ROOT` in `paths.py` to point to the dataset root directory (`/path/to/OriAnyV2_Inference`).
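For example, the relevant line in `paths.py` would look like this (the exact layout of `paths.py` may differ; only `DATA_ROOT` needs to change):
```python
# paths.py (excerpt): point DATA_ROOT at the extracted test data.
DATA_ROOT = '/path/to/OriAnyV2_Inference'
```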
### Evaluate with PyTorch Lightning
To evaluate on the test datasets, run:
```shell
python eval_on_dataset.py
```
## Train Orient-Anything-V2
We use `FLUX.1-dev` and `Hunyuan3D-2.0` to generate our training data and render it with Blender. We provide the fully rendered data, which you can obtain from the link below.
[Hunyuan3D-FLUX-Gen](https://huggingface.co/datasets/Viglong/Hunyuan3D-FLUX-Gen)
To store all this data, we recommend having at least **2TB** of free disk space on your server.
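As a quick sanity check before downloading, you can confirm the target disk has enough room (a small sketch; replace the path with wherever you plan to store the data):
```python
# Check free disk space at the intended download location (needs >= 2 TB).
import shutil

free_tb = shutil.disk_usage('/path/to/storage').free / 1e12  # hypothetical storage path
print(f"Free space: {free_tb:.2f} TB")
```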
We are currently organizing the complete **data construction pipeline** and **training code** for Orient-Anything-V2 — stay tuned.
## Acknowledgement
We would like to express our sincere gratitude to the following excellent works:
- [VGGT](https://github.com/facebookresearch/vggt)
- [FLUX](https://github.com/black-forest-labs/flux)
- [Hunyuan3D-2.0](https://github.com/Tencent-Hunyuan/Hunyuan3D-2)
- [Blender](https://github.com/blender/blender)
- [rembg](https://github.com/danielgatis/rembg)
## Citation
If you find this project useful, please consider citing:
```bibtex
@inproceedings{wangorient,
  title={Orient Anything V2: Unifying Orientation and Rotation Understanding},
  author={Wang, Zehan and Zhang, Ziang and Xu, Jiayang and Wang, Jialei and Pang, Tianyu and Du, Chao and Zhao, Hengshuang and Zhao, Zhou},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
```