Add task categories and update citation
#1 by nielsr (HF Staff) · opened
README.md CHANGED
```diff
@@ -1,8 +1,10 @@
 ---
 license: cc-by-4.0
+task_categories:
+- other
+arxiv: 2601.05573
 datasets:
-
-papers:
+- Viglong/Hunyuan3D-FLUX-Gen
 space: Viglong/Orient-Anything-V2
 model: Viglong/OriAnyV2_ckpt
 ---
```
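The front-matter hunk is the substance of this PR: it adds `task_categories`, records the arXiv ID, and lists the training dataset. To sanity-check the merged metadata by hand, the sketch below parses it with PyYAML; this is an illustration, not part of the diff, and it assumes PyYAML is installed. Note that the bare `arxiv: 2601.05573` value parses as a YAML float, not a string.

```python
import yaml  # PyYAML, assumed installed: pip install pyyaml

# Card front matter as it reads after this PR's first hunk.
front_matter = """\
license: cc-by-4.0
task_categories:
- other
arxiv: 2601.05573
datasets:
- Viglong/Hunyuan3D-FLUX-Gen
space: Viglong/Orient-Anything-V2
model: Viglong/OriAnyV2_ckpt
"""

meta = yaml.safe_load(front_matter)
assert meta["task_categories"] == ["other"]
assert meta["datasets"] == ["Viglong/Hunyuan3D-FLUX-Gen"]
print(type(meta["arxiv"]).__name__)  # 'float' -- quote the value if a string is required
```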
```diff
@@ -18,12 +20,13 @@ Orient Anything V2: Unifying Orientation and Rotation Understanding</h1>
 *Equal Contribution
 
 
-<a href='https://
+<a href='https://huggingface.co/papers/2601.05573'><img src='https://img.shields.io/badge/arXiv-PDF-red' alt='Paper PDF'></a>
 <a href='https://orient-anythingv2.github.io'><img src='https://img.shields.io/badge/Project_Page-OriAnyV2-green' alt='Project Page'></a>
+<a href='https://github.com/SpatialVision/Orient-Anything-V2'><img src='https://img.shields.io/badge/GitHub-Code-black' alt='GitHub Code'></a>
 <a href='https://huggingface.co/spaces/Viglong/Orient-Anything-V2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
 <a href='https://huggingface.co/datasets/Viglong/OriAnyV2_Train_Render'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Train Data-orange'></a>
 <a href='https://huggingface.co/datasets/Viglong/OriAnyV2_Inference'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Test Data-orange'></a>
-<a href='https://huggingface.co/papers/
+<a href='https://huggingface.co/papers/2601.05573'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Paper-yellow'></a>
 </div>
 
 **Orient Anything V2**, a unified spatial vision model for understanding orientation, symmetry, and relative rotation, achieves SOTA performance across 14 datasets.
```
```diff
@@ -31,7 +34,7 @@ Orient Anything V2: Unifying Orientation and Rotation Understanding</h1>
 <!--  -->
 
 ## News
-* **2025-10-24:** 🔥[Paper](https://
+* **2025-10-24:** 🔥[Paper](https://huggingface.co/papers/2601.05573), [Project Page](https://orient-anythingv2.github.io), [Code](https://github.com/SpatialVision/Orient-Anything-V2), [Model Checkpoint](https://huggingface.co/Viglong/OriAnyV2_ckpt/blob/main/demo_ckpts/rotmod_realrotaug_best.pt), and [Demo](https://huggingface.co/spaces/Viglong/Orient-Anything-V2) have been released!
 
 * **2025-09-18:** 🔥Orient Anything V2 has been accepted as a Spotlight @ NeurIPS 2025!
 
```
```diff
@@ -108,7 +111,7 @@ def run_inference(pil_ref, pil_tgt=None, do_rm_bkg=True):
         ans_dict = inf_single_case(model, pil_ref, pil_tgt)
     except Exception as e:
         print("Inference error:", e)
-        raise gr.Error(f"Inference failed: {str(e)}")
+        raise gr.Error(f"Inference failed: {str(e)}")
 
     def safe_float(val, default=0.0):
         try:
```
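The `safe_float` helper is cut off at the hunk boundary above. A minimal completion, consistent with its signature but an assumption rather than the repo's exact body:

```python
def safe_float(val, default=0.0):
    # Coerce a prediction to float, falling back to `default`
    # when the value is missing or non-numeric.
    try:
        return float(val)
    except (TypeError, ValueError):
        return default
```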
```diff
@@ -126,7 +129,7 @@ def run_inference(pil_ref, pil_tgt=None, do_rm_bkg=True):
     rel_el = safe_float(ans_dict.get('rel_el_pred', 0))
     rel_ro = safe_float(ans_dict.get('rel_ro_pred', 0))
 
-    print("Relative Pose: Azi",rel_az
+    print("Relative Pose: Azi", rel_az, "Ele", rel_el, "Rot", rel_ro)
 
 image_ref_path = 'assets/examples/F35-0.jpg'
 image_tgt_path = 'assets/examples/F35-1.jpg' # optional
```
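With the quote escaping repaired, the logging line is valid Python again. For context, a hedged sketch of how these fragments fit together; `model`, `inf_single_case`, and `safe_float` come from the repo's inference code, and the `rel_az_pred` key is inferred by analogy with the two keys visible in the hunk:

```python
from PIL import Image

image_ref_path = 'assets/examples/F35-0.jpg'
image_tgt_path = 'assets/examples/F35-1.jpg'  # optional second view

pil_ref = Image.open(image_ref_path).convert('RGB')
pil_tgt = Image.open(image_tgt_path).convert('RGB')

# model and inf_single_case are defined in the repo's inference code.
ans_dict = inf_single_case(model, pil_ref, pil_tgt)

rel_az = safe_float(ans_dict.get('rel_az_pred', 0))  # key name assumed by analogy
rel_el = safe_float(ans_dict.get('rel_el_pred', 0))
rel_ro = safe_float(ans_dict.get('rel_ro_pred', 0))
print("Relative Pose: Azi", rel_az, "Ele", rel_el, "Rot", rel_ro)
```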
````diff
@@ -189,5 +192,9 @@ We would like to express our sincere gratitude to the following excellent works:
 If you find this project useful, please consider citing:
 
 ```bibtex
-
-
+@inproceedings{wangorient,
+  title={Orient Anything V2: Unifying Orientation and Rotation Understanding},
+  author={Wang, Zehan and Zhang, Ziang and Xu, Jiayang and Wang, Jialei and Pang, Tianyu and Du, Chao and Zhao, Hengshuang and Zhao, Zhou},
+  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems}
+}
+```
````