---
license: cc-by-4.0
task_categories:
- other
tags:
- 3d
- computer-vision
- orientation-estimation
---
# Orient Anything V2 Dataset
Project Page | Paper | GitHub
Orient Anything V2 is an enhanced foundation model for unified understanding of object 3D orientation and rotation from single or paired images. This repository contains the training data (the final rendering data) used to train the model.
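To fetch the rendering data locally, a minimal sketch using `huggingface_hub.snapshot_download` is shown below. Note that the `repo_id` is a placeholder assumption, not confirmed by this card: replace it with this dataset's actual Hub id.

```python
# Minimal sketch: download the rendering data with huggingface_hub.
# ASSUMPTION: the repo_id below is a placeholder -- replace it with the
# actual Hub id of this dataset repository.
from huggingface_hub import snapshot_download

data_dir = snapshot_download(
    repo_id="Viglong/Orient-Anything-V2-Data",  # placeholder id
    repo_type="dataset",
    local_dir="./orient_anything_v2_data",
)
print("Dataset downloaded to:", data_dir)
```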
## Sample Usage
Below is a snippet that runs inference with the model, following the data logic in the official GitHub repository:
```python
import os

import torch
from PIL import Image

# Helper modules from the official GitHub repository.
from paths import *
from vision_tower import VGGT_OriAny_Ref
from inference import *
from app_utils import *

# Prefer bfloat16 on GPUs with compute capability >= 8, otherwise fall back to float16.
if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8:
    mark_dtype = torch.bfloat16
else:
    mark_dtype = torch.float16
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Use a local checkpoint if available, otherwise download it from the Hub.
if os.path.exists(LOCAL_CKPT_PATH):
    ckpt_path = LOCAL_CKPT_PATH
else:
    from huggingface_hub import hf_hub_download
    ckpt_path = hf_hub_download(
        repo_id="Viglong/Orient-Anything-V2",
        filename=HF_CKPT_PATH,
        repo_type="model",
        cache_dir='./',
        resume_download=True,
    )

model = VGGT_OriAny_Ref(out_dim=900, dtype=mark_dtype, nopretrain=True)
model.load_state_dict(torch.load(ckpt_path, map_location='cpu'))
model.eval()
model = model.to(device)
print('Model loaded.')

@torch.no_grad()
def run_inference(pil_ref, pil_tgt=None, do_rm_bkg=True):
    # Optionally remove the background from the input image(s).
    if do_rm_bkg:
        pil_ref = background_preprocess(pil_ref, True)
        if pil_tgt is not None:
            pil_tgt = background_preprocess(pil_tgt, True)

    try:
        ans_dict = inf_single_case(model, pil_ref, pil_tgt)
    except Exception as e:
        print("Inference error:", e)
        raise RuntimeError(f"Inference failed: {e}") from e

    def safe_float(val, default=0.0):
        try:
            return float(val)
        except (TypeError, ValueError):
            return float(default)

    # Absolute orientation of the reference image.
    az = safe_float(ans_dict.get('ref_az_pred', 0))
    el = safe_float(ans_dict.get('ref_el_pred', 0))
    ro = safe_float(ans_dict.get('ref_ro_pred', 0))
    alpha = int(ans_dict.get('ref_alpha_pred', 1))
    print("Reference Pose: Azi", az, "Ele", el, "Rot", ro, "Alpha", alpha)

    # Relative pose between reference and target, if a target image was given.
    if pil_tgt is not None:
        rel_az = safe_float(ans_dict.get('rel_az_pred', 0))
        rel_el = safe_float(ans_dict.get('rel_el_pred', 0))
        rel_ro = safe_float(ans_dict.get('rel_ro_pred', 0))
        print("Relative Pose: Azi", rel_az, "Ele", rel_el, "Rot", rel_ro)

image_ref_path = 'assets/examples/F35-0.jpg'
image_tgt_path = 'assets/examples/F35-1.jpg'  # optional
image_ref = Image.open(image_ref_path).convert('RGB')
image_tgt = Image.open(image_tgt_path).convert('RGB')
run_inference(image_ref, image_tgt, True)
```
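For single-image orientation estimation, you can also call `run_inference(image_ref, None, True)`, which skips the relative-pose output.

A common downstream step is converting the predicted angles into a rotation matrix. The sketch below is a minimal example under assumed conventions: it treats the angles as degrees and composes azimuth, elevation, and in-plane roll in Z-Y-X order. `angles_to_rotation_matrix` is a hypothetical helper, not part of the official codebase; check the GitHub repository for the exact angle convention used by Orient Anything V2.

```python
# Hypothetical post-processing sketch: turn predicted (azimuth, elevation,
# roll) angles into a 3x3 rotation matrix. The degree units, axis
# assignment, and Z-Y-X composition order are ASSUMPTIONS, not the
# official convention.
import numpy as np

def angles_to_rotation_matrix(az_deg, el_deg, ro_deg):
    az, el, ro = np.deg2rad([az_deg, el_deg, ro_deg])
    # Rotation about the vertical axis (azimuth).
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    # Rotation about the lateral axis (elevation).
    Ry = np.array([[ np.cos(el), 0, np.sin(el)],
                   [0, 1, 0],
                   [-np.sin(el), 0, np.cos(el)]])
    # In-plane rotation (roll).
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ro), -np.sin(ro)],
                   [0, np.sin(ro),  np.cos(ro)]])
    return Rz @ Ry @ Rx

R = angles_to_rotation_matrix(30.0, 10.0, 0.0)
print(R)
```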
## Citation
If you find this project useful, please consider citing:
```bibtex
@inproceedings{wangorient,
  title={Orient Anything V2: Unifying Orientation and Rotation Understanding},
  author={Wang, Zehan and Zhang, Ziang and Xu, Jiayang and Wang, Jialei and Pang, Tianyu and Du, Chao and Zhao, Hengshuang and Zhao, Zhou},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems}
}
```