Improve model card with paper info, links, and sample usage

#1
by nielsr HF Staff - opened
Files changed (1): README.md (+67 -4)

README.md (before):

---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
pipeline_tag: image-to-3d
---

This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]

README.md (after):
---
pipeline_tag: image-to-3d
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: cc-by-nc-4.0
---

# $\pi^3$: Permutation-Equivariant Visual Geometry Learning

This repository contains the weights for **Pi3X**, an enhanced version of the $\pi^3$ model introduced in the paper [$\pi^3$: Permutation-Equivariant Visual Geometry Learning](https://huggingface.co/papers/2507.13347).

$\pi^3$ is a feed-forward neural network for visual geometry reconstruction that eliminates the need for a fixed reference view. It employs a fully permutation-equivariant architecture to predict affine-invariant camera poses and scale-invariant local point maps from an unordered set of images, making it robust to input ordering and achieving state-of-the-art performance.
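As a quick intuition for the permutation-equivariance property described above, here is a minimal, self-contained sketch in plain Python. It uses a toy set-to-set map (not the Pi3 architecture): a shared per-element transform combined with a symmetric (mean-pooled) context, which is exactly the kind of construction that makes reordering the inputs reorder the outputs identically.

```python
# Toy illustration of permutation equivariance (NOT the Pi3 architecture):
# a shared per-element transform plus an order-invariant pooled context.
# Permuting the inputs permutes the outputs in exactly the same way.

def equivariant_layer(xs):
    """Map each element x_i to a function of (x_i, mean(xs))."""
    ctx = sum(xs) / len(xs)                     # symmetric aggregation over the set
    return [2.0 * x + 0.5 * ctx for x in xs]    # same weights for every element

views = [1.0, 3.0, 2.0]                         # stand-ins for per-view features
perm = [2, 0, 1]                                # an arbitrary reordering of the views

out_then_perm = [equivariant_layer(views)[i] for i in perm]
perm_then_out = equivariant_layer([views[i] for i in perm])
assert out_then_perm == perm_then_out           # equivariance holds exactly
```

Because the aggregation is symmetric, no view is privileged as a reference; this is the property that lets $\pi^3$ consume an unordered set of images.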

- **Project Page:** [yyfz.github.io/pi3/](https://yyfz.github.io/pi3/)
- **GitHub Repository:** [github.com/yyfz/Pi3](https://github.com/yyfz/Pi3)
- **Demo:** [Hugging Face Space](https://huggingface.co/spaces/yyfz233/Pi3)

## Pi3X Engineering Update

Pi3X is an enhanced version of $\pi^3$ that focuses on flexibility and reconstruction quality:

* **Smoother Reconstruction:** uses a convolutional head to reduce grid-like artifacts.
* **Flexible Conditioning:** supports optional injection of camera poses, intrinsics, and depth.
* **Reliable Confidence:** predicts continuous quality levels for better noise filtering.
* **Metric Scale:** supports approximately metric-scale reconstruction.
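Continuous confidence values are typically used to mask out unreliable points before downstream steps such as meshing or registration. The sketch below shows the idea on toy lists; the actual Pi3X output names and a sensible threshold are assumptions to adapt to the real API.

```python
# Hedged sketch: drop low-confidence points before downstream use.
# The point/confidence pairing and the 0.5 threshold are illustrative
# assumptions, not documented Pi3X behavior.

def filter_by_confidence(points, conf, thresh=0.5):
    """Keep only points whose predicted confidence exceeds thresh."""
    return [p for p, c in zip(points, conf) if c > thresh]

pts  = [(0.0, 0.0, 1.0), (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
conf = [0.9, 0.2, 0.7]                      # toy per-point quality levels
kept = filter_by_confidence(pts, conf)      # the second point is discarded
```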

## Sample Usage

To use this model, clone the [official repository](https://github.com/yyfz/Pi3) and install its dependencies.

```python
import torch
from pi3.models.pi3x import Pi3X  # new version (recommended)
from pi3.utils.basic import load_images_as_tensor

# --- Setup ---
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Pi3X.from_pretrained("yyfz233/Pi3X").to(device).eval()

# --- Load Data ---
# Load a sequence of N images into a tensor of shape (N, 3, H, W),
# with pixel values in the range [0, 1].
imgs = load_images_as_tensor('path/to/your/data', interval=10).to(device)

# --- Inference ---
print("Running model inference...")
# Use mixed precision for better performance on compatible GPUs
# (bfloat16 on compute capability >= 8, float16 otherwise).
dtype = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else torch.float16

with torch.no_grad():
    with torch.amp.autocast('cuda', dtype=dtype):
        # Add a batch dimension -> (1, N, 3, H, W)
        results = model(imgs[None])

print("Reconstruction complete!")
# Access outputs: results['points'], results['camera_poses'], and results['local_points'].
```
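For quick visual inspection, the predicted points can be dumped to a standard point-cloud format and opened in a viewer such as MeshLab or CloudCompare. Below is a minimal ASCII PLY writer; flattening `results['points']` with `reshape(-1, 3)` is an assumption about the output shape, not documented behavior.

```python
# Minimal ASCII PLY writer for viewing predictions in an external viewer.
# How results['points'] flattens into (x, y, z) rows is an assumption.

def write_ply(path, points):
    """points: iterable of (x, y, z) floats."""
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# e.g. write_ply("scene.ply", results['points'].reshape(-1, 3).cpu().tolist())
write_ply("scene.ply", [(0.0, 0.0, 0.0), (1.0, 0.5, 0.25)])
```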

## Citation

If you find this work useful, please consider citing:

```bibtex
@article{wang2025pi,
  title={$\pi^3$: Permutation-Equivariant Visual Geometry Learning},
  author={Wang, Yifan and Zhou, Jianjun and Zhu, Haoyi and Chang, Wenzheng and Zhou, Yang and Li, Zizun and Chen, Junyi and Pang, Jiangmiao and Shen, Chunhua and He, Tong},
  journal={arXiv preprint arXiv:2507.13347},
  year={2025}
}
```

## License

- **Code:** BSD 3-Clause
- **Model Weights:** [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (strictly non-commercial)