---
title: PRIMA Demo
emoji: 🦮
colorFrom: blue
colorTo: green
sdk: gradio
python_version: '3.10'
app_file: app.py
startup_duration_timeout: 60m
---

# PRIMA: Boosting Animal Mesh Recovery with Biological Priors and Test-Time Adaptation

This is the official implementation of the approach described in the preprint:

PRIMA: Boosting Animal Mesh Recovery with Biological Priors and Test-Time Adaptation
Xiaohang Yu, Ti Wang, Mackenzie Weygandt Mathis

![PRIMA teaser](images/teaser.png)


## 🚀 TL;DR

PRIMA creates a 3D quadruped mesh from a single 2D image. It leverages BioCLIP-based biological priors for robust cross-species shape understanding, then applies test-time adaptation with 2D reprojection and auxiliary keypoint guidance to refine SMAL pose and shape predictions.

It can further be used to build Quadruped3D, a large-scale pseudo-3D dataset with diverse species and poses.

PRIMA achieves state-of-the-art results on Animal3D, CtrlAni3D, Quadruped2D, and Animal Kingdom datasets.
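The test-time adaptation idea can be sketched with a toy example. This is an illustration only, not the actual PRIMA/SMAL code: PRIMA optimizes SMAL pose and shape parameters against a 2D reprojection loss, while the sketch below refines a single scalar parameter `theta` by gradient descent on a mean-squared reprojection error (all names here are hypothetical).

```python
# Toy sketch of test-time adaptation (TTA): refine a parameter so that
# "projected" keypoints match 2D detections. Pure Python, no SMAL model.

def reprojection_loss(theta, xs, targets):
    """Mean squared error between projected keypoints and 2D targets."""
    preds = [theta * x for x in xs]
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(xs)

def tta_refine(theta, xs, targets, lr=1e-2, num_iters=200):
    """Gradient descent on theta using the analytic gradient of the MSE."""
    for _ in range(num_iters):
        grad = sum(2 * x * (theta * x - t) for x, t in zip(xs, targets)) / len(xs)
        theta -= lr * grad
    return theta

xs = [0.5, 1.0, 1.5, 2.0]        # toy "3D" keypoints
targets = [2.0 * x for x in xs]  # detections consistent with theta = 2.0
theta0 = 0.3                     # initial (network) prediction
theta = tta_refine(theta0, xs, targets)  # converges toward 2.0
```

In the real pipeline the `--tta_lr` and `--tta_num_iters` flags of `demo_tta.py` play the role of `lr` and `num_iters` here.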

## Installation

### Install from PyPI

Recommended: Python 3.10 and a CUDA-enabled PyTorch installation.

```shell
conda create -n prima python=3.10 -y
conda activate prima

# Install PyTorch matching your CUDA (example: CUDA 11.8)
pip install --index-url https://download.pytorch.org/whl/cu118 \
    "torch==2.2.1" "torchvision==0.17.1" "torchaudio==2.2.1"

# Install chumpy and PyTorch3D
python -m pip install --no-build-isolation \
      "git+https://github.com/mattloper/chumpy.git"
python -m pip install --no-build-isolation \
      "git+https://github.com/facebookresearch/pytorch3d.git"

# Install PRIMA from PyPI
pip install prima-animal
```

`prima-animal` includes the demo runtime dependencies used by `demo.py`, `demo_tta.py`, and `app.py` (including Detectron2 and DeepLabCut).

### Clean install from this repository

Use these steps when developing from a git clone (not the PyPI wheel). The shell scripts are non-interactive (`pip` runs with `--no-input`; `GIT_TERMINAL_PROMPT=0` for git). Put Hugging Face credentials in your environment or git credential helper before pushing the Space.

Local install (fresh venv, LFS assets, Hub demo weights, smoke test) requires Python 3.10+ (for Gradio 5.1+ / the Space-provided Gradio 6.x and the type hints in `app.py`). On macOS without `python3.10` on your `PATH`, install it with `brew install python@3.10` and set `PRIMA_PYTHON=/opt/homebrew/bin/python3.10`.

```shell
chmod +x scripts/clean_install_local.sh scripts/clean_redeploy_hf_space.sh scripts/deploy_hf_space.sh
PRIMA_PYTHON=/opt/homebrew/bin/python3.10 ./scripts/clean_install_local.sh
```

Options:

- `PRIMA_VENV=.venv ./scripts/clean_install_local.sh --skip-data`: skip the large `setup_demo_data` download if `data/` is already populated.
- `./scripts/clean_install_local.sh --wipe-data --force-data`: delete downloaded `data/` assets and redownload.
- `./scripts/clean_install_local.sh --no-editable`: install only `requirements.txt` (no `pip install -e .`); use this if the editable install fails and you will install the training stack via conda as in the PyPI section above. You still need Python 3.10+ for Gradio 5.1+. The smoke test sets `PYTHONPATH` to the repo root so `import prima` works without an editable install.
- macOS: the script omits the `deeplabcut` line from `pip install` because DeepLabCut's pinned PyTables version often does not build on Apple Silicon. Use conda/mamba for DeepLabCut if you need SuperAnimal + TTA (`tta_num_iters > 0`). Linux (including Hugging Face Space builds) uses the full `requirements.txt`, including `deeplabcut`.

After `requirements.txt`, the script runs `pip install --no-deps -e .` so the `prima` package is registered without re-resolving `pyproject.toml` (which would pull Detectron2 and DeepLabCut again and often fail on macOS). A full `pip install -e .` from a conda environment, per the PyPI section above, is still recommended if you need every training extra matched exactly.
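The macOS handling described above amounts to a platform-conditional filter over the requirements list. A minimal sketch (a hypothetical helper; the actual install scripts may implement this differently):

```python
import sys

def filter_requirements(lines, platform=None):
    """Drop the deeplabcut requirement on macOS, where its pinned
    PyTables version often fails to build; keep everything on Linux."""
    platform = platform or sys.platform
    if platform == "darwin":
        return [ln for ln in lines if not ln.strip().lower().startswith("deeplabcut")]
    return list(lines)

# Illustrative requirements list, not the repo's actual requirements.txt
reqs = ["torch==2.2.1", "gradio>=5.1", "deeplabcut"]
linux_reqs = filter_requirements(reqs, platform="linux")
macos_reqs = filter_requirements(reqs, platform="darwin")
```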

Hugging Face Space (full redeploy from your working tree):

Requires Git LFS / Xet tooling (`brew install git-lfs git-xet`, `git xet install`, `git lfs install`). Then:

```shell
./scripts/clean_redeploy_hf_space.sh
```

This is equivalent to `./scripts/deploy_hf_space.sh` and force-pushes a fresh snapshot to the Space.


## Demo

### Checkpoints and data

We provide a helper script that downloads the demo models hosted on Hugging Face and places all demo assets in `data/`:

```shell
python scripts/setup_demo_data.py --hf-repo-id MLAdaptiveIntelligence/PRIMA
```

The download from Hugging Face totals roughly 24 GB (`s1ckpt.ckpt` ~10.2 GB + `s3ckpt.ckpt` ~10.2 GB + `amr_vitbb.pth` ~2.5 GB + SMAL files). Expected download time:

- 100 Mbps: ~35-45 minutes
- 300 Mbps: ~12-18 minutes
- 1 Gbps: ~4-8 minutes
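These estimates follow from simple bandwidth arithmetic; the real numbers are a bit higher because of protocol overhead and server throttling. A quick check:

```python
def transfer_minutes(size_gb, mbps):
    """Ideal time in minutes to move size_gb gigabytes at mbps megabits/s.
    1 GB = 8000 megabits (decimal units, as link speeds are quoted)."""
    return size_gb * 8000 / mbps / 60

t100 = transfer_minutes(24, 100)    # 32.0 min ideal; ~35-45 min in practice
t1000 = transfer_minutes(24, 1000)  # 3.2 min ideal; ~4-8 min in practice
```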

Reruns without `--force` skip completed assets and re-download only missing or invalid checkpoints.

Expected files in that Hugging Face repo root:

- `my_smpl_00781_4_all.pkl`
- `my_smpl_data_00781_4_all.pkl`
- `walking_toy_symmetric_pose_prior_with_cov_35parts.pkl`
- `amr_vitbb.pth`
- `config_s1_HYDRA.yaml`
- `config_s3_HYDRA.yaml`
- `s1ckpt.ckpt`
- `s3ckpt.ckpt`
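To sanity-check a download, you can scan `data/` for the filenames above. A minimal sketch (a hypothetical helper, not part of `setup_demo_data.py`; it searches recursively because the assets end up in subdirectories such as `data/PRIMAS1/checkpoints/`):

```python
import tempfile
from pathlib import Path

EXPECTED = [
    "my_smpl_00781_4_all.pkl",
    "my_smpl_data_00781_4_all.pkl",
    "walking_toy_symmetric_pose_prior_with_cov_35parts.pkl",
    "amr_vitbb.pth",
    "config_s1_HYDRA.yaml",
    "config_s3_HYDRA.yaml",
    "s1ckpt.ckpt",
    "s3ckpt.ckpt",
]

def missing_assets(root):
    """Return expected filenames not found anywhere under root."""
    present = {p.name for p in Path(root).rglob("*") if p.is_file()}
    return [name for name in EXPECTED if name not in present]

# Demonstrate on a temporary directory containing only one asset:
demo_root = Path(tempfile.mkdtemp())
(demo_root / "amr_vitbb.pth").touch()
missing = missing_assets(demo_root)  # everything except amr_vitbb.pth
```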

### Demo (without TTA)

Run animal detection + PRIMA 3D pose/shape inference:

```shell
python demo.py \
  --checkpoint data/PRIMAS1/checkpoints/s1ckpt.ckpt \
  --img_folder demo_data/ \
  --out_folder demo_out/
```

Outputs are written to `demo_out/`.


### Demo (with TTA)

`demo_tta.py` runs the same pipeline with test-time adaptation; specify the learning rate and number of TTA iterations:

Example:

```shell
python demo_tta.py \
  --checkpoint data/PRIMAS1/checkpoints/s1ckpt.ckpt \
  --img_folder demo_data/ \
  --out_folder demo_out_tta/ \
  --tta_lr 1e-6 \
  --tta_num_iters 30
```

Outputs are written to `demo_out_tta/` (before/after TTA renders, keypoints, and optional meshes).
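The demo commands above share a small set of flags. A minimal `argparse` sketch of that interface (assumed from the commands shown, not the actual `demo_tta.py`; defaults here are illustrative):

```python
import argparse

def build_parser():
    """Parser mirroring the flags used in the demo commands above.
    Hypothetical: the real scripts may define more options."""
    p = argparse.ArgumentParser(description="PRIMA demo (sketch)")
    p.add_argument("--checkpoint", required=True, help="path to .ckpt file")
    p.add_argument("--img_folder", default="demo_data/", help="input image folder")
    p.add_argument("--out_folder", default="demo_out/", help="output folder")
    p.add_argument("--tta_lr", type=float, default=1e-6, help="TTA learning rate")
    p.add_argument("--tta_num_iters", type=int, default=0, help="0 disables TTA")
    return p

args = build_parser().parse_args(
    ["--checkpoint", "data/PRIMAS1/checkpoints/s1ckpt.ckpt", "--tta_num_iters", "30"]
)
```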


### Gradio demo

We also provide a simple Gradio-based web demo for interactive testing in the browser:

```shell
python app.py \
  --checkpoint data/PRIMAS1/checkpoints/s1ckpt.ckpt \
  --out_folder demo_out_tta_gradio/
```

This starts a local Gradio app (by default at `http://127.0.0.1:7860`) where you can upload images and visualize PRIMA predictions and adaptation results.

### Hugging Face Space (maintainers)

Demo images under `demo_data/` and `images/teaser.png` are tracked with Git LFS (see `.gitattributes`) so they can be pushed to a Hugging Face Space under the Hub's LFS / Xet bridge. Install the tooling once:

```shell
brew install git-lfs git-xet
git xet install
git lfs install
```

Then, from a clean checkout with LFS files present, redeploy the Space (same as `clean_redeploy_hf_space.sh`):

```shell
./scripts/deploy_hf_space.sh
# or
./scripts/clean_redeploy_hf_space.sh
```

The script rsyncs the working tree (rather than using `git archive`) so image files are materialized before `git add` turns them into LFS blobs.


## Training and Evaluation

### Dataset Setup

Download the Animal3D, CtrlAni3D, Quadruped2D, and Animal Kingdom datasets. For Quadruped2D, download the images from SuperAnimal-Quadruped80K and our processed annotations from here. Put all the datasets under `datasets/`.

### Training

Two-stage training script:

```shell
bash train.sh
```

Training outputs are written to `logs/train/runs/<exp_name>/`.

### Evaluation

```shell
python eval.py \
  --config data/PRIMAS1/.hydra/config.yaml \
  --checkpoint data/PRIMAS1/checkpoints/s1ckpt.ckpt
```

Common values for `--dataset` are controlled by:

- `configs_hydra/experiment/default_val.yaml`

## Acknowledgements

This release builds on several open-source projects, including:


## Citation

If you use this code in your research, please cite our PRIMA paper.

```bibtex
@misc{yu_prima,
  title={PRIMA: Boosting Animal Mesh Recovery with Biological Priors and Test-Time Adaptation},
  author={Xiaohang Yu and Ti Wang and Mackenzie Weygandt Mathis},
}
```

## Contact

For issues, please open a GitHub issue in this repository.