nielsr (HF Staff) committed
Commit 304bb40 · verified · Parent: b6c5119

Add project page link, architecture summary, and detailed usage instructions

Hi! I'm Niels from the Hugging Face community science team.

This PR improves the model card for TrajLoom by:
- Adding a link to the official **project page**.
- Updating the **pipeline tag** to `other` as per our documentation standards for trajectory generation.
- Providing a brief summary of the **framework architecture** (VAE, Flow, and Encoding).
- Adding **CLI usage examples** for both VAE reconstruction and trajectory generation, sourced directly from the GitHub repository.

These changes make the repository more discoverable and easier to use for researchers in the field.
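As a side note for anyone scripting the download step: the checkpoints can also be fetched without the CLI. Below is a minimal standard-library sketch, not an official helper — the `resolve/main` URL pattern is the Hub's usual direct-download endpoint, the two filenames are the ones referenced in this card (the VAE checkpoint's filename is not shown there, so it is omitted), and `fetch_checkpoints` is a hypothetical name:

```python
# Sketch: fetch the released TrajLoom checkpoints using only the stdlib.
# Assumptions: the filenames referenced in the model card, and the Hub's
# standard `resolve/main` direct-download URL pattern.
import urllib.request
from pathlib import Path

REPO_ID = "zeweizhang/TrajLoom"
CHECKPOINTS = ["trajloom_generator.pt", "trajloom_visibility.pt"]

def checkpoint_url(filename: str) -> str:
    """Direct-download URL for a file on the repo's main branch."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

def fetch_checkpoints(local_dir: str = "TrajLoom/models") -> list[Path]:
    """Download each checkpoint into local_dir, skipping files already present."""
    out = Path(local_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for name in CHECKPOINTS:
        dest = out / name
        if not dest.exists():
            urllib.request.urlretrieve(checkpoint_url(name), dest)
        paths.append(dest)
    return paths
```

In practice the `hf download` commands shown in the card do the same thing with resumable transfers, so this is only useful where the CLI is unavailable.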

Files changed (1)
  1. README.md +42 -8
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
-license: apache-2.0
 datasets:
 - zeweizhang/TrajLoomDatasets
 language:
 - en
-pipeline_tag: video-to-video
+license: apache-2.0
+pipeline_tag: other
 tags:
 - trajectory
 - flow matching
@@ -12,6 +12,7 @@ tags:
 - motion
 - VAE
 ---
+
 <p align="center">
 <h1 align="center"><em>TrajLoom</em>: Dense Future Trajectory Generation from Video</h1>
 <div align="center">
@@ -30,7 +31,9 @@ tags:
 <strong>Renjie Liao</strong>
 </div>
 <br>
-<div align="center">
+<div align="center")>
+<a href="https://trajloom.github.io/"><img src="https://img.shields.io/badge/Project-Page-green.svg" alt="Project Page"></a>
+&nbsp;
 <a href="https://arxiv.org/abs/2603.22606"><img src="https://img.shields.io/badge/arXiv-Preprint-brightgreen.svg" alt="arXiv Preprint"></a>
 &nbsp;
 <a href="https://github.com/zewei-Zhang/TrajLoom">
@@ -44,7 +47,14 @@ tags:
 </p>
 
 ## Introduction
-TrajLoom is a framework for dense future trajectory generation from video. Given observed video and trajectory history, it predicts future point trajectories and visibility over a long horizon. The released checkpoints include TrajLoom-VAE, TrajLoom-Flow, and the visibility predictor used by the inference pipeline in the GitHub repository.
+TrajLoom is a framework for dense future trajectory generation from video, as described in the paper [TrajLoom: Dense Future Trajectory Generation from Video](https://arxiv.org/abs/2603.22606). Given an observed video and trajectory history, it predicts future point trajectories and visibility over a long horizon (extending the horizon from 24 to 81 frames).
+
+The framework consists of three main components:
+1. **Grid-Anchor Offset Encoding**: Reduces location-dependent bias by representing points as offsets from anchors.
+2. **TrajLoom-VAE**: Learns a compact spatiotemporal latent space for dense trajectories.
+3. **TrajLoom-Flow**: Generates future trajectories in the latent space via flow matching.
+
+The released checkpoints include TrajLoom-VAE, TrajLoom-Flow, and the visibility predictor.
 
 ## Download the model
 ### Option 1: clone the full repository
@@ -68,7 +78,7 @@ hf download zeweizhang/TrajLoom trajloom_visibility.pt --local-dir ./TrajLoom
 ```
 
 ## How to use with the GitHub repo
-Copy the downloaded checkpoints into the `models/` folder of the GitHub repository:
+First, clone the [GitHub repository](https://github.com/zewei-Zhang/TrajLoom) and install the environment. Copy the downloaded checkpoints into the `models/` folder:
 
 ```text
 TrajLoom/
@@ -78,11 +88,35 @@ TrajLoom/
 │ └── trajloom_visibility.pt
 ```
 
-Then follow the inference instructions in the GitHub repo:
-
-- `run_trajloom_generator.py` for future trajectory generation
-- `run_trajloom_vae_recon.py` for VAE reconstruction
+### Future Trajectory Generation
+Run the generator to predict future trajectories from observed history:
+
+```bash
+python run_trajloom_generator.py \
+--gen_config configs/trajloom_generator_config.json \
+--gen_ckpt models/trajloom_generator.pt \
+--vis_config configs/vis_predictor_config.json \
+--vis_ckpt models/trajloom_visibility.pt \
+--video_dir "/path/to/videos/" \
+--video_glob "*.mp4" \
+--gt_dir "/path/to/ground_truth/tracks/" \
+--out_dir "/path/to/output/" \
+--pred_len 81
+```
+
+### VAE Reconstruction
+Use the VAE reconstruction script to verify that your trajectory data and latent statistics are configured correctly:
+
+```bash
+python run_trajloom_vae_recon.py \
+--config configs/trajloom_vae_config.json \
+--video_dir "/path/to/videos/" \
+--video_glob "*.mp4" \
+--gt_dir "/path/to/ground_truth/tracks/" \
+--out_dir "/path/to/output/" \
+--pred_len 81 \
+--save_video
+```
 
 ## Citation
 ```bibtex