TrajLoom: Dense Future Trajectory Generation from Video
More details, code, and future training scripts can be found in the GitHub repository.
Introduction
TrajLoom is a framework for dense future trajectory generation from video, as described in the paper TrajLoom: Dense Future Trajectory Generation from Video. Given an observed video and trajectory history, it predicts future point trajectories and visibility over a long horizon (extending the horizon from 24 to 81 frames).
The framework consists of three main components:
- Grid-Anchor Offset Encoding: Reduces location-dependent bias by representing points as offsets from anchors.
- TrajLoom-VAE: Learns a compact spatiotemporal latent space for dense trajectories.
- TrajLoom-Flow: Generates future trajectories in the latent space via flow matching.
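The grid-anchor idea can be illustrated with a small sketch (a simplification, not the paper's implementation; the function names and grid layout here are assumptions): each point is stored as an offset from its nearest anchor on a regular grid, so the representation depends less on absolute image location and decoding is exact.

```python
import numpy as np

def make_grid_anchors(height, width, stride):
    """Regular grid of anchor centers over the image plane (hypothetical layout)."""
    ys = np.arange(stride / 2, height, stride)
    xs = np.arange(stride / 2, width, stride)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel()], axis=-1)  # (A, 2) as (x, y)

def encode_offsets(points, anchors):
    """Represent each point as (nearest anchor index, offset from that anchor)."""
    dists = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    return idx, points - anchors[idx]

def decode_offsets(idx, offsets, anchors):
    """Invert the encoding: anchor position plus offset."""
    return anchors[idx] + offsets

anchors = make_grid_anchors(256, 256, 32)   # 8x8 anchors at stride 32
pts = np.array([[10.5, 200.0], [130.0, 40.25]])
idx, off = encode_offsets(pts, anchors)
rec = decode_offsets(idx, off, anchors)     # reconstructs pts exactly
```

Because every offset is bounded by half the grid stride, the values the model has to predict live in a small, location-independent range.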
The released checkpoints include TrajLoom-VAE, TrajLoom-Flow, and the visibility predictor.
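TrajLoom-Flow generates latents with flow matching. As background, here is a minimal numpy sketch of the standard linear-interpolation objective and Euler sampling; it is generic flow matching, not the released model or its training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(x1, rng):
    """One training pair: a point on the straight path from noise x0 to data x1
    at a random time t, plus the constant target velocity x1 - x0."""
    x0 = rng.standard_normal(x1.shape)
    t = rng.uniform(size=(x1.shape[0], 1))
    xt = (1.0 - t) * x0 + t * x1
    return t, xt, x1 - x0              # the network regresses x1 - x0 from (xt, t)

def euler_sample(velocity_fn, shape, steps, rng):
    """Generate by integrating a velocity field from noise with Euler steps."""
    x = rng.standard_normal(shape)
    dt = 1.0 / steps
    for i in range(steps):
        t = np.full((shape[0], 1), i * dt)
        x = x + dt * velocity_fn(x, t)
    return x

# Sanity check: with the exact velocity field of a single-point "dataset",
# sampling transports every noise sample onto that point.
target = np.array([2.0, -1.0])
v_exact = lambda x, t: (target - x) / np.maximum(1.0 - t, 1e-6)
samples = euler_sample(v_exact, (8, 2), 100, rng)
```

In TrajLoom the data points are VAE latents of dense trajectories rather than raw coordinates, and the velocity field is a learned network conditioned on the observed video and history.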
Download the model
Option 1: clone the full repository
git lfs install
git clone https://huggingface.co/zeweizhang/TrajLoom
Option 2: download with the `hf` CLI
pip install -U "huggingface_hub[cli]"
hf download zeweizhang/TrajLoom \
--local-dir ./TrajLoom
You can also download only a single checkpoint:
hf download zeweizhang/TrajLoom trajloom_generator.pt --local-dir ./TrajLoom
hf download zeweizhang/TrajLoom trajloom_vae.pt --local-dir ./TrajLoom
hf download zeweizhang/TrajLoom trajloom_visibility.pt --local-dir ./TrajLoom
How to use with the GitHub repo
First, clone the GitHub repository and set up its environment. Then copy the downloaded checkpoints into the models/ folder:
TrajLoom/
└── models/
    ├── trajloom_generator.pt
    ├── trajloom_vae.pt
    └── trajloom_visibility.pt
Future Trajectory Generation
Run the generator to predict future trajectories from observed history:
python run_trajloom_generator.py \
--gen_config configs/trajloom_generator_config.json \
--gen_ckpt models/trajloom_generator.pt \
--vis_config configs/vis_predictor_config.json \
--vis_ckpt models/trajloom_visibility.pt \
--video_dir "/path/to/videos/" \
--video_glob "*.mp4" \
--gt_dir "/path/to/ground_truth/tracks/" \
--out_dir "/path/to/output/" \
--pred_len 81
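The command above writes predicted trajectories and visibility to --out_dir. A common way to score such predictions against ground truth is average displacement error over visible points; the sketch below uses synthetic arrays of an assumed shape (frames, points, xy) — the repo defines TrajLoom's actual output format and evaluation protocol:

```python
import numpy as np

def ade(pred, gt, visibility):
    """Average displacement error over points marked visible (a common
    trajectory metric; not necessarily TrajLoom's exact evaluation)."""
    err = np.linalg.norm(pred - gt, axis=-1)   # (T, N) per-point error
    return float(err[visibility].mean())

# Toy check: a constant 3-4-5 offset on every visible point gives ADE 5.
T, N = 81, 8
gt = np.zeros((T, N, 2))
pred = gt + np.array([3.0, 4.0])
vis = np.ones((T, N), dtype=bool)
score = ade(pred, gt, vis)
```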
VAE Reconstruction
Use the VAE reconstruction script to verify that your trajectory data and latent statistics are configured correctly:
python run_trajloom_vae_recon.py \
--config configs/trajloom_vae_config.json \
--video_dir "/path/to/videos/" \
--video_glob "*.mp4" \
--gt_dir "/path/to/ground_truth/tracks/" \
--out_dir "/path/to/output/" \
--pred_len 81 \
--save_video
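The reconstruction check depends on the latent statistics matching your data. Conceptually they are per-dimension normalization constants applied before encoding and inverted after decoding; a hypothetical sketch (the real statistics ship with the VAE config in the repo):

```python
import numpy as np

# Hypothetical statistics; the actual values come from the VAE config.
stats = {"mean": np.array([0.1, -0.2]), "std": np.array([0.5, 0.8])}

def normalize(z, stats):
    """Standardize coordinates with stored per-dimension statistics."""
    return (z - stats["mean"]) / stats["std"]

def denormalize(z, stats):
    """Invert the standardization after decoding."""
    return z * stats["std"] + stats["mean"]

z = np.random.default_rng(1).standard_normal((81, 4, 2))
round_trip = denormalize(normalize(z, stats), stats)
```

If the statistics are wrong for your data, reconstructions will look systematically shifted or scaled, which is exactly what this script helps catch.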
Citation
@misc{zhang2026trajloomdensefuturetrajectory,
title={TrajLoom: Dense Future Trajectory Generation from Video},
author={Zewei Zhang and Jia Jun Cheng Xian and Kaiwen Liu and Ming Liang and Hang Chu and Jun Chen and Renjie Liao},
year={2026},
eprint={2603.22606},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2603.22606},
}