TrackCraft3R: Repurposing Video Diffusion Transformers for Dense 3D Tracking
Abstract
TrackCraft3R enables efficient dense 3D tracking from monocular video by adapting video diffusion transformers to follow physical points across frames using a dual-latent representation and temporal RoPE alignment.
Dense 3D tracking from monocular video is fundamental to dynamic scene understanding. While recent 3D foundation models provide reliable per-frame geometry, recovering object motion in this geometry remains challenging and benefits from strong motion priors learned from real-world videos. Existing 3D trackers either follow iterative paradigms trained from scratch on synthetic data or fine-tune 3D reconstruction models learned from static multi-view images, both lacking real-world motion priors. Pre-trained video diffusion transformers (video DiTs) offer rich spatio-temporal priors from internet-scale videos, making them a promising foundation for 3D tracking. However, their frame-anchored formulation, which generates each frame's content, is fundamentally mismatched with reference-anchored dense 3D tracking, which must follow the same physical points from a reference frame across time. We present TrackCraft3R, the first method to repurpose a video DiT as a feed-forward dense 3D tracker. Given a monocular video and its frame-anchored reconstruction pointmap, TrackCraft3R predicts a reference-anchored tracking pointmap that follows every pixel of the first frame across time in a single forward pass, along with its visibility. We achieve this through two designs: (i) a dual-latent representation that uses per-frame geometry latents and reference-anchored track latents as dense queries, and (ii) temporal RoPE alignment, which specifies the target timestamp of each track latent. Together, these designs convert the per-frame generative paradigm of video DiTs into a reference-anchored tracking formulation with LoRA fine-tuning. TrackCraft3R achieves state-of-the-art performance on standard sparse and dense 3D tracking benchmarks, while running 1.3× faster and using 4.6× less peak memory than the strongest prior method. We further demonstrate robustness to large motions and long videos.
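The dual-latent and temporal-RoPE-alignment ideas from the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's implementation: `temporal_rope` and `apply_rope` are hypothetical helpers, and the shapes are arbitrary. It shows the key move: track latents are copies of reference-frame queries (reference-anchored content), but each copy is rotated with the temporal position of the *target* frame it must track into, so both latent streams can be concatenated into one joint token sequence for the DiT.

```python
import numpy as np

def temporal_rope(t, dim, base=10000.0):
    """Rotary angles for temporal position t (hypothetical helper)."""
    freqs = base ** (-np.arange(0, dim, 2) / dim)
    angles = t * freqs
    return np.cos(angles), np.sin(angles)

def apply_rope(x, t, base=10000.0):
    """Rotate channel pairs of x by the angle for timestamp t."""
    cos, sin = temporal_rope(t, x.shape[-1], base)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
T, N, D = 4, 8, 16  # frames, tokens per frame, channels (toy sizes)

# Per-frame geometry latents: one set of tokens per video frame.
geometry_latents = rng.standard_normal((T, N, D))
# Track latents: the SAME reference-frame queries replicated for every frame.
reference_queries = rng.standard_normal((N, D))
track_latents = np.tile(reference_queries[None], (T, 1, 1))

# Geometry latents carry their own frame's timestamp; each track-latent copy
# carries the TARGET frame's timestamp, aligning the query with the frame
# whose 3D positions it must predict.
geo_tokens = np.stack([apply_rope(geometry_latents[t], t) for t in range(T)])
trk_tokens = np.stack([apply_rope(track_latents[t], t) for t in range(T)])

# Joint sequence fed to the transformer: (T, 2N, D).
tokens = np.concatenate([geo_tokens, trk_tokens], axis=1)
```

At t = 0 the rotation angle is zero, so the track tokens for the reference frame equal the raw queries; at later timestamps the same content is distinguished purely by its temporal phase, which is what lets attention bind one reference query to many target frames.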