| class_id | transformation | transformation_description | cosine_similarity |
|---|---|---|---|
| 0 | full_reverse | All frames reversed in time | 0.992953 |
| 0 | block_reverse | Each contiguous block of 8 frames reversed | 0.979571 |
| 0 | pingpong | Forward playback followed by backward playback | 0.982422 |
| 0 | full_reverse | All frames reversed in time | 0.988527 |
| 0 | block_reverse | Each contiguous block of 8 frames reversed | 0.984915 |
| 0 | pingpong | Forward playback followed by backward playback | 0.97428 |
| 1 | full_reverse | All frames reversed in time | 0.994178 |
| 1 | block_reverse | Each contiguous block of 8 frames reversed | 0.9851 |
| 1 | pingpong | Forward playback followed by backward playback | 0.993694 |
| 1 | full_reverse | All frames reversed in time | 0.987741 |
| 1 | block_reverse | Each contiguous block of 8 frames reversed | 0.987358 |
| 1 | pingpong | Forward playback followed by backward playback | 0.989486 |
| 2 | full_reverse | All frames reversed in time | 0.993366 |
| 2 | block_reverse | Each contiguous block of 8 frames reversed | 0.97894 |
| 2 | pingpong | Forward playback followed by backward playback | 0.982779 |
| 2 | full_reverse | All frames reversed in time | 0.993763 |
| 2 | block_reverse | Each contiguous block of 8 frames reversed | 0.975143 |
| 2 | pingpong | Forward playback followed by backward playback | 0.975973 |
| 3 | full_reverse | All frames reversed in time | 0.993311 |
| 3 | block_reverse | Each contiguous block of 8 frames reversed | 0.984637 |
| 3 | pingpong | Forward playback followed by backward playback | 0.968591 |
| 3 | full_reverse | All frames reversed in time | 0.983672 |
| 3 | block_reverse | Each contiguous block of 8 frames reversed | 0.976184 |
| 3 | pingpong | Forward playback followed by backward playback | 0.977952 |
| 4 | full_reverse | All frames reversed in time | 0.994205 |
| 4 | block_reverse | Each contiguous block of 8 frames reversed | 0.989642 |
| 4 | pingpong | Forward playback followed by backward playback | 0.989892 |
| 4 | full_reverse | All frames reversed in time | 0.964831 |
| 4 | block_reverse | Each contiguous block of 8 frames reversed | 0.948872 |
| 4 | pingpong | Forward playback followed by backward playback | 0.98221 |
# V-JEPA2 Temporal Order Blind Spots

This dataset documents blind spots in the pretrained video world model facebook/vjepa2-vith-fpc64-256.

The model produces nearly identical embeddings for videos whose temporal order has been severely corrupted, indicating weak sensitivity to temporal directionality and causal motion structure.
## Model Tested

- Model name: facebook/vjepa2-vith-fpc64-256
- Type: Self-supervised video world model (JEPA-style joint embedding predictive architecture)
- Modality: Video
- Model card: https://huggingface.co/facebook/vjepa2-vith-fpc64-256

This is a base pretrained model, not fine-tuned for classification, captioning, or action recognition.
## Evaluation Setup

### Dataset

- Hugging Face ID: nateraw/kinetics-mini
- Source videos: Kinetics-Mini (validation split)
- Each video was transformed to simulate temporal corruption.
### Temporal Transformations Tested
Each original video was transformed in ways that should change its semantic meaning:
| Transformation | Description |
|---|---|
| full_reverse | Entire video reversed in time |
| block_reverse | Each block of 8 frames reversed |
| pingpong | Forward playback followed by backward playback |
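For reference, the three transformations can be sketched as NumPy operations on a raw frame array. This is an illustrative reconstruction, not the dataset's published preprocessing pipeline; the block size of 8 matches the block_reverse description above.

```python
import numpy as np

def full_reverse(frames):
    """Reverse the entire clip along the time axis."""
    return frames[::-1]

def block_reverse(frames, block=8):
    """Reverse each contiguous block of `block` frames."""
    out = frames.copy()
    for start in range(0, len(frames), block):
        out[start:start + block] = frames[start:start + block][::-1]
    return out

def pingpong(frames):
    """Play forward, then backward (skipping the repeated last frame)."""
    return np.concatenate([frames, frames[-2::-1]], axis=0)

# Toy clip: 16 frames of shape (height=2, width=2, channels=3)
clip = np.arange(16 * 2 * 2 * 3).reshape(16, 2, 2, 3)
```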
### Metric
- Cosine similarity between original and transformed video embeddings
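Assuming each video is reduced to a single pooled embedding vector (a plausible setup, not specified on this card), the metric can be computed as:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two pooled embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```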
## Observed Blind Spot

Despite drastic temporal corruption, the model outputs very high cosine similarity:

- Typical similarity range: 0.97–0.995 (one outlier at 0.949)
This indicates that the model:
- Treats time-reversed motion as equivalent
- Fails to encode the direction of time
- Is largely invariant to local temporal order
For many actions (e.g., throwing, opening, jumping), reversing time should change the meaning — but the embeddings remain nearly unchanged.
## Blind Spots Dataset Structure
vjepa2-temporal-order-blindspots/
- metadata.csv: Contains video IDs, applied transformations, and similarity scores.
- videos/: Stores all video files. For each original video, multiple transformed versions are included:
- Original video (*_orig.mp4)
- Fully reversed video (*_full_reverse.mp4)
- Block-wise reversed video, with each block of 8 frames reversed (*_block_reverse.mp4)
- Ping-pong video, which plays forward then backward (*_pingpong.mp4)
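Given the filename suffixes above, each file's transformation can be recovered programmatically. A small helper sketch (the stems in the examples are hypothetical):

```python
import re
from pathlib import Path

# Suffixes used by the files in videos/ (from the structure above).
SUFFIXES = ("orig", "full_reverse", "block_reverse", "pingpong")
PATTERN = re.compile(r"^(?P<stem>.+)_(?P<transform>" + "|".join(SUFFIXES) + r")\.mp4$")

def parse_video_filename(name):
    """Split a video filename into (original stem, transformation name)."""
    match = PATTERN.match(Path(name).name)
    if match is None:
        raise ValueError(f"Unrecognised filename: {name}")
    return match.group("stem"), match.group("transform")
```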
## How to Load the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("Nuntea/vjepa2-temporal-order-blindspots", split="train")

row = dataset[0]
print(row)
print(row["class_id"], row["transformation"], row["cosine_similarity"])

video = row["video"]
```
Each row contains:
- video
- Class label
- Transformation type
- Precomputed cosine similarity with respect to the original video
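As a quick usage sketch, rows can be aggregated into a per-transformation mean similarity; the sample rows below are illustrative stand-ins for real rows returned by load_dataset:

```python
from collections import defaultdict

def mean_similarity_by_transformation(rows):
    """Average cosine_similarity per transformation over dataset rows."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for row in rows:
        sums[row["transformation"]] += row["cosine_similarity"]
        counts[row["transformation"]] += 1
    return {t: sums[t] / counts[t] for t in sums}

# Illustrative stand-in rows with the fields described above.
sample = [
    {"transformation": "full_reverse", "cosine_similarity": 0.9930},
    {"transformation": "full_reverse", "cosine_similarity": 0.9885},
    {"transformation": "pingpong", "cosine_similarity": 0.9824},
]
```

Passing the loaded dataset in place of `sample` gives the per-transformation averages for the full dataset.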
## Metadata Description
| Column | Description |
|---|---|
| file_name | Transformed video filename |
| class_id | Kinetics class ID |
| transformation | Name of transformation |
| transformation_description | Human-readable description |
| cosine_similarity | Similarity to original embedding |
## Expected vs Actual Output

Expected behavior:

- Temporal transformations should shift the embeddings and reduce similarity, especially for full_reverse.
Actual behavior:
- The model produces embeddings nearly invariant to temporal order.
Each row in this dataset represents a failure case where the model’s output does not match the expected semantic distinction.
## Why This Happens (My Hypothesis)
V-JEPA-style training emphasizes:
- Predictive consistency
- Spatial semantics
- Appearance invariance
However, it does not explicitly penalize temporal inversion, leading to:
- Weak causal modeling
- Motion treated as unordered frame sets
- Poor sensitivity to temporal directionality
## How This Could Be Fixed with Fine-Tuning
The blind spot in V-JEPA2 arises because the model treats time-reversed or temporally scrambled videos almost identically. To address this, the model should be fine-tuned with a temporal discrimination objective that explicitly teaches it to recognize the direction and order of motion.
### Recommended Dataset
The dataset should contain paired videos emphasizing temporal structure:
- Forward vs backward: original video and its reversed version
- Scrambled vs coherent: clips with frame order shuffled vs normal
- Causal vs anti-causal: actions that have clear cause-effect order (e.g., "throwing a ball" vs "ball flying into hand")
Example:
| Original | Transformed | Label |
|---|---|---|
| Person throws ball | Ball flies back into hand | Backward |
| Pouring water | Frames shuffled | Scrambled |
| Door opens | Door closes in reverse | Anti-causal |
### Possible Data Sources

Existing video datasets:
- Kinetics / Something-Something (apply synthetic temporal transformations: reverse, ping-pong, block-reverse, shuffle)

Physics-based or procedural motion datasets:
- Synthetic videos of moving objects or simulations
- Procedurally generated motion sequences for causal tasks

Augmentation:
- Temporal jittering, frame dropping, slow-motion / fast-forward to encourage temporal sensitivity
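The augmentations above (temporal jittering, frame dropping, speed change) could be sketched as simple NumPy operations on a frame array; these are illustrative implementations, not a prescribed pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_jitter(frames, max_shift=2):
    """Randomly shift the clip start by up to max_shift frames."""
    shift = rng.integers(0, max_shift + 1)
    return frames[shift:]

def frame_drop(frames, drop_prob=0.1):
    """Randomly drop frames with probability drop_prob (always keep the first)."""
    keep = rng.random(len(frames)) >= drop_prob
    keep[0] = True
    return frames[keep]

def change_speed(frames, factor=2):
    """Fast-forward by an integer factor (take every factor-th frame)."""
    return frames[::factor]
```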
### Training Objective

A potential fine-tuning objective should explicitly reward correct temporal ordering.

Contrastive learning:
- Anchor: original clip
- Positive: same clip in forward order
- Negative: reversed or scrambled clip
- Loss: InfoNCE or a cosine-similarity loss
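The contrastive setup above can be sketched as a single-anchor InfoNCE loss over pooled embeddings. This NumPy version is a minimal illustration of the objective, not a drop-in training implementation:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE over one anchor: positive at index 0, negatives after it."""
    def unit(v):
        v = np.asarray(v, dtype=np.float64)
        return v / np.linalg.norm(v)

    anchor = unit(anchor)
    candidates = np.stack([unit(positive)] + [unit(n) for n in negatives])
    logits = candidates @ anchor / temperature  # scaled cosine similarities
    shifted = logits - logits.max()             # numerically stable softmax
    log_softmax = shifted - np.log(np.exp(shifted).sum())
    return -log_softmax[0]                      # cross-entropy against index 0
```

The loss is near zero when the forward clip's embedding matches the anchor and the reversed/scrambled embeddings are far away, and grows when a corrupted clip is embedded closer to the anchor than the forward one.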
Temporal classification head:
- Predict whether the video is played forward, backward, or scrambled

Direction-aware predictive modeling:
- Predict future frames conditioned on the current sequence (autoregressive temporal modeling)
- Penalize physically impossible (anti-causal) sequences
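Training targets for such objectives can be generated directly from synthetic transformations. A minimal sketch of a forward/backward/scrambled label generator (the label mapping is an assumption, not part of this dataset):

```python
import numpy as np

# Hypothetical class indices for a temporal-order classification head.
LABELS = {"forward": 0, "backward": 1, "scrambled": 2}

def make_order_example(frames, label, rng=None):
    """Return (transformed frames, class index) for a temporal-order head."""
    rng = rng or np.random.default_rng(0)
    if label == "forward":
        return frames, LABELS[label]
    if label == "backward":
        return frames[::-1], LABELS[label]
    if label == "scrambled":
        return frames[rng.permutation(len(frames))], LABELS[label]
    raise ValueError(f"unknown label: {label}")
```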
### Estimated Dataset Size
- Small-scale proof-of-concept: ~5k–10k video pairs
- Full fine-tuning: 20k–50k+ video pairs
- Using strong negative examples (reversed / scrambled / anti-causal) allows smaller datasets to be effective
## Intended Use
This dataset is intended for:
- Model diagnostics
- Video world model evaluation
It is not a classification dataset.