---
license: cc-by-4.0
language:
- en
library_name: transformers
tags:
- video-classification
- self-supervised-learning
- v-jepa
- laboratory-procedures
- domain-adaptation
pipeline_tag: video-classification
base_model:
- facebook/vjepa2-vitl-fpc64-384
---

# Tacit

Tacit is a domain-adapted V-JEPA-2.1 video encoder for laboratory procedure understanding. It is the trained companion to the **LabProc** benchmark, released as part of our NeurIPS 2026 Evaluations & Datasets Track submission.

- **Benchmark dataset**: [`Labproc/labproc`](https://huggingface.co/datasets/Labproc/labproc)
- **Code & evaluation harness**: [`tacit-anon/labproc`](https://github.com/tacit-anon/labproc)
- **License**: CC BY 4.0

## Model Details

- **Architecture**: Vision transformer with 24 layers, 16 attention heads, hidden size 1024, MLP ratio 4. Patch size 16, image size 384×384. Uses RoPE positional encoding with interpolation, supporting variable frame counts at inference (we use 16 frames per clip at evaluation time).
- **Total parameters**: ~300M (full encoder); 37.8M trainable during adaptation (12.4%, last 3 of 24 transformer blocks).
- **Output**: 1024-dimensional clip-level features after mean-pooling across spatiotemporal patch tokens.
- **Base model**: V-JEPA-2.1 ViT-L distilled from ViT-G at 384×384, released by Meta FAIR.
- **Adaptation**: EMA target encoder (τ=0.996) + motion-conditioned masking (ratio 0.75).
- **Released checkpoint**: Epoch 4 of a 7-epoch adaptation run; training loss 0.70.
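
The motion-conditioned masking above is specified in the paper; as a rough illustration only (the function name and the per-patch motion-score input are hypothetical, not the released implementation), masking the 75% of patch tokens with the largest motion scores could be sketched as:

```python
import torch

def motion_weighted_mask(motion: torch.Tensor, ratio: float = 0.75) -> torch.Tensor:
    """Return a boolean mask selecting the `ratio` fraction of patch tokens
    with the largest motion scores (True = masked)."""
    n = motion.numel()
    k = int(round(ratio * n))
    mask = torch.zeros(n, dtype=torch.bool)
    mask[torch.topk(motion.flatten(), k).indices] = True
    return mask

# Hypothetical per-patch motion magnitudes for a 24x24 spatial grid.
motion = torch.rand(576)
mask = motion_weighted_mask(motion)  # 432 of 576 tokens masked
```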

## Intended Use

**Primary intended uses**: Tacit is intended as a calibration target for laboratory video understanding research: producing frozen visual features for laboratory procedure clips, to be used as input to downstream linear or shallow probes. It is also intended for comparison against future video encoders, larger adaptation runs, parameter-matched open-weight VLMs, and v2 PCR/Western-blot benchmark instantiations.

**Out-of-scope uses**:
- Production deployment in laboratory safety, quality assurance, or regulatory compliance settings without substantial additional validation. The Strict Hard accuracy of 66.7% is a research-grade signal, not a deployment-grade reliability target.
- Use on non-laboratory video content. Adaptation on laboratory video specifically reshapes the representation toward this domain.
- Use as a foundation for behavioral or biometric inference from laboratory operator footage.
- Same-State CCR evaluation (within-state temporal ordering). The released checkpoint's adaptation pipeline attenuates within-state temporal coherence.

## Training Details

- **Training data**: laboratory procedure videos collected via the three-stage filtering pipeline described in the LabProc paper, spanning organic purification, polymerase chain reaction, and Western blot procedures. The v1 LabProc benchmark evaluates only the organic purification subset; the adaptation set spans all three branches.
- **Optimizer**: AdamW
- **Learning rate**: 5×10⁻⁶, cosine schedule
- **Weight decay**: 0.01
- **Batch size**: 4
- **Frames per clip (training)**: 64
- **Frames per clip (inference)**: 16
- **Mask ratio**: 0.75
- **EMA momentum**: 0.996
- **Epochs**: 7 (released: epoch 4)
- **Precision**: FP16 mixed-precision
- **Trainable parameters**: 37.8M (last 3 of 24 transformer blocks)
- **Compute**: 28 minutes wall-clock on a single H100 80GB; ~$1.30 in rented compute.
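
The EMA target-encoder update (momentum τ = 0.996) amounts to blending the online weights into a slowly moving copy after each optimizer step. A minimal sketch, assuming two architecturally identical modules (`ema_update` is an illustrative helper, not part of the released code):

```python
import torch

@torch.no_grad()
def ema_update(target: torch.nn.Module, online: torch.nn.Module, tau: float = 0.996) -> None:
    """target <- tau * target + (1 - tau) * online, parameter by parameter."""
    for t, o in zip(target.parameters(), online.parameters()):
        t.mul_(tau).add_(o, alpha=1.0 - tau)

# Usage: after each optimizer step on `online`, drag `target` toward it.
online = torch.nn.Linear(8, 8)
target = torch.nn.Linear(8, 8)
ema_update(target, online)  # target moves 0.4% of the way toward online
```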

## Evaluation

Headline LabProc v1 benchmark results (full table in the paper):

| Task | Random | Base V-JEPA-2.1 | **Tacit (ep4)** | Claude Opus |
|---|---|---|---|---|
| PSC-10 (10-class state) | 10.0 | 16.2 | **31.2** | 72.2 |
| TED visual+text (4-MCQ) | 25.0 | 75.3 | **76.1** | 82.4 |
| CCR pairwise | 50.0 | 43.9 | **58.7** | 67.0 |
| VSD aggregate | 50.0 | 50.2 | **57.8** | 73.9 |
| TED-V Hard | 50.0 | 60.9 | **69.6** | 67.4 |
| **TED-V Strict Hard** | 50.0 | 60.6 | **66.7** | 57.6 |

Tacit leads Claude Opus on the two motion-discrimination subsets (TED-V Hard and Strict Hard), where vision-language models are structurally insufficient.

## How to Use

Install the evaluation harness and use the encoder:

```bash
git clone https://github.com/tacit-anon/labproc
cd labproc
pip install -e .
```

```python
import torch
from labproc_tacit.encoder import build_encoder, load_checkpoint

# Load Tacit checkpoint
encoder = build_encoder(model_name="vit_large", patch_size=16, image_size=384)
load_checkpoint(encoder, "tacit_ep4.pt")  # downloaded from this HF repo
encoder.eval().cuda()

# Encode a clip of shape (B, T=16, C=3, H=384, W=384)
clip = torch.randn(1, 16, 3, 384, 384, device="cuda")  # dummy clip; replace with real frames
with torch.no_grad():
    features = encoder(clip)       # (B, num_tokens, 1024) patch-token features
    pooled = features.mean(dim=1)  # (B, 1024) clip-level embedding
```

See [the GitHub repo](https://github.com/tacit-anon/labproc) for full evaluation scripts and benchmark reproduction.
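
The intended downstream use is shallow probing on frozen features. A minimal linear-probe sketch on pooled embeddings (the feature and label tensors below are random placeholders standing in for encoder outputs and PSC-10 state labels, not released data):

```python
import torch
import torch.nn as nn

# Placeholders: in practice, `feats` holds pooled 1024-d embeddings from the
# frozen Tacit encoder and `labels` holds PSC-10 state annotations.
torch.manual_seed(0)
feats = torch.randn(256, 1024)
labels = torch.randint(0, 10, (256,))

probe = nn.Linear(1024, 10)  # linear probe over frozen features
opt = torch.optim.AdamW(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss_fn(probe(feats), labels).backward()
    opt.step()

train_acc = (probe(feats).argmax(dim=1) == labels).float().mean().item()
```

Only the probe is trained; the encoder stays frozen, which is what makes the benchmark a measure of representation quality rather than fine-tuning capacity.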

## Limitations

- **Domain bias**: heavily skewed toward English-language YouTube laboratory content. The learned features likely carry systematic biases that may not transfer to industrial laboratories, foreign-language workflows, or atypical equipment.
- **Operator bias**: although adaptation downweights operator-specific signal via motion masking, operators are visible in nearly every frame, so residual operator-specific cues may remain in the features.
- **Adaptation-induced trade-off**: Tacit's adaptation attenuates within-state temporal coherence by ~0.14 in τ vs. the V-JEPA-2.1 base. Users requiring both within-state ordering and cross-state recognition will need a different adaptation strategy.
- **Single-annotator ground truth** for PSC, CCR, and VSD-aggregate evaluation labels.
- **Modest adaptation scale** relative to general video pretraining (1M+ hours for V-JEPA-2.1).

See Section 8 ("Limitations") of the paper for the complete discussion.
## Citation
|
| 113 |
+
|
| 114 |
+
```bibtex
|
| 115 |
+
@inproceedings{labproc2026,
|
| 116 |
+
title = {LabProc and Tacit: A Benchmark and Domain-Adapted
|
| 117 |
+
Video Encoder for Laboratory Procedure Understanding},
|
| 118 |
+
author = {Anonymous},
|
| 119 |
+
booktitle = {NeurIPS 2026 Track on Datasets and Benchmarks},
|
| 120 |
+
year = {2026}
|
| 121 |
+
}
|
| 122 |
+
```
|