Create README.md #2
by TTangenty - opened

README.md ADDED
---
license: cc-by-nc-4.0
task_categories:
- video-text-to-text
- visual-question-answering
language: [en]
tags: [video, temporal-grounding, one-to-many, benchmark, evaluation, multimodal, mllm]
pretty_name: OMTG Bench
size_categories: [n<1K]
---

# OMTG Bench: A Benchmark for One-to-Many Temporal Grounding

OMTG Bench is the first comprehensive benchmark tailored for One-to-Many Temporal Grounding, introduced in **"Towards One-to-Many Temporal Grounding"** (ICML 2026, under review).

## Dataset Summary

| Item | Value |
|---|---|
| # Samples | 340 |
| Annotation | Manually curated, expert-verified |
| Consistency | > 90% |
| Source video pools | Charades, ActivityNet, QVHighlights, VTimeLLM, Moment-10M (test only) |
| Overlap with training | None |
| Domains | Sports, cooking, news, etc. |

### Distribution

| Property | Value |
|---|---|
| GT segments per query | 2–20 |
| 2–3 segments | 62.2% |
| > 6 segments | 15% |
| Video duration | 21 s – 17 min (avg 221.6 s) |

## Evaluation Metrics

| Metric | Description |
|---|---|
| tIoU | Average temporal IoU between the unions of predicted and ground-truth segments |
| C-Acc | Count Accuracy (cardinality correctness) |
| tF1@ξ | Temporal F1 at IoU threshold ξ ∈ {0.3, 0.5, 0.7} via optimal bipartite matching |
| **EtF1** | Effective Temporal F1 — strictly penalizes wrong cardinality (primary metric) |

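The metric family above can be sketched in plain Python. This is an illustrative re-implementation, not the official evaluation code: the one-to-one matching is realized as maximum bipartite matching at threshold ξ, and the EtF1 cardinality gate (tF1 zeroed whenever the predicted segment count is wrong) is an assumption about how "strictly penalizes" is implemented.

```python
# Illustrative sketch of the OMTG metrics (not the official evaluator).
def _merge(intervals):
    """Merge overlapping [start, end] intervals into a disjoint union."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return merged

def _overlap(a, b):
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def iou(a, b):
    """Temporal IoU of two [start, end] segments."""
    inter = _overlap(a, b)
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def union_tiou(pred, gt):
    """tIoU between the unions of predicted and ground-truth segments."""
    p, g = _merge(pred), _merge(gt)
    inter = sum(_overlap(a, b) for a in p for b in g)
    total = sum(e - s for s, e in p) + sum(e - s for s, e in g) - inter
    return inter / total if total > 0 else 0.0

def tf1(pred, gt, xi):
    """tF1@xi: maximize one-to-one pairs with IoU >= xi (Kuhn's matching)."""
    match_to = [-1] * len(gt)  # gt index -> matched pred index

    def augment(i, seen):
        for j in range(len(gt)):
            if not seen[j] and iou(pred[i], gt[j]) >= xi:
                seen[j] = True
                if match_to[j] < 0 or augment(match_to[j], seen):
                    match_to[j] = i
                    return True
        return False

    matches = sum(augment(i, [False] * len(gt)) for i in range(len(pred)))
    if not pred or not gt:
        return 0.0
    prec, rec = matches / len(pred), matches / len(gt)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def etf1(pred, gt, xi=0.5):
    """EtF1 sketch (assumption): tF1 gated to 0 on wrong cardinality."""
    return tf1(pred, gt, xi) if len(pred) == len(gt) else 0.0
```

Under this sketch, predicting only one of two ground-truth segments still earns partial tF1 but zero EtF1, which is the behavior the cardinality penalty is meant to enforce.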
## Baseline Results (OMTG Bench)

| Model | C-Acc | tF1@0.3 | tF1@0.5 | tF1@0.7 | tIoU | EtF1 |
|---|---|---|---|---|---|---|
| Qwen2.5-VL-3B | 0.00 | 15.17 | 7.01 | 2.86 | 11.60 | — |
| Gemini-3-Pro | 30.63 | 58.30 | 47.75 | 29.89 | 47.63 | 21.30 |
| Gemini-2.5-Pro | 50.94 | 55.72 | 43.57 | 27.97 | 43.24 | 27.80 |
| Seed-1.8 | 38.12 | 67.13 | 54.67 | 38.79 | 56.81 | 28.04 |
| **OMTG-4B (ours)** | — | — | — | — | — | **43.65** |

## Dataset Structure

```
omtg_bench/
├── OMTGBench.tsv   # 340 samples
└── videos.zip      # ≈3.74 GB
```

### Data Fields

| Field | Type | Description |
|---|---|---|
| video_id | string | Video identifier |
| duration | float | Video length (s) |
| query | string | Natural-language query |
| segments | list[[float,float]] | GT `[start,end]` intervals |
| source | string | Charades / ActivityNet / QVHighlights / VTimeLLM / Moment-10M |
| domain | string | Topical domain |

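Because `OMTGBench.tsv` is a plain TSV, it can also be read without the `datasets` library. The snippet below is a sketch under two assumptions: the column names follow the field table above, and `segments` is stored as a JSON-encoded list of `[start, end]` pairs — check an actual row before relying on either.

```python
import csv
import json

def load_omtg_tsv(path):
    """Read OMTGBench.tsv rows, decoding numeric and segment fields.

    Assumes the columns from the field table above and JSON-encoded
    `segments`; adjust if the released file differs.
    """
    samples = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            row["duration"] = float(row["duration"])
            row["segments"] = json.loads(row["segments"])  # [[start, end], ...]
            samples.append(row)
    return samples
```
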
## Usage

```python
from datasets import load_dataset

bench = load_dataset("insomnia7/omtg_bench", split="train")
```

## Companion Dataset

Training dataset: [insomnia7/omtg56k](https://huggingface.co/datasets/insomnia7/omtg56k) (46k SFT + 10k RL).

## License

CC BY-NC 4.0 — non-commercial research only.