JungleGym committed · Commit ee1fc5b · verified · 1 parent: e0cdec1

Create README.md

---
license: other
license_name: bsd-3-clause
license_link: https://github.com/TencentARC/TimeLens/blob/main/LICENSE
language:
- en
task_categories:
- video-text-to-text
pretty_name: TimeLens
size_categories:
- 10K<n<100K
---

# TimeLens-Bench

📑 [**Paper**](TODO) | 💻 [**Code**](https://github.com/TencentARC/TimeLens) | 🏠 [**Project Page**](https://timelens-arc-lab.github.io/) | 🤗 [**Model & Data**](https://huggingface.co/collections/TencentARC/TimeLens)

## ✨ Dataset Description

**TimeLens-Bench** is a comprehensive, high-quality evaluation benchmark for video temporal grounding, proposed in our paper [TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs](TODO).

During our annotation process, we identified critical quality issues in existing datasets and performed extensive manual corrections. We observed a **dramatic re-ranking of models** on TimeLens-Bench compared to legacy benchmarks, demonstrating that TimeLens-Bench provides *more reliable* evaluation (more details in our [paper](TODO) and on our [project page](https://timelens-arc-lab.github.io/)).

<img src="https://cdn-uploads.huggingface.co/production/uploads/65372e922c6ef949b22c26d9/31s82GO6S5LKlW0-kcIFU.png" alt="performance_comparison_charades-1" width="35%">

### Dataset Statistics

The benchmark consists of manually refined versions of **three** widely used evaluation datasets for video temporal grounding:

| Refined Dataset | # Videos | Avg. Duration (s) | # Annotations | Source Dataset | Source Dataset Link |
| :--- | :---: | :---: | :---: | :--- | :--- |
| **Charades-TimeLens** | 1313 | 29.6 | 3363 | Charades-STA | https://github.com/jiyanggao/TALL |
| **ActivityNet-TimeLens** | 1455* | 134.9 | 4500 | ActivityNet-Captions | https://cs.stanford.edu/people/ranjaykrishna/densevid/ |
| **QVHighlights-TimeLens** | 1511 | 149.6 | 1541 | QVHighlights | https://github.com/jayleicn/moment_detr |

<small>* To reduce the high evaluation cost of the excessively large ActivityNet-Captions, we sampled videos uniformly across duration bins to curate ActivityNet-TimeLens.</small>
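
The "uniform sampling across duration bins" mentioned above can be sketched as follows. This is only an illustration of the idea, not the paper's actual procedure: the bin edges, per-bin counts, and durations below are made up.

```python
import random
from bisect import bisect_right

def sample_across_duration_bins(durations, bin_edges, per_bin, seed=0):
    """Pick up to `per_bin` video ids from each duration bin.

    `durations` maps video id -> duration in seconds. Bin edges and
    per-bin counts are illustrative, not the values used for
    ActivityNet-TimeLens.
    """
    rng = random.Random(seed)
    bins = {i: [] for i in range(len(bin_edges) + 1)}
    for vid, dur in durations.items():
        bins[bisect_right(bin_edges, dur)].append(vid)
    chosen = []
    for members in bins.values():
        rng.shuffle(members)            # random pick within each bin
        chosen.extend(members[:per_bin])
    return chosen

# Toy pool: 48 videos with durations from 10 s to 245 s.
durations = {f"v{i:03d}": d for i, d in enumerate(range(10, 250, 5))}
picked = sample_across_duration_bins(durations, bin_edges=[60, 120, 180], per_bin=3)
print(len(picked))  # 12 (3 videos from each of the 4 duration bins)
```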

## 🚀 Usage

To download and use the benchmark for evaluation, please refer to the instructions in our [**GitHub Repository**](https://github.com/TencentARC/TimeLens#-evaluation-on-timelens-bench).
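
The benchmark is distributed in the WebDataset format: tar shards in which files sharing a basename key form one sample. A minimal stdlib-only sketch of that grouping convention, using made-up file and field names rather than the actual TimeLens-Bench layout:

```python
import io
import json
import tarfile
from collections import defaultdict

def read_webdataset_shard(path):
    """Group tar members by basename key, as the WebDataset convention does.

    Returns {key: {extension: bytes}}. The split-on-first-dot rule here is a
    simplification; real shards may use nested paths or multi-part extensions.
    """
    samples = defaultdict(dict)
    with tarfile.open(path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            key, _, ext = member.name.partition(".")
            samples[key][ext] = tar.extractfile(member).read()
    return dict(samples)

# Build a tiny demo shard with one two-file sample (hypothetical fields).
with tarfile.open("demo-shard.tar", "w") as tar:
    for name, payload in [
        ("0001.json", json.dumps({"query": "person opens a door",
                                  "span": [4.2, 9.7]}).encode()),
        ("0001.txt", b"charades_v1_0001"),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

shard = read_webdataset_shard("demo-shard.tar")
print(sorted(shard["0001"]))  # ['json', 'txt']
```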

## 📝 Citation

If you find our work helpful for your research and applications, please cite our paper:

```bibtex
TODO
```