---
license: apache-2.0
---
## V-NIAH-D Benchmark

A Visual Needle-In-A-Haystack Benchmark with Periodic Distractors.

You can use it by following steps similar to [V-NIAH](https://github.com/EvolvingLMMs-Lab/LongVA).
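The evaluation setup above follows V-NIAH, with the addition of periodic distractors. As a rough illustration of the idea (not the benchmark's actual code; frame objects, function names, and parameters here are all hypothetical), a haystack can be built by inserting the needle frame at a chosen depth and repeating a distractor frame at a fixed period:

```python
# Hypothetical sketch of V-NIAH-D-style haystack construction:
# insert a "needle" frame at a fractional depth and repeat a
# "distractor" frame periodically elsewhere. Frames are placeholder
# strings here; a real pipeline would use decoded video frames.

def build_haystack(haystack_frames, needle, distractor, depth, period):
    """Return (frames, needle_pos): haystack with `needle` inserted at
    fractional `depth` and `distractor` placed every `period` frames."""
    frames = list(haystack_frames)
    needle_pos = int(depth * len(frames))
    frames.insert(needle_pos, needle)
    # place a periodic distractor at every `period`-th slot, skipping the needle
    for i in range(0, len(frames), period):
        if i != needle_pos:
            frames[i] = distractor
    return frames, needle_pos

haystack, pos = build_haystack([f"bg_{i}" for i in range(100)],
                               "needle", "distractor",
                               depth=0.5, period=10)
```

The periodic distractors force the model to localize the true needle rather than latch onto any visually salient frame.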
## VideoRoPE Training Data

To facilitate reproduction of our experimental results, we have also uploaded the data used by VideoRoPE. We use a subset of the [LLaVA-Video-178K dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K) to train VideoRoPE.

The LLaVA-Video-178K dataset consists of 178K videos and approximately 5 million question-answer (QA) pairs from diverse sources such as HD-VILA, Kinetics, and ActivityNet. To balance training efficiency and long-video comprehension, we randomly select 136K videos with durations under 2 minutes and 18K videos with durations between 2 and 3 minutes, yielding a training set of approximately 1.3 million QA pairs.
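The duration-based selection described above could be sketched as follows (a minimal illustration, not the actual preprocessing script; the `duration` field name and the `select_subset` helper are assumptions):

```python
# Hypothetical sketch of the duration-bucketed random subset selection:
# 136K videos under 2 minutes plus 18K videos between 2 and 3 minutes.
# `videos` is assumed to be a list of dicts with a "duration" in seconds.
import random

def select_subset(videos, n_short=136_000, n_medium=18_000, seed=0):
    """Randomly pick short (<2 min) and medium (2-3 min) videos."""
    rng = random.Random(seed)
    short = [v for v in videos if v["duration"] < 120]
    medium = [v for v in videos if 120 <= v["duration"] <= 180]
    return (rng.sample(short, min(n_short, len(short))) +
            rng.sample(medium, min(n_medium, len(medium))))

# toy example: videos longer than 3 minutes are excluded entirely
subset = select_subset([{"duration": d} for d in (30, 90, 150, 200)],
                       n_short=2, n_medium=1)
```

Capping most of the subset at short clips keeps per-step training cost low while the 2-3 minute bucket still exposes the model to longer temporal context.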