viktoroo committed f66b24e (verified) · 1 parent: 5763542

Update README.md

Files changed (1): README.md (+38 -0)
README.md CHANGED

@@ -16,4 +16,42 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+pretty_name: LongBench2-128k-plus
+tags:
+- long-context
+- longbench
+- language-modeling
+- text-generation
+language:
+- en
+license: apache-2.0
+task_categories:
+- text-generation
+- language-modeling
 ---
+
+# LongBench2-128k-plus
+
+LongBench2-128k-plus is a long-context corpus derived from the
+[zai-org/LongBench-v2](https://huggingface.co/datasets/zai-org/LongBench-v2)
+benchmark. It keeps only the "long" examples and exposes just the raw
+long documents, making it convenient for:
+
+- long-context pretraining or continued training,
+- long-context adaptation (e.g., RoPE scaling, attention tuning),
+- retrieval and RAG-style experimentation where only documents are needed.
+
+All question/answer and multiple-choice metadata from LongBench v2 are
+dropped; each row is a single long text.
+
+## Source dataset
+
+This dataset is a processed subset of:
+
+- **Original dataset:** `zai-org/LongBench-v2`
+- **Project page:** https://longbench2.github.io
+- **Paper:** LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks (arXiv:2412.15204)
+
+LongBench v2 is a long-context evaluation benchmark with contexts ranging from
+thousands to millions of words, spanning multiple realistic domains and task
+types (QA, multi-document reasoning, code, dialogue, and more).
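The derivation the card describes (keep only the "long" examples, drop the question/answer and multiple-choice metadata, retain just the raw document) can be sketched as below. This is a hedged illustration, not the author's actual processing script: the field names `length` and `context` follow the LongBench-v2 schema, and the toy rows stand in for the real dataset.

```python
# Hypothetical sketch of the filtering step described in the card.
# "length" and "context" are LongBench-v2 column names (assumption);
# the toy rows below stand in for the real benchmark data.
rows = [
    {"_id": "a", "length": "short", "context": "a short doc", "question": "q1", "answer": "A"},
    {"_id": "b", "length": "long", "context": "a very long document ...", "question": "q2", "answer": "B"},
    {"_id": "c", "length": "long", "context": "another long document ...", "question": "q3", "answer": "C"},
]

# Keep only the "long" examples and expose just the raw document text,
# dropping all QA and multiple-choice fields.
corpus = [{"text": r["context"]} for r in rows if r["length"] == "long"]

for row in corpus:
    print(len(row["text"]))
```

Against the real source, the toy rows would be replaced by something like `datasets.load_dataset("zai-org/LongBench-v2", split="train")` with the same filter applied via `Dataset.filter`; the logic above is unchanged.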