czyPL committed · verified
Commit 97459fd · 1 Parent(s): 97c4097

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ longbench_pro.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
license: apache-2.0
task_categories:
- question-answering
- text-classification
- table-question-answering
- summarization
language:
- en
- zh
tags:
- Long Context
- Realistic
- Comprehensive
pretty_name: LongBench Pro
size_categories:
- 1K<n<10K
---

<div align="center">
<h1>
<img src="images/logo.png" width="40" style="vertical-align: -30%;" alt="LongBench-Pro Logo"/>
LongBench-Pro: A More Realistic and Comprehensive Bilingual Long-Context Evaluation Benchmark
</h1>
</div>

<div align="center">

[![Github Repo](https://img.shields.io/badge/Github-Repo-blue?logo=github&logoColor=white)]() &nbsp;&nbsp;
[![Leaderboard](https://img.shields.io/badge/🏆-Leaderboard-red)]() &nbsp;&nbsp;
[![Paper](https://img.shields.io/badge/📄-Arxiv_Paper-green)]()

</div>

---

**LongBench-Pro**, containing **1,500 samples**, is built entirely on **authentic, natural long documents** and includes **11 primary tasks and 25 secondary tasks**, covering all long-context capabilities assessed by existing benchmarks. It employs **diverse evaluation metrics**, enabling more fine-grained measurement of model abilities, and provides a balanced set of **bilingual samples in English and Chinese**.

In addition, **LongBench-Pro** introduces a multi-dimensional taxonomy to support comprehensive evaluation of models under different operating conditions:

- **Context Requirement**: *Full* context (global integration) versus *Partial* context (localized retrieval);
- **Length**: Six lengths uniformly distributed from *8k to 256k* tokens, used to analyze scaling behavior;
- **Difficulty**: Four levels ranging from *Easy* to *Extreme*, defined based on model performance.
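Because each record carries these taxonomy fields (see the Data Format section below), per-sample results can be sliced along all three axes with plain Python. A minimal sketch with toy records, where the field names follow the dataset schema but the values are illustrative:

```python
from collections import Counter

# Toy records mimicking the LongBench-Pro taxonomy fields (illustrative values only).
samples = [
    {"contextual_requirement": "Full", "token_length": "8k", "difficulty": "Easy"},
    {"contextual_requirement": "Partial", "token_length": "256k", "difficulty": "Extreme"},
    {"contextual_requirement": "Full", "token_length": "8k", "difficulty": "Moderate"},
]

# Count samples per (context requirement, length, difficulty) cell,
# e.g. to report scores broken down along each taxonomy axis.
cells = Counter(
    (s["contextual_requirement"], s["token_length"], s["difficulty"]) for s in samples
)

print(cells[("Full", "8k", "Easy")])  # → 1
```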
<div align="center">
<img src="images/bench_comparison.png" width="100%"/>
</div>

## 🧩 Task Framework

<div align="center">
<img src="images/task_definition.png" width="100%"/>
<br />
<br />
<img src="images/task_map.png" width="80%"/>
<br />
<b>Task mapping between LongBench Pro and existing benchmarks</b>
</div>

## 📊 Dataset Statistics

<div align="center">
<img src="images/sample_distrubution.png" width="100%"/>
</div>

## 📝 Data Format

**LongBench Pro** organizes data in the following format:

```json
{
    "id": "Sample ID: unique for each sample.",
    "context": "Long context: 14 types of texts covering domains such as news, medicine, science, literature, law, and education, in forms such as reports, tables, code, dialogues, lists, and JSON.",
    "language": "Sample language: English or Chinese.",
    "token_length": "Sample token length: 8k, 16k, 32k, 64k, 128k, or 256k (calculated using the Qwen tokenizer).",
    "primary_task": "Primary task type: 11 types.",
    "secondary_task": "Secondary task type: 25 types.",
    "contextual_requirement": "Contextual requirement: Full or Partial.",
    "question_nonthinking": "Non-thinking prompt of the question: a direct answer is required.",
    "question_thinking": "Thinking prompt of the question: think first, then answer.",
    "answer": ["List of components that constitute the answer."],
    "difficulty": "Sample difficulty: Easy, Moderate, Hard, or Extreme."
}
```
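Since each sample ships both a thinking and a non-thinking question, an evaluation harness can pick the prompt variant per run. The official template lives in the evaluation code; the helper below is only a hypothetical sketch of how the two fields might be combined with the context:

```python
# Hypothetical helper: assemble an evaluation prompt from one LongBench-Pro record.
# The official prompt format is defined by the authors' evaluation code, not here.

def build_prompt(sample: dict, thinking: bool = False) -> str:
    """Prepend the long context to the chosen question variant."""
    question_key = "question_thinking" if thinking else "question_nonthinking"
    return f"{sample['context']}\n\n{sample[question_key]}"

# Toy record with schema-conformant field names and illustrative values.
sample = {
    "context": "A very long document...",
    "question_nonthinking": "Answer directly: what is the main topic?",
    "question_thinking": "Think step by step, then answer: what is the main topic?",
}

print(build_prompt(sample))                  # context + non-thinking question
print(build_prompt(sample, thinking=True))   # context + thinking question
```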
## 🧰 How to use it?

### Loading Data

You can download and load **LongBench Pro** data using the following code:

```python
from datasets import load_dataset

dataset = load_dataset('caskcsg/LongBench_Pro', split='train')
```
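Once loaded, the taxonomy fields make it easy to select a subset, e.g. only samples that fit a given context window (with the `datasets` library this would be `dataset.filter(...)`). A self-contained sketch using stand-in records in place of the loaded dataset:

```python
# Stand-in records; real samples come from load_dataset above.
records = [
    {"id": "en-0001", "language": "en", "token_length": "8k"},
    {"id": "zh-0042", "language": "zh", "token_length": "128k"},
    {"id": "en-0100", "language": "en", "token_length": "256k"},
]

# Keep only samples that fit a 128k-token context window.
FITS_128K = {"8k", "16k", "32k", "64k", "128k"}
subset = [r for r in records if r["token_length"] in FITS_128K]

print([r["id"] for r in subset])  # → ['en-0001', 'zh-0042']
```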
### Evaluation

Please refer to our [Github Repo]() for automated evaluation.

## 📖 Citation

*Coming Soon...*
images/bench_comparison.png ADDED

Git LFS Details

  • SHA256: 7a36d7c25fdf5f0b46c7b286da426620c48a693db5c688142ed8fe400ec5673b
  • Pointer size: 131 Bytes
  • Size of remote file: 399 kB
images/logo.png ADDED

Git LFS Details

  • SHA256: b43cb874e5d374b8dca9269fe827fed602a1255ea6ae5939c54a1a3ff272f88f
  • Pointer size: 131 Bytes
  • Size of remote file: 348 kB
images/sample_distrubution.png ADDED

Git LFS Details

  • SHA256: c7a0ec127b6cd27fd43bdab2fc529fd3e8d046248ace0ce28a3b04c4a7df5807
  • Pointer size: 131 Bytes
  • Size of remote file: 943 kB
images/task_definition.png ADDED

Git LFS Details

  • SHA256: 669c51ae6b9a8f203e6e3525bf2faee889813a93618397400402545deddab211
  • Pointer size: 131 Bytes
  • Size of remote file: 732 kB
images/task_map.png ADDED

Git LFS Details

  • SHA256: b9b93f0cb670b758027881b0540adc1f525a6fcfab9a5daea7c7c340a7b45f36
  • Pointer size: 132 Bytes
  • Size of remote file: 3.47 MB
longbench_pro.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92ff05f6088e212d06c5a731ab86000b69cee6a0900cbbd524a25851e3c30de0
+ size 531535940