---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: test
path: longbench_pro.json
task_categories:
- question-answering
- text-classification
- table-question-answering
- summarization
language:
- en
- zh
tags:
- Long Context
- Realistic
- Comprehensive
pretty_name: LongBench Pro
size_categories:
- 1K<n<10K
---
<div align="center">
<img src="images/logo.png" width="80" alt="LongBench-Pro Logo"/>
<h1>LongBench Pro: A More Realistic and Comprehensive Bilingual Long-Context Evaluation Benchmark</h1>
</div>
<div align="center">

[🤗 Dataset](https://huggingface.co/datasets/caskcsg/LongBench-Pro)
[💻 GitHub](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro)
[🏆 Leaderboard](https://huggingface.co/spaces/caskcsg/LongBench-Pro-Leaderboard)

</div>
---
**LongBench Pro** contains **1,500 samples** built entirely from **authentic, natural long documents**. It spans **11 primary tasks and 25 secondary tasks**, covering all long-context capabilities assessed by existing benchmarks; employs **diverse evaluation metrics**, enabling more fine-grained measurement of model abilities; and provides a balanced set of **bilingual samples in English and Chinese**.

In addition, **LongBench Pro** introduces a multi-dimensional taxonomy to support comprehensive evaluation of models under different operating conditions:
- **Context Requirement**: *Full* context (global integration) versus *Partial* context (localized retrieval);
- **Length**: Six lengths uniformly distributed from *8k to 256k* tokens, used to analyze scaling behavior;
- **Difficulty**: Four levels ranging from *Easy* to *Extreme*, calibrated against observed model performance.
<div align="center">
<img src="images/bench_comparison.png" width="100%"/>
</div>
## 🧩 Task Framework
<div align="center">
<img src="images/task_definition.png" width="100%"/>
<br />
<br />
<img src="images/task_map.png" width="80%"/>
<br />
<b>Task mapping between LongBench Pro and existing benchmarks</b>
</div>
## 📊 Dataset Statistics
<div align="center">
<img src="images/sample_distrubution.png" width="100%"/>
</div>
## 📝 Data Format
**LongBench Pro** organizes data in the following format:
```json
{
"id": "Sample ID: unique for each sample.",
"context": "Long context: 14 types of texts covering domains such as news, medicine, science, literature, law, and education, with various forms such as reports, tables, code, dialogues, lists, and JSON.",
"language": "Sample language: English or Chinese.",
"token_length": "Sample token length: 8k, 16k, 32k, 64k, 128k, or 256k (calculated using the Qwen tokenizer)",
"primary_task": "Primary task type: 11 types.",
"secondary_task": "Secondary task type: 25 types.",
"contextual_requirement": "Contextual Requirement: Full or Partial.",
"question_nonthinking": "Non-thinking prompt of the question: direct answer required.",
"question_thinking": "Thinking prompt of the question: think first, then answer.",
"answer": ["List of components that constitute the answer."],
  "difficulty": "Sample difficulty: Easy, Moderate, Hard, or Extreme."
}
```
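Each sample carries two prompt variants, so an evaluation harness must pick the one matching the model's inference mode. A minimal sketch of assembling a prompt from these fields (the `build_prompt` helper and the abbreviated sample below are illustrative, not part of the dataset):

```python
def build_prompt(sample: dict, thinking: bool = False) -> str:
    """Prepend the long context to the chosen prompt variant."""
    key = "question_thinking" if thinking else "question_nonthinking"
    return f"{sample['context']}\n\n{sample[key]}"

# Fabricated, heavily abbreviated sample for illustration only.
sample = {
    "context": "(long document text)",
    "question_nonthinking": "Answer directly: what is X?",
    "question_thinking": "Think step by step, then answer: what is X?",
}

print(build_prompt(sample))                 # direct-answer prompt
print(build_prompt(sample, thinking=True))  # reasoning prompt
```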
## 🧰 How to use it?
### Loading Data
You can download and load **LongBench Pro** data using the following code:
```python
from datasets import load_dataset
dataset = load_dataset('caskcsg/LongBench-Pro', split='test')
```
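The taxonomy fields (`language`, `token_length`, `difficulty`) make it easy to carve out subsets, e.g. for per-length scaling curves. A hedged sketch of such filtering, using plain dicts as stand-ins for dataset rows (real rows come from `load_dataset` above, where the same logic can be passed to `Dataset.filter`):

```python
# Stand-in rows mimicking the dataset schema; values are fabricated.
rows = [
    {"id": "s1", "language": "en", "token_length": "8k",   "difficulty": "Easy"},
    {"id": "s2", "language": "zh", "token_length": "128k", "difficulty": "Hard"},
    {"id": "s3", "language": "en", "token_length": "256k", "difficulty": "Extreme"},
]

def select(rows, language=None, token_length=None, difficulty=None):
    """Keep rows matching every taxonomy field that was specified."""
    out = []
    for r in rows:
        if language and r["language"] != language:
            continue
        if token_length and r["token_length"] != token_length:
            continue
        if difficulty and r["difficulty"] != difficulty:
            continue
        out.append(r)
    return out

print([r["id"] for r in select(rows, language="en")])  # ['s1', 's3']
```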
### Evaluation
Please refer to our [Github Repo](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) for automated evaluation.
## 📖 Citation
*Coming Soon...*