---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: test
    path: longbench_pro.json
task_categories:
- question-answering
- text-classification
- table-question-answering
- summarization
language:
- en
- zh
tags:
- Long Context
- Realistic
- Comprehensive
pretty_name: LongBench Pro
size_categories:
- 1K<n<10K
---

*LongBench-Pro Logo*

# LongBench Pro: A More Realistic and Comprehensive Bilingual Long-Context Evaluation Benchmark

[![Dataset](https://img.shields.io/badge/Dataset-yellow?logo=huggingface&logoColor=yellow&labelColor=white)](https://huggingface.co/datasets/caskcsg/LongBench-Pro)    [![Code](https://img.shields.io/badge/Code-181717?logo=github&logoColor=181717&labelColor=white)](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro)    [![Paper](https://img.shields.io/badge/Paper-red?logo=arxiv&logoColor=B31B1B&labelColor=white)]()    [![Leaderboard](https://img.shields.io/badge/🏆-Leaderboard-blue?labelColor=white)](https://huggingface.co/spaces/caskcsg/LongBench-Pro-Leaderboard)
---

**LongBench Pro**, containing **1,500 samples**, is built entirely on **authentic, natural long documents** and includes **11 primary tasks and 25 secondary tasks**, covering all long-context capabilities assessed by existing benchmarks. It employs **diverse evaluation metrics**, enabling a more fine-grained measurement of model abilities, and provides a balanced set of **bilingual samples in both English and Chinese**. In addition, **LongBench Pro** introduces a multi-dimensional taxonomy to support a comprehensive evaluation of models under different operating conditions:

- **Context Requirement**: *Full* context (global integration) versus *Partial* context (localized retrieval);
- **Length**: six lengths uniformly distributed from *8k to 256k* tokens, used to analyze scaling behavior;
- **Difficulty**: four levels ranging from *Easy* to *Extreme*, defined based on model performance.
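These axes are exposed as per-sample fields, so a slice for a particular operating condition can be built with an ordinary filter. The sketch below assumes the field names listed in the Data Format section and the literal values quoted in this card (`'Partial'`, `'128k'`, `'Hard'`, `'Extreme'`), which should be checked against the released files:

```python
from datasets import load_dataset

dataset = load_dataset('caskcsg/LongBench-Pro', split='test')

# Localized-retrieval samples at 128k tokens, restricted to the two
# hardest difficulty levels. The literal values below are assumptions
# taken from this card's field descriptions, not verified constants.
subset = dataset.filter(
    lambda x: x['contextual_requirement'] == 'Partial'
    and x['token_length'] == '128k'
    and x['difficulty'] in ('Hard', 'Extreme')
)
print(f'{len(subset)} samples match this operating condition')
```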
## 🧩 Task Framework



*Task mapping between LongBench Pro and existing benchmarks*
## 📊 Dataset Statistics
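The per-task, per-length, per-language, and per-difficulty distributions can be recomputed directly from the released split. A minimal sketch, again assuming the field names from the Data Format section below:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset('caskcsg/LongBench-Pro', split='test')

# Column access on a datasets.Dataset returns a plain list, so Counter
# gives the sample count per category along each taxonomy axis.
for field in ('primary_task', 'token_length', 'language', 'difficulty'):
    print(field, dict(Counter(dataset[field])))
```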
## 📝 Data Format

**LongBench Pro** organizes data in the following format:

```json
{
    "id": "Sample ID: unique for each sample.",
    "context": "Long context: 14 types of texts covering domains such as news, medicine, science, literature, law, and education, in forms such as reports, tables, code, dialogues, lists, and JSON.",
    "language": "Sample language: English or Chinese.",
    "token_length": "Sample token length: 8k, 16k, 32k, 64k, 128k, or 256k (calculated using the Qwen tokenizer).",
    "primary_task": "Primary task type: 11 types.",
    "secondary_task": "Secondary task type: 25 types.",
    "contextual_requirement": "Contextual requirement: Full or Partial.",
    "question_nonthinking": "Non-thinking prompt of the question: a direct answer is required.",
    "question_thinking": "Thinking prompt of the question: think first, then answer.",
    "answer": ["List of components that constitute the answer."],
    "difficulty": "Sample difficulty: Easy, Moderate, Hard, or Extreme."
}
```

## 🧰 How to use it?

### Loading Data

You can download and load the **LongBench Pro** data with the following code:

```python
from datasets import load_dataset

dataset = load_dataset('caskcsg/LongBench-Pro', split='test')
```

### Evaluation

Please refer to our [GitHub Repo](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) for automated evaluation.

## 📖 Citation

*Coming Soon...*