---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 314291200
    num_examples: 108
  download_size: 117688580
  dataset_size: 314291200
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
pretty_name: LongBench2-128k-plus
tags:
- long-context
- longbench
- language-modeling
- text-generation
language:
- en
license: apache-2.0
task_categories:
- text-generation
---
# LongBench2-128k-plus
LongBench2-128k-plus is a long-context corpus derived from the
[zai-org/LongBench-v2](https://huggingface.co/datasets/zai-org/LongBench-v2)
benchmark. It keeps only the "long" examples and exposes just the raw
long documents, making it convenient for:
- long-context pretraining or continued training,
- long-context adaptation (e.g., RoPE scaling, attention tuning),
- retrieval and RAG-style experimentation where only documents are needed.
All question/answer and multiple-choice metadata from LongBench v2 are
dropped; each row is a single long text.
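
A minimal loading sketch using the Hugging Face `datasets` library; the repository id below is a placeholder for wherever this dataset is hosted, so substitute the actual namespace:

```python
from datasets import load_dataset

# Load the single "train" split; replace the placeholder repo id with the
# actual namespace that hosts this dataset card.
ds = load_dataset("your-org/LongBench2-128k-plus", split="train")

print(ds)                   # Dataset({features: ['id', 'text'], num_rows: 108})
print(len(ds[0]["text"]))   # character length of one long document
```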
## Source dataset
This dataset is a processed subset of:
- **Original dataset:** `zai-org/LongBench-v2`
- **Project page:** https://longbench2.github.io
- **Paper:** LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks (arXiv:2412.15204)
LongBench v2 is a long-context evaluation benchmark with contexts ranging from
thousands to millions of words, spanning multiple realistic domains and task
types (QA, multi-document reasoning, code, dialogue, and more).
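
For reference, a sketch of how a subset like this could be rebuilt from LongBench v2. The upstream field names (`_id`, `length`, `context`) and the `length == "long"` filter are assumptions inferred from the description above, not the exact pipeline used for this dataset:

```python
from datasets import load_dataset

# Pull the original benchmark (assumed schema: _id, length, context, QA fields, ...).
src = load_dataset("zai-org/LongBench-v2", split="train")

# Keep only the examples tagged as "long" (assumption: the `length` field
# marks each example as short / medium / long).
long_only = src.filter(lambda ex: ex["length"] == "long")

# Drop all question/answer and multiple-choice metadata, keeping just the
# raw documents as (id, text) rows.
docs = long_only.map(
    lambda ex: {"id": ex["_id"], "text": ex["context"]},
    remove_columns=long_only.column_names,
)
```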