---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 314291200
      num_examples: 108
  download_size: 117688580
  dataset_size: 314291200
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
pretty_name: LongBench2-128k-plus
tags:
  - long-context
  - longbench
  - language-modeling
  - text-generation
language:
  - en
license: apache-2.0
task_categories:
  - text-generation
---

# LongBench2-128k-plus

LongBench2-128k-plus is a long-context corpus derived from the zai-org/LongBench-v2 benchmark. It keeps only the "long" examples and exposes just the raw long documents, making it convenient for:

- long-context pretraining or continued training,
- long-context adaptation (e.g., RoPE scaling, attention tuning),
- retrieval and RAG-style experimentation where only documents are needed.

All question/answer and multiple-choice metadata from LongBench v2 are dropped; each row is a single long text.

## Source dataset

This dataset is a processed subset of:

- Original dataset: zai-org/LongBench-v2
- Project page: https://longbench2.github.io
- Paper: *LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks* (arXiv:2412.15204)

LongBench v2 is a long-context evaluation benchmark with contexts ranging from thousands to millions of words, spanning multiple realistic domains and task types (QA, multi-document reasoning, code, dialogue, and more).
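The subsetting described earlier (keeping only the "long" examples and their raw documents) can be sketched against the LongBench-v2 schema. Field names (`_id`, `length`, `context`) follow the public LongBench-v2 dataset; the exact criterion used to build this corpus is an assumption:

```python
def extract_long_documents(rows: list[dict]) -> list[dict]:
    """Hypothetical sketch: derive an id/text corpus from LongBench-v2 records.

    Keeps only rows whose `length` field is "long" and retains just the
    identifier and the raw document, dropping all QA/multiple-choice fields.
    """
    return [
        {"id": r["_id"], "text": r["context"]}
        for r in rows
        if r["length"] == "long"
    ]

sample = [
    {"_id": "a", "length": "long", "context": "a very long document..."},
    {"_id": "b", "length": "short", "context": "a short one"},
]
print(extract_long_documents(sample))  # keeps only the "long" row
```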