---
language:
  - en
  - fr
license: apache-2.0
task_categories:
  - question-answering
  - translation
  - text-retrieval
tags:
  - long-context
  - prefill
  - tokenized
  - qwen3
  - kv-cache
  - benchmark
pretty_name: Prefill Dataset - Long-Context Tokenized Corpus for Qwen3-8B
size_categories:
  - 1K<n<10K
---

# Prefill Dataset

Long-context tokenized corpus for benchmarking LLM prefill computation with Qwen3-8B. Contains ~10M tokens of copyright-free English text pre-tokenized with character offset mappings for fast position lookup.

## Dataset Structure

### Files

| File | Description | Rows |
|------|-------------|------|
| `data/documents.parquet` | English documents with token IDs and char offsets | ~100-500 |
| `data/tasks.parquet` | QA, translation, and retrieval tasks | ~1K-5K |
| `data/translations.parquet` | French translations of OPUS-Books English documents | ~100-500 |
| `data/aligned_chunks.parquet` | EN/FR aligned chunk pairs packed to ~1K source tokens | ~1K-5K |

### documents.parquet Schema

| Column | Type | Description |
|--------|------|-------------|
| `doc_id` | string | Unique document ID |
| `source` | string | `"narrativeqa"` / `"opus_books"` / `"pg19"` |
| `title` | string | Book title |
| `language` | string | Always `"en"` |
| `text` | large_string | Full document text |
| `token_ids` | `list<int32>` | Qwen3-8B token IDs |
| `char_offsets` | `list<int32>` | Character start position per token |
| `token_count` | int32 | Number of tokens |

### tasks.parquet Schema

| Column | Type | Description |
|--------|------|-------------|
| `task_id` | string | Unique task ID |
| `doc_id` | string | References `documents.doc_id` |
| `task_type` | string | `"qa"` / `"translation"` / `"retrieval"` |
| `question` | string | Task prompt |
| `answer` | string | Expected answer (JSON list for multi-answer) |
| `metadata` | string | JSON with extra fields |
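Because `answer` holds either plain text or a JSON list, consumers need a small normalization step. A sketch of one way to do it (`parse_answer` is a hypothetical helper, not shipped with the dataset):

```python
import json


def parse_answer(answer: str) -> list[str]:
    """Normalize a tasks.parquet `answer` field to a list of strings.

    Multi-answer tasks store a JSON list; single-answer tasks store plain text.
    """
    try:
        parsed = json.loads(answer)
        if isinstance(parsed, list):
            return [str(a) for a in parsed]
    except json.JSONDecodeError:
        pass
    return [answer]


print(parse_answer('["Paris", "the capital"]'))  # ['Paris', 'the capital']
print(parse_answer("a plain answer"))            # ['a plain answer']
```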

### translations.parquet Schema

| Column | Type | Description |
|--------|------|-------------|
| `doc_id` | string | References English document |
| `target_language` | string | Always `"fr"` |
| `target_text` | large_string | Full translation text |
| `target_token_ids` | `list<int32>` | Tokenized translation |
| `target_char_offsets` | `list<int32>` | Char offsets for translation tokens |

### aligned_chunks.parquet Schema

| Column | Type | Description |
|--------|------|-------------|
| `chunk_id` | string | Unique chunk ID (doc_id + chunk index) |
| `doc_id` | string | References OPUS English document |
| `chunk_idx` | int32 | Chunk index within document |
| `segment_start_idx` | int32 | Start aligned segment index (inclusive) |
| `segment_end_idx` | int32 | End aligned segment index (exclusive) |
| `src_lang` | string | Always `"en"` |
| `tgt_lang` | string | Always `"fr"` |
| `src_text` | large_string | English chunk text |
| `tgt_text` | large_string | French chunk text |
| `src_char_start` / `src_char_end` | int32 | Character span in source document |
| `tgt_char_start` / `tgt_char_end` | int32 | Character span in translation document |
| `src_tok_start` / `src_tok_end` | int32 | Token span in source token IDs |
| `tgt_tok_start` / `tgt_tok_end` | int32 | Token span in target token IDs |
| `src_token_count` | int32 | Source tokens in chunk (target ~1000) |
| `tgt_token_count` | int32 | Target tokens in chunk |
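The span columns index directly into the parent rows of `documents.parquet` and `translations.parquet`, with end indices exclusive. A toy illustration of the slicing semantics (the document text, token IDs, and chunk values below are made up; real rows come from the parquet files):

```python
# Hypothetical document and aligned_chunks row (real values come from the parquet files).
doc_text = "Hello world. More text follows."
doc_token_ids = [101, 102, 103, 104]  # stand-in token IDs

chunk = {
    "src_char_start": 0, "src_char_end": 12,  # character span (end-exclusive)
    "src_tok_start": 0, "src_tok_end": 3,     # token span (end-exclusive)
    "src_text": "Hello world.",
}

# Character spans slice the parent document's text...
assert doc_text[chunk["src_char_start"]:chunk["src_char_end"]] == chunk["src_text"]

# ...and token spans slice its token_ids the same way.
chunk_tokens = doc_token_ids[chunk["src_tok_start"]:chunk["src_tok_end"]]
print(chunk_tokens)  # [101, 102, 103]
```

The same pattern applies on the target side via `tgt_char_*` and `tgt_tok_*` against the translation row.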

## Sources

| Source | Purpose | Target Tokens |
|--------|---------|---------------|
| NarrativeQA | Gutenberg books with human Q&A pairs | ~5M |
| OPUS-Books | Parallel EN-FR book translations | ~3M |
| pg19 | Supplementary long Gutenberg books | ~2M+ |

## Tokenizer

- **Model:** `Qwen/Qwen3-8B` (vocab size: 151,936)
- **Offset mapping:** `char_offsets[i]` is the character position where token `i` starts. BPE tokens with a leading space point to the space character; this is intentional, since `text[char_offsets[i]:char_offsets[i+1]]` then recovers the exact token text.
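The slicing rule, including the leading-space behavior, can be illustrated with a toy two-token example (the real offsets come from the Qwen3-8B tokenizer; these are hand-written):

```python
# Toy BPE-style tokenization of "Hello world" into ["Hello", " world"].
text = "Hello world"
char_offsets = [0, 5]  # token 1 starts at the space character

tokens = []
for i, start in enumerate(char_offsets):
    # The last token runs to the end of the text.
    end = char_offsets[i + 1] if i + 1 < len(char_offsets) else len(text)
    tokens.append(text[start:end])

print(tokens)  # ['Hello', ' world']
```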

## Usage

```python
import pyarrow.parquet as pq

# Load
docs = pq.read_table("data/documents.parquet").to_pandas()
tasks = pq.read_table("data/tasks.parquet").to_pandas()

# Get a document and its tasks
doc = docs.iloc[0]
doc_tasks = tasks[tasks.doc_id == doc.doc_id]

print(f"Title: {doc.title}")
print(f"Tokens: {doc.token_count:,}")
print(f"Tasks: {len(doc_tasks)}")

# Verify token-to-text mapping
offsets = doc.char_offsets
text = doc.text
for i in range(5):
    end = offsets[i + 1] if i + 1 < len(offsets) else len(text)
    print(f"  Token {i}: '{text[offsets[i]:end]}'")
```

See `generate_examples.py` for a full usage example.
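The offsets also enable the "fast position lookup" mentioned above: because `char_offsets` is sorted, a binary search maps any character position back to the token that covers it. A minimal sketch (`token_at_char` is a hypothetical helper, not part of the dataset):

```python
import bisect


def token_at_char(char_offsets: list[int], char_pos: int) -> int:
    """Return the index of the token whose span covers char_pos.

    char_offsets is sorted, so bisect gives O(log n) lookup.
    """
    return bisect.bisect_right(char_offsets, char_pos) - 1


offsets = [0, 5, 11, 18]           # toy char_offsets for a 4-token document
print(token_at_char(offsets, 7))   # 1: the second token covers chars 5..10
print(token_at_char(offsets, 20))  # 3: the last token runs to the end of text
```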

## Regeneration

```bash
uv run build_dataset.py
```

Requires Python 3.11+. Dependencies are declared inline (PEP 723) — `uv run` handles them automatically.
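A PEP 723 inline-metadata header looks like the following block at the top of the script (the dependency list here is illustrative; the real one lives in `build_dataset.py`):

```python
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     # illustrative; see build_dataset.py for the real list
#     "pyarrow",
#     "datasets",
# ]
# ///
```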

## License

The dataset is released under Apache 2.0. Source texts are in the public domain (Project Gutenberg).