---
language:
- en
license: mit
task_categories:
- text-generation
tags:
- long-context
- post-training
- context-window-extension
- packed-sequences
- continual-training
pretty_name: Mix-Context Post-Training 128K
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: position_ids
    sequence: int64
  splits:
  - name: train
    num_bytes: 113246784000
    num_examples: 72000
  download_size: 34848646144
  dataset_size: 113246784000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Mix-Context Post-Training Dataset for 128K Context Extension

## Overview

**Mix-Context Post-Training 128K** is a dataset designed specifically for **post-training context window extension** of pretrained LLMs.

It targets the stage *after base pretraining*, where a model is adapted to operate over **much longer contexts (up to 128K tokens)** while preserving short-context behavior. The dataset mixes short- and long-context packed sequences with a controlled length distribution to support:

- Post-training context window extension
- Length generalization / robustness evaluation
- Continued training after applying positional-embedding / RoPE scaling methods

If you use this dataset for post-training, context window extension, or evaluation, **please cite this dataset** (see Citation).

## Sequence Length Distribution and Data Sources

| Context Type | Token Length Range | Packed Context Length | Samples | Data Source |
|--------------|--------------------|-----------------------|---------|-------------|
| Short | 64 – 2,048 | 8K | 8,000 | FineWeb-Edu (sample/10BT) |
| Short | 2,048 – 4,096 | 8K | 8,000 | FineWeb-Edu (sample/10BT) |
| Short | 4,096 – 9,216 | 8K | 16,000 | FineWeb-Edu (sample/10BT) |
| Long | 8K – 32K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Wikipedia, Common Crawl) |
| Long | 32K – 64K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Wikipedia, Common Crawl) |
| Long | 64K – 128K | 128K | 16,000 | RedPajama-Data-1T (arXiv, Common Crawl) |
| Long | 128K – 200K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Common Crawl) |
| **Total** | — | — | **72,000** | — |

---

## Dataset Format

Each example is a **packed sequence** ready for causal LM training:

- `input_ids`: token IDs
- `position_ids`: positional indices aligned to the packed sequence

**Note:** This dataset does **not** include raw text. It contains tokenized, packed sequences produced by the preprocessing pipeline.

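To sanity-check the format, the sketch below streams a single example and inspects both fields. It assumes access under the repo id used in the Citation section; `streaming=True` avoids downloading the full ~35 GB archive just to look at one sample.

```python
# Minimal sketch: stream one packed example and inspect its fields.
# The repo id below is taken from the Citation section; adjust if you use a local copy.
from datasets import load_dataset

ds = load_dataset(
    "ghostcc3/mix-context-post-training-128k",
    split="train",
    streaming=True,  # avoid materializing the full dataset on disk
)

example = next(iter(ds))
print(len(example["input_ids"]))     # packed sequence length in tokens
print(example["position_ids"][:16])  # positional indices aligned to the packed sequence
```
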
---

## Construction Summary (High-Level)

This dataset is generated by:

1. Downloading the public corpora used for the short- and long-context components
2. Tokenizing with a specified tokenizer (default in the scripts: `meta-llama/Meta-Llama-3-8B`)
3. Filtering and bucketing documents by token length
4. Packing sequences to the target context windows
5. Concatenating the short- and long-context components into the final dataset

### Tokenizer

- Tokenizer name/path: `meta-llama/Meta-Llama-3-8B`
- Each text is encoded with an explicit BOS and EOS token: `BOS + text + EOS` (see the sketch below)
- Length statistics and buckets are therefore **tokenizer-dependent**

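A minimal encoding sketch, assuming access to the (gated) `meta-llama/Meta-Llama-3-8B` tokenizer; the helper name is illustrative and not part of the released scripts.

```python
# Sketch: encode one document as BOS + text + EOS, keeping explicit control of special tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def encode_document(text: str) -> list[int]:
    """Illustrative helper: explicit BOS/EOS framing as described above."""
    body = tokenizer.encode(text, add_special_tokens=False)
    return [tokenizer.bos_token_id] + body + [tokenizer.eos_token_id]

ids = encode_document("An example document.")
print(len(ids))  # token count used for length bucketing
```
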
### Short-Context Component

- Source: FineWeb-Edu (`HuggingFaceFW/fineweb-edu`, `sample/10BT`)
- Bucketed by token length (target sample sizes; see the sketch below):
  - 64–2,048: 8,000
  - 2,048–4,096: 8,000
  - 4,096–9,216: 16,000
- Packed to **8K context** (short context length)

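The bucket boundaries translate directly into a simple length filter. A hypothetical helper is sketched below; the exact boundary handling (inclusive vs. exclusive) is an assumption.

```python
# Hypothetical bucketing helper mirroring the short-context token-length buckets above.
SHORT_BUCKETS = [(64, 2_048), (2_048, 4_096), (4_096, 9_216)]

def short_bucket(num_tokens: int) -> int | None:
    """Return the index of the matching bucket, or None if the document falls outside all buckets."""
    for i, (lo, hi) in enumerate(SHORT_BUCKETS):
        if lo <= num_tokens < hi:
            return i
    return None
```
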
### Long-Context Component

- Source: RedPajama-Data-1T (`togethercomputer/RedPajama-Data-1T`)
- Splits used:
  - `arxiv`
  - `wikipedia`
  - `common_crawl` (subset used in preprocessing)
- Documents are filtered before tokenization by raw byte length (approximate; see the sketch below):
  - min: 32 KB
  - max: 800 KB
- After tokenization, long sequences are filtered and bucketed into token ranges:
  - 8K–32K, 32K–64K, 64K–128K, 128K–200K
- Packed to **128K context** (long context length)

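Because tokenizing terabyte-scale corpora is expensive, the byte-length filter can run before tokenization to discard documents that cannot yield long sequences. A rough sketch, using the thresholds listed above:

```python
# Sketch of the pre-tokenization byte-length filter (approximately 32 KB – 800 KB).
MIN_BYTES = 32 * 1024
MAX_BYTES = 800 * 1024

def passes_byte_filter(text: str) -> bool:
    """Keep only documents whose raw UTF-8 size falls inside the long-context window."""
    return MIN_BYTES <= len(text.encode("utf-8")) <= MAX_BYTES
```
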
### Packing / Sequence Construction

Packing concatenates tokenized samples sequentially until the target context length is reached (a minimal sketch follows the list):

- `max_seq_len = 128K`
- Short packing: `context_len = 8K`
- Long packing: `context_len = 128K`

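A minimal greedy-packing sketch under these parameters. How the released pipeline handles documents longer than the window, and how it assigns `position_ids` (restarting per document vs. running across the whole pack), are assumptions here.

```python
# Greedy packing sketch: concatenate tokenized documents until `context_len` is full.
# Assumptions: over-long documents are truncated to the window, and position_ids
# restart at 0 for each document inside a pack.
def pack_sequences(docs, context_len):
    input_ids, position_ids = [], []
    for doc in docs:                         # doc: list[int] of token ids (BOS ... EOS)
        doc = doc[:context_len]              # assumption: truncate documents longer than the window
        if input_ids and len(input_ids) + len(doc) > context_len:
            yield {"input_ids": input_ids, "position_ids": position_ids}
            input_ids, position_ids = [], []
        input_ids.extend(doc)
        position_ids.extend(range(len(doc)))
    if input_ids:
        yield {"input_ids": input_ids, "position_ids": position_ids}
```

For this dataset, `context_len` would be 8,192 (8K) for the short component and 131,072 (128K) for the long component.
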
---

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{wang_chen_mix_context_post_training_128k_2026,
  author    = {Qi Wang and Lizhang Chen},
  title     = {Mix-Context Post-Training Dataset for 128K Context Extension},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/ghostcc3/mix-context-post-training-128k}
}
```