# Mix-Context Post-Training Dataset for 128K Context Extension

## Overview
Mix-Context Post-Training 128K is a dataset designed specifically for post-training context window extension of pretrained LLMs.
It targets the stage after base pretraining, where a model is adapted to operate over much longer contexts (up to 128K tokens) while preserving short-context behavior. The dataset mixes short- and long-context packed sequences with a controlled length distribution to support:
- Post-training context window extension
- Length generalization / robustness evaluation
- Continued training after positional / RoPE scaling methods
If you use this dataset for post-training, context window extension, or evaluation, please cite this dataset (see Citation).
## Sequence Length Distribution and Data Sources
| Context Type | Token Length Range | Packed Context Length | Samples | Data Source |
|---|---|---|---|---|
| Short | 64 – 2,048 | 8K | 8,000 | FineWeb-Edu (sample/10BT) |
| Short | 2,048 – 4,096 | 8K | 8,000 | FineWeb-Edu (sample/10BT) |
| Short | 4,096 – 9,216 | 8K | 16,000 | FineWeb-Edu (sample/10BT) |
| Long | 8K – 32K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Wikipedia, Common Crawl) |
| Long | 32K – 64K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Wikipedia, Common Crawl) |
| Long | 64K – 128K | 128K | 16,000 | RedPajama-Data-1T (arXiv, Common Crawl) |
| Long | 128K – 200K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Common Crawl) |
| Total | – | – | 72,000 | – |
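The bucket plan in the table can be written down as a small config structure and sanity-checked. This is an illustrative sketch, not code from the dataset's pipeline; the exact token-count conventions (e.g. 8K = 8,192, 200K = 204,800) are assumptions.

```python
# Hypothetical encoding of the bucket plan above:
# ((min_tokens, max_tokens), packed_context, target_samples)
SHORT_BUCKETS = [
    ((64, 2_048), "8K", 8_000),
    ((2_048, 4_096), "8K", 8_000),
    ((4_096, 9_216), "8K", 16_000),
]
LONG_BUCKETS = [
    ((8_192, 32_768), "128K", 8_000),
    ((32_768, 65_536), "128K", 8_000),
    ((65_536, 131_072), "128K", 16_000),
    ((131_072, 204_800), "128K", 8_000),
]

# The per-bucket targets should add up to the 72,000 total in the table.
total = sum(n for _, _, n in SHORT_BUCKETS + LONG_BUCKETS)
print(total)  # 72000
```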
## Dataset Format
Each example is a packed sequence ready for causal LM training:
- `input_ids`: token IDs
- `position_ids`: positional indices aligned to the packed sequence
Note: This dataset does not include raw text. It contains tokenized, packed sequences produced by the preprocessing pipeline.
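A consumer can rely on `input_ids` and `position_ids` being aligned per token. The sketch below checks that invariant on a toy record whose values are merely illustrative (128000 is the Llama-3 BOS ID; the other IDs are placeholders, not real data).

```python
# Toy record mimicking the per-example schema (illustrative values only).
example = {
    "input_ids": [128000, 791, 22765, 22195, 198],  # BOS + a few content tokens
    "position_ids": [0, 1, 2, 3, 4],                # index within the pack
}

def check_record(rec):
    """Basic sanity checks a consumer might run on each packed record."""
    assert len(rec["input_ids"]) == len(rec["position_ids"])  # aligned per token
    assert rec["position_ids"][0] == 0                        # pack starts at 0
    return len(rec["input_ids"])

print(check_record(example))  # 5
```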
## Construction Summary (High-Level)
This dataset is generated by:
- Downloading the public corpora used for the short- and long-context components
- Tokenizing with a specified tokenizer (default in scripts: `meta-llama/Meta-Llama-3-8B`)
- Filtering and bucketing by token length
- Packing sequences to target context windows
- Concatenating the short- and long-context components into the final dataset
## Tokenizer
- Tokenizer name/path: `meta-llama/Meta-Llama-3-8B`
- Each text is encoded with explicit BOS/EOS: `BOS + text + EOS`
- Length statistics and buckets are tokenizer-dependent
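The `BOS + text + EOS` rule can be sketched as a thin wrapper around any tokenizer. The real pipeline uses `meta-llama/Meta-Llama-3-8B`; here a stand-in tokenizer avoids downloading model files, and the special-token IDs (128000/128001 for Llama-3 BOS/EOS) are stated as an assumption.

```python
# Assumed Llama-3 special-token IDs (BOS = <|begin_of_text|>, EOS = <|end_of_text|>).
BOS_ID, EOS_ID = 128000, 128001

def encode_with_specials(text, tokenize):
    """Wrap a plain tokenization with explicit BOS/EOS, per the rule above."""
    return [BOS_ID] + tokenize(text) + [EOS_ID]

# Stand-in tokenizer: one fake ID per whitespace-separated word.
fake_tokenize = lambda s: [hash(w) % 1000 for w in s.split()]

ids = encode_with_specials("hello long context", fake_tokenize)
assert ids[0] == BOS_ID and ids[-1] == EOS_ID
print(len(ids))  # 5 (3 words + BOS + EOS)
```

With the real tokenizer, `tokenize` would be `tokenizer.encode(text, add_special_tokens=False)` so the wrapper controls the specials explicitly.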
## Short-Context Component
- Source: FineWeb-Edu (`HuggingFaceFW/fineweb-edu`, `sample/10BT`)
- Bucketed by token length (target sample sizes):
  - 64–2,048: 8,000
  - 2,048–4,096: 8,000
  - 4,096–9,216: 16,000
- Packed to 8K context (short context length)
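Bucketing by token length amounts to a range lookup per tokenized document. A minimal sketch, assuming half-open `[lo, hi)` boundaries (the card does not specify the boundary convention):

```python
# Short-context bucket ranges in tokens, assumed half-open [lo, hi).
BUCKETS = [(64, 2_048), (2_048, 4_096), (4_096, 9_216)]

def bucket_index(n_tokens, buckets=BUCKETS):
    """Return the index of the bucket containing n_tokens, or None if filtered out."""
    for i, (lo, hi) in enumerate(buckets):
        if lo <= n_tokens < hi:
            return i
    return None

assert bucket_index(100) == 0     # short document
assert bucket_index(3_000) == 1   # medium document
assert bucket_index(5_000) == 2   # longer document
assert bucket_index(10) is None   # below 64 tokens: dropped
```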
## Long-Context Component
- Source: RedPajama-Data-1T (`togethercomputer/RedPajama-Data-1T`)
- Splits used: `arxiv`, `wikipedia`, `common_crawl` (subset used in preprocessing)
- Documents are filtered before tokenization by raw byte length (approx.):
  - min: 32 KB
  - max: 800 KB
- After tokenization, long sequences are filtered and bucketed into token ranges:
  - 8K–32K, 32K–64K, 64K–128K, 128K–200K
- Packed to 128K context (long context length)
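The pre-tokenization byte filter can be sketched as a check on raw UTF-8 size. The KiB convention (KB = 1,024 bytes) and inclusive bounds are assumptions; the card only gives approximate limits.

```python
# Assumed byte-length window for long-context documents (KB = 1,024 bytes).
MIN_BYTES = 32 * 1024    # ~32 KB
MAX_BYTES = 800 * 1024   # ~800 KB

def passes_byte_filter(text: str) -> bool:
    """Keep documents whose raw UTF-8 size falls inside the window."""
    n = len(text.encode("utf-8"))
    return MIN_BYTES <= n <= MAX_BYTES

assert not passes_byte_filter("too short")            # tiny doc: dropped
assert passes_byte_filter("x" * (64 * 1024))          # 64 KB: kept
assert not passes_byte_filter("x" * (1024 * 1024))    # 1 MB: dropped
```

Filtering on bytes before tokenization is a cheap prefilter; the precise token-range buckets above are applied only after tokenization.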
## Packing / Sequence Construction
Packing concatenates tokenized samples sequentially until reaching `max_seq_len`:
- `max_seq_len` = 128K
- Short packing: `context_len` = 8K
- Long packing: `context_len` = 128K
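The packing step above can be sketched as greedy sequential concatenation into fixed-length contexts, emitting `position_ids` 0 to `context_len - 1` per pack. This assumes samples are split at pack boundaries and a trailing partial buffer is dropped; the actual pipeline may handle overflow differently.

```python
def pack(token_lists, context_len):
    """Greedily concatenate tokenized samples into fixed-length packed records."""
    packs, buf = [], []
    for toks in token_lists:
        buf.extend(toks)
        # Emit a full pack whenever the buffer reaches context_len.
        while len(buf) >= context_len:
            chunk, buf = buf[:context_len], buf[context_len:]
            packs.append({
                "input_ids": chunk,
                "position_ids": list(range(context_len)),
            })
    return packs  # any trailing partial buffer is dropped in this sketch

packs = pack([[1, 2, 3], [4, 5], [6, 7, 8, 9]], context_len=4)
assert [p["input_ids"] for p in packs] == [[1, 2, 3, 4], [5, 6, 7, 8]]
assert packs[0]["position_ids"] == [0, 1, 2, 3]
```

With `context_len = 8_192` (short) or `context_len = 131_072` (long), the same loop produces the 8K and 128K packed sequences described above.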
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{wang_chen_mix_context_post_training_128k_2026,
  author    = {Qi Wang and Lizhang Chen},
  title     = {Mix-Context Post-Training Dataset for 128K Context Extension},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/ghostcc3/mix-context-post-training-128k}
}
```