---
language:
- en
license: mit
task_categories:
- text-generation
tags:
- long-context
- post-training
- context-window-extension
- packed-sequences
- continual-training
pretty_name: Mix-Context Post-Training 128K
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: position_ids
    sequence: int64
  splits:
  - name: train
    num_bytes: 113246784000
    num_examples: 72000
  download_size: 34848646144
  dataset_size: 113246784000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Mix-Context Post-Training Dataset for 128K Context Extension

## Overview

**Mix-Context Post-Training 128K** is a dataset designed specifically for **post-training context window extension** of pretrained LLMs.

It targets the stage *after base pretraining*, where a model is adapted to operate over **much longer contexts (up to 128K tokens)** while preserving short-context behavior. The dataset mixes short- and long-context packed sequences with a controlled length distribution to support:

- Post-training context window extension
- Length generalization / robustness evaluation
- Continued training after positional / RoPE scaling methods

If you use this dataset for post-training, context window extension, or evaluation, **please cite this dataset** (see Citation).

## Sequence Length Distribution and Data Sources

| Context Type | Token Length Range | Packed Context Length | Samples | Data Source |
|-------------|-------------------|----------------------|---------|-------------|
| Short | 64 – 2,048 | 8K | 8,000 | FineWeb-Edu (sample/10BT) |
| Short | 2,048 – 4,096 | 8K | 8,000 | FineWeb-Edu (sample/10BT) |
| Short | 4,096 – 9,216 | 8K | 16,000 | FineWeb-Edu (sample/10BT) |
| Long | 8K – 32K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Wikipedia, Common Crawl) |
| Long | 32K – 64K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Wikipedia, Common Crawl) |
| Long | 64K – 128K | 128K | 16,000 | RedPajama-Data-1T (arXiv, Common Crawl) |
| Long | 128K – 200K | 128K | 8,000 | RedPajama-Data-1T (arXiv, Common Crawl) |
| **Total** | — | — | **72,000** | — |

---

## Dataset Format

Each example is a **packed sequence** ready for causal LM training:

- `input_ids`: token IDs
- `position_ids`: positional indices aligned to the packed sequence

**Note:** This dataset does **not** include raw text. It contains tokenized, packed sequences produced by the preprocessing pipeline.
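Since only `input_ids` and `position_ids` are shipped, downstream code typically needs to recover document boundaries inside each packed sequence. A minimal sketch, assuming the common packing convention that `position_ids` restart at 0 at the start of each packed document (verify this against the actual data before relying on it):

```python
from typing import List

def split_packed(input_ids: List[int], position_ids: List[int]) -> List[List[int]]:
    """Split one packed example into its constituent documents.

    Assumes position_ids reset to 0 at each packed-document start
    (a common convention; an assumption, not confirmed by this card).
    """
    docs: List[List[int]] = []
    current: List[int] = []
    for tok, pos in zip(input_ids, position_ids):
        if pos == 0 and current:
            docs.append(current)
            current = []
        current.append(tok)
    if current:
        docs.append(current)
    return docs

# Synthetic packed example: two documents of lengths 3 and 2
ids = [101, 7, 102, 101, 102]
pos = [0, 1, 2, 0, 1]
print(split_packed(ids, pos))  # [[101, 7, 102], [101, 102]]
```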

---

## Construction Summary (High-Level)

This dataset is generated by:
1. Downloading the public corpora used for the short- and long-context components
2. Tokenizing with a specified tokenizer (default in scripts: `meta-llama/Meta-Llama-3-8B`)
3. Filtering and bucketing by token length
4. Packing sequences to target context windows
5. Concatenating short- and long-context components into the final dataset

### Tokenizer
- Tokenizer name/path: `meta-llama/Meta-Llama-3-8B`
- Each text is encoded with explicit BOS/EOS:
  - `BOS + text + EOS`
- Length statistics and buckets are **tokenizer-dependent**
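The BOS/EOS wrapping above can be sketched as follows; `encode` stands in for any tokenizer call that returns token IDs without special tokens (e.g. a Hugging Face tokenizer's `encode(text, add_special_tokens=False)`), and the toy character-level tokenizer is purely illustrative:

```python
from typing import Callable, List

def encode_with_specials(
    text: str,
    encode: Callable[[str], List[int]],
    bos_id: int,
    eos_id: int,
) -> List[int]:
    """Encode text as BOS + tokens + EOS, matching the scheme above."""
    return [bos_id] + encode(text) + [eos_id]

# Toy "tokenizer": one token per character code (illustration only)
toy_encode = lambda s: [ord(c) for c in s]
print(encode_with_specials("ab", toy_encode, 1, 2))  # [1, 97, 98, 2]
```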

### Short-Context Component
- Source: FineWeb-Edu (`HuggingFaceFW/fineweb-edu`, `sample/10BT`)
- Bucketed by token length (target sample sizes):
  - 64–2,048: 8,000
  - 2,048–4,096: 8,000
  - 4,096–9,216: 16,000
- Packed to **8K context** (short context length)
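The bucketing step amounts to a simple range lookup. A sketch using the boundaries from the table (the exact boundary convention, e.g. whether upper bounds are inclusive, is an assumption here):

```python
# Token-length buckets from the table above; upper bounds treated as
# exclusive, which is an assumption about the original pipeline.
SHORT_BUCKETS = [(64, 2048), (2048, 4096), (4096, 9216)]

def short_bucket(n_tokens: int):
    """Return the (lo, hi) bucket containing n_tokens, or None if out of range."""
    for lo, hi in SHORT_BUCKETS:
        if lo <= n_tokens < hi:
            return (lo, hi)
    return None

print(short_bucket(100))     # (64, 2048)
print(short_bucket(5000))    # (4096, 9216)
print(short_bucket(10_000))  # None
```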

### Long-Context Component
- Source: RedPajama-Data-1T (`togethercomputer/RedPajama-Data-1T`)
- Splits used:
  - `arxiv`
  - `wikipedia`
  - `common_crawl` (subset used in preprocessing)
- Documents are filtered before tokenization by raw byte length (approx):
  - min: 32 KB
  - max: 800 KB
- After tokenization, long sequences are filtered and bucketed in token ranges:
  - 8K–32K, 32K–64K, 64K–128K, 128K–200K
- Packed to **128K context** (long context length)
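The pre-tokenization byte filter described above is straightforward to express; this sketch uses UTF-8 byte length, which is an assumption about how "raw byte length" was measured:

```python
MIN_BYTES = 32 * 1024   # 32 KB lower bound from the list above
MAX_BYTES = 800 * 1024  # 800 KB upper bound

def passes_byte_filter(text: str) -> bool:
    """Pre-tokenization filter on raw byte length (UTF-8 assumed)."""
    n = len(text.encode("utf-8"))
    return MIN_BYTES <= n <= MAX_BYTES

print(passes_byte_filter("x" * 40_000))  # True  (40,000 B is within range)
print(passes_byte_filter("x" * 1_000))   # False (below the 32 KB minimum)
```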

### Packing / Sequence Construction
Packing concatenates tokenized samples sequentially until reaching `max_seq_len`:
- `max_seq_len = 128K`
- Short packing `context_len = 8K`
- Long packing `context_len = 128K`
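The packing rule above can be sketched as a greedy loop: samples are appended to the current context until the next one would overflow, at which point a new context begins. Details such as padding the final context or handling samples longer than `context_len` are omitted and left as assumptions:

```python
from typing import Iterable, List

def pack(samples: Iterable[List[int]], context_len: int) -> List[List[int]]:
    """Greedily concatenate tokenized samples into fixed-size contexts.

    A sample that would overflow the current context starts a new one;
    each sample is assumed to fit within context_len on its own.
    """
    packed: List[List[int]] = []
    current: List[int] = []
    for sample in samples:
        if len(current) + len(sample) > context_len:
            packed.append(current)
            current = []
        current.extend(sample)
    if current:
        packed.append(current)
    return packed

docs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(pack(docs, 5))  # [[1, 2, 3, 4, 5], [6, 7, 8, 9]]
```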

---

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{wang_chen_mix_context_post_training_128k_2026,
  author       = {Qi Wang and Lizhang Chen},
  title        = {Mix-Context Post-Training Dataset for 128K Context Extension},
  year         = {2026},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/ghostcc3/mix-context-post-training-128k}
}