ghostcc3 committed · Commit 031fb70 · verified · Parent: d20902c

Update README.md

Files changed (1): README.md (+116 -0)
---
language:
- en
license: mit
task_categories:
- text-generation
tags:
- long-context
- post-training
- context-window-extension
- packed-sequences
- continual-training
pretty_name: Mix-Context Post-Training 128K
dataset_info:
  features:
  - name: input_ids
# … (remaining dataset_info and configs fields elided in the diff) …
  - split: train
    path: data/train-*
---

# Mix-Context Post-Training Dataset for 128K Context Extension

## Overview

**Mix-Context Post-Training 128K** is a dataset designed specifically for **post-training context window extension** of pretrained LLMs.

It targets the stage *after base pretraining*, where a model is adapted to operate over **much longer contexts (up to 128K tokens)** while preserving short-context behavior. The dataset mixes short- and long-context packed sequences with a controlled length distribution to support:

- Post-training context window extension
- Length generalization / robustness evaluation
- Continued training after positional / RoPE scaling methods

If you use this dataset for post-training, context window extension, or evaluation, **please cite this dataset** (see Citation).

---

## Dataset Format

Each example is a **packed sequence** ready for causal LM training:

- `input_ids`: token IDs
- `position_ids`: positional indices aligned to the packed sequence

**Note:** This dataset does **not** include raw text. It contains tokenized, packed sequences produced by the preprocessing pipeline.
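
Because `position_ids` are stored alongside `input_ids`, packed-document boundaries can be recovered without the raw text. A minimal sketch, assuming `position_ids` restart at 0 at each packed document (an assumption about the preprocessing convention, not stated explicitly above):

```python
def document_spans(position_ids):
    """Recover (start, end) spans of the packed documents in one row,
    assuming positions restart at 0 at each document boundary."""
    spans, start = [], 0
    for i in range(1, len(position_ids)):
        if position_ids[i] == 0:  # a new packed document begins here
            spans.append((start, i))
            start = i
    spans.append((start, len(position_ids)))
    return spans

# Two packed documents of lengths 4 and 3:
print(document_spans([0, 1, 2, 3, 0, 1, 2]))  # [(0, 4), (4, 7)]
```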

---

## Construction Summary (High-Level)

This dataset is generated by:

1. Downloading public corpora used for short- and long-context content
2. Tokenizing with a specified tokenizer (default in scripts: `meta-llama/Meta-Llama-3-8B`)
3. Filtering and bucketing by token length
4. Packing sequences to target context windows
5. Concatenating short- and long-context components into the final dataset

### Tokenizer

- Tokenizer name/path: `meta-llama/Meta-Llama-3-8B` (default; configurable in preprocessing)
- Each text is encoded with explicit BOS/EOS:
  - `BOS + text + EOS`
- Length statistics and buckets are **tokenizer-dependent**
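
The `BOS + text + EOS` framing can be written as a small tokenizer-agnostic helper. A sketch: `encode` stands in for any tokenizer's text-to-IDs call *without* special tokens, and the character-level "tokenizer" below is purely illustrative:

```python
def encode_with_bos_eos(encode, text, bos_id, eos_id):
    """Encode `text` to token IDs and frame it with explicit BOS/EOS."""
    return [bos_id] + encode(text) + [eos_id]

# Toy character-level "tokenizer" for illustration only:
ids = encode_with_bos_eos(lambda t: [ord(c) for c in t], "hi", bos_id=1, eos_id=2)
print(ids)  # [1, 104, 105, 2]
```

With a real tokenizer the EOS often has to be appended explicitly, since many tokenizers (including the Llama family) add only BOS by default.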

### Short-Context Component

- Source: FineWeb-Edu (`HuggingFaceFW/fineweb-edu`, `sample/10BT`)
- Bucketed by token length (target sample sizes):
  - 64–2,048 tokens: 8,000 samples
  - 2,048–4,096 tokens: 8,000 samples
  - 4,096–9,216 tokens: 16,000 samples
- Packed to **8K context** (short context length)
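
The bucketing above can be sketched with `bisect`. Note the handling of exact boundary lengths (e.g. whether 2,048 falls into the lower or upper bucket) is an assumption, not something the summary specifies:

```python
from bisect import bisect_right

# Bucket edges (in tokens) for the short-context component.
EDGES = [64, 2048, 4096, 9216]

def bucket_of(n_tokens):
    """Return the bucket index for a document of `n_tokens` tokens,
    or None when the length falls outside [64, 9216)."""
    i = bisect_right(EDGES, n_tokens) - 1
    return i if 0 <= i < len(EDGES) - 1 else None

print(bucket_of(100))   # 0  (64–2,048)
print(bucket_of(3000))  # 1  (2,048–4,096)
print(bucket_of(5000))  # 2  (4,096–9,216)
print(bucket_of(10))    # None (too short)
```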

### Long-Context Component

- Source: RedPajama-Data-1T (`togethercomputer/RedPajama-Data-1T`)
- Splits used:
  - `arxiv`
  - `wikipedia`
  - `common_crawl` (subset used in preprocessing)
- Documents are filtered before tokenization by raw byte length (approximate):
  - min: 32 KB
  - max: 800 KB
- After tokenization, long sequences are filtered and bucketed into token ranges:
  - 8K–32K, 32K–64K, 64K–128K, 128K–200K
- Packed to **128K context** (long context length)
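
The pre-tokenization byte filter is cheap to reproduce; a minimal sketch (inclusive bounds are an assumption, and the summary itself marks the thresholds as approximate):

```python
def passes_byte_filter(text, min_bytes=32 * 1024, max_bytes=800 * 1024):
    """Keep documents whose UTF-8 size lies in roughly [32 KB, 800 KB],
    a cheap proxy for token length applied before tokenizing."""
    n = len(text.encode("utf-8"))
    return min_bytes <= n <= max_bytes

print(passes_byte_filter("too short"))    # False (far below 32 KB)
print(passes_byte_filter("x" * 100_000))  # True (~100 KB)
```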

### Packing / Sequence Construction

Packing concatenates tokenized samples sequentially until reaching `max_seq_len`:

- `max_seq_len = 128K`
- Short packing: `context_len = 8K`
- Long packing: `context_len = 128K`
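
The packing step can be sketched as a greedy loop. The per-document restart of `position_ids` and the handling of the final partial sequence are assumptions here; the real pipeline may pad, drop, or split differently:

```python
def pack(tokenized_docs, context_len):
    """Greedy packing: append whole documents to the current sequence
    until the next one would overflow `context_len`, then start a new
    sequence. position_ids restart at 0 per document (assumed
    convention); the trailing partial sequence is kept unpadded."""
    sequences, cur_ids, cur_pos = [], [], []
    for doc in tokenized_docs:
        if cur_ids and len(cur_ids) + len(doc) > context_len:
            sequences.append({"input_ids": cur_ids, "position_ids": cur_pos})
            cur_ids, cur_pos = [], []
        cur_ids += doc
        cur_pos += list(range(len(doc)))
    if cur_ids:
        sequences.append({"input_ids": cur_ids, "position_ids": cur_pos})
    return sequences

# Three short "documents" packed to context_len=6:
out = pack([[10, 11, 12], [20, 21], [30, 31, 32]], context_len=6)
print([s["input_ids"] for s in out])  # [[10, 11, 12, 20, 21], [30, 31, 32]]
print(out[0]["position_ids"])         # [0, 1, 2, 0, 1]
```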

---

## Intended Use

This dataset is intended for:

- **Post-training context window extension** (e.g., extending an 8K/16K model to 128K)
- Continued training after applying positional / RoPE scaling techniques
- Long-context training ablations and evaluation

This dataset is **not intended** to be a standalone base pretraining corpus.

---

## Limitations

- Packing alters natural document boundaries
- Tokenization, length distribution, and behavior depend on tokenizer choice/version
- This artifact provides tokenized sequences, not raw text
- Upstream corpora have their own limitations and licenses/terms

---

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{wang_chen_mix_context_post_training_128k_2026,
  author    = {Qi Wang and Lizhang Chen},
  title     = {Mix-Context Post-Training Dataset for 128K Context Extension},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/ghostcc3/mix-context-post-training-128k}
}
```