ghostcc3 committed
Commit c02845f · verified · 1 Parent(s): 46b65cd

Update README.md

Files changed (1):
  1. README.md +1 -21
README.md CHANGED

@@ -67,7 +67,7 @@ This dataset is generated by:
  5. Concatenating short- and long-context components into the final dataset
 
  ### Tokenizer
- - Tokenizer name/path: `meta-llama/Meta-Llama-3-8B` (default; configurable in preprocessing)
+ - Tokenizer name/path: `meta-llama/Meta-Llama-3-8B`
  - Each text is encoded with explicit BOS/EOS:
    - `BOS + text + EOS`
  - Length statistics and buckets are **tokenizer-dependent**

@@ -101,26 +101,6 @@ Packing concatenates tokenized samples sequentially until reaching `max_seq_len`
 
  ---
 
- ## Intended Use
-
- This dataset is intended for:
- - **Post-training context window extension** (e.g., extending an 8K/16K model to 128K)
- - Continued training after applying positional / RoPE scaling techniques
- - Long-context training ablations and evaluation
-
- This dataset is **not intended** to be a standalone base pretraining corpus.
-
- ---
-
- ## Limitations
-
- - Packing alters natural document boundaries
- - Tokenization, length distribution, and behavior depend on tokenizer choice/version
- - This artifact provides tokenized sequences, not raw text
- - Upstream corpora have their own limitations and licenses/terms
-
- ---
-
  ## Citation
 
  If you use this dataset, please cite:
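
The Tokenizer section shown in the diff states that each text is encoded as `BOS + text + EOS` with the `meta-llama/Meta-Llama-3-8B` tokenizer. A minimal sketch of that encoding step, assuming the Hugging Face `transformers` tokenizer; the actual preprocessing script is not part of this commit:

```python
from transformers import AutoTokenizer

# Tokenizer named in the README; the model repository is gated, so access must be granted first.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def encode_with_bos_eos(text: str) -> list[int]:
    """Encode one document as BOS + text + EOS, as described in the Tokenizer section."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    return [tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id]
```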
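
The second hunk's context line refers to packing that concatenates tokenized samples sequentially until reaching `max_seq_len`. A rough illustration of that idea, assuming simple greedy concatenation with leftover tokens dropped; the dataset's actual packing code is not shown in this diff:

```python
def pack_sequences(token_lists: list[list[int]], max_seq_len: int) -> list[list[int]]:
    """Greedily concatenate tokenized samples into fixed-length packed sequences."""
    packed, buffer = [], []
    for ids in token_lists:
        buffer.extend(ids)
        # Emit a full-length sequence as soon as the buffer is long enough.
        while len(buffer) >= max_seq_len:
            packed.append(buffer[:max_seq_len])
            buffer = buffer[max_seq_len:]
    # Any remainder shorter than max_seq_len is dropped here; a real pipeline
    # might pad it or carry it over into the next sequence instead.
    return packed
```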
 