Commit 390f63f by fs90 (verified) · Parent: 17deb4c

Upload README.md with huggingface_hub
---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- fs90/nano-start-data
task_categories:
- text-generation
pretty_name: Nano-Start Tokenized Dataset
tags:
- educational
- llm-training
- tokenized
- binary
- oxidizr
---

# Nano-Start Tokenized Dataset

Pre-tokenized binary files, ready for training with [oxidizr](https://github.com/farhan-syah/oxidizr). This is the tokenized version of [fs90/nano-start-data](https://huggingface.co/datasets/fs90/nano-start-data).

## What is Tokenization?

Language models don't process text directly; they work with numbers called **tokens**. Tokenization converts text into token IDs:

```
"Hello world" → [9906, 1917]
```

This dataset is **pre-tokenized** for simplicity: download it and start training immediately. To learn how tokenization works and how to create your own datasets, see the [splintr](https://github.com/farhan-syah/splintr) project.
## Quick Start

**Option A: Using the `hf` CLI**
```bash
pip install huggingface_hub
hf download fs90/nano-start-data-bin --local-dir data/nano-start/tokenized --repo-type dataset
```

**Option B: Direct download**

Download `combined.bin` from the [Files tab](https://huggingface.co/datasets/fs90/nano-start-data-bin/tree/main) and place it in your project.

**Train with oxidizr:**
```bash
cargo run --release -- \
  --config models/nano-start.yaml \
  --data data/nano-start/tokenized/combined.bin
```

## Files

Download `combined.bin` for training; it contains all of the data merged together:

| File | Size | Tokens | Description |
|------|------|--------|-------------|
| **`combined.bin`** | 25,516 bytes | 6,379 | **All data merged (recommended)** |

### Individual Files (Optional)

You can also train on individual subsets; training on different data produces different model behavior:

| File | Size | Tokens | Description |
|------|------|--------|-------------|
| `completions.bin` | 8,788 bytes | 2,197 | Factual statements only |
| `qa.bin` | 11,036 bytes | 2,759 | Q&A pairs only |
| `chat.bin` | 5,692 bytes | 1,423 | Multi-turn conversations only |

Experiment with different files to see how the training data affects model behavior!

## Binary Format

Each `.bin` file contains raw token IDs:

- **Encoding**: u32 (32-bit unsigned integer)
- **Byte order**: Little-endian
- **Headers**: None (raw token stream)
- **Tokenizer**: `cl100k_base` (OpenAI, vocab size: 100,331)
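Producing a file in this format yourself takes only the standard library. A minimal sketch (the file name and token IDs below are made up for illustration):

```python
import struct

def write_tokens(path, tokens):
    """Write token IDs as raw little-endian u32 values, no header."""
    with open(path, "wb") as f:
        f.write(struct.pack(f"<{len(tokens)}I", *tokens))

# Three hypothetical token IDs -> a 12-byte file (4 bytes per token)
write_tokens("tiny.bin", [9906, 1917, 100257])
```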

### Reading the Data

```python
import struct

def read_tokens(path):
    """Read raw little-endian u32 token IDs from a .bin file."""
    with open(path, "rb") as f:
        data = f.read()
    return list(struct.unpack(f"<{len(data) // 4}I", data))

tokens = read_tokens("combined.bin")
print(f"Total tokens: {len(tokens)}")
```
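If NumPy is available (an assumption; the standard library alone is enough), the same format can be loaded in one call with `np.fromfile` and an explicit little-endian dtype:

```python
import numpy as np

# Create a tiny stand-in file in the same raw u32 format, then load it
np.array([9906, 1917, 100257], dtype="<u4").tofile("demo.bin")

tokens = np.fromfile("demo.bin", dtype="<u4")
print(tokens.tolist())  # [9906, 1917, 100257]
```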

## Tokenizer Details

| Property | Value |
|----------|-------|
| Tokenizer | `cl100k_base` (OpenAI GPT-4/GPT-3.5) |
| Vocab size | 100,331 |
| EOS token | `<\|endoftext\|>` (ID: 100257) |

### Special Tokens

| Token | ID | Purpose |
|-------|------|---------|
| `<\|endoftext\|>` | 100257 | Separates examples |
| `<\|system\|>` | 100277 | System instructions |
| `<\|user\|>` | 100278 | User input |
| `<\|assistant\|>` | 100279 | Model response |
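To make concrete how these IDs frame a conversation in the raw token stream, here is a sketch; the helper function and the placeholder content IDs are illustrative assumptions, not oxidizr's or splintr's actual API:

```python
# Special-token IDs from the table above
SYSTEM, USER, ASSISTANT, EOS = 100277, 100278, 100279, 100257

def frame_turn(role_id, content_ids):
    """Prefix already-tokenized content with its role marker."""
    return [role_id] + content_ids

# Small placeholder IDs stand in for real tokenized text
stream = (
    frame_turn(SYSTEM, [1, 2, 3])
    + frame_turn(USER, [4, 5])
    + frame_turn(ASSISTANT, [6, 7, 8])
    + [EOS]  # <|endoftext|> separates this example from the next
)
print(stream)  # [100277, 1, 2, 3, 100278, 4, 5, 100279, 6, 7, 8, 100257]
```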

## Source Data

To see the human-readable text before tokenization, visit [fs90/nano-start-data](https://huggingface.co/datasets/fs90/nano-start-data).

## Related Resources

- **Raw data**: [fs90/nano-start-data](https://huggingface.co/datasets/fs90/nano-start-data)
- **Training framework**: [oxidizr](https://github.com/farhan-syah/oxidizr)
- **Tokenization**: [splintr](https://github.com/farhan-syah/splintr) - learn how to tokenize your own data

## License

MIT License

## Citation

```bibtex
@dataset{nano_start_bin_2024,
  title={Nano-Start Tokenized Dataset},
  author={fs90},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/fs90/nano-start-data-bin}
}
```