---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- fs90/nano-start-data
task_categories:
- text-generation
pretty_name: Nano-Start Tokenized Dataset
tags:
- educational
- llm-training
- tokenized
- binary
- oxidizr
---
# Nano-Start Tokenized Dataset
Pre-tokenized binary files ready for training with [oxidizr](https://github.com/farhan-syah/oxidizr). This is the tokenized version of [fs90/nano-start-data](https://huggingface.co/datasets/fs90/nano-start-data).
## What is Tokenization?
Language models don't process text directly - they operate on integer IDs called **tokens**. Tokenization converts text into these token IDs:
```
"Hello world" → [9906, 1917]
```
This dataset is **pre-tokenized** for simplicity - download and start training immediately. To learn how tokenization works and create your own datasets, see the [splintr](https://github.com/farhan-syah/splintr) project.
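You can reproduce the example above with OpenAI's `tiktoken` library, which ships the same `cl100k_base` encoding this dataset uses (a quick sketch for illustration; `tiktoken` is not required for training):
```python
import tiktoken

# Load the same encoding used to produce this dataset.
enc = tiktoken.get_encoding("cl100k_base")

print(enc.encode("Hello world"))  # [9906, 1917]
print(enc.decode([9906, 1917]))   # "Hello world"
```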
## Quick Start
**Option A: Using the `hf` CLI**
```bash
pip install -U huggingface_hub  # recent versions ship the hf CLI
hf download fs90/nano-start-data-bin --local-dir data/nano-start/tokenized --repo-type dataset
```
**Option B: Direct download**
Download `combined.bin` from the [Files tab](https://huggingface.co/datasets/fs90/nano-start-data-bin/tree/main) and place it in your project.
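**Option C: From Python**
A sketch using `huggingface_hub`'s `snapshot_download`, targeting the same directory as Option A:
```python
from huggingface_hub import snapshot_download

# Mirror the dataset repo into the directory the
# training command below expects.
snapshot_download(
    repo_id="fs90/nano-start-data-bin",
    repo_type="dataset",
    local_dir="data/nano-start/tokenized",
)
```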
**Train with oxidizr:**
```bash
cargo run --release -- \
--config models/nano-start.yaml \
--data data/nano-start/tokenized/combined.bin
```
## Files
Download `combined.bin` for training - it contains all data merged together:
| File | Size | Tokens | Description |
|------|------|--------|-------------|
| **`combined.bin`** | 25,516 bytes | 6,379 | **All data merged (recommended)** |
### Individual Files (Optional)
You can also train on individual subsets. Training on different data produces different model behavior:
| File | Size | Tokens | Description |
|------|------|--------|-------------|
| `completions.bin` | 8,788 bytes | 2,197 | Factual statements only |
| `qa.bin` | 11,036 bytes | 2,759 | Q&A pairs only |
| `chat.bin` | 5,692 bytes | 1,423 | Multi-turn conversations only |
Experiment with different files to see how the training data affects model behavior!
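Since each token occupies exactly 4 bytes (see the binary format below), you can sanity-check a download by dividing its file size by 4. A quick sketch, assuming the files sit in the current directory:
```python
import os

# One token = one little-endian u32 = 4 bytes, so tokens == size / 4.
expected = {
    "combined.bin": 6379,
    "completions.bin": 2197,
    "qa.bin": 2759,
    "chat.bin": 1423,
}
for name, count in expected.items():
    size = os.path.getsize(name)
    assert size == count * 4, f"{name}: {size} bytes != {count} tokens * 4"
print("All files check out.")
```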
## Binary Format
Each `.bin` file contains raw token IDs:
- **Encoding**: u32 (32-bit unsigned integer)
- **Byte order**: Little-endian
- **Headers**: None (raw token stream)
- **Tokenizer**: `cl100k_base` (OpenAI, vocab size: 100,331)
### Reading the Data
```python
import struct

def read_tokens(path):
    """Read a raw stream of little-endian u32 token IDs."""
    with open(path, "rb") as f:
        data = f.read()
    # "<" = little-endian, "I" = unsigned 32-bit int, one per 4 bytes.
    return list(struct.unpack(f"<{len(data) // 4}I", data))

tokens = read_tokens("combined.bin")
print(f"Total tokens: {len(tokens)}")
```
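With NumPy installed, the same read is a one-liner (an equivalent sketch, not part of the original tooling):
```python
import numpy as np

# "<u4" pins little-endian u32 regardless of the host platform.
tokens = np.fromfile("combined.bin", dtype="<u4")
print(f"Total tokens: {tokens.size}")
```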
## Tokenizer Details
| Property | Value |
|----------|-------|
| Tokenizer | `cl100k_base` (OpenAI GPT-4/GPT-3.5) |
| Vocab size | 100,331 |
| EOS token | `<\|endoftext\|>` (ID: 100257) |
### Special Tokens
| Token | ID | Purpose |
|-------|------|---------|
| `<\|endoftext\|>` | 100257 | Separates examples |
| `<\|system\|>` | 100277 | System instructions |
| `<\|user\|>` | 100278 | User input |
| `<\|assistant\|>` | 100279 | Model response |
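Because `<|endoftext|>` (ID 100257) separates examples, the flat stream can be split back into per-example token lists. A minimal sketch, reusing the `read_tokens` helper from above:
```python
EOS = 100257  # <|endoftext|>, per the table above

def split_examples(tokens):
    """Split a flat token stream into per-example lists on the EOS token."""
    examples, current = [], []
    for tok in tokens:
        if tok == EOS:
            if current:
                examples.append(current)
            current = []
        else:
            current.append(tok)
    if current:  # trailing example without a final EOS
        examples.append(current)
    return examples

examples = split_examples(read_tokens("combined.bin"))
print(f"Examples: {len(examples)}")
```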
## Source Data
To browse the human-readable text before tokenization, see [fs90/nano-start-data](https://huggingface.co/datasets/fs90/nano-start-data).
## Related Resources
- **Raw data**: [fs90/nano-start-data](https://huggingface.co/datasets/fs90/nano-start-data)
- **Training framework**: [oxidizr](https://github.com/farhan-syah/oxidizr)
- **Tokenization**: [splintr](https://github.com/farhan-syah/splintr) - Learn how to tokenize your own data
## License
MIT License
## Citation
```bibtex
@dataset{nano_start_bin_2024,
title={Nano-Start Tokenized Dataset},
author={fs90},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/datasets/fs90/nano-start-data-bin}
}
```