---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- question-answering
pretty_name: Nano-Start Learning Dataset
tags:
- educational
- llm-training
- chat
- completions
- oxidizr
configs:
- config_name: completions
data_files:
- split: train
path: completions.jsonl
- config_name: qa
data_files:
- split: train
path: qa.jsonl
- config_name: chat
data_files:
- split: train
path: chat.jsonl
---
# Nano-Start Learning Dataset
A small educational dataset for learning how to train language models from scratch.
## Dataset Description
This dataset contains simple, factual examples designed to demonstrate LLM training concepts:
- **Completions**: Factual statements the model learns to continue
- **Q&A**: Question-answer pairs using chat special tokens
- **Chat**: Multi-turn conversations with system prompts
The dataset is intentionally small (~276 examples) so models can be trained quickly on CPU. The goal is education, not production-quality models.
## Dataset Statistics
| Split | Examples | Description |
|-------|----------|-------------|
| completions | 129 | Factual statements about geography, math, science, etc. |
| qa | 96 | Q&A pairs with `<\|user\|>` and `<\|assistant\|>` tokens |
| chat | 51 | Multi-turn conversations with `<\|system\|>` prompts |
## Data Format
All files are JSONL (JSON Lines) with a single `text` field:
### Completions
```json
{"text": "The capital of France is Paris. Paris is known for the Eiffel Tower."}
{"text": "1 + 1 = 2. This is the most basic addition problem in mathematics."}
{"text": "Water boils at 100 degrees Celsius at sea level."}
```
### Q&A
```json
{"text": "<|user|>What is 1+1?<|assistant|>1+1 equals 2."}
{"text": "<|user|>What is the capital of France?<|assistant|>The capital of France is Paris."}
```
### Chat
```json
{"text": "<|system|>You are a helpful assistant.<|user|>Hello!<|assistant|>Hello! How can I help you today?"}
{"text": "<|system|>You are a math tutor.<|user|>What is 5x5?<|assistant|>5x5 equals 25."}
```
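Because every split is plain JSONL with one JSON object per line, you can inspect the raw files without any special tooling. A minimal sketch (the helper name `load_jsonl` is illustrative, not part of this repository):

```python
import json

def load_jsonl(path):
    """Read a JSONL file into a list of dicts, one per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Example: rows = load_jsonl("raw/completions.jsonl")
# Each row is a dict with a single "text" key.
```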
## Special Tokens
The dataset uses OpenAI-compatible special tokens from the `cl100k_base` vocabulary:
| Token | ID | Purpose |
|-------|------|---------|
| `<\|endoftext\|>` | 100257 | End of document (added during tokenization) |
| `<\|system\|>` | 100277 | System instructions |
| `<\|user\|>` | 100278 | User input |
| `<\|assistant\|>` | 100279 | Model response |
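A tokenizer must treat each of these markers as a single atomic token rather than splitting it into characters. The sketch below illustrates the idea with a plain regex splitter; the `SPECIAL` table and `split_on_special` helper are assumptions for demonstration, not the actual splintr or tiktoken API:

```python
import re

# Special-token IDs as listed in the table above.
SPECIAL = {
    "<|endoftext|>": 100257,
    "<|system|>": 100277,
    "<|user|>": 100278,
    "<|assistant|>": 100279,
}

def split_on_special(text):
    """Split text so each special token becomes a standalone piece."""
    pattern = "(" + "|".join(re.escape(tok) for tok in SPECIAL) + ")"
    return [piece for piece in re.split(pattern, text) if piece]

parts = split_on_special("<|user|>What is 1+1?<|assistant|>1+1 equals 2.")
# parts == ["<|user|>", "What is 1+1?", "<|assistant|>", "1+1 equals 2."]
```

Special-token pieces map directly to their fixed IDs; the remaining text pieces go through ordinary BPE tokenization.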
## Usage
### Download
**Option A: Using the `hf` CLI**
```bash
pip install huggingface_hub
hf download fs90/nano-start-data --local-dir raw --repo-type dataset
```
**Option B: Direct download**
Download files from the [Files tab](https://huggingface.co/datasets/fs90/nano-start-data/tree/main).
### View with Python
```python
from datasets import load_dataset
ds = load_dataset("fs90/nano-start-data", "completions")
for example in ds["train"].select(range(3)):
    print(example["text"])
```
### For Training
This raw data shows what the text looks like **before tokenization**. For training, use the pre-tokenized version: [fs90/nano-start-data-bin](https://huggingface.co/datasets/fs90/nano-start-data-bin)
To learn how to tokenize your own data, see the [splintr](https://github.com/farhan-syah/splintr) project.
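As the special-tokens table notes, `<|endoftext|>` is appended during tokenization to mark document boundaries when examples are packed into one training stream. A minimal sketch of that packing step, assuming each document has already been tokenized into a list of IDs (the `pack` helper is illustrative, not the actual oxidizr or splintr implementation):

```python
# ID of <|endoftext|> in the cl100k_base vocabulary.
EOT = 100257

def pack(docs):
    """Concatenate tokenized documents, appending <|endoftext|> after each."""
    stream = []
    for ids in docs:
        stream.extend(ids)
        stream.append(EOT)
    return stream

print(pack([[1, 2], [3]]))  # [1, 2, 100257, 3, 100257]
```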
## Related Resources
- **Pre-tokenized data**: [fs90/nano-start-data-bin](https://huggingface.co/datasets/fs90/nano-start-data-bin)
- **Training framework**: [oxidizr](https://github.com/farhan-syah/oxidizr)
- **Tokenization**: [splintr](https://github.com/farhan-syah/splintr) - Learn how to tokenize your own data
## License
MIT License
## Citation
```bibtex
@dataset{nano_start_2024,
title={Nano-Start: Educational Dataset for LLM Training},
author={fs90},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/datasets/fs90/nano-start-data}
}
```