Upload README.md with huggingface_hub

README.md CHANGED
@@ -16,8 +16,8 @@ A 90M parameter GPT trained from scratch in Rust using the [picochat](https://gi
 ## Model details
 
 - **Architecture**: Decoder-only transformer with grouped-query attention, RoPE, sliding window attention, ReLU-squared MLP
-- **Parameters**:
-- **Vocab size**:
+- **Parameters**: 31.5M (depth=8: 8 layers, 512 dim, 8 heads, 4 KV heads)
+- **Vocab size**: 4,096 (BPE tokenizer)
 - **Context length**: 2048 tokens
 - **Training**: Pretrained on OpenWebText (10k steps), then supervised fine-tuned on UltraChat + no_robots (2k steps)
 - **Framework**: [candle](https://github.com/huggingface/candle) (Rust)

@@ -51,6 +51,6 @@ This model was trained on CPU with limited data (~5M tokens vs GPT-2's 8B). It p
 
 ## Files
 
-- `model.safetensors` -- model weights (
+- `model.safetensors` -- model weights (120MB)
 - `config.json` -- model architecture config
 - `tokenizer.json` -- BPE tokenizer (32K vocab)
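The architecture bullet in the diff names three less-common components. A minimal plain-Rust sketch of how each behaves, for readers unfamiliar with them — the function names and signatures here are illustrative assumptions for exposition, not code from picochat or candle:

```rust
// Illustrative sketches of components named in the model card.
// Names and signatures are assumptions, not picochat's or candle's API.

/// ReLU-squared MLP activation: max(0, x)^2, applied element-wise.
fn relu2(x: f32) -> f32 {
    let r = x.max(0.0);
    r * r
}

/// Grouped-query attention: with 8 query heads and 4 KV heads, each
/// KV head is shared by a group of 8 / 4 = 2 query heads.
fn kv_head_for_query_head(q_head: usize, n_heads: usize, n_kv_heads: usize) -> usize {
    q_head / (n_heads / n_kv_heads)
}

/// Causal sliding-window attention: position i may attend to position j
/// only if j is at or before i and within the last `window` positions.
fn attends(i: usize, j: usize, window: usize) -> bool {
    j <= i && i - j < window
}

fn main() {
    assert_eq!(relu2(2.0), 4.0); // positive inputs are squared
    assert_eq!(relu2(-3.0), 0.0); // negative inputs are zeroed

    // Query heads 0..8 pair up onto KV heads 0..4, halving the KV cache.
    let map: Vec<usize> = (0..8).map(|h| kv_head_for_query_head(h, 8, 4)).collect();
    assert_eq!(map, vec![0, 0, 1, 1, 2, 2, 3, 3]);

    // With a window of 4, position 10 sees positions 7..=10 but not 6.
    assert!(attends(10, 7, 4));
    assert!(!attends(10, 6, 4));

    println!("all sketches behave as described");
}
```

The KV-head sharing is why the card lists 8 heads but only 4 KV heads: queries keep full resolution while the cached keys and values shrink by the group factor.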