---
language: en
license: mit
tags:
- text-generation
- causal-lm
- randygpt
- rust
---
# randyGPT — model-ds
A GPT-style language model trained from scratch in Rust on Project Gutenberg.
## Model Details
| | |
|---|---|
| Architecture | Transformer (causal LM) |
| Parameters | 2.78M |
| Layers | 12 |
| Heads | 4 |
| Embedding dim | 128 |
| Context window | 256 tokens |
| Vocab size | 1500 (BPE) |
| Training iters | 10800 |
| Best val loss | 3.7141 |
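
The 2.78M figure is consistent with a standard GPT block at these dimensions. The back-of-envelope count below is a sketch under assumed architectural details (4x MLP expansion, bias-free linear layers, learned positional embeddings, untied output head); none of these are confirmed by the card, so treat it as a plausibility check rather than the exact architecture.

```python
# Rough parameter count under the assumptions stated above.
d, layers, vocab, ctx = 128, 12, 1500, 256

tok_emb = vocab * d            # 192,000
pos_emb = ctx * d              #  32,768
attn    = d * 3 * d + d * d    #  65,536 per layer (QKV + output projection)
mlp     = 2 * d * 4 * d        # 131,072 per layer (up + down projection)
norms   = 2 * 2 * d            #     512 per layer (two LayerNorms, weight + bias)
block   = attn + mlp + norms   # 197,120 per layer
lm_head = d * vocab            # 192,000 (untied output head)

total = tok_emb + pos_emb + layers * block + 2 * d + lm_head
print(f"{total:,}")            # 2,782,464, i.e. ~2.78M as reported in the table
```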
## Training
Trained on ~103 MB of cleaned Project Gutenberg text (114 public domain books)
with a 1,500-token BPE vocabulary, the AdamW optimizer, cosine LR decay,
and ReduceLROnPlateau. Training ran on the Metal GPU backend via Candle on Apple Silicon.
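
The training loop itself lives in the Rust/Candle codebase; the PyTorch sketch below only illustrates the optimizer and LR-schedule combination described above. The learning rate, weight decay, and scheduler settings are placeholders, not the values used for this checkpoint.

```python
# Illustrative PyTorch equivalent of the AdamW + cosine + ReduceLROnPlateau setup.
# All hyperparameter values here (lr, weight_decay, factor, patience) are placeholders.
import torch

model = torch.nn.Linear(128, 1500)  # stand-in for the real transformer
opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)

# Cosine decay over the whole run, plus ReduceLROnPlateau keyed on validation loss.
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=10_800)
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min",
                                                     factor=0.5, patience=5)

for step in range(10_800):
    opt.zero_grad()
    x = torch.randn(32, 128)          # dummy batch
    loss = model(x).pow(2).mean()     # dummy loss
    loss.backward()
    opt.step()
    cosine.step()                     # per-step cosine decay
    if step % 500 == 0:
        val_loss = loss.item()        # stand-in for a real validation pass
        plateau.step(val_loss)        # cut LR further if val loss stalls
```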
## Usage
```python
from modeling_randygpt import RandyGPTConfig, RandyGPTForCausalLM
from tokenizer_randygpt import RandyGPTTokenizer
from safetensors.torch import load_file
import torch

# Load the config, weights, and tokenizer shipped with this repo
cfg = RandyGPTConfig.from_pretrained("MonumentalSystems/randygpt-ds")
model = RandyGPTForCausalLM(cfg)
state = load_file("model.safetensors")
model.load_state_dict(state, strict=True)
model.eval()
tok = RandyGPTTokenizer.from_file("tokenizer.json")

# Encode a prompt and sample a continuation
prompt = "Once upon a time"
ids = torch.tensor([tok.encode(prompt)], dtype=torch.long)
out_ids = model.generate_text(ids, max_new_tokens=200, temperature=0.8)
print(tok.decode(out_ids[0].tolist()))
```
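
The checkpoint was trained with Metal acceleration; for inference on Apple Silicon you can try PyTorch's MPS backend, continuing the snippet above. This assumes the custom model code does not hard-code a device, which is not verified here.

```python
# Optional: move inference to the Apple Silicon GPU via PyTorch's MPS backend.
device = "mps" if torch.backends.mps.is_available() else "cpu"
model.to(device)
ids = ids.to(device)
out_ids = model.generate_text(ids, max_new_tokens=200, temperature=0.8)
```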
## Source
Trained with [randyGPT](https://github.com/MonumentalSystems/RandyGPT) —
a GPT implementation in Rust with Metal GPU acceleration.