---
language: [en]
license: mit
tags:
- database
- sql
- nosql
- dba
- optimization
- slm
- llama-style
- rope
- 5m-context
- from-scratch
- 1b-params
pipeline_tag: text-generation
---

# Database Admin-SLM: Role-Based Small Language Model

A **LLaMA-style transformer** (~1007.5M parameters, ~1.01B) trained from scratch for the **Database Admin** role.
It supports context lengths up to **5M tokens** via RoPE; gradient checkpointing was used to make long-context training tractable.

## Architecture

| Component       | Value |
|-----------------|-------|
| Architecture    | LLaMA-style (RoPE + RMSNorm + SwiGLU) |
| Parameters      | ~1007.5M (~1.01B) |
| Layers          | 32 |
| Attention heads | 20 |
| Embedding dim   | 1600 |
| Max context     | 5,000,000 tokens |
| Max output      | 5,000,000 tokens |
| Vocabulary      | 13,202 (BPE) |
| Model size      | ~4 GB (fp32) |

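For reference, the table maps onto a configuration along these lines. This is a minimal sketch assuming standard LLaMA-style conventions; the class name, field names, and `rope_theta` base are illustrative, not taken from this repo:

```python
from dataclasses import dataclass

import torch


@dataclass
class DBAdminSLMConfig:
    """Hypothetical config mirroring the Architecture table above."""
    vocab_size: int = 13_202       # BPE vocabulary
    n_layers: int = 32
    n_heads: int = 20
    d_model: int = 1600            # embedding dimension
    max_seq_len: int = 5_000_000   # maximum RoPE position
    rope_theta: float = 10_000.0   # assumed base; long-context variants often raise this


def rope_inv_freq(cfg: DBAdminSLMConfig) -> torch.Tensor:
    """Rotation frequencies for rotary position embeddings (RoPE), one per channel pair."""
    head_dim = cfg.d_model // cfg.n_heads  # 1600 / 20 = 80
    return 1.0 / (cfg.rope_theta ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
```

With these values each attention head has dimension 80, and ~1.0075B fp32 parameters at 4 bytes each is consistent with the ~4 GB model size listed above.
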
## Training
- Best eval loss: 6.7702
- Trained with gradient checkpointing on Apple M4 (MPS)
- 3 epochs, batch_size=1, grad_accum=16 (effective batch size 16)

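Gradient checkpointing is what makes long-context training at batch_size=1 fit in memory: activations are discarded in the forward pass and recomputed during backward. A minimal sketch with PyTorch's built-in utility; the training code is not part of this repo, so the surrounding structure here is assumed:

```python
import torch
from torch.utils.checkpoint import checkpoint


def forward_blocks(blocks: torch.nn.ModuleList, x: torch.Tensor) -> torch.Tensor:
    """Run transformer blocks, recomputing activations on backward instead of caching them."""
    for block in blocks:
        x = checkpoint(block, x, use_reentrant=False)
    return x
```
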
## Usage

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Download the weights and the role-specific BPE tokenizer from the Hub.
model_path = hf_hub_download("sathishphdai/database-admin-slm-5m", "model.safetensors")
tokenizer_path = hf_hub_download("sathishphdai/database-admin-slm-5m", "database_admin_tokenizer.json")
tokenizer = Tokenizer.from_file(tokenizer_path)
```
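
The snippet above only fetches the files. One possible continuation for loading the weights and tokenizing a prompt; the model class is not bundled with the checkpoint, so `YourLLaMAStyleModel` below is a placeholder for your own implementation of the architecture table:

```python
from safetensors.torch import load_file

state_dict = load_file(model_path)   # raw fp32 weights, ~4 GB
ids = tokenizer.encode("EXPLAIN ANALYZE SELECT * FROM orders;").ids

# model = YourLLaMAStyleModel(config)   # placeholder: bring your own module
# model.load_state_dict(state_dict)
```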