Content Writer-SLM: Role-Based Small Language Model

A LLaMA-style transformer (~989.2M parameters, ~0.99B) trained from scratch for the Content Writer role. Supports a context of up to 1M tokens via RoPE, and was trained with gradient checkpointing.

Architecture

| Component    | Value                                 |
|--------------|---------------------------------------|
| Architecture | LLaMA-style (RoPE + RMSNorm + SwiGLU) |
| Parameters   | 989.2M (0.99B)                        |
| Layers       | 32                                    |
| Heads        | 20                                    |
| Embedding    | 1600                                  |
| Max Context  | 1,000,000 tokens                      |
| Max Output   | 1,000,000 tokens                      |
| Vocab        | 1,724 BPE                             |
| Model Size   | ~4 GB (fp32)                          |
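
For reference, the table maps onto a configuration like the following. This is a minimal sketch; the class and field names are illustrative assumptions, not the repo's actual code.

from dataclasses import dataclass

@dataclass
class ContentWriterConfig:
    # Hypothetical config mirroring the table above; names are illustrative.
    n_layers: int = 32            # transformer blocks
    n_heads: int = 20             # attention heads
    d_model: int = 1600           # embedding width (1600 / 20 = 80 per head)
    vocab_size: int = 1724        # BPE vocabulary size
    max_context: int = 1_000_000  # RoPE-extended context window
    max_output: int = 1_000_000   # maximum tokens generated per call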

Training

  • Best eval loss: 2.7375
  • Trained with gradient checkpointing on Apple M4 (MPS)
  • 3 epochs, batch_size=1, grad_accum=16 (see the sketch below)
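
To make the accumulation setup concrete, here is a minimal sketch of the update pattern: micro-batches of 1 with gradients accumulated over 16 steps before each optimizer update, run on the MPS backend when available. The model, optimizer, and data below are toy placeholders, not the repo's training code; gradient checkpointing would additionally wrap each transformer block's forward pass in torch.utils.checkpoint.checkpoint.

import torch
import torch.nn as nn

# Toy stand-ins for the real model, optimizer, and data.
model = nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model.to(device)

GRAD_ACCUM = 16  # grad_accum=16 from the setup above
optimizer.zero_grad()
for step in range(64):  # stand-in for iterating a real dataloader
    x = torch.randn(1, 16, device=device)       # batch_size=1
    loss = model(x).pow(2).mean() / GRAD_ACCUM  # scale loss for accumulation
    loss.backward()                             # gradients accumulate across steps
    if (step + 1) % GRAD_ACCUM == 0:
        optimizer.step()                        # one update per 16 micro-batches
        optimizer.zero_grad()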

Usage

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from tokenizers import Tokenizer

# Fetch the weights and the custom BPE tokenizer from the Hub
model_path = hf_hub_download("sathishphdai/content-writer-slm-1m", "model.safetensors")
tokenizer_path = hf_hub_download("sathishphdai/content-writer-slm-1m", "content_writer_tokenizer.json")

tokenizer = Tokenizer.from_file(tokenizer_path)
state_dict = load_file(model_path)  # dict mapping tensor names to weights
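
With the files in place, the tokenizer can encode prompts directly, as sketched below (the prompt text is just an illustration). Running generation additionally requires instantiating the LLaMA-style model class and loading state_dict into it, which is not shown here.

ids = tokenizer.encode("Write a short blog intro about espresso.").ids
print(f"{len(ids)} tokens, first ten ids: {ids[:10]}")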