
# 🌱 TinyFabulist-TF2-4B · Gemma 3 4B EN→RO Fable Translator

**tf2-4b** is a parameter-efficiently fine-tuned checkpoint of Google's Gemma 3 4B that specialises in translating moral fables from English into Romanian.


## 📰 Model Summary

| Field | Value |
|-------|-------|
| Base model | `google/gemma-3-4b-it` |
| Architecture | Decoder-only Transformer · 3.88 B params |
| Fine-tuning method | Supervised fine-tuning (SFT) → instruction tuning → LoRA (r = 16) · adapters merged |
| Training data | 12 000 EN→RO fable pairs (train) + 1 500 val / 1 500 test (TinyFabulist-TF2) |
| Objective | Next-token cross-entropy on Romanian targets |
| Hardware / budget | TODO (e.g. 2 × A100 80 GB · ~ h · ≈ $) |
| Intended use | Offline literary translation of short stories / fables |
| Out-of-scope | News, legal, medical, or very long documents; languages other than EN ↔ RO |
| Context window | 8 192 tokens |
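
The training script itself is not published here. As a rough guide, the sketch below shows how the LoRA stage could be reproduced with `peft` and `trl`; the dataset id, `lora_alpha`, and `target_modules` are illustrative assumptions, while r = 16 and the base model come from the table above.

```python
# Hypothetical reconstruction of the LoRA stage — not the released training script.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Assumption: the EN→RO pairs are rendered into a single text field per example.
dataset = load_dataset("klusai/ds-tf2-en-ro-15k", split="train")  # dataset id is an assumption

peft_config = LoraConfig(
    r=16,                                                     # rank stated in the summary table
    lora_alpha=32,                                            # assumption: common 2*r heuristic
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="tf2-4b-lora"),
)
trainer.train()

# The released checkpoint ships with the adapters merged into the base weights:
merged = trainer.model.merge_and_unload()
```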

## ✨ How It Works

Give the model an English fable (≤ 2 000 tokens) and it returns a fluent Romanian version that preserves both the narrative style and the explicit moral, without relying on costly GPT-class APIs.


## 🚀 Quick Start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "klusai/tf2-4b"

tok   = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

translator = pipeline("text-generation", model=model, tokenizer=tok)

en_fable = (
    "Once upon a time, a small sparrow boasted to the mighty eagle that speed alone "
    "was enough to conquer the sky. … Moral: Pride often blinds us to our limits."
)

ro_fable = translator(
    f"Translate the following fable into Romanian:\n\n{en_fable}",
    max_new_tokens=512,
    do_sample=True,   # sampling must be enabled for temperature to take effect
    temperature=0.2,
)[0]["generated_text"]

print(ro_fable)
```
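
Since the base checkpoint is instruction-tuned, wrapping the request in the chat template may translate more reliably than the raw prompt above. A minimal sketch, assuming the merged model retains Gemma 3's chat template:

```python
# Assumption: the fine-tuned checkpoint keeps Gemma 3's chat template.
messages = [
    {"role": "user",
     "content": f"Translate the following fable into Romanian:\n\n{en_fable}"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

ro_fable = translator(prompt, max_new_tokens=512, do_sample=True, temperature=0.2)
print(ro_fable[0]["generated_text"])
```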

## 📦 Quantised Variants

| File | Precision | Size | Typical RAM |
|------|-----------|------|-------------|
| `tf2-4b-f16.safetensors` | FP16 | 7.77 GB | ≥ 16 GB GPU / 20 GB CPU |
| `tf2-4b-q5_k_m.gguf` | 5-bit Q5_K_M | 2.83 GB | ≥ 6 GB RAM |

```bash
# Run the 5-bit build with the llama-cpp-python OpenAI-compatible server
pip install 'llama-cpp-python[server]'
python -m llama_cpp.server \
  --model tf2-4b-q5_k_m.gguf \
  --n_ctx 8192
```
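
For one-off translations without a server, the GGUF file can also be loaded directly through the `llama_cpp` Python API. A minimal sketch (the prompt and generation parameters are illustrative):

```python
from llama_cpp import Llama

# Load the quantised model; n_ctx matches the 8 192-token context window.
llm = Llama(model_path="tf2-4b-q5_k_m.gguf", n_ctx=8192)

out = llm(
    "Translate the following fable into Romanian:\n\nOnce upon a time, ...",
    max_tokens=512,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```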

## 🚧 Limitations & Biases

- Trained entirely on synthetic TinyFabulist narratives, so outputs may echo that phrasing.
- Domain-specific: excels at short moral stories; under-performs on highly technical or colloquial text.
- No integrated safety filtering; downstream applications should moderate outputs.
- Inputs longer than 8 192 tokens are truncated (see the length-check sketch below).
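
A simple guard against silent truncation is to count tokens before translating. A minimal sketch reusing the tokenizer from the Quick Start; the 8 192 limit comes from the model summary:

```python
MAX_CONTEXT = 8192  # context window from the model summary

n_tokens = len(tok(en_fable)["input_ids"])
if n_tokens + 512 > MAX_CONTEXT:  # leave headroom for max_new_tokens
    raise ValueError(f"Fable is {n_tokens} tokens and would be truncated.")
```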

## ✅ Licence

- **Model:** Apache 2.0 (commercial and research friendly)
- **Dataset:** CC-BY-4.0 (TinyFabulist-TF2 EN–RO 15 k)