Ganda Gemma 1B

A fine-tuned Gemma 3 1B instruction model specialized for English-to-Luganda translation and Luganda conversational AI. The model accepts input in both English and Luganda but outputs responses exclusively in Luganda.

📊 Translation Performance

Model Comparison

Model             Parameters  BLEU  chrF++  Efficiency*
Gemma 3 4B        4B          1.10  20.05   0.28
Gemma 3 27B       27B         3.65  31.37   0.14
GPT-5 Mini        N/A         5.14  36.55   N/A
Ganda Gemma 1B    1B          6.99  40.32   6.99
Gemini 2.0 Flash  Large       7.94  43.38   N/A

*Efficiency = BLEU Score ÷ Parameters (in billions)
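
As a quick check of the efficiency column, the ratio can be recomputed directly from the BLEU scores and parameter counts in the table above (a minimal sketch; the numbers come from the table, not from a live evaluation):

# Recompute Efficiency = BLEU / parameters (in billions) for models with known sizes
scores = {
    "Gemma 3 4B": (1.10, 4.0),
    "Gemma 3 27B": (3.65, 27.0),
    "Ganda Gemma 1B": (6.99, 1.0),
}

for name, (bleu, params_b) in scores.items():
    print(f"{name}: {bleu / params_b:.2f} BLEU per billion parameters")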

Key Performance Insights

  • 🎯 Efficiency Leader: 6.99 BLEU per billion parameters, the highest efficiency ratio in the comparison
  • 🚀 Size Advantage: Outperforms Gemma 3 4B (4x larger) by 535% on BLEU (6.99 vs. 1.10)
  • 💎 Competitive Quality: Surpasses GPT-5 Mini on both BLEU and chrF++ with a known 1B parameter count
  • 💻 Practical Deployment: Runs efficiently on consumer hardware while maintaining translation quality

Evaluation Details

  • Dataset: FLORES-200 English→Luganda (1,012 translation pairs)
  • Metrics: BLEU (bilingual evaluation understudy) and chrF++ (character F-score)
  • Evaluation: Zero-shot translation performance
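
For reference, a minimal way to score model translations against FLORES-200 references with the same two metrics is the sacrebleu library (a sketch: the hypothesis and reference lists below are placeholders, not the actual evaluation harness):

import sacrebleu

# Placeholders: one model translation per reference sentence, in order
hypotheses = ["model translation 1", "model translation 2"]
# One stream of reference translations, aligned with the hypotheses
references = [["reference translation 1", "reference translation 2"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
# chrF++ is chrF with word n-grams enabled (word_order=2)
chrf = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)

print(f"BLEU:   {bleu.score:.2f}")
print(f"chrF++: {chrf.score:.2f}")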

🚀 Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("CraneAILabs/ganda-gemma-1b")
tokenizer = AutoTokenizer.from_pretrained("CraneAILabs/ganda-gemma-1b")

# Translate to Luganda
prompt = "Translate to Luganda: Hello, how are you today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,               # cap on generated tokens, excluding the prompt
    temperature=0.3,
    do_sample=True,                   # temperature has no effect without sampling
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

🌍 Language Capabilities

  • Input Languages: English + Luganda
  • Output Language: Luganda only
  • Primary Focus: English-to-Luganda translation and Luganda conversation

🎯 Capabilities

  • Translation: English-to-Luganda translation
  • Conversational AI: Natural dialogue in Luganda
  • Summarization: Text summarization in Luganda
  • Writing: Creative and informational writing in Luganda
  • Question Answering: General knowledge responses in Luganda
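
The translation examples below use the "Translate to Luganda:" prefix; for the other tasks, plain instructions in English or Luganda appear to work the same way, since the model always replies in Luganda. A sketch of possible prompt patterns (the exact phrasings are illustrative, not a documented prompt format):

# Illustrative prompt patterns per capability (phrasings are assumptions,
# except "Translate to Luganda:", which is used throughout this card)
prompts = {
    "translation": "Translate to Luganda: The meeting starts at noon.",
    "conversation": "Oli otya! Osobola okuntuyamba leero?",
    "summarization": "Summarize this text in Luganda: ...",
    "writing": "Write a short story in Luganda about a fisherman.",
    "question_answering": "What is the capital of Uganda?",
}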

💻 Usage Examples

Basic Translation

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CraneAILabs/ganda-gemma-1b")
tokenizer = AutoTokenizer.from_pretrained("CraneAILabs/ganda-gemma-1b")

# English to Luganda translation
prompt = "Translate to Luganda: Welcome to our school"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,           # max_length would also count prompt tokens
        temperature=0.3,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Luganda Conversation

# Direct Luganda conversation (model and tokenizer loaded as above)
prompt = "Oli otya! Osobola okuntuyamba leero?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.3,
    do_sample=True,                   # needed for temperature to apply
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Using the Pipeline

import torch
from transformers import pipeline

# Create a text generation pipeline (torch is needed for the device check)
generator = pipeline(
    "text-generation",
    model="CraneAILabs/ganda-gemma-1b",
    tokenizer="CraneAILabs/ganda-gemma-1b",
    device=0 if torch.cuda.is_available() else -1
)

# Generate Luganda text
result = generator(
    "Translate to Luganda: Welcome to our school",
    max_new_tokens=100,
    temperature=0.3,
    do_sample=True
)
print(result[0]['generated_text'])
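
Chat Template

The examples above feed raw prompt strings. Gemma 3 instruct checkpoints normally ship a chat template, so if this fine-tune preserves it, apply_chat_template produces a prompt in the format the model was trained on (a sketch under that assumption; verify the template is present on the tokenizer):

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CraneAILabs/ganda-gemma-1b")
tokenizer = AutoTokenizer.from_pretrained("CraneAILabs/ganda-gemma-1b")

# Wrap the request in the tokenizer's chat template, if one is defined
messages = [
    {"role": "user", "content": "Translate to Luganda: Hello, how are you today?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,       # append the assistant turn marker
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=100, temperature=0.3, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)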

🎨 Use Cases

  • Translation Apps: Offline English-Luganda translation
  • Language Learning: Practice Luganda with instant feedback
  • Cultural Apps: Create culturally aware Luganda content
  • Educational Tools: Luganda learning assistants
  • Research: Natural language processing for Luganda
  • Content Creation: Generate Luganda content for media

⚠️ Limitations

  • Language Output: Responds only in Luganda
  • Context Length: Optimized for shorter conversational inputs
  • Cultural Context: May not capture all nuances of Luganda culture
  • Regional Variations: Trained on standard Luganda, may not reflect all dialects

🛠️ Technical Details

  • Base Model: Google Gemma 3 1B Instruct
  • Fine-tuning Method: Supervised fine-tuning on English-Luganda pairs
  • Context Length: 2048 tokens
  • Precision: bfloat16 (BF16)
  • Framework: Transformers (PyTorch)
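
Since the weights are stored in 16-bit precision, loading them in bfloat16 keeps the memory footprint of the 1B model small on consumer hardware (a minimal sketch; device_map="auto" assumes the accelerate package is installed):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in bfloat16 to roughly halve memory vs. float32
model = AutoModelForCausalLM.from_pretrained(
    "CraneAILabs/ganda-gemma-1b",
    torch_dtype=torch.bfloat16,
    device_map="auto",                # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained("CraneAILabs/ganda-gemma-1b")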

📄 License

This model is released under the Gemma Terms of Use. Please review the terms before use.

🙏 Acknowledgments

  • Google: For Gemma 3 base model and research
  • Luganda Community: For language resources and cultural guidance
  • FLORES Team: For evaluation dataset and benchmarking framework

Built with ❤️ by Crane AI Labs

Ganda Gemma - Your helpful Luganda AI companion!
