# Tsaro E4B – Multilingual Threat Extraction
Fine-tuned Gemma 4 E4B for structured threat extraction from community security reports in Hausa, Nigerian Pidgin, and English.
Part of Tsaro, a community early-warning system for northern Nigeria.
## Available quantizations

| File | Size | Recommended for |
|---|---|---|
| `tsaro-e4b-q4_k_m.gguf` | ~5.0 GB | Primary – best quality |
| `tsaro-e4b-q3_k_m.gguf` | ~4.85 GB | Minor size savings |
Both require ~6-7 GB of phone RAM to run.
## Use with Cactus (React Native)

```ts
import { CactusLM } from 'cactus-react-native';

const lm = await CactusLM.init({
  modelUrl: 'https://huggingface.co/Janeodum/tsaro-e4b-gguf/resolve/main/tsaro-e4b-q4_k_m.gguf',
  contextSize: 2048,
});
```
## Use with llama.cpp

```shell
./llama-cli -m tsaro-e4b-q4_k_m.gguf -p "your report here"
```
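Depending on the mode, `llama-cli` may not apply the model's chat template to a raw `-p` prompt, in which case the report has to be wrapped in Gemma's turn format by hand. A minimal sketch, assuming Gemma's published chat template (which uses `<start_of_turn>`/`<end_of_turn>` markers and has no dedicated system role, so the system prompt is folded into the first user turn) — verify against the bundled template before relying on it:

```python
# Sketch: wrap the Tsaro system prompt and a report in Gemma's turn format.
# The turn markers below are assumed from Gemma's published chat template.

SYSTEM_PROMPT = (
    "You are Tsaro, a community security report analyzer for northern Nigeria. "
    "Output ONLY valid JSON."  # abridged; use the full prompt from this card
)

def build_prompt(report: str) -> str:
    # Gemma has no system role: prepend the system prompt to the user turn.
    return (
        "<start_of_turn>user\n"
        f"{SYSTEM_PROMPT}\n\n{report}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Three pickup trucks seen moving toward the forest at dawn.")
print(prompt)
```

The trailing `<start_of_turn>model\n` leaves the prompt open for the model to generate its JSON answer.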
## System prompt

```text
You are Tsaro, a community security report analyzer for northern Nigeria.
Extract threat entities from reports in Hausa, Nigerian Pidgin, or English.
Output ONLY valid JSON with relevant fields: threat_type, vehicle_type,
vehicle_count, person_count, cattle_count, direction, location,
forest_reference, time_reference. Omit fields that are not mentioned.
```
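To make the expected output concrete, here is a sketch of the kind of JSON the model should emit for a sample report, plus a small validation check a caller might run. The report text and extracted values are illustrative, not taken from the training data; only the field names come from the system prompt above.

```python
import json

# Field names allowed by the system prompt above.
ALLOWED_FIELDS = {
    "threat_type", "vehicle_type", "vehicle_count", "person_count",
    "cattle_count", "direction", "location", "forest_reference",
    "time_reference",
}

# Illustrative model output for a hypothetical report:
# "Three motorcycles with armed men moving north toward the forest at dusk."
raw_output = """{
  "threat_type": "armed_movement",
  "vehicle_type": "motorcycle",
  "vehicle_count": 3,
  "direction": "north",
  "forest_reference": "forest",
  "time_reference": "dusk"
}"""

extracted = json.loads(raw_output)

# Reject hallucinated fields; missing fields are fine, since the prompt
# tells the model to omit anything the report does not mention.
unknown = set(extracted) - ALLOWED_FIELDS
assert not unknown, f"unexpected fields: {unknown}"
print(sorted(extracted))
```

A downstream consumer can apply the same subset check to every response before ingesting it into the alert pipeline.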
## Training

- Base: google/gemma-4-E4B-it
- Framework: Unsloth
- Data: 35,512 multilingual examples (Hausa, Pidgin, English)
- LoRA: r=16, alpha=16; 2 epochs; lr=2e-4 with cosine schedule
## License

Inherits the Gemma Terms of Use: https://ai.google.dev/gemma/terms
## Use with llama-cpp-python

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Downloads the primary quantization from the table above and loads it.
llm = Llama.from_pretrained(
    repo_id="Janeodum/tsaro-e4b-gguf",
    filename="tsaro-e4b-q4_k_m.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "your report here"},
    ]
)
```