# RotterMaatje - Qwen3-4B Fine-tuned for Homeless Support
> ⚠️ **Disclaimer:** This model is a research prototype designed to assist, not replace, professional social workers. Always verify contact information and legal advice with official sources.
A fine-tuned LLM designed to assist homeless individuals and volunteers in Rotterdam, Netherlands.
## Training Approach
- **Base Model**: Qwen3-4B-Instruct
- **SFT**: ~300 examples (supervised fine-tuning)
- **DPO**: ~300 examples (direct preference optimization)
- **Languages**: Dutch, English, Arabic, Polish
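For reference, the two stages consume the usual chat and preference-pair data shapes; a hypothetical sketch of one record of each (field names and text are illustrative, not the actual dataset schema):

```python
# Illustrative record shapes; field names and contents are assumptions,
# not the actual training data.
sft_example = {
    "messages": [
        {"role": "system", "content": "You are RotterMaatje, a helpful guide."},
        {"role": "user", "content": "Where can I sleep tonight in Rotterdam?"},
        {"role": "assistant", "content": "I'm sorry you're in this situation. ..."},
    ]
}

dpo_example = {
    "prompt": "Where can I get a free meal today?",
    # "chosen" models the supportive tone, "rejected" the terse one.
    "chosen": "You're not alone in this. Several places in Rotterdam serve free meals ...",
    "rejected": "Consult your municipality for information about meal services.",
}
```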
## Key Features
- **Local Focus**: Optimized for Rotterdam social-services terminology.
- **Multilingual Support**: Fine-tuned to maintain B1-level simplicity across Dutch, English, Arabic, and Polish.
- **Empathetic Persona**: Trained via DPO to prefer supportive, non-judgmental responses over terse, impersonal ones.
- **Hallucination Mitigation**: Trained on a diversified synthetic corpus to reduce factual errors, though users should still verify critical details.
## Training Data Distribution
| Language | SFT | DPO |
|---|---|---|
| Arabic | 27.5% | 23.0% |
| English | 26.8% | 21.7% |
| Dutch | 24.2% | 29.0% |
| Polish | 21.5% | 26.3% |
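As a quick sanity check on the table, each column sums to 100%, and with ~300 examples per stage the shares translate into rough per-language counts (a sketch; exact totals beyond "~300" are not stated):

```python
# Percentages copied from the table above.
sft_pct = {"Arabic": 27.5, "English": 26.8, "Dutch": 24.2, "Polish": 21.5}
dpo_pct = {"Arabic": 23.0, "English": 21.7, "Dutch": 29.0, "Polish": 26.3}

# Both columns should sum to 100% (within float tolerance).
for pct in (sft_pct, dpo_pct):
    assert abs(sum(pct.values()) - 100.0) < 1e-6

# Rough per-language SFT counts, assuming ~300 examples total.
TOTAL = 300
sft_counts = {lang: round(pct / 100 * TOTAL) for lang, pct in sft_pct.items()}
print(sft_counts)
```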
## Technical Details
- Fine-tuned using Unsloth on Google Colab
- GGUF quantized for efficient inference
- Designed to run locally with llama.cpp or Ollama
## Quick Start with Ollama
Since this model ships as a quantized GGUF, you can run it locally with Ollama in a few commands:
Download the Modelfile (or create one):
```
FROM ./qwen3-4b-rottermaatje.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""

SYSTEM "You are RotterMaatje, a helpful and empathetic guide for homeless individuals in Rotterdam."
```

Run:

```shell
ollama create rottermaatje -f Modelfile
ollama run rottermaatje "Where can I find a night shelter in Centrum?"
```
