# Dyslexic Writer - Qwen3-4B
Fine-tuned Qwen/Qwen3-4B for spelling and grammar correction, optimized for dyslexic writers.
## Performance
| Metric | Score |
|---|---|
| Exact Match Accuracy | 85.6% |
| Error Fix Rate | 80.4% |
| No-Error Preservation | 99.3% |
| F1 Score | 99.5% |
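As an illustration of what the first three metrics measure, here is a minimal sketch of how they could be computed from (input, gold, output) triples. This is a hypothetical example, not the actual evaluation script used for this model, and it omits the token-level F1 computation.

```python
# Illustrative sketch of the table's metrics; NOT this model's real eval code.

def evaluate(examples):
    """Each example is a (input_text, gold_text, model_output) triple."""
    exact = sum(out == gold for _, gold, out in examples)
    errored = [(inp, gold, out) for inp, gold, out in examples if inp != gold]
    clean = [(inp, gold, out) for inp, gold, out in examples if inp == gold]
    fixed = sum(out == gold for _, gold, out in errored)        # errors corrected
    preserved = sum(out == inp for inp, _, out in clean)        # clean text left alone
    return {
        "exact_match": exact / len(examples),
        "error_fix_rate": fixed / len(errored) if errored else 0.0,
        "no_error_preservation": preserved / len(clean) if clean else 0.0,
    }

examples = [
    ("I went to teh store.", "I went to the store.", "I went to the store."),  # fixed
    ("The cat sat.", "The cat sat.", "The cat sat."),                          # preserved
    ("She red a book.", "She read a book.", "She red a book."),                # missed
]
print(evaluate(examples))
```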
Trained on ~495K examples including word pairs, sentence corrections, and paragraph-level error injection from synthetic stories.
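The paragraph-level error injection mentioned above can be sketched roughly as follows. This is a hypothetical illustration (the real training pipeline for this model may inject a wider range of error types); adjacent-letter transposition is used here as one common dyslexic-style spelling error.

```python
import random

# Hypothetical sketch of paragraph-level error injection; the actual
# pipeline behind the ~495K training examples may differ.

def transpose_letters(word, rng):
    """Swap two adjacent interior letters, a common dyslexic-style error."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def inject_errors(text, rate=0.15, seed=0):
    """Corrupt roughly `rate` of the words to build a (noisy, clean) pair."""
    rng = random.Random(seed)
    words = text.split()
    noisy = [transpose_letters(w, rng) if rng.random() < rate else w for w in words]
    return " ".join(noisy), text

noisy, clean = inject_errors("The quick brown fox jumps over the lazy dog.")
```

Each (noisy, clean) pair then becomes a training example: the noisy text as input, the clean text as target.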
## Usage

### With Ollama (GGUF)
Download the Q4_K_M GGUF and create a `Modelfile`:

```
FROM ./dyslexic-writer-qwen3-4b-q4_k_m.gguf
PARAMETER temperature 0
PARAMETER num_predict 256
SYSTEM You are a spelling correction assistant.
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
<think>
</think>
"""
```
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("jburnford/dyslexic-writer-qwen3-4b")
tokenizer = AutoTokenizer.from_pretrained("jburnford/dyslexic-writer-qwen3-4b")

messages = [
    {"role": "system", "content": "You are a spelling correction assistant."},
    {"role": "user", "content": "Fix any spelling mistakes in this text. If there are no mistakes, output the text unchanged.\n\nI went to teh store."},
]

# Disable Qwen3 thinking mode and use greedy decoding for deterministic corrections.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
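Note that `model.generate` returns the prompt tokens followed by the completion, so decoding `outputs[0]` directly includes the chat template text. A minimal sketch of isolating just the newly generated tokens, shown here on toy token ids rather than a real model:

```python
def extract_new_tokens(output_ids, prompt_len):
    """Return only the tokens generated after the prompt.

    In the snippet above, prompt_len would be inputs.input_ids.shape[1]
    and output_ids would be outputs[0].
    """
    return output_ids[prompt_len:]

# Toy stand-in values; real ids come from the tokenizer and model.
prompt_ids = [101, 7, 8, 9]
full_output = prompt_ids + [42, 43, 44]
print(extract_new_tokens(full_output, len(prompt_ids)))  # → [42, 43, 44]
```

Decoding the sliced ids with `tokenizer.decode(..., skip_special_tokens=True)` then yields only the corrected text.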
## Model Variants
| Model | Q4_K_M GGUF Size | Exact Match | Best For |
|---|---|---|---|
| Qwen3-0.6B | ~460 MB | 78.8% | Mobile/embedded |
| Qwen3-1.7B | ~1.2 GB | 82.2% | Default |
| Qwen3-4B | ~2.5 GB | 85.6% | Best quality |