# Anki Card Generator V3.5 (GGUF)

This is a GGUF conversion of `GilesL/anki-card-generator-v3-5`, a fine-tuned model for generating Anki flashcards from text.
## Model Details
- Base Model: Qwen/Qwen2.5-3B
- Fine-tuning: LoRA (r=32, alpha=64)
- Training Data: 90 premium examples (50 Claude Sonnet 4.5 + 40 Claude Opus 4.5)
- Training: 15 epochs
- Format: GGUF (for llama.cpp, Ollama, LM Studio, etc.)
## What It Does

Automatically generates atomic Anki flashcards from raw text input. No prompt engineering is needed: provide the text and the model returns structured JSON flashcards with `front`, `back`, and `tags` fields.
## Available Quantizations
| File | Quant | Size | Description | Use Case |
|---|---|---|---|---|
| anki-card-generator-v3-5-f16.gguf | F16 | ~6GB | Full precision | Best quality |
| anki-card-generator-v3-5-q8_0.gguf | Q8_0 | ~3GB | 8-bit | High quality |
| anki-card-generator-v3-5-q5_k_m.gguf | Q5_K_M | ~2GB | 5-bit medium | Good quality |
| anki-card-generator-v3-5-q4_k_m.gguf | Q4_K_M | ~1.7GB | 4-bit medium | Recommended |
## Usage with Ollama

1. Download the model:

```bash
huggingface-cli download GilesL/anki-card-generator-v3-5-gguf anki-card-generator-v3-5-q4_k_m.gguf --local-dir .
```

2. Create a `Modelfile`:

```
FROM ./anki-card-generator-v3-5-q4_k_m.gguf
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
PARAMETER top_p 0.9
```

3. Create and run:

```bash
ollama create anki-generator -f Modelfile
echo "Your text here" | ollama run anki-generator
```
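Besides the CLI, Ollama serves a local REST API (`POST /api/generate` on port 11434 by default), which is convenient for scripting. Below is a minimal sketch of building a non-streaming request for this model and pulling the card array out of the reply; the `extract_cards` helper and the model name `anki-generator` (from the `ollama create` step above) are illustrative assumptions, and a real call would send the payload to the endpoint with `urllib.request` or similar:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(text: str) -> bytes:
    """Build a non-streaming /api/generate payload for the anki-generator model."""
    return json.dumps({
        "model": "anki-generator",
        "prompt": text,
        "stream": False,
    }).encode("utf-8")

def extract_cards(response_body: str) -> list:
    """Parse the API response and extract the JSON card array from the
    model's text output (which may be wrapped in extra whitespace)."""
    reply = json.loads(response_body)["response"]
    start, end = reply.index("["), reply.rindex("]") + 1
    return json.loads(reply[start:end])

# Canned response for illustration; a real run would POST build_request(...)
# to OLLAMA_URL and pass the response body to extract_cards.
canned = json.dumps({"response": '[{"front": "Q?", "back": "A.", "tags": ["t"]}]'})
cards = extract_cards(canned)
print(cards[0]["front"])  # → Q?
```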
## Example Output

Input text about transformers generates:

```json
[
  {
    "front": "What year was the transformer architecture introduced?",
    "back": "2017.",
    "tags": ["transformer", "history", "neural-networks"]
  },
  // ... more cards
]
```
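Cards in this shape can be written to a tab-separated file that Anki's File > Import dialog accepts (one card per line: front, back, then space-separated tags, which is Anki's tag convention). A stdlib-only sketch; the function name and output path are illustrative:

```python
import csv
import json

def cards_to_tsv(cards_json: str, path: str) -> int:
    """Write model-generated cards to a TSV importable by Anki.
    Columns: front, back, tags (tags joined with spaces)."""
    cards = json.loads(cards_json)
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for card in cards:
            writer.writerow([card["front"], card["back"],
                             " ".join(card.get("tags", []))])
    return len(cards)

sample = ('[{"front": "What year was the transformer architecture introduced?",'
          ' "back": "2017.", "tags": ["transformer", "history"]}]')
n = cards_to_tsv(sample, "cards.tsv")
print(f"wrote {n} card(s)")  # → wrote 1 card(s)
```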
## License

Inherits the license from Qwen/Qwen2.5-3B. Converted to GGUF using llama.cpp.