# Blogen - Community Edition Models
This repository contains the quantized AI models used by Blogen Community Edition, a self-hosted, privacy-focused AI blogging assistant.
These models are optimized for local inference (CPU) using GGUF format.
## Included Models
| Model Type | Model Name | Filename | Size | Description |
|---|---|---|---|---|
| LLM | Google Gemma 3 12B IT | `gemma-3-12b-it-q4_0.gguf` | ~4.7 GB | Instruction-tuned model for generating blog posts, titles, and SEO metadata. Quantized to 4-bit (Q4_0). |
| Image Gen | Stable Diffusion v1.5 | `stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf` | ~2.0 GB | Text-to-image model for generating blog cover images. Quantized to Q4_1 for stable-diffusion.cpp. |
## New Capabilities (v1.1)
These models power the latest version of Blogen, enabling:
- Multilingual Blogging: Native support for generating content in Spanish, French, German, and 50+ other languages via Gemma 3 instructions.
- High-Fidelity Images: Optimized Stable Diffusion pipeline with 30-step generation for clearer, artifact-free cover images.
- Enterprise Grade: Ready for secure, air-gapped deployments with Ed25519 license verification.
## Usage
These models are designed to be automatically downloaded by the Blogen Docker container upon startup.
### Manual Download & Run
If you prefer to download them manually (e.g., to save bandwidth on re-deployments):
- Download the files to a local folder (e.g., `./models`).
- Run Blogen Community Edition:

```shell
docker run -d \
  -p 3000:3000 \
  -v $(pwd)/models:/app/models \
  -v $(pwd)/data:/app/data \
  ghcr.io/org-runink/blogen/server:free
```
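Before starting the container, it can help to confirm that both model files listed in the table above are actually in place. A minimal sketch (the `check_models` helper name and the directory argument are illustrative, not part of Blogen):

```shell
# check_models DIR: succeed and print "all models present" only if both
# expected GGUF files from the Included Models table exist in DIR.
check_models() {
  dir="$1"
  for f in gemma-3-12b-it-q4_0.gguf stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf; do
    # Report the first missing file and fail, so the container isn't started
    # against an incomplete models directory.
    [ -f "$dir/$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "all models present"
}
```

For example, `check_models ./models` before the `docker run` above will flag a partial or failed download instead of letting the server start without its models.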
## License & Acknowledgments
- Blogen CE: Apache 2.0
- Gemma 3: Gemma Terms of Use (Google)
- Stable Diffusion: CreativeML Open RAIL-M (RunwayML / Stability AI)
These files are quantized redistributions of the original models cited above.