πŸ“ Blogen - Community Edition Models

This repository contains the quantized AI models used by Blogen Community Edition, a self-hosted, privacy-focused AI blogging assistant.

These models are quantized and optimized for local CPU inference using the GGUF format.

πŸ“¦ Included Models

  • LLM: Google Gemma 3 12B IT (gemma-3-12b-it-q4_0.gguf, ~4.7 GB). Instruction-tuned model for generating blog posts, titles, and SEO metadata. Quantized to 4-bit (Q4_0).
  • Image Gen: Stable Diffusion v1.5 (stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf, ~2.0 GB). Text-to-image model for generating blog cover images. Quantized to Q4_1 for stable-diffusion.cpp.

✨ New Capabilities (v1.1)

These models power the latest version of Blogen, enabling:

  • 🌍 Multilingual Blogging: Native support for generating content in Spanish, French, German, and 50+ other languages via Gemma 3's multilingual instruction tuning.
  • 🎨 High-Fidelity Images: Optimized Stable Diffusion pipeline with 30-step generation for clearer, artifact-free cover images.
  • πŸ›‘οΈ Enterprise Grade: Ready for secure, air-gapped deployments with Ed25519 license verification.

πŸš€ Usage

These models are designed to be automatically downloaded by the Blogen Docker container upon startup.

Manual Download & Run

If you prefer to download them manually (e.g., to save bandwidth on re-deployments):

  1. Download the files to a local folder (e.g., ./models).
  2. Run Blogen Community Edition:
    docker run -d \
      -p 3000:3000 \
      -v $(pwd)/models:/app/models \
      -v $(pwd)/data:/app/data \
      ghcr.io/org-runink/blogen/server:free
    
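When downloading multi-gigabyte GGUF files manually, it is worth verifying them against a published SHA-256 checksum before mounting them into the container. Below is a minimal stdlib-only sketch; the function names are illustrative, and the expected checksum must come from the model page (no real hash is assumed here).

```python
# Minimal sketch: verify a downloaded model file against a known SHA-256
# checksum before placing it in ./models. Compare against the checksum
# published alongside the model file.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-GB GGUF files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path: Path, expected_sha256: str) -> bool:
    """Return True if the file on disk matches the expected checksum."""
    return sha256_of(path) == expected_sha256.lower()
```

A mismatch usually indicates a truncated or corrupted download, which would otherwise only surface as a confusing load failure inside the container.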

βš–οΈ License & Acknowledgments

These files are quantized redistributions of the original models listed above and remain subject to their respective upstream licenses.
