---
license: apache-2.0
tags:
  - ai
  - blogging
  - content-generation
  - gemma
  - stable-diffusion
  - gguf
  - blogen
---

πŸ“ Blogen - Community Edition Models

This repository contains the quantized AI models used by Blogen Community Edition, a self-hosted, privacy-focused AI blogging assistant.

These models are optimized for local CPU inference using the GGUF format.

## 📦 Included Models

| Model Type | Model Name | Filename | Size | Description |
|---|---|---|---|---|
| LLM | Google Gemma 3 12B IT | `gemma-3-12b-it-q4_0.gguf` | ~4.7 GB | Instruction-tuned model for generating blog posts, titles, and SEO metadata. Quantized to 4-bit (Q4_0). |
| Image Gen | Stable Diffusion v1.5 | `stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf` | ~2.0 GB | Text-to-image model for generating blog cover images. Quantized to Q4_1 for stable-diffusion.cpp. |
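As a quick sketch of driving the LLM directly (assuming the `llama-cpp-python` bindings and the model file already on disk; `generate_title` is an illustrative helper, not part of Blogen — Gemma's instruction-tuned models use the documented `<start_of_turn>` chat template):

```python
# Sketch only: assumes `pip install llama-cpp-python` and the GGUF file downloaded.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's instruction-tuned chat template."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def generate_title(topic: str,
                   model_path: str = "./models/gemma-3-12b-it-q4_0.gguf") -> str:
    """Run a single completion against the local model (requires the ~4.7 GB file)."""
    from llama_cpp import Llama  # imported lazily so the helper above stays importable
    llm = Llama(model_path=model_path, n_ctx=4096, verbose=False)
    out = llm(
        format_gemma_prompt(f"Write a catchy blog title about {topic}."),
        max_tokens=64,
        stop=["<end_of_turn>"],
    )
    return out["choices"][0]["text"].strip()
```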

## ✨ New Capabilities (v1.1)

These models power the latest version of Blogen, enabling:

- 🌍 **Multilingual Blogging:** Native support for generating content in Spanish, French, German, and 50+ languages via Gemma 3 instructions.
- 🎨 **High-Fidelity Images:** Optimized Stable Diffusion pipeline with 30-step generation for clearer, artifact-free cover images.
- 🛡️ **Enterprise Grade:** Ready for secure, air-gapped deployments with Ed25519 license verification.

## 🚀 Usage

These models are designed to be automatically downloaded by the Blogen Docker container upon startup.

### Manual Download & Run

If you prefer to download them manually (e.g., to save bandwidth on re-deployments):

1. Download the files to a local folder (e.g., `./models`).
2. Run Blogen Community Edition:

   ```bash
   docker run -d \
     -p 3000:3000 \
     -v $(pwd)/models:/app/models \
     -v $(pwd)/data:/app/data \
     ghcr.io/org-runink/blogen/server:free
   ```
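Step 1 can also be scripted with the `huggingface_hub` client (a sketch; the repo id below is a placeholder for this repository's actual id, and `fetch_models` is an illustrative helper):

```python
# Sketch only: assumes `pip install huggingface_hub`.
MODEL_FILES = [
    "gemma-3-12b-it-q4_0.gguf",
    "stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf",
]

def fetch_models(repo_id: str, local_dir: str = "./models") -> list[str]:
    """Download each model file into local_dir and return the local paths."""
    from huggingface_hub import hf_hub_download  # imported lazily
    return [
        hf_hub_download(repo_id=repo_id, filename=name, local_dir=local_dir)
        for name in MODEL_FILES
    ]

# Example (replace the placeholder with this repo's id):
# fetch_models("<namespace>/blogen-models")
```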

βš–οΈ License & Acknowledgments

These files are quantized redistributions of the original models listed above; refer to each upstream model card for its license terms.