pinned Running Forge Image Orchestrator (Daggr): Generate and refine images using AI with prompt scheduling and scoring
pinned Sleeping FLUX.2 [Klein] 4B: Generate or edit images from text prompts and optional reference images
Sleeping repurposed - HELVETE-3B (llama.cpp CPU): Chat with a local AI language model via web UI or API
Running repurposed - Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF (llama.cpp CPU): Chat with a local AI language model
Sleeping repurposed - Qwen2.5-Coder-1.5B-Instruct-GGUF (llama.cpp CPU): Chat with an AI assistant using a local LLM
Sleeping repurposed - Qwen3-1.7B-GGUF (llama.cpp CPU): Generate chat responses with the Qwen3-1.7B model
Running Qwen3-VL-2B-Thinking-GGUF (llama.cpp CPU): Chat with an AI text model via an OpenAI-style API
Sleeping Falcon-H1-Tiny-Coder-90M-GGUF (llama.cpp CPU): Chat with an AI language model via a simple web UI
Running Gemma-3-Prompt-Coder-270m-it-Uncensored-GGUF (llama.cpp CPU): Chat with a local AI assistant powered by the Gemma 3 270M model
Sleeping DeepSeek-R1-Distill-Qwen-1.5B-GGUF (llama.cpp CPU): Chat with an AI assistant powered by DeepSeek-R1-Distill-Qwen-1.5B
Running Meta-Llama-3.1-8B-Instruct-abliterated-GGUF (llama.cpp CPU): Chat with a local AI language model
Running Llama-3.2-3B-Instruct-abliterated-GGUF (llama.cpp CPU): Chat with an AI language model via text
Sleeping SmolLM2-135M-Instruct-GGUF (llama.cpp CPU): Chat with a local AI language model via a web interface
Running Falcon H1 Tiny 90M (llama.cpp CPU, Tools + JSON): Generate AI-powered chat responses with Falcon H1 Tiny
Running Qwen2.5 0.5B Instruct (ONNX Runtime CPU): Chat with an AI assistant powered by the Qwen2.5 model
Sleeping Qwen2.5 0.5B Instruct (Transformers CPU): Generate AI chat responses from your text prompts
Running Qwen2.5 0.5B Instruct (OpenVINO CPU): Chat with an AI assistant using the OpenVINO language model
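Several of the llama.cpp CPU Spaces above expose an OpenAI-style chat completions endpoint. A minimal sketch of calling such a server using only the Python standard library; the base URL, model name, and prompt are placeholder assumptions, not values taken from any specific Space:

```python
import json
import urllib.request

# Assumed endpoint: llama.cpp's server typically serves an
# OpenAI-compatible /v1/chat/completions route. Host and port
# here are placeholders for wherever the Space is reachable.
BASE_URL = "http://localhost:8080/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "local-model",
                       temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def chat(prompt: str) -> str:
    """POST the payload to the server and return the assistant's reply.

    Requires a running OpenAI-compatible server at BASE_URL.
    """
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses carry the text under choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

The same client code works unchanged against any of the listed chat Spaces that serve this API shape, since only `BASE_URL` and the model name differ between them.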