Sleeping repurposed - HELVETE-3B (llama.cpp CPU) 🧠 Chat with a local AI language model via web UI or API
Running repurposed - Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF (llama.cpp CPU) 🧠 Chat with a local AI language model
Sleeping repurposed - Qwen2.5-Coder-1.5B-Instruct-GGUF (llama.cpp CPU) 🧠 Chat with an AI assistant using a local LLM
Sleeping Falcon-H1-Tiny-Coder-90M-GGUF (llama.cpp CPU) ⌨ Chat with an AI language model via a simple web UI
Running Gemma-3-Prompt-Coder-270m-it-Uncensored-GGUF (llama.cpp CPU) ⌨ Chat with a local AI assistant powered by a Gemma 3 270M model
Sleeping DeepSeek-R1-Distill-Qwen-1.5B-GGUF (llama.cpp CPU) 🧠 Chat with an AI assistant powered by DeepSeek-R1-Distill-Qwen 1.5B
Running Meta-Llama-3.1-8B-Instruct-abliterated-GGUF (llama.cpp CPU) 🧠 Chat with a local AI language model
Running Llama-3.2-3B-Instruct-abliterated-GGUF (llama.cpp CPU) 🧠 Chat with an AI language model via text
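Each of these Spaces is served by llama.cpp, whose built-in server exposes an OpenAI-compatible chat API alongside the web UI. A minimal sketch of talking to one of them programmatically, assuming the endpoint URL (the `BASE_URL` placeholder below is an assumption; substitute the actual Space's API URL):

```python
import json
from urllib import request

# Placeholder endpoint (assumption) -- replace with the Space's API URL.
BASE_URL = "http://localhost:8080"

def build_chat_payload(user_message, system_prompt="You are a helpful assistant."):
    """Build the JSON body for llama.cpp's /v1/chat/completions endpoint."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
        "max_tokens": 256,
    }

def chat(user_message):
    """Send one chat turn and return the assistant's reply text."""
    body = json.dumps(build_chat_payload(user_message)).encode()
    req = request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    # Response follows the OpenAI chat-completions shape.
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello!"))
```

The same request body works against any of the models listed above, since llama.cpp presents the same API regardless of which GGUF it loads.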