Post 6978

We made a guide on how to run open LLMs in Claude Code, Codex, and OpenClaw. Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24 GB RAM. Run with self-healing tool calls, code execution, and web search via the Unsloth API endpoint and llama.cpp.

Guide: https://unsloth.ai/docs/basics/api
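The linked guide covers the full setup; as a minimal sketch, a llama.cpp `llama-server` instance exposes an OpenAI-compatible `/v1/chat/completions` endpoint that coding agents can point at. The endpoint URL, port, and model name below are illustrative assumptions, not values from the post:

```python
import json

# Assumed local llama-server address; llama.cpp serves an
# OpenAI-compatible API at /v1/chat/completions by default.
LOCAL_ENDPOINT = "http://127.0.0.1:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> str:
    """Serialize a minimal chat-completion request body for a local endpoint."""
    body = {
        "model": model,  # hypothetical model name; llama-server ignores or echoes it
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits agentic coding tasks
    }
    return json.dumps(body)

# To send it against a running llama-server (not executed here):
#   import urllib.request
#   req = urllib.request.Request(
#       LOCAL_ENDPOINT,
#       data=build_chat_request("gemma", "Write hello world in C").encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

A coding agent configured with this base URL then issues the same request shape on every turn, so any GGUF the server loads becomes usable without changing the agent.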