Post 2514
Run GLM-4.7-Flash locally on your device with 24 GB RAM! 🔥 It's the best-performing 30B model on SWE-Bench and GPQA. With 200K context, it excels at coding, agents, chat & reasoning.
GGUF: unsloth/GLM-4.7-Flash-GGUF
Guide: https://unsloth.ai/docs/models/glm-4.7-flash
EloyOn/Christian-Bible-Expert-v2.0-12B-Q5_0-GGUF — Text Generation • 12B • Updated 17 days ago • 111
EloyOn/Christian-Bible-Expert-v2.0-24B-Q5_0-GGUF — Text Generation • 24B • Updated 17 days ago • 23