---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/zooai/coder-1-gguf/blob/main/LICENSE
pipeline_tag: text-generation
tags:
  - zoo
  - coder
  - coding
  - a3b
  - gguf
  - quantized
---

# Zoo Coder-1 GGUF (Quantized Coding Model)

**Zoo AI 501(c)(3)**

## Overview

Zoo Coder-1 GGUF provides quantized versions of our enterprise-grade coding AI model. These GGUF-formatted models enable efficient deployment across various hardware configurations while maintaining excellent coding capabilities.

## Model Details

- **Base:** Qwen3-Coder with A3B technology
- **Format:** GGUF quantized
- **Context:** 32K tokens (extensible to 128K)
- **Languages:** Python, JavaScript, TypeScript, Go, Rust, Java, C++, and 50+ more

## Available Quantizations

| Variant | Size | RAM Required | Use Case |
|---------|------|--------------|----------|
| Q2_K | ~2GB | 4GB | Edge devices, prototyping |
| Q3_K_M | ~2.5GB | 5GB | Mobile, lightweight servers |
| Q4_K_M | ~3.2GB | 6GB | **Recommended**: best balance |
| Q5_K_M | ~4GB | 7GB | High-quality production |
| Q6_K | ~5GB | 8GB | Maximum quality |

## Quick Start

### With llama.cpp

```bash
./main -m Q4_K_M-GGUF/Q4_K_M-GGUF-00001-of-00032.gguf \
  -p "Write a Python function to calculate fibonacci numbers"
```

### With Zoo Desktop

```bash
zoo model download coder-1-gguf
```
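If the `zoo` CLI is not available, individual files can also be fetched with the `huggingface_hub` Python library. This is a sketch under assumptions: the repo id is inferred from this card's license link, and the shard filename is taken from the llama.cpp example above.

```python
# Sketch: download one GGUF shard with huggingface_hub
# (pip install huggingface_hub). Repo id and filename are
# assumptions based on this card's links and examples.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="zooai/coder-1-gguf",
    filename="Q4_K_M-GGUF/Q4_K_M-GGUF-00001-of-00032.gguf",
)
print(path)  # local cache path of the downloaded file
```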

## About Zoo AI

Zoo Labs Foundation Inc. is a 501(c)(3) nonprofit organization pioneering accessible AI infrastructure.

## License

Apache 2.0