---
title: JANGQ-AI
---

<p align="center">
<a href="https://mlx.studio"><img src="https://raw.githubusercontent.com/jjang-ai/jangq/main/assets/mlx-studio-light.png" alt="MLX Studio" width="500"></a>
</p>
|
|
| <p align="center"> |
| <a href="https://mlx.studio"><img src="https://mlx.studio/assets/screenshots/mlx-studio-featured.png?v=1" alt="MLX Studio App" width="600"></a> |
| </p> |
|
|
| <h3 align="center"><a href="https://mlx.studio">MLX Studio</a> — the only app that natively supports JANG models</h3> |
|
|
| --- |
|
|
> **LM Studio, Ollama, oMLX, Inferencer** and other MLX apps do **not** support JANG yet. Use [MLX Studio](https://mlx.studio) for native JANG support, or `pip install jang` for Python inference. **Ask your favorite app's creators to add JANG support!**
|
|
| --- |
|
|
| <p align="center"> |
| <img src="https://raw.githubusercontent.com/jjang-ai/jangq/main/assets/jangq-logo-dark.png" alt="JANG" width="300"> |
| </p> |
|
|
| # JANGQ-AI — JANG Quantized Models for Apple Silicon |
|
|
| **JANG** (**J**ang **A**daptive **N**-bit **G**rading) — the GGUF equivalent for MLX. |
|
|
| Same file size as standard MLX quantizations, with smarter bit allocation. Models stay quantized in GPU memory and run at full Metal speed. |
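The idea behind adaptive bit grading can be sketched with some back-of-the-envelope arithmetic. The numbers below are purely illustrative (the group overhead factor and the 6-bit/3.5-bit mix are assumptions, not JANG's actual scheme): spending more bits on sensitive layers and fewer elsewhere can land at or below the same size budget as a uniform quantization.

```python
def quantized_size_gb(n_params, bits_per_weight, overhead=1.05):
    """Approximate on-disk size of a quantized model, in GB.

    `overhead` stands in for per-group scales/biases; the 5% figure
    is illustrative, not JANG's actual metadata cost.
    """
    return n_params * bits_per_weight / 8 * overhead / 1e9

# Uniform 4-bit on a 35B-parameter model:
uniform = quantized_size_gb(35e9, 4.0)

# Hypothetical adaptive mix: 10% of weights at 6-bit (sensitive layers),
# 90% at 3.5-bit, averaging 3.75 bits per weight.
avg_bits = 0.10 * 6 + 0.90 * 3.5
mixed = quantized_size_gb(35e9, avg_bits)

print(f"uniform 4-bit: {uniform:.1f} GB, adaptive mix: {mixed:.1f} GB")
```

The point is that quality comes from *where* the bits go, not from a larger bit budget overall.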
|
|
| ## Install |
|
|
```shell
pip install "jang[mlx]"
```
|
|
| ## Models |
|
|
| | Model | Profile | MMLU | HumanEval | Size | |
| |-------|---------|------|-----------|------| |
| | [Qwen3.5-122B-A10B-JANG_2S](https://huggingface.co/JANGQ-AI/Qwen3.5-122B-A10B-JANG_2S) | 2-bit | **84%** | **90%** | 38 GB | |
| [Qwen3.5-35B-A3B-JANG_4K](https://huggingface.co/JANGQ-AI/Qwen3.5-35B-A3B-JANG_4K) | 4-bit K-quant | **84%** | **90%** | 16.7 GB |
| | [Qwen3.5-35B-A3B-JANG_2S](https://huggingface.co/JANGQ-AI/Qwen3.5-35B-A3B-JANG_2S) | 2-bit | 62% | — | 12 GB | |
|
|
| ## Links |
|
|
| [GitHub](https://github.com/jjang-ai/jangq) · [PyPI](https://pypi.org/project/jang/) · [MLX Studio](https://mlx.studio) |
|
|
| Created by Jinho Jang — [jangq.ai](https://jangq.ai) · [@dealignai](https://x.com/dealignai) |
|
|