Instructions for using spicyneuron/Kimi-K2.6-MLX-3.6bit with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use spicyneuron/Kimi-K2.6-MLX-3.6bit with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("spicyneuron/Kimi-K2.6-MLX-3.6bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use spicyneuron/Kimi-K2.6-MLX-3.6bit with Pi:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "spicyneuron/Kimi-K2.6-MLX-3.6bit"
```
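Once the server is running, you can optionally confirm it responds before configuring Pi. A minimal stdlib-only Python check (this assumes the server's default port, 8080, which the Pi config below also points at):
```python
# Quick sanity check against the local OpenAI-compatible server.
# Assumes mlx_lm.server is listening on its default port, 8080.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "spicyneuron/Kimi-K2.6-MLX-3.6bit",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```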
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "spicyneuron/Kimi-K2.6-MLX-3.6bit" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use spicyneuron/Kimi-K2.6-MLX-3.6bit with Hermes Agent:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "spicyneuron/Kimi-K2.6-MLX-3.6bit"
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default spicyneuron/Kimi-K2.6-MLX-3.6bit
```
Run Hermes
```bash
hermes
```
- MLX LM
How to use spicyneuron/Kimi-K2.6-MLX-3.6bit with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "spicyneuron/Kimi-K2.6-MLX-3.6bit"
```
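To script generation from Python instead of using the REPL, recent mlx-lm releases also export stream_generate, which yields the reply incrementally. A minimal sketch (the chunk's `.text` field is how recent mlx-lm versions surface each new text segment):
```python
# Stream a chat reply from Python with mlx-lm's stream_generate.
# Assumes a recent mlx-lm where chunks expose a .text attribute.
from mlx_lm import load, stream_generate

model, tokenizer = load("spicyneuron/Kimi-K2.6-MLX-3.6bit")

messages = [{"role": "user", "content": "Write a story about Einstein"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print text as it is generated instead of waiting for the full reply
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```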
Run an OpenAI-compatible server
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "spicyneuron/Kimi-K2.6-MLX-3.6bit"

# Call the OpenAI-compatible server with curl (default port 8080)
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "spicyneuron/Kimi-K2.6-MLX-3.6bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
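Because the server speaks the standard OpenAI chat-completions protocol, any OpenAI-compatible client works as well. A minimal sketch with the official openai Python package (assumes `pip install openai`; the API key is an arbitrary placeholder for a local server):
```python
# Call the local mlx_lm.server through the OpenAI Python client.
# Assumes `pip install openai`; the key is a placeholder for local use.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="spicyneuron/Kimi-K2.6-MLX-3.6bit",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```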
Will there be a 2.8bit quant?
spicyneuron/Kimi-K2.5-MLX-2.8bit fits well on an M3 Ultra (512 GB) with full context, so it's the perfect size. Will you be doing a 2.8bit version for Kimi-K2.6 as well?
Stay tuned! Kimi trials take a bit longer since I can only fit 3 versions on a 4 TB external SSD (the dequantized model alone is 2 TB+).
So far, experiments in the 2.9–3.3 bpw range have shown significantly higher KL divergence. Still searching for the best tradeoff.
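(For reference, the KL divergence here compares the full-precision model's next-token distribution against the quantized model's, averaged over eval tokens. A rough sketch of the computation, not the exact harness:)
```python
# Rough sketch: mean per-token KL(ref || quant) from raw logits.
# Illustrative only; logits arrays are (num_tokens, vocab_size).
import numpy as np

def mean_kl(ref_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    ref_logp = log_softmax(ref_logits)
    quant_logp = log_softmax(quant_logits)
    p = np.exp(ref_logp)
    return float((p * (ref_logp - quant_logp)).sum(axis=-1).mean())
```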
much appreciated
Uploading a 430 GB version now: https://huggingface.co/spicyneuron/Kimi-K2.6-MLX-3.3bit
This was a tricky one. At 2.9 bits, K2.6 is still ~400 GB. Perplexity barely moves, but KL divergence triples, and other evals noticeably decay.
It's entirely possible my Kimi K2.5 quants had similar decay, but my earlier workflows didn't capture it. In any case, let me know how it runs!
I’m gonna check it out. Thanks.