cs2764/Kimi-K2.6_dq3-mlx

- Pipeline: Feature Extraction
- Libraries: MLX, Transformers, Safetensors
- Tags: kimi_k25, quantization, dq3, custom_code, 4-bit precision
Instructions for using cs2764/Kimi-K2.6_dq3-mlx with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- MLX
How to use cs2764/Kimi-K2.6_dq3-mlx with MLX:
```sh
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Kimi-K2.6_dq3-mlx cs2764/Kimi-K2.6_dq3-mlx
```
- Transformers
How to use cs2764/Kimi-K2.6_dq3-mlx with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="cs2764/Kimi-K2.6_dq3-mlx", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("cs2764/Kimi-K2.6_dq3-mlx", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
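The feature-extraction pipeline shown above returns one embedding vector per token, so a common follow-up step is pooling those vectors into a single sentence embedding. A minimal mean-pooling sketch, using made-up values shaped like the pipeline's nested-list output (one list per input, one vector per token) so no model download is needed:

```python
import numpy as np

# pipe("some text") returns [[vec_token_0, vec_token_1, ...]]:
# one list per input, one vector per token.
# Dummy output below: 1 input, 3 tokens, hidden size 4 (values are illustrative).
features = [[[1.0, 2.0, 3.0, 4.0],
             [2.0, 3.0, 4.0, 5.0],
             [3.0, 4.0, 5.0, 6.0]]]

token_vectors = np.array(features[0])            # shape (3, 4)
sentence_embedding = token_vectors.mean(axis=0)  # mean-pool over the token axis

print(sentence_embedding)  # [2. 3. 4. 5.]
```

With the real pipeline, replace `features` with `pipe("your text")`; the pooling step is unchanged.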