How to use themindstudio/flux2-klein-9b-mlx-4bit with MLX:

```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir flux2-klein-9b-mlx-4bit themindstudio/flux2-klein-9b-mlx-4bit
```
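If you prefer Python over the CLI, the same download can be done with `huggingface_hub`'s `snapshot_download`. This is a sketch of an equivalent approach, not part of this repo's documented workflow; the `download_model` helper is hypothetical:

```python
from huggingface_hub import snapshot_download

REPO_ID = "themindstudio/flux2-klein-9b-mlx-4bit"
LOCAL_DIR = "flux2-klein-9b-mlx-4bit"

def download_model(repo_id: str = REPO_ID, local_dir: str = LOCAL_DIR) -> str:
    """Fetch every file in the repo into local_dir; returns the local path."""
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

if __name__ == "__main__":
    print(download_model())
```

`snapshot_download` is idempotent: already-downloaded files are reused from the local cache on subsequent calls.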
FLUX.2 Klein (9B) - MLX 4-bit
This is a 4-bit quantized version of FLUX.2 Klein (9B), optimized for Apple Silicon (M-series chips) via the MLX framework. To use this model, download MindCraft Studio and import it as a base model.
Model Details
- Architecture: FLUX.2 Klein (9B parameters)
- Precision: 4-bit quantization
- Framework: MLX
- Optimized For: macOS (Apple Silicon)
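The practical payoff of 4-bit quantization is weight memory. A back-of-envelope estimate for a 9B-parameter model (assuming MLX-style group quantization with group size 64 and an fp16 scale and bias per group, which are assumptions about defaults, not values read from this repo):

```python
# Rough weight-memory estimate for a 9B-parameter model.
PARAMS = 9e9

def weight_bytes(bits_per_weight: float, group_size: int = 64,
                 scale_bias_bits: int = 32) -> float:
    # Each group of `group_size` weights also stores one fp16 scale
    # and one fp16 bias (32 extra bits per group, assumed defaults).
    effective_bits = bits_per_weight + scale_bias_bits / group_size
    return PARAMS * effective_bits / 8

GIB = 1024 ** 3
print(f"4-bit : {weight_bytes(4) / GIB:.1f} GiB")                      # ~4.7 GiB
print(f"16-bit: {weight_bytes(16, scale_bias_bits=0) / GIB:.1f} GiB")  # ~16.8 GiB
```

So the quantized weights fit comfortably in the unified memory of most M-series Macs, roughly a 3.5x reduction versus 16-bit weights (activations and KV/state memory come on top of this).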
Original Model
This model is a conversion of FLUX.2 Klein (9B) by Black Forest Labs. Please refer to the original model repository for licensing (Non-Commercial), citation, and more details.
License
This model is licensed under the FLUX Non-Commercial License.