How to use osirisbrain/OsirisCortex-v7c-MLX-Q4 with MLX:

```sh
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir OsirisCortex-v7c-MLX-Q4 osirisbrain/OsirisCortex-v7c-MLX-Q4
```
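Once downloaded, the model can be loaded with the `mlx-lm` Python API (`pip install mlx-lm`, Apple Silicon required). A minimal sketch — the prompt and `max_tokens` value are illustrative, not part of the card:

```python
# Illustrative usage sketch: load the quantized model with mlx-lm
# and run a single generation. Requires Apple Silicon and mlx-lm.
from mlx_lm import load, generate

# Path matches the --local-dir used in the download command above
model, tokenizer = load("OsirisCortex-v7c-MLX-Q4")

prompt = "Explain step by step: what is 17 * 24?"  # example prompt
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```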
---
base_model: Qwen/Qwen3.5-9B
library_name: mlx
tags:
- mlx
- mlx-lm
- qwen3.5
- osirisbrain
---

# OsirisCortex-v7c MLX Q4

OsirisBrain sovereign reasoning model: Qwen3.5-9B quantized to 4-bit for the Apple Silicon Metal GPU.

- **Base model:** Qwen/Qwen3.5-9B
- **Quantization:** 4-bit (group_size=64, affine)
- **Framework:** MLX (Apple Silicon native)
- **Use case:** Core reasoning brain for the OsirisBrain AGI Desktop Agent
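The 4-bit affine scheme named above can be illustrated with a small pure-Python sketch. This is a simplification for intuition, not MLX's actual implementation; real groups hold `group_size=64` weights, while the toy group here holds four:

```python
# Illustrative sketch of 4-bit affine (asymmetric) quantization:
# each group of weights is mapped to integers 0..15 with a per-group
# scale and zero-point, which are stored alongside the packed ints.

def quantize_group(weights):
    """Quantize one group of floats to 4-bit ints with an affine map."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0          # 4 bits -> 16 levels (0..15)
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo                    # per-group scale and zero-point

def dequantize_group(q, scale, lo):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale + lo for v in q]

group = [0.5, -1.0, 0.25, 1.0]             # toy group (real groups hold 64)
q, scale, lo = quantize_group(group)
restored = dequantize_group(q, scale, lo)
```

Each weight is recovered to within half a quantization step (`scale / 2`), which is the trade-off that lets the 9B-parameter weights fit in roughly a quarter of their 16-bit footprint.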