How to use codingmavin/L1-Qwen-1.5B-Max-mlx-8Bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir L1-Qwen-1.5B-Max-mlx-8Bit codingmavin/L1-Qwen-1.5B-Max-mlx-8Bit
```
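Once downloaded, the model can be run with the `mlx-lm` package. A minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and an Apple Silicon machine; the prompt text is illustrative:

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer directly from the Hub
# (or pass the local directory created by huggingface-cli download).
model, tokenizer = load("codingmavin/L1-Qwen-1.5B-Max-mlx-8Bit")

# Generate a completion; verbose=True streams tokens to stdout.
prompt = "Explain what 8-bit quantization does in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)
```

Loading by repo id will fetch the weights automatically if they are not already cached, so the explicit download step above is optional when using `mlx-lm`.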