Active filters: mlx
mlx-vision/regnet_x_8gf-mlxim • Image Classification • Updated • 4
mlx-vision/regnet_y_16gf-mlxim • Image Classification • Updated • 4
mlx-vision/regnet_x_16gf-mlxim • Image Classification • Updated • 5
mlx-vision/regnet_y_400mf-mlxim • Image Classification • Updated • 3
mlx-community/Qwen2.5-Coder-32B-Instruct-6bit • Text Generation • Updated • 116 • 1
mlx-community/Qwen2.5-Coder-32B-Instruct-3bit • Text Generation • Updated • 86 • 4
mlx-community/Hunyuan-A52B-Instruct-3bit • Text Generation • 61B • Updated • 125 • 5
mlx-community/Florence-2-large-ft-6bit • Image-Text-to-Text • 0.2B • Updated • 3
mlx-community/Molmo-7B-D-0924-6bit • Image-Text-to-Text • Updated • 10
mlx-community/Meta-Llama-3.1-8B-Instruct-3bit • Text Generation • Updated • 36
mlx-community/Florence-2-base-ft-6bit • Image-Text-to-Text • 65.2M • Updated
mlx-community/Florence-2-base-ft-3bit • Image-Text-to-Text • 40.6M • Updated • 1
mlx-community/Molmo-7B-D-0924-3bit • Image-Text-to-Text • Updated • 11
mlx-community/Qwen2.5-72B-Instruct-3bit • Text Generation • Updated • 24
mlx-community/Qwen2.5-72B-Instruct-6bit • Text Generation • Updated • 38 • 1
Text Generation • 2B • Updated • 3
mlx-community/Llama-3.1-Tulu-3-8B-3bit • Text Generation • 1B • Updated • 3
mlx-community/Llama-3.1-Tulu-3-8B-4bit • Text Generation • 1B • Updated • 2
mlx-community/Llama-3.1-Tulu-3-70B-4bit • Text Generation • 11B • Updated • 9
sigtakahashi/google-de-gozari • Text Generation • 1B • Updated • 18
mlx-community/Llama-3.1-Tulu-3-70B-6bit • Text Generation • 15B • Updated • 11
Text Generation • 2B • Updated • 2
mlx-community/Llama-3.1-Tulu-3-8B-8bit • Text Generation • 2B • Updated • 14 • 1
litmudoc/Qwen2.5-Coder-32B-Instruct-Q4-mlx • Text Generation • 5B • Updated • 29
dwdcth/Homer-v0.5-Qwen2.5-7B-Q4-mlx • 1B • Updated • 27
CuckmeisterFuller/Marco-o1-Q4-mlx • Text Generation • 1B • Updated • 3
base16/Llama-3.2-1B-Instruct-4bit • Text Generation • 0.2B • Updated • 21 • 1
HawkonLi/Hunyuan-A52B-Instruct-2bit • Text Generation • 30B • Updated • 22 • 1
tensorblock/Qwen2.5-Coder-32B-Instruct-bf16-GGUF • Text Generation • 33B • Updated • 92 • 1
Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit • Text Generation • 0.2B • Updated • 60 • 1