QuantLLM/functiongemma-270m-it-4bit-mlx
Tags: Text Generation · MLX · Safetensors · Transformers · English · gemma3_text · quantllm · mlx-lm · apple-silicon · q4_k_m · conversational · text-generation-inference · 8-bit precision · bitsandbytes
License: apache-2.0
Branch: main · File size: 63 Bytes · Commit: 0a76aa1
{
  "<end_of_image>": 262145,
  "<image_soft_token>": 262144
}
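The file above is a small JSON map from added special-token strings to their vocabulary ids. A minimal sketch of working with such a mapping (the literal below just inlines the file's contents; in practice you would read the file from the repo):

```python
import json

# Contents of the token-map file shown above: special-token string -> vocab id.
raw = '{"<end_of_image>": 262145, "<image_soft_token>": 262144}'
added_tokens = json.loads(raw)

# Invert the map to resolve an id back to its token string.
id_to_token = {v: k for k, v in added_tokens.items()}

print(added_tokens["<end_of_image>"])   # → 262145
print(id_to_token[262144])              # → <image_soft_token>
```

Tokenizer libraries such as Hugging Face `transformers` consume mappings like this automatically when loading a tokenizer; the sketch only illustrates the file's structure.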