QuantLLM/functiongemma-270m-it-4bit-mlx
Tags: Text Generation · MLX · Safetensors · Transformers · English · gemma3_text · quantllm · mlx-lm · apple-silicon · q4_k_m · conversational · text-generation-inference · 8-bit precision · bitsandbytes
License: apache-2.0
main / functiongemma-270m-it-4bit-mlx / added_tokens.json
codewithdark: Upload model via QuantLLM (commit 0a76aa1, verified, 6 days ago)
63 Bytes
{
  "<end_of_image>": 262145,
  "<image_soft_token>": 262144
}
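The file maps added special-token strings (here, two image-related tokens) to vocabulary IDs sitting just past the base Gemma vocabulary. A minimal sketch of reading such a file with only the Python standard library; the JSON string below simply reproduces the file contents shown above:

```python
import json

# Contents of added_tokens.json: token string -> vocabulary ID.
ADDED_TOKENS_JSON = """
{
  "<end_of_image>": 262145,
  "<image_soft_token>": 262144
}
"""

added_tokens = json.loads(ADDED_TOKENS_JSON)

# The IDs extend the base vocabulary rather than overwriting existing
# entries, so a tokenizer that loads this file gains two new tokens.
for token, token_id in sorted(added_tokens.items(), key=lambda kv: kv[1]):
    print(f"{token_id}: {token}")
```

In practice a tokenizer library reads this file automatically when loading the model; the sketch only shows what the mapping itself contains.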