Hyper-AI's Collections
gemma-4-fp8
updated 3 days ago
FP8 quantization of the gemma-4 models: roughly half the memory footprint and about a 30% speedup; the checkpoints can be served with vLLM (`vllm serve`).
Hyper-AI/gemma-4-31B-it-fp8 • Image-Text-to-Text • 31B • Updated 3 days ago • 48 • 1
Hyper-AI/gemma-4-E4B-it-fp8 • Any-to-Any • Updated 3 days ago • 33
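The collection description notes that these FP8 checkpoints run under `vllm serve`. A minimal invocation sketch, assuming vLLM is installed and the checkpoint's config files carry its FP8 quantization settings (the port and dtype flags below are illustrative defaults, not values from the collection):

```shell
# Launch an OpenAI-compatible server for the collection's 31B
# image-text-to-text checkpoint. vLLM reads the quantization
# scheme from the model's config, so no extra flag is needed
# when the checkpoint was exported with FP8 metadata.
vllm serve Hyper-AI/gemma-4-31B-it-fp8 \
    --dtype auto \
    --port 8000
```

Once the server is up, it can be queried through the standard OpenAI-compatible `/v1/chat/completions` endpoint on the chosen port.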