Access Gemma on Hugging Face

This repository is publicly accessible, but it is gated: you must be logged in to Hugging Face and review and agree to Google's Gemma usage license before you can access its files and content. Access requests are processed immediately.

TranslateGemma GGUF Q8_0

By: Patrick Lumbantobing
Copyright © VertoX-AI

Downloads last month: 14
Format: GGUF (8-bit, Q8_0)
Model size: 4B params
Architecture: gemma3

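Q8_0 refers to llama.cpp's 8-bit block quantization: weights are split into fixed-size blocks, and each block stores one shared scale plus a signed 8-bit integer per weight. The sketch below illustrates that idea in plain Python; it is a simplified illustration, not llama.cpp's actual implementation (which works on blocks of 32 values with an fp16 scale in C), and the function names are my own.

```python
def q8_0_quantize(block):
    """Quantize one block of floats to signed 8-bit ints sharing one scale.

    Illustrative sketch of Q8_0-style block quantization: the block stores
    a single float scale plus one int8 value per weight.
    """
    amax = max(abs(x) for x in block)
    scale = amax / 127.0 if amax > 0 else 1.0
    q = [max(-127, min(127, round(x / scale))) for x in block]
    return scale, q

def q8_0_dequantize(scale, q):
    """Recover approximate float weights from the scale and int8 values."""
    return [scale * v for v in q]

# Toy block of 32 weights, matching Q8_0's block size.
weights = [0.031 * (i - 16) for i in range(32)]
scale, q = q8_0_quantize(weights)
restored = q8_0_dequantize(scale, q)
err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"scale={scale:.5f}, max abs error={err:.5f}")
```

Because each quantized value is rounded to the nearest step of size `scale`, the reconstruction error per weight is bounded by half the scale, which is why Q8_0 is close to lossless in practice for 8 bits per weight.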

Model tree for pltobing/translategemma-4b-it-Q8_0-GGUF

This model is one of 32 quantized variants derived from the base model.
