Based on Gemma-3n-4B, quantized to 8 bits (Q8_0) for faster inference.
Run it with Ollama:
ollama run hf.co/AravindKumarRajendran/WhiZ-gemma-3n-4b:Q8_0
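Once pulled, the model can also be queried programmatically through Ollama's local REST API (it listens on localhost:11434 by default). A minimal sketch, assuming a running Ollama server; the helper name and prompt are illustrative:

```python
import json
import urllib.request

# Model tag exactly as pulled with `ollama run` above.
MODEL = "hf.co/AravindKumarRajendran/WhiZ-gemma-3n-4b:Q8_0"

def build_generate_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # ask for one JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Hello!")
# With the Ollama server running, urllib.request.urlopen(req) sends the
# request and returns a JSON body whose "response" field holds the answer.
```

The same payload shape works from curl or any HTTP client; only the model tag needs to match the one used at pull time.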