# Provn Gemma 4 E2B Q4_K_M
This repository contains the GGUF Layer 3 semantic classifier used by Provn.
It is a fine-tuned Gemma derivative for binary leak classification of code snippets, with two labels: `leak` and `clean`.
## Intended use
Use this model locally with Provn as the optional Layer 3 semantic classifier for ambiguous detections.
## Download location for Provn
Place the GGUF file at:
- macOS/Linux: `~/.provn/models/provn-gemma4-e2b-q4km.gguf`
- Windows: `%USERPROFILE%\.provn\models\provn-gemma4-e2b-q4km.gguf`
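The placement step on macOS/Linux can be sketched as the following shell commands; the directory and filename are taken from the paths above, while the source path `./provn-gemma4-e2b-q4km.gguf` is only an example of a locally downloaded copy.

```shell
# Create the directory Provn expects (macOS/Linux path from above)
MODEL_DIR="$HOME/.provn/models"
mkdir -p "$MODEL_DIR"

# Move a previously downloaded copy into place (skip if not present here)
if [ -f ./provn-gemma4-e2b-q4km.gguf ]; then
  mv ./provn-gemma4-e2b-q4km.gguf "$MODEL_DIR/"
fi

echo "Expecting model at: $MODEL_DIR/provn-gemma4-e2b-q4km.gguf"
```

On Windows, use the `%USERPROFILE%\.provn\models\` path listed above instead.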
## Run with Provn
Start your llama.cpp-compatible server on `127.0.0.1:8080` with this GGUF, then verify the connection:

```
provn server status
```
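As a minimal sketch, the server step might look like the following, assuming llama.cpp's `llama-server` binary is on your `PATH` (the `-m`, `--host`, and `--port` flags are standard llama-server options; the exact server you use may differ):

```shell
# Serve the GGUF on the address and port Provn expects (from the README above)
llama-server -m "$HOME/.provn/models/provn-gemma4-e2b-q4km.gguf" \
  --host 127.0.0.1 --port 8080 &

# Then check that Provn can reach it
provn server status
```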
## Gemma terms
This model is a derivative of Gemma and is distributed subject to the Gemma Terms of Use and Gemma Prohibited Use Policy.
- Gemma Terms of Use: https://ai.google.dev/gemma/terms
- Gemma Prohibited Use Policy: https://ai.google.dev/gemma/prohibited_use_policy
## Modification notice
This repository contains modified (fine-tuned) model artifacts created for Provn.