Instructions for using VECTORVV1/Gemma-4-E2B-Aggressive with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use VECTORVV1/Gemma-4-E2B-Aggressive with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="VECTORVV1/Gemma-4-E2B-Aggressive",
    filename="Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-IQ3_M.gguf",
)
```
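Before sending images, a quick text-only sanity check may be useful. A minimal sketch reusing the `llm` handle loaded above (the prompt is illustrative):

```python
# Plain-text chat completion; the reply lives in the first choice's message
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```

For multimodal input, pass the image as a structured content part: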
```python
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use VECTORVV1/Gemma-4-E2B-Aggressive with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M

# Run inference directly in the terminal:
llama-cli -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M

# Run inference directly in the terminal:
llama-cli -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M

# Run inference directly in the terminal:
./llama-cli -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Use Docker
```sh
docker model run hf.co/VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
- LM Studio
- Jan
- vLLM
How to use VECTORVV1/Gemma-4-E2B-Aggressive with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "VECTORVV1/Gemma-4-E2B-Aggressive"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VECTORVV1/Gemma-4-E2B-Aggressive",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
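Because the endpoint is OpenAI-compatible, the same request can also be made from Python with the `openai` client. A minimal text-only sketch, assuming the `vllm serve` process above is listening on localhost:8000 (the prompt and `api_key` placeholder are illustrative; vLLM doesn't check the key unless you configure one):

```python
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

completion = client.chat.completions.create(
    model="VECTORVV1/Gemma-4-E2B-Aggressive",
    messages=[{"role": "user", "content": "Describe the Statue of Liberty in one sentence."}],
)
print(completion.choices[0].message.content)
```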
Use Docker
```sh
docker model run hf.co/VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
- Ollama
How to use VECTORVV1/Gemma-4-E2B-Aggressive with Ollama:
```sh
ollama run hf.co/VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
- Unsloth Studio
How to use VECTORVV1/Gemma-4-E2B-Aggressive with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for VECTORVV1/Gemma-4-E2B-Aggressive to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for VECTORVV1/Gemma-4-E2B-Aggressive to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for VECTORVV1/Gemma-4-E2B-Aggressive to start chatting
```
- Pi
How to use VECTORVV1/Gemma-4-E2B-Aggressive with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M" }
      ]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use VECTORVV1/Gemma-4-E2B-Aggressive with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use VECTORVV1/Gemma-4-E2B-Aggressive with Docker Model Runner:
```sh
docker model run hf.co/VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
- Lemonade
How to use VECTORVV1/Gemma-4-E2B-Aggressive with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull VECTORVV1/Gemma-4-E2B-Aggressive:IQ3_M
```
Run and chat with the model
```sh
lemonade run user.Gemma-4-E2B-Aggressive-IQ3_M
```
List all available models
```sh
lemonade list
```
# Gemma-4-E2B-Uncensored-HauhauCS-Aggressive
Join the Discord for updates, roadmaps, projects, or just to chat.
Gemma 4 E2B-IT uncensored by HauhauCS. 0/465 Refusals***
Hugging Face's "Hardware Compatibility" widget doesn't recognize K_P quants, so it may show fewer files than actually exist. Click "View +X variants" or open Files and versions to see all available downloads.
## About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended, just without the refusals.
These are meant to be the best lossless uncensored models out there.
## Aggressive Variant
Stronger uncensoring: the model is fully unlocked and won't refuse prompts. It may occasionally append short disclaimers (baked into the base model's training, not refusals), but the full content is always generated.
For a more conservative uncensoring that keeps some safety guardrails, check the Balanced variant when it becomes available.
## Downloads
| File | Quant | BPW (bits per weight) | Size |
|---|---|---|---|
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q8_K_P.gguf | Q8_K_P | 9.4 | 4.7 GB |
| — | Q8_0 | 8.5 | — |
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q6_K_P.gguf | Q6_K_P | 7.0 | 3.7 GB |
| — | Q6_K | 6.6 | — |
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q5_K_P.gguf | Q5_K_P | 6.1 | 3.5 GB |
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q4_K_P.gguf | Q4_K_P | 5.2 | 3.3 GB |
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q3_K_P.gguf | Q3_K_P | 4.1 | 3.1 GB |
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-IQ3_M.gguf | IQ3_M | 3.7 | 3.0 GB |
| Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q2_K_P.gguf | Q2_K_P | 3.5 | 2.9 GB |
| mmproj-Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-f16.gguf | mmproj (f16) | — | 940 MB |
All quants were generated with an importance matrix (imatrix) for optimal quality preservation on the abliterated weights. Rows without a file list the BPW of the corresponding base quant for comparison.
## What are K_P quants?
K_P ("Perfect") quants are HauhauCS custom quantizations that use model-specific analysis to selectively preserve quality where it matters most. Each model gets its own optimized quantization profile.
A K_P quant effectively bumps quality up by 1-2 quant levels at only ~5-15% larger file size than the base quant. Fully compatible with llama.cpp, LM Studio, and any GGUF-compatible runtime — no special builds needed.
Note: K_P quants may show as "?" in LM Studio's quant column. This is a display issue only — the model loads and runs fine.
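As a quick sanity check of the "~5-15% larger" claim, the BPW column in the table above already contains both the K_P and base-quant figures, so the overhead is simple arithmetic:

```python
# K_P vs. base quant size overhead, using the BPW column from the
# table above (file size scales roughly linearly with bits per weight)
pairs = {
    "Q8_K_P vs Q8_0": (9.4, 8.5),
    "Q6_K_P vs Q6_K": (7.0, 6.6),
}

for name, (kp_bpw, base_bpw) in pairs.items():
    overhead_pct = (kp_bpw / base_bpw - 1) * 100
    print(f"{name}: ~{overhead_pct:.0f}% larger")

# Output:
# Q8_K_P vs Q8_0: ~11% larger
# Q6_K_P vs Q6_K: ~6% larger
```

Both figures fall inside the stated ~5-15% band.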
## Specs
- 2B parameters
- 35 layers, mixed sliding window (512) + full attention
- 131K context
- Natively multimodal (text, image, video, audio)
- 20 KV shared layers for memory efficiency
- Based on google/gemma-4-e2b-it
## Recommended Settings
From the official Google Gemma 4 authors:
`temperature=1.0, top_p=0.95, top_k=64`
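In llama-cpp-python these map one-to-one onto sampling kwargs; with llama-cli the equivalents are `--temp 1.0 --top-p 0.95 --top-k 64`. A minimal sketch (the prompt is illustrative):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="VECTORVV1/Gemma-4-E2B-Aggressive",
    filename="Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-IQ3_M.gguf",
)

# Recommended Gemma 4 sampling settings, passed per request
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}],
    temperature=1.0,
    top_p=0.95,
    top_k=64,
)
print(out["choices"][0]["message"]["content"])
```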
Important:
- Use the `--jinja` flag with llama.cpp for proper chat template handling
- Vision/audio support requires the `mmproj` file alongside the main GGUF
## Usage
Works with llama.cpp, LM Studio, Jan, koboldcpp, and other GGUF-compatible runtimes.
```sh
# Text only
llama-cli -m Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q4_K_P.gguf \
  --jinja -c 8192 -ngl 99

# With vision/audio
llama-cli -m Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-Q4_K_P.gguf \
  --mmproj mmproj-Gemma-4-E2B-Uncensored-HauhauCS-Aggressive-f16.gguf \
  --jinja -c 8192 -ngl 99
```
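The same setup also works through llama-server's OpenAI-compatible endpoint (port 8080 by default) when started with the matching flags, e.g. `llama-server -m ... --mmproj ... --jinja`. A minimal vision-request sketch with the `openai` Python client, assuming a recent llama.cpp build with multimodal server support is running locally (the model name field is informational; llama-server answers with whatever model it has loaded):

```python
from openai import OpenAI

# llama-server exposes an OpenAI-compatible API; no real key is needed
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="Gemma-4-E2B-Uncensored-HauhauCS-Aggressive",  # informational for llama-server
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {
                "type": "image_url",
                "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
            },
        ],
    }],
)
print(resp.choices[0].message.content)
```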
## Other Sizes
- Gemma-4-E4B-Uncensored-HauhauCS-Aggressive — 4B version, more capable
* Gemma 4 didn't get as much manual testing time at longer context as my other releases. Google is now using techniques similar to NVIDIA's GenRM — generative reward models that act as internal critics — making (true) uncensoring an increasingly challenging field. I expect 99.999% of users won't hit edge cases, but the asterisk is there for honesty.
** This is a 2B model. Temper your expectations — it's impressive for its size, but it's still 2B parameters. Complex reasoning, nuanced roleplay, and long coherent outputs are not its strong suit. Great for quick tasks, mobile/edge deployment, and experimentation.