How to use with the Transformers library

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ICEPVP8977/Uncensored_codegemma_7b")
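
The pipeline object can then be called directly on a prompt. A minimal sketch follows; the prompt text and max_new_tokens value are only illustrative.

# Generate a completion (prompt and settings are example values)
result = pipe("Write a Python function that reverses a string.", max_new_tokens=128)
print(result[0]["generated_text"])
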
# Load model directly
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ICEPVP8977/Uncensored_codegemma_7b")
model = AutoModelForCausalLM.from_pretrained("ICEPVP8977/Uncensored_codegemma_7b", dtype="auto")
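
With the model loaded directly, generation goes through the tokenizer and model.generate. This is a minimal sketch; the prompt and max_new_tokens are placeholders.

# Tokenize a prompt, generate, and decode the result
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))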

Format: GGUF
Model size: 9B params
Architecture: gemma

Available quantizations: 4-bit, 16-bit
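
Since the repository ships GGUF weights, they can also be run outside Transformers, for example with llama-cpp-python. The sketch below assumes that library; the filename glob is an assumption and should be matched against the actual .gguf files in the repository.

# Load a GGUF quantization via llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ICEPVP8977/Uncensored_codegemma_7b",
    filename="*4bit*.gguf",  # assumed glob; substitute the real 4-bit file name
)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])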
