QuantFactory/Guanaco-13B-Uncensored-GGUF

This is a quantized version of Fredithefish/Guanaco-13B-Uncensored, created using llama.cpp.

Original Model Card


✨ Guanaco - 13B - Uncensored ✨

Guanaco-13B-Uncensored has been fine-tuned for 4 epochs on the Unfiltered Guanaco Dataset, using Llama-2-13B as the base model.
The model does not perform well with languages other than English.
Please note: This model is designed to provide responses without content filtering or censorship. It generates answers without denials.

Special thanks

I would like to thank AutoMeta for providing me with the computing power necessary to train this model.

Also thanks to TheBloke for creating the GGUF and GPTQ quantizations of this model.

Prompt Template

### Human: {prompt} ### Assistant:
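When calling the model programmatically, the template above has to be applied to the raw user message before generation. A minimal sketch in Python (the helper name is illustrative, not part of the model's tooling):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a raw user message in the Guanaco prompt template."""
    return f"### Human: {user_message} ### Assistant:"

# The model's completion is then generated from this string,
# e.g. prompt = build_prompt("Summarize what GGUF is.")
prompt = build_prompt("Summarize what GGUF is.")
```

The model continues the text after `### Assistant:`, so when streaming output you would typically stop generation at the next `### Human:` marker.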

Dataset

The model has been fine-tuned on the V2 of the Guanaco unfiltered dataset.

Format: GGUF
Model size: 13B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
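As a rough guide to choosing among these, a GGUF file's on-disk size scales roughly with bits per weight: size ≈ parameters × bits / 8 bytes. A back-of-the-envelope sketch (this ignores quantization block overhead and layers kept at higher precision, so real files are somewhat larger):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in gigabytes: params * bits / 8 bytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate sizes for a 13B-parameter model at common quant levels:
for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(13e9, bits):.1f} GB")
```

This is why a 4-bit quantization of a 13B model fits comfortably on consumer GPUs where the full-precision weights would not.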
