How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="ICEPVP8977/MistralUncensoredTest2",
	filename="",  # fill in with one of the repo's .gguf filenames (left blank here; see the listing sketch below)
)
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)

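The filename argument above is left blank; one way to see which GGUF files the repo actually contains (and therefore what to pass) is to list them with huggingface_hub. A minimal sketch, assuming huggingface_hub is installed (Llama.from_pretrained relies on it anyway):

# !pip install huggingface_hub

from huggingface_hub import HfApi

# List every file in the repo and keep only the GGUF weights,
# so we know what to pass as `filename` in Llama.from_pretrained.
files = HfApi().list_repo_files("ICEPVP8977/MistralUncensoredTest2")
print([f for f in files if f.endswith(".gguf")])

Pick one of the printed names (the 4-bit file is the smallest download) and use it as the filename argument.
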
While better than Test1, it is still a bit off, as it does not answer all questions.

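To probe the question-answering behaviour mentioned above, a chat-style call can be used instead of raw completion. A minimal sketch, assuming llm was loaded as in the snippet above and that the GGUF file ships a chat template llama-cpp-python can apply (the question text is just an illustrative placeholder):

# Ask a direct question instead of continuing a story prompt.
response = llm.create_chat_completion(
	messages=[{"role": "user", "content": "What is the capital of France?"}],
	max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
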
Target goal: an uncensored model equivalent to LLama3_Uncensored_Q4_K_M.gguf

ICEPVP8977/MistralUncensoredTest3 is coming soon.

Model details: GGUF format, 7B params, llama architecture; available in 4-bit and 16-bit quantizations.