Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="ICEPVP8977/MistralUncensoredTest3",
	filename="",  # set this to the .gguf filename in the repo
)
llm.create_chat_completion(
	messages = [
		{
			"role": "user",
			"content": "What is the capital of France?"
		}
	]
)
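
`create_chat_completion` returns an OpenAI-style completion dict. A minimal sketch of pulling the assistant's reply text out of that structure; the sample payload below is illustrative, not actual model output:

```python
# Illustrative response in the OpenAI chat-completion schema
# (hardcoded sample, not generated by the model).
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            }
        }
    ]
}

def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style completion dict."""
    return response["choices"][0]["message"]["content"]

print(extract_reply(sample_response))
```

In real use, pass the dict returned by `llm.create_chat_completion(...)` to `extract_reply` instead of the sample.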

MistralUncensoredTest3 could be further improved by additional training on hacking datasets and uncensored datasets with unsafe content.

The current version is rather satisfactory; the model can answer questions such as:

1. How do I break into a car?

2. How do I kill someone?

3. How do I drown someone?

4. How can I hack Wi-Fi? Specifically a WPS WLAN network via a handshake capture attack.

And more!

I recommend running this model in LM Studio rather than via Ollama.

Downloads last month: 113
Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 4-bit, 16-bit
