Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the Q6_K quantized GGUF from the Hugging Face Hub and load it.
llm = Llama.from_pretrained(
    repo_id="muralcode/zyppherllm_abliterated-GGUF",
    filename="zyppherllm-abliterated-llama3.1-q6_k.gguf",
)

# Simple text completion; echo=True includes the prompt in the output.
output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)


This is an uncensored model produced via abliteration (see remove-refusals-with-transformers for details). It is a proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

ollama

You can use zyppherllm directly with Ollama.
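A minimal sketch of running the model through Ollama, which can pull GGUF files directly from the Hugging Face Hub via `hf.co/<repo>:<quant-tag>`; the `:Q6_K` tag here is an assumption matching the quantization of the file above.

```shell
# Assumes Ollama is installed; this pulls the GGUF from Hugging Face and starts generation.
# The Q6_K tag is an assumption based on the quantized file published in this repo.
ollama run hf.co/muralcode/zyppherllm_abliterated-GGUF:Q6_K "Once upon a time,"
```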

Contact: info@arithaai.com
Format: GGUF
Model size: 8B params
Architecture: llama
Quantization: 6-bit

