How to use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="criscarleo/qwen2.5-coder-3b-abliterated-basic",
	filename="qwen2.5-coder-3b-abliterated-basic.gguf",
)
llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Write a Python function that reverses a string."}
	]
)
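
`create_chat_completion` returns an OpenAI-style response dictionary. A minimal sketch of extracting the assistant's reply from it (the sample response below is illustrative, not real model output):

```python
# llama-cpp-python returns chat completions in an OpenAI-compatible shape;
# this helper pulls the assistant's text out of the response dict.
def extract_reply(response: dict) -> str:
    """Return the assistant message text from a chat completion response."""
    return response["choices"][0]["message"]["content"]

# Illustrative response structure (placeholder content, not generated by the model).
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "def reverse_string(s):\n    return s[::-1]",
            }
        }
    ]
}

print(extract_reply(sample_response))
```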

Method & Process

This model was processed using the Obliteratus methodology. The fine-tuning/transformation was executed via the official notebook provided by [pliny-the-prompter](https://huggingface.co/spaces/pliny-the-prompter/obliteratus).

Quantization & Inference Engine

  • Framework: llama.cpp
  • Format: GGUF (v3)
  • Original Authors: Georgi Gerganov and the GGML contributors.
