## Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SebastianBodza/DeepMagiCoder-6.7B-DS-Base-AWQ")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SebastianBodza/DeepMagiCoder-6.7B-DS-Base-AWQ")
model = AutoModelForCausalLM.from_pretrained("SebastianBodza/DeepMagiCoder-6.7B-DS-Base-AWQ")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

Quantized version of: https://huggingface.co/SebastianBodza/DeepMagiCoder-6.7B-DS-Base

Used the Deepseek prompt template and the Evol-Instruct Code dataset as calibration data for the AWQ quantization:

```
You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
{response}
```
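A minimal sketch of filling this template by hand, e.g. when prompting the model without `apply_chat_template` (the instruction string is a placeholder, and `build_prompt` is a hypothetical helper, not part of the library):

```python
# System preamble taken verbatim from the Deepseek Coder template above.
SYSTEM = (
    "You are an AI programming assistant, utilizing the Deepseek Coder model, "
    "developed by Deepseek Company, and you only answer questions related to "
    "computer science. For politically sensitive questions, security and privacy "
    "issues, and other non-computer science questions, you will refuse to answer."
)

def build_prompt(instruction: str) -> str:
    """Return the prompt up to the point where the model is expected to respond."""
    return f"{SYSTEM}\n### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("Write a Python function that reverses a string.")
```

The resulting string can be passed straight to `pipe(prompt)` or tokenized for `model.generate`.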

HumanEval pass@1: 0.7195
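For context, pass@1 is the fraction of HumanEval problems for which a sampled completion passes the unit tests. A common way to compute it is the unbiased pass@k estimator (from the original HumanEval paper); a small sketch, assuming `n` samples per problem of which `c` are correct:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn per problem, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a single sample per problem, pass@1 reduces to the plain success rate:
per_problem_correct = [1, 1, 0, 1]  # illustrative results for four problems
score = sum(pass_at_k(1, c, 1) for c in per_problem_correct) / len(per_problem_correct)
# score → 0.75
```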

Model size: 7B params (Safetensors, tensor types I32 · F16)