## Quick Links

The base model of AutoCoder_QW_7B is CodeQwen1.5-7B.

In this version, we fixed the problem where the model would only start the code interpreter when you asked it to verify its code.

You can try the code interpreter function via the AutoCoder GitHub repository.
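
The actual interpreter workflow lives in that repository; purely as an illustration of the idea, here is a minimal, hypothetical sketch of a generate–execute–feed-back loop (the function name, regex, and message format below are assumptions for this sketch, not AutoCoder's actual code):

```python
# Hypothetical sketch of a code-interpreter loop; NOT AutoCoder's implementation.
import re
import subprocess

def verify_with_interpreter(model, tokenizer, messages, max_new_tokens=1024):
    # Generate an answer for the current conversation.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

    # Extract the first fenced Python block, if any, and execute it.
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    if match:
        result = subprocess.run(
            ["python", "-c", match.group(1)],
            capture_output=True, text=True, timeout=30,
        )
        # Feed the execution result back so the model can verify its own code.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": f"Execution output:\n{result.stdout or result.stderr}"})
    return reply, messages
```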

For simple code generation without the code interpreter, try the following script:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "Bin12345/AutoCoder_QW_7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             device_map="auto")

question = ""  # input your question here

messages = [
    {"role": "user", "content": question}
]
# Build the prompt with the model's chat template and tokenize it.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
# Greedy decoding (do_sample=False), so temperature/top_p are not needed.
outputs = model.generate(inputs,
                         max_new_tokens=1024,
                         do_sample=False,
                         num_return_sequences=1,
                         eos_token_id=tokenizer.eos_token_id)
# Strip the prompt tokens and decode only the generated answer.
answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(answer)
```
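
For long completions (generating up to 1024 new tokens can take a while), you can print tokens as they are produced using transformers' `TextStreamer`. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` from the script above:

```python
from transformers import TextStreamer

# Stream tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs,
               max_new_tokens=1024,
               do_sample=False,
               eos_token_id=tokenizer.eos_token_id,
               streamer=streamer)
```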