Text Generation · GGUF · conversational

How to use from Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for QuantFactory/AutoCoder_S_6.7B-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for QuantFactory/AutoCoder_S_6.7B-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/AutoCoder_S_6.7B-GGUF to start chatting
QuantFactory/AutoCoder_S_6.7B-GGUF

This is a quantized version of Bin12345/AutoCoder_S_6.7B, created using llama.cpp.

Model Description

We introduce a new model designed for the code generation task. Its 33B version's test accuracy on the HumanEval base dataset surpasses that of GPT-4 Turbo (April 2024): 90.9% vs. 90.2%.

Additionally, compared to previous open-source models, AutoCoder offers a new feature: whenever the user wishes to execute the generated code, it can automatically install the required packages and attempt to run the code until it deems there are no issues.
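The sketch below illustrates the general idea behind such an install-and-retry loop. It is not AutoCoder's actual implementation, and the helper name is hypothetical:

import subprocess
import sys

def run_until_clean(code_path, max_attempts=5):
    """Illustrative install-and-retry loop: run a script, and if it fails
    because a package is missing, pip-install the package and try again."""
    for _ in range(max_attempts):
        result = subprocess.run([sys.executable, code_path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout  # ran with no issues
        # Look for a missing-module error in stderr and install the package.
        marker = "ModuleNotFoundError: No module named "
        line = next((l for l in result.stderr.splitlines() if marker in l), None)
        if line is None:
            return result.stderr  # some other failure; give up
        package = line.split(marker)[1].strip().strip("'\"")
        subprocess.run([sys.executable, "-m", "pip", "install", package])
    return result.stderr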

This is the 6.7B version of AutoCoder. Its base model is deepseek-coder.

See details on the AutoCoder GitHub.

Simple test script:

from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import load_dataset

model_path = ""  # set to the model's local directory or Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             device_map="auto")

# Loaded for benchmarking; unused by the single-question example below.
HumanEval = load_dataset("evalplus/humanevalplus")

Input = ""  # input your question here

messages = [
    {'role': 'user', 'content': Input}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)

outputs = model.generate(inputs,
                         max_new_tokens=1024,
                         do_sample=False,
                         temperature=0.0,
                         top_p=1.0,
                         num_return_sequences=1,
                         eos_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(answer)
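The files in this repository are GGUF quantizations, so they can also be run without transformers. Below is a minimal sketch using llama-cpp-python; the filename pattern is an assumption, so check the repository's file list for the actual names:

from llama_cpp import Llama

# Download and load one of this repo's GGUF files from the Hub.
# The "*Q4_K_M.gguf" glob is an assumed filename pattern.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/AutoCoder_S_6.7B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a Python function that checks if a number is prime."}]
)
print(response["choices"][0]["message"]["content"])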

Paper: https://arxiv.org/abs/2405.14906

Downloads last month: 177
Format: GGUF
Model size: 7B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
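To fetch a single quantization programmatically, huggingface_hub can download one file from the repo. The filename below is hypothetical; check the repository's file list for the real names:

from huggingface_hub import hf_hub_download

# Fetch one quantization level from the repo. The exact filename is an
# assumption -- check the repository's file list before running.
gguf_path = hf_hub_download(
    repo_id="QuantFactory/AutoCoder_S_6.7B-GGUF",
    filename="AutoCoder_S_6.7B.Q4_K_M.gguf",  # hypothetical filename
)
print(gguf_path)  # local cache path to pass to llama.cpp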
