Qwen3-8B-ABC


💻 Code | 📑 Paper | 📝 Blog | 🤗 Data

Qwen3-8B-ABC is a supervised fine-tuned (SFT) variant of Qwen/Qwen3-8B, trained for agentic backend coding, tool use, and instruction following.

Model Details

  • Model name: Qwen3-8B-ABC
  • Base model: Qwen/Qwen3-8B
  • Model type: Causal Language Model (decoder-only)
  • Training method: Agentic Supervised Fine-Tuning (SFT)

Training Data

This model was fine-tuned on nex-agi/agent-sft.

Please refer to the dataset card for detailed documentation, licensing, and usage constraints.

Performance on ABC-Bench

Following the ABC-Bench paper’s evaluation protocol:

| Model        | Setting | Average Pass@1 (%, 3 attempts) |
|--------------|---------|--------------------------------|
| Qwen3-8B-ABC | w/ SFT  | 13.9                           |
| Qwen3-8B     | w/o SFT | 8.3                            |
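
Under the usual convention, "Average Pass@1 (%, 3 attempts)" is the per-task success rate over independent attempts, averaged across tasks. A minimal sketch of that computation (a hypothetical helper, not code from the ABC-Bench harness):

def average_pass_at_1(results):
    """results: one list of boolean attempt outcomes per task.

    Pass@1 per task is the fraction of attempts that succeed; the
    benchmark score is the mean over tasks, reported as a percentage.
    """
    per_task = [sum(attempts) / len(attempts) for attempts in results]
    return 100.0 * sum(per_task) / len(per_task)

# Three tasks, three attempts each -> 33.33...
print(average_pass_at_1([
    [True, False, False],
    [False, False, False],
    [True, True, False],
]))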

Intended Use

Qwen3-8B-ABC is intended for:

  • Agent-style instruction following for backend development tasks
  • Code editing / patch generation in real repositories
  • Command-line oriented debugging and step-by-step problem solving
  • Research on automated software engineering and agent evaluation

Usage

Transformers (Python)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "OpenMOSS-Team/Qwen3-8B-ABC"

# Load the tokenizer and model; bfloat16 plus device_map="auto"
# places the 8B weights across available GPUs automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Write a FastAPI endpoint that returns health status as JSON."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampled decoding; lower temperature/top_p for more deterministic output.
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
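
Qwen3 models ship with a chat template, and generating through it usually matches the training format better than a raw prompt. A minimal sketch reusing the model and tokenizer loaded above; the enable_thinking kwarg comes from the base Qwen3 chat template, and whether this fine-tune preserves it is an assumption here:

messages = [
    {"role": "user", "content": "Write a FastAPI endpoint that returns health status as JSON."},
]

# Render the conversation with the model's chat template.
# enable_thinking=False asks Qwen3's template to skip the reasoning block
# (assumption: the fine-tuned tokenizer keeps this base-model kwarg).
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))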

Citation

@misc{yang2026abcbenchbenchmarkingagenticbackend,
      title={ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development}, 
      author={Jie Yang and Honglin Guo and Li Ji and Jiazheng Zhou and Rui Zheng and Zhikai Lei and Shuo Zhang and Zhiheng Xi and Shichun Liu and Yuxin Wang and Bo Wang and Yining Zheng and Tao Gui and Xipeng Qiu},
      year={2026},
      eprint={2601.11077},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2601.11077}, 
}

Acknowledgements

  • Base model: Qwen/Qwen3-8B
  • Training dataset: nex-agi/agent-sft