How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="zai-org/SWE-Dev-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
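
The pipeline forwards generation keyword arguments to model.generate; a minimal usage sketch (the sampling values below are illustrative choices, not tuned recommendations):

```python
# Generation kwargs are forwarded to model.generate; values here are illustrative.
result = pipe(messages, max_new_tokens=512, do_sample=True, temperature=0.7)
# With chat-style input, "generated_text" holds the full message list,
# ending with the assistant's reply.
print(result[0]["generated_text"])
```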
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("zai-org/SWE-Dev-32B")
# A 32B model needs roughly 65 GB of weights in BF16; let Accelerate place it
# across available devices instead of loading in fp32 on the CPU.
model = AutoModelForCausalLM.from_pretrained(
    "zai-org/SWE-Dev-32B",
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
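
For longer completions you can stream tokens to stdout as they are generated. A minimal sketch using transformers' TextStreamer; the 1024-token budget is an arbitrary example (the 40 tokens above are only a smoke test):

```python
from transformers import TextStreamer

# Print decoded tokens as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=1024, streamer=streamer)
```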
Quick Links

📝 Paper | 🌐 GitHub

🚀 SWE-Dev is an open-source agent for software engineering tasks! This repository contains the SWE-Dev-32B model presented in the paper SWE-Dev: Building Software Engineering Agents with Training and Inference Scaling.

💡 We develop a comprehensive pipeline for creating developer-oriented datasets from GitHub repositories, including issue tracking, code localization, test case generation, and evaluation.
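
To make those stages concrete, here is a minimal, hypothetical sketch of the pipeline's shape; the DevTask schema, the localize heuristic, and the toy in-memory repository are illustrations, not the released tooling:

```python
from dataclasses import dataclass

@dataclass
class DevTask:
    """One example distilled from a repository (hypothetical schema)."""
    repo: str
    issue_text: str              # issue tracking: the natural-language report
    candidate_files: list[str]   # code localization output
    tests: list[str]             # generated fail-to-pass test commands (for evaluation)

def localize(files: dict[str, str], issue_text: str, k: int = 3) -> list[str]:
    """Toy code localization: rank files by word overlap with the issue text."""
    issue_words = set(issue_text.lower().split())
    scored = {path: len(issue_words & set(src.lower().split()))
              for path, src in files.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

# Usage with a toy in-memory "repository":
files = {
    "auth/login.py": "def login(user): raise TimeoutError",
    "docs/readme.md": "project overview",
}
issue = "login raises TimeoutError for valid user"
task = DevTask(
    repo="example/repo",
    issue_text=issue,
    candidate_files=localize(files, issue),
    tests=["pytest tests/test_login.py::test_timeout_fixed"],
)
print(task.candidate_files)  # ['auth/login.py', 'docs/readme.md'] (toy ranking)
```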

🔧 Built on open-source frameworks (OpenHands) and open models, SWE-Dev-7B and SWE-Dev-32B achieve solve rates of 23.4% and 36.6% on SWE-bench Verified, respectively, approaching the performance of GPT-4o.

📚 We find that scaling both training data and inference effectively boosts model performance on SWE-bench, and that higher data quality further strengthens this trend when combined with reinforcement fine-tuning (RFT). For inference scaling specifically, SWE-Dev's solve rate rises from 34.0% at 30 interaction rounds to 36.6% at 75 rounds.
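
As a rough illustration of what scaling inference "rounds" means, the sketch below caps the number of agent–environment interactions; run_agent, observe, and the toy environment are hypothetical, and the 30- vs 75-round budgets mirror the numbers above only schematically:

```python
from typing import Callable

def run_agent(step: Callable[[str], str], observe: Callable[[str], str],
              solved: Callable[[str], bool], max_rounds: int) -> bool:
    """Toy agent loop: more rounds means a larger inference-time budget."""
    obs = observe("")  # initial observation (e.g. issue text + repo state)
    for _ in range(max_rounds):
        action = step(obs)     # model proposes an action (edit, run tests, ...)
        obs = observe(action)  # environment returns the new observation
        if solved(obs):
            return True
    return False               # budget exhausted without a passing patch

# Toy environment: the task takes 40 rounds to solve, so a 30-round
# budget fails while a 75-round budget succeeds.
counter = {"n": 0}
def observe(action: str) -> str:
    counter["n"] += 1
    return "solved" if counter["n"] > 40 else "failing tests"

print(run_agent(lambda o: "edit", observe, lambda o: o == "solved", max_rounds=30))  # False
counter["n"] = 0
print(run_agent(lambda o: "edit", observe, lambda o: o == "solved", max_rounds=75))  # True
```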

Model size: 33B params (BF16, safetensors)
Base model: Qwen/Qwen2.5-32B