# Qwen3-4B-AgentBench-Merged
This repository provides a model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 with LoRA via Unsloth, where the LoRA adapter has been merged into the base model weights. Because the full merged weights are included, the model can be loaded directly without a separate adapter.
## Training Objective
The model is trained to improve multi-turn agent task performance on ALFWorld (household tasks) and DBBench (database operations). Loss is applied to all assistant turns in each multi-turn trajectory, so the model learns to interpret environment observations, select actions, use tools, and recover from errors.
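The assistant-turn loss described above can be sketched as a label-masking step. This is an illustrative example, not the actual training code: the function name and the token-level role annotation are assumptions, but the convention of setting non-assistant labels to `-100` (the ignore index used by cross-entropy loss in PyTorch/Transformers) is standard.

```python
# Illustrative sketch of assistant-turn loss masking. Tokens from
# non-assistant turns get label -100 so the loss ignores them;
# assistant tokens keep their token id as the label.
IGNORE_INDEX = -100

def build_labels(token_ids, turn_roles):
    """token_ids: list of ints; turn_roles: parallel list of role strings."""
    return [
        tok if role == "assistant" else IGNORE_INDEX
        for tok, role in zip(token_ids, turn_roles)
    ]

# Example: a user turn followed by an assistant turn.
ids = [101, 102, 103, 201, 202]
roles = ["user", "user", "user", "assistant", "assistant"]
print(build_labels(ids, roles))  # [-100, -100, -100, 201, 202]
```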
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: LoRA (merged into base model)
- Max sequence length: 2048
- Epochs: 2
- Learning rate: 2e-06
- LoRA: r=64, alpha=128
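With r=64 and alpha=128, the standard LoRA scaling factor is alpha / r = 2.0, and merging folds the scaled low-rank update into the base weights as W + (alpha / r) · B·A. The toy sketch below illustrates that arithmetic only; it is not the Unsloth merge implementation, and the matrix shapes are deliberately tiny.

```python
# Illustrative LoRA merge arithmetic: W_merged = W + (alpha / r) * B @ A.
# With the repository's settings (r=64, alpha=128), scale = alpha / r = 2.0.

def merge_lora(W, A, B, r, alpha):
    """W: rows x cols base weight; A: r x cols; B: rows x r (plain lists)."""
    scale = alpha / r
    rows, cols = len(W), len(W[0])
    return [
        [
            W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
            for j in range(cols)
        ]
        for i in range(rows)
    ]

# Toy 2x2 example with rank r=1 and alpha=2 (scale = 2.0).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]           # r x cols
B = [[1.0], [0.0]]         # rows x r
print(merge_lora(W, A, B, r=1, alpha=2))  # [[2.0, 1.0], [0.0, 1.0]]
```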
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your_id/your-repo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Run a single-turn generation using the model's chat template.
messages = [{"role": "user", "content": "You are in a kitchen. Find an apple."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
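Since the model was trained on agent trajectories (the DBBench source dataset is ReAct-style), replies may follow a "Thought:/Action:" layout. The helper below is a hypothetical sketch for splitting such a reply; the exact output format is an assumption about the training data, not a documented contract of this model.

```python
import re

# Hypothetical parser (not part of the model's API) for a ReAct-style
# assistant reply of the form "Thought: ...\nAction: ...".
def parse_react(reply):
    thought = re.search(r"Thought:\s*(.*?)(?:\nAction:|$)", reply, re.S)
    action = re.search(r"Action:\s*(.*)", reply, re.S)
    return (
        thought.group(1).strip() if thought else "",
        action.group(1).strip() if action else "",
    )

reply = "Thought: The fridge may contain an apple.\nAction: open fridge 1"
print(parse_react(reply))
# ('The fridge may contain an apple.', 'open fridge 1')
```

If a reply does not match the expected layout, the helper returns empty strings rather than raising, so agent loops can fall back to retrying.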
## Data Sources & Attribution
Training data: tussiiiii/agentbench_sft_mix_alfworld_dbbench_v1
The training dataset above was created by concatenating and shuffling the following datasets:
- u-10bei/sft_alfworld_trajectory_dataset_v5
- u-10bei/dbbench_sft_dataset_react_v4
We sincerely thank the original dataset authors and contributors for making these resources available. Please comply with the licenses and terms of the original datasets and the base model.
## Sources & Terms (IMPORTANT)

Compliance: Users must comply with the licenses of the datasets listed above and with the base model's original terms of use.