qwen3-4b-agent-trajectory-lora

This repository provides a LoRA adapter for Qwen/Qwen3-4B-Instruct-2507, fine-tuned with LoRA + Unsloth.

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Training Objective

This adapter is trained to improve multi-turn agent task performance on ALFWorld (household tasks) and DBBench (database operations).

Loss is applied to all assistant turns in the multi-turn trajectory, enabling the model to learn environment observation, action selection, tool use, and recovery from errors.
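
A minimal sketch of what "loss on all assistant turns" means in practice (illustrative only, not the actual training code): tokens from user/tool turns are labeled with the ignore index -100 so cross-entropy skips them, while every assistant turn keeps its token ids as labels.

```python
# Illustrative assistant-turn loss masking for a multi-turn trajectory.
# `turns` and `build_labels` are hypothetical names, not from this repo.
IGNORE_INDEX = -100  # PyTorch CrossEntropyLoss default ignore_index

def build_labels(turns):
    """turns: list of (role, token_ids). Returns flat input_ids and labels."""
    input_ids, labels = [], []
    for role, token_ids in turns:
        input_ids.extend(token_ids)
        if role == "assistant":
            labels.extend(token_ids)                        # supervised
        else:
            labels.extend([IGNORE_INDEX] * len(token_ids))  # masked out
    return input_ids, labels

# Toy trajectory: user -> assistant -> tool observation -> assistant
turns = [
    ("user", [1, 2, 3]),
    ("assistant", [4, 5]),
    ("tool", [6]),
    ("assistant", [7, 8, 9]),
]
ids, labels = build_labels(turns)
```

With this masking, both assistant turns contribute to the loss, so the model learns action selection after the initial instruction and after intermediate tool observations alike.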

Dataset Processing (Custom Filtering)

To improve reasoning efficiency and reduce the risk of infinite loops (repetitive actions), the training dataset was filtered with the following strategy:

  • Optimization of Exploration: Trajectories with 9 or more "detours" were excluded from the training set.
  • Robustness Maintenance: Trajectories with 0 to 8 detours were retained.
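
The filtering rule above can be sketched as a simple threshold predicate (a hypothetical helper; field names like "detours" are assumptions, not the dataset's actual schema):

```python
def keep_trajectory(detour_count, max_detours=8):
    """Retain trajectories with 0-8 detours; exclude those with 9 or more."""
    return 0 <= detour_count <= max_detours

# Toy examples of the filter in action
trajectories = [
    {"id": 0, "detours": 2},
    {"id": 1, "detours": 9},   # excluded: too many detours
    {"id": 2, "detours": 8},   # retained: at the boundary
]
kept = [t for t in trajectories if keep_trajectory(t["detours"])]
```

Keeping trajectories with up to 8 detours preserves examples of error recovery while dropping the most loop-prone ones.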

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Method: LoRA (full precision base)
  • Max sequence length: 4096
  • Epochs: 2
  • Learning rate: 2e-06
  • LoRA: r=64, alpha=128
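
As a back-of-envelope check on these LoRA settings: with r=64 and alpha=128 the low-rank update is scaled by alpha/r = 2.0, and each adapted linear layer adds r*(d_in + d_out) trainable parameters. The 2560-dimensional projection below is an assumed example size, not a figure from this card.

```python
r, alpha = 64, 128
scaling = alpha / r  # effective scale applied to the low-rank update B @ A

def lora_params(d_in, d_out, r):
    # A is (r x d_in) and B is (d_out x r): r * (d_in + d_out) weights total
    return r * (d_in + d_out)

# Example for one hypothetical 2560x2560 projection matrix
per_layer = lora_params(2560, 2560, 64)
```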

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "your_id/your-repo"  # replace with this adapter's repo id

tokenizer = AutoTokenizer.from_pretrained(base)

# Load the full-precision base model, then attach the LoRA adapter weights
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)

Sources & Terms (IMPORTANT)

Training data:

  • ALFWorld: u-10bei/sft_alfworld_trajectory_dataset_v5
  • DBBench: u-10bei/dbbench_sft_dataset_react_v4

Dataset License: MIT. The datasets are used and distributed under the terms of the MIT License. Compliance: users must satisfy the MIT License requirements (including retention of the copyright notice) and the base model's original terms of use.
