# qwen3-4b-agent-trajectory-merged
This repository provides a MERGED (fully materialized) model created by merging a LoRA adapter into the base model:
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: LoRA + Unsloth, then merge_and_unload() into full weights
✅ This repo contains the merged model weights.
You do NOT need PEFT / LoRA adapters at inference time.
Note: This is a derivative of the base model. Usage must comply with the base model's original terms.
## What is included
- Full merged model weights (the LoRA adapter is already merged)
- Tokenizer / config files needed for inference
## Training Objective
This model was trained to improve multi-turn agent task performance on agent-trajectory style data, such as ALFWorld household tasks and DBBench database operations.
Loss is applied to all assistant turns in the multi-turn trajectories, so the model learns from environment observations, action selection, tool use, and recovery from errors.
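The all-assistant-turns loss can be illustrated with a minimal label-masking sketch. Everything here is hypothetical for illustration (the roles, token ids, and `build_labels` helper are not from the actual training code); the key idea is that non-assistant tokens get the ignore label so only assistant turns contribute to the cross-entropy loss:

```python
# Sketch: build labels over a multi-turn trajectory so loss is computed
# only on assistant turns. Token ids and roles are illustrative.
IGNORE_INDEX = -100  # label id ignored by PyTorch cross-entropy by default

def build_labels(turns):
    """turns: list of (role, token_ids) pairs. Returns (input_ids, labels)."""
    input_ids, labels = [], []
    for role, token_ids in turns:
        input_ids.extend(token_ids)
        if role == "assistant":
            labels.extend(token_ids)                        # supervise assistant turns
        else:
            labels.extend([IGNORE_INDEX] * len(token_ids))  # mask user/env turns
    return input_ids, labels

trajectory = [
    ("user",      [1, 2, 3]),   # environment observation
    ("assistant", [4, 5]),      # action selection
    ("user",      [6]),         # tool / environment feedback
    ("assistant", [7, 8, 9]),   # recovery / next action
]
ids, labels = build_labels(trajectory)
```

With this masking, every assistant turn in the trajectory is supervised, while observations and tool feedback only serve as context.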
## Training Configuration (summary)
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: LoRA training (base loaded without 4-bit quantization), then merged into full weights
- Max sequence length: 2048
- Epochs: 2
- Learning rate: 1e-06
- LoRA: r=64, alpha=128
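The configuration above can be sketched as an Unsloth-style training script. This is a hedged outline under stated assumptions, not the actual training code: the output directory is a placeholder, the trainer is elided, and it assumes Unsloth's `FastLanguageModel` API and PEFT's `merge_and_unload()` (both real APIs, but the exact arguments used for this model are unknown):

```python
from unsloth import FastLanguageModel

# Load the base model in full precision (no 4-bit quantization).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=2048,
    load_in_4bit=False,
)

# Attach LoRA adapters with the reported hyperparameters.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
)

# ... train for 2 epochs at learning rate 1e-6 ...

# Merge the adapter back into the full weights and save.
model = model.merge_and_unload()
model.save_pretrained("merged-model")       # placeholder output path
tokenizer.save_pretrained("merged-model")
```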
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "your_id/your-merged-repo"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# (optional) generation example
inputs = tokenizer("Hello! What should I do next?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
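Since this is an instruct-tuned model, prompts generally work better when passed through the chat template rather than as raw text. A sketch continuing from the snippet above (it reuses the `tokenizer` and `model` already loaded, and assumes the tokenizer ships with the base model's chat template):

```python
# Format the prompt with the model's chat template instead of raw text.
messages = [{"role": "user", "content": "Hello! What should I do next?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model replies
    return_tensors="pt",
).to(model.device)

out = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```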