---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- agent
- tool-use
- alfworld
- dbbench
---

# qwen2.5-7b-agent-trajectory-lora

This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.

It contains the **LoRA adapter weights only**; the base model
must be loaded separately.

## Training Objective

This adapter is trained to improve **multi-turn agent task performance**
on ALFWorld (household tasks) and DBBench (database operations).

The loss is applied to **all assistant turns** in each multi-turn trajectory,
so the model learns to interpret environment observations, select actions,
use tools, and recover from errors.

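In practice, this amounts to masking the labels of non-assistant tokens. A
minimal sketch of that masking, assuming the assistant token spans are already
known from the chat template (`build_labels`, `IGNORE_INDEX`, and the span
format are illustrative, not the actual training code):

```python
from typing import List, Tuple

IGNORE_INDEX = -100  # label value ignored by PyTorch's CrossEntropyLoss

def build_labels(input_ids: List[int],
                 assistant_spans: List[Tuple[int, int]]) -> List[int]:
    """Keep loss only on assistant tokens; mask everything else."""
    labels = [IGNORE_INDEX] * len(input_ids)
    for start, end in assistant_spans:  # half-open [start, end) token ranges
        labels[start:end] = input_ids[start:end]
    return labels

# Toy example: tokens 4-6 and 8-9 belong to assistant turns.
input_ids = list(range(100, 110))
labels = build_labels(input_ids, assistant_spans=[(4, 7), (8, 10)])
assert labels[:4] == [IGNORE_INDEX] * 4 and labels[4:7] == input_ids[4:7]
```
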
## Training Configuration

- Base model: Qwen/Qwen2.5-7B-Instruct
- Method: LoRA (full-precision base)
- Max sequence length: 2048
- Epochs: 2
- Learning rate: 1e-5
- LoRA: r=64, alpha=128 (see the configuration sketch after this list)
- LoRA dropout: 0.05
- LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Warmup ratio: 0.03
- Weight decay: 0.01
- Gradient accumulation steps: 8
- Train batch size: 2
- Eval batch size: 2

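For reference, the hyperparameters above map onto a `peft` `LoraConfig` along
these lines (a sketch of the equivalent configuration, not the exact training
script):

```python
from peft import LoraConfig

# LoRA settings mirroring the list above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```
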
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen2.5-7B-Instruct"
adapter = "your_id/your-repo"  # replace with this adapter's repo id

# Load the tokenizer and base model first; the adapter is applied on top.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
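
Generation then follows the standard Qwen2.5 chat flow. A minimal example
(the prompt text is illustrative):

```python
# Build a chat prompt with the tokenizer's chat template and generate.
messages = [{"role": "user", "content": "You are in a kitchen. Find and heat an apple."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

To export a standalone checkpoint, `model.merge_and_unload()` merges the
adapter into the base weights.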

## Sources & Terms (IMPORTANT)

Training data: /content/merged_clean_dedup

Dataset license: MIT. The training dataset is used and distributed under the
terms of the MIT License.

Compliance: users must comply with the MIT License (including preservation of
the copyright notice) and with the base model's original terms of use.