---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets:
- u-10bei/sft_alfworld_trajectory_dataset_v5
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- agent
- tool-use
- alfworld
- dbbench
---
# qwen25_7b_lora_agentbench_v21
This repository provides a **merged model** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct**. Fine-tuning was performed with **LoRA + Unsloth**, and the resulting adapter has been merged back into the base model weights.
Because the repository contains the **full model weights**, it is ready for
inference without loading a separate adapter.
## Training Objective
This model is optimized for **multi-turn agent tasks**, specifically
ALFWorld (household navigation and interaction) and DBBench (database operations).
During training, the loss was applied to **all assistant turns** in the
multi-turn trajectories, so the model learns not only final answers but also
intermediate reasoning (Thought), processing of environment observations,
action selection, and error recovery.
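For illustration, the sketch below shows one common way to realize this objective when tokenizing a multi-turn trajectory: tokens from system, user, and observation messages are masked with `-100` so that only assistant tokens contribute to the cross-entropy loss. This is a hedged reconstruction of the general technique, not the actual training code; the `messages` structure and the per-message templating are assumptions.
```python
# Hedged sketch: apply loss to every assistant turn by masking all
# other tokens with -100. Not the exact training pipeline.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

def build_labels(messages):
    input_ids, labels = [], []
    for msg in messages:
        # Render each message with the model's chat template (ChatML for Qwen).
        ids = tokenizer.apply_chat_template([msg], tokenize=True)
        input_ids.extend(ids)
        if msg["role"] == "assistant":
            labels.extend(ids)                 # train on every assistant turn
        else:
            labels.extend([-100] * len(ids))   # ignore non-assistant tokens
    return {"input_ids": input_ids, "labels": labels}
```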
## Training Configuration
- **Base model:** Qwen/Qwen2.5-7B-Instruct
- **Method:** LoRA (merged post-training)
- **Max sequence length:** 2048
- **Epochs:** 2
- **Learning rate:** 2e-06
- **LoRA Parameters:** r=64, alpha=128
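A hedged sketch of how these hyperparameters map onto a PEFT/Transformers setup is shown below (Unsloth wraps an equivalent configuration). The `target_modules`, `output_dir`, and precision settings are assumptions, since the exact training script is not included here.
```python
# Hedged sketch of the reported hyperparameters; target_modules and
# unlisted arguments are assumptions, not the actual training script.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,                # LoRA rank (from the configuration above)
    lora_alpha=128,      # LoRA scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qwen25_7b_lora_agentbench_v21",  # illustrative
    num_train_epochs=2,
    learning_rate=2e-6,
    bf16=True,           # assumed precision
)
```
The max sequence length of 2048 would be enforced at tokenization time (e.g., via the trainer's `max_seq_length` argument).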
## Usage
This model can be loaded using the standard `transformers` library or
deployed with `vLLM` (recommended for evaluation).
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your_hf_id/your_repo_name"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 weights to reduce memory use
    device_map="auto",           # place layers across available devices
)
```
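For a quick chat-style generation with the model loaded above, the sketch below applies the tokenizer's chat template; the prompt is illustrative.
```python
# Hedged usage sketch: chat-template generation with the model loaded above.
messages = [{"role": "user", "content": "You are in the middle of a room. Your task is to: put a mug on the desk."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
### vLLM
A minimal sketch for offline inference with the merged weights via vLLM, assuming a standard vLLM installation; the sampling parameters are illustrative, not the evaluation settings.
```python
# Hedged sketch: offline inference with vLLM; values are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="your_hf_id/your_repo_name", max_model_len=2048)  # matches training max length
params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["You are in a kitchen. Your task is to: heat some egg."], params)
print(outputs[0].outputs[0].text)
```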