---
base_model: centmount/advanced-step1-alfworld-0222
datasets:
- u-10bei/dbbench_sft_dataset_react_v4
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- agent
- tool-use
- alfworld
- dbbench
---

# qwen3-4b-step1-alfworld-lora0222

This repository provides a **LoRA adapter** fine-tuned from **centmount/advanced-step1-alfworld-0222** with **Unsloth**. It contains **LoRA adapter weights only**; the base model must be loaded separately.

## Training Objective

This adapter is trained to improve **multi-turn agent task performance** on ALFWorld (household tasks) and DBBench (database operations). Loss is applied to **all assistant turns** in each multi-turn trajectory, so the model learns environment observation, action selection, tool use, and recovery from errors.

## Training Configuration

- Base model: centmount/advanced-step1-alfworld-0222
- Method: LoRA (full-precision base)
- Max sequence length: 3072
- Epochs: 1
- Learning rate: 5e-07
- LoRA: r=16, alpha=32

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "centmount/advanced-step1-alfworld-0222"
adapter = "your_id/your-repo"

tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model, then attach the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```

## Sources & Terms (IMPORTANT)

- Training data: u-10bei/dbbench_sft_dataset_react_v4
- Dataset license: MIT. The dataset is used and distributed under the terms of the MIT License.
- Compliance: users must comply with the MIT License (including preservation of the copyright notice) and the base model's original terms of use.
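For reference, the r=16, alpha=32 setting in the training configuration corresponds to a LoRA scaling factor of alpha/r = 2. The sketch below illustrates the standard LoRA weight update with toy dimensions; it is an illustrative example only, not code from this repository, and the matrix sizes are made up:

```python
import numpy as np

# LoRA applies a low-rank update: W' = W + (alpha / r) * (B @ A),
# where A (r x d_in) and B (d_out x r) are the trainable adapter matrices.
# Toy dimensions below; the real adapter shapes depend on the base model.
d_out, d_in = 64, 64
r, alpha = 16, 32          # values from the training configuration above
scaling = alpha / r        # = 2.0

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen base weight (toy)
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # B starts at zero, so the update is a no-op at init

W_adapted = W + scaling * (B @ A)

print(scaling)                       # 2.0
print(np.allclose(W_adapted, W))     # True: with B == 0 the adapter changes nothing
```

Because B is initialized to zero, the adapted model is exactly the base model at the start of training; only the low-rank matrices are updated, which is why the repository can ship adapter weights alone.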