# Training Qwen2.5-3B-Instruct for Evaluation Agent with CoT Reasoning
This repository contains scripts and configurations for training the Qwen2.5-3B-Instruct model on evaluation-agent data formatted with Chain-of-Thought (CoT) reasoning.
## Overview
The training pipeline processes evaluation results from:
- **VBench**: Video quality evaluation results
- **T2I-CompBench**: Text-to-image composition evaluation results
- **Open Domain**: Open-ended query evaluation results
All results are CoT-format reasoning traces generated by proprietary models.
## Dataset Preparation
### 1. Data Cleaning and Conversion
Run the data cleaning script to convert raw evaluation results into LLaMA-Factory format:
```bash
python clean_and_convert_data.py
```
This script:
- Processes JSON files from `ea-data/agent/` subdirectories
- Converts CoT-style evaluation results into instruction-response pairs
- Outputs to `LLaMA-Factory/data/evaluation_agent_cot_dataset.json`
- Updates `LLaMA-Factory/data/dataset_info.json` with dataset metadata
### Dataset Statistics
- Total training examples: ~860 (from initial processing)
- Format: Alpaca-style (instruction, input, output)
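The conversion from a raw CoT result to an Alpaca-style record can be sketched as follows. This is only an illustration of the target format: the field names in the raw `ea-data/agent/` JSON (`query`, `cot_response`) and the instruction wording are assumptions, not the actual schema used by `clean_and_convert_data.py`.

```python
def to_alpaca(record: dict) -> dict:
    """Map one raw CoT evaluation result onto an Alpaca-style triple.

    The source field names below ("query", "cot_response") are hypothetical.
    """
    return {
        "instruction": "Evaluate the following generation result. "
                       "Think step by step before giving a final verdict.",
        "input": record["query"],          # assumed field name
        "output": record["cot_response"],  # assumed field name
    }

example = to_alpaca({
    "query": "Assess the temporal consistency of the generated video.",
    "cot_response": "Step 1: ... Final score: 4/5",
})
print(sorted(example))  # ['input', 'instruction', 'output']
```

LLaMA-Factory reads this shape directly once the dataset is registered in `dataset_info.json`.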
## Training Configurations
### 1. LoRA Fine-tuning (Recommended)
**Configuration:** `train_qwen2.5_eval_agent.yaml`
Key parameters:
- Model: Qwen/Qwen2.5-3B-Instruct
- Method: LoRA (rank=16, alpha=32)
- Batch size: 2 per device × 4 gradient accumulation steps (effective batch size of 8 per GPU)
- Learning rate: 5e-5 with cosine scheduler
- Epochs: 3
- Memory requirement: ~16GB VRAM
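A quick back-of-the-envelope calculation shows why rank-16 LoRA fits in ~16GB: for each adapted projection, the low-rank update trains only a small fraction of the dense weight's parameters. The hidden size of 2048 below is an approximation of Qwen2.5-3B's width, used purely for illustration.

```python
# LoRA parameter count for one square projection layer.
d_in = d_out = 2048   # illustrative hidden size
rank = 16             # lora rank from the config

full_params = d_in * d_out            # dense weight update: d_in x d_out
lora_params = rank * (d_in + d_out)   # A (d_in x r) plus B (r x d_out)

print(full_params)                # 4194304
print(lora_params)                # 65536
print(lora_params / full_params)  # 0.015625 -> ~1.6% of the dense update
```

Only the adapter parameters need optimizer state, which is where most of the memory savings come from.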
### 2. Full Fine-tuning
**Configuration:** `train_qwen2.5_eval_agent_full.yaml`
Key parameters:
- Model: Qwen/Qwen2.5-3B-Instruct
- Method: Full fine-tuning with DeepSpeed
- Gradient checkpointing enabled
- Memory requirement: ~32GB+ VRAM
## Training Execution
### Quick Start
```bash
# Make script executable
chmod +x train_qwen2.5_eval_agent.sh
# Run training
./train_qwen2.5_eval_agent.sh
```
### Manual Training
```bash
cd LLaMA-Factory
llamafactory-cli train ../train_qwen2.5_eval_agent.yaml
```
### Distributed Training
For multi-GPU training:
```bash
cd LLaMA-Factory
CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun --nproc_per_node 4 \
  --master_port 29500 \
  src/train.py ../train_qwen2.5_eval_agent.yaml
```
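With four GPUs, the global batch size follows directly from the LoRA config's per-device settings:

```python
# Effective global batch size for the 4-GPU run above.
per_device_batch = 2   # per_device_train_batch_size from the LoRA config
grad_accum_steps = 4   # gradient accumulation steps
num_gpus = 4           # --nproc_per_node 4

effective_batch = per_device_batch * grad_accum_steps * num_gpus
print(effective_batch)  # 32
```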
## Inference
After training, run inference with:
```bash
llamafactory-cli chat ../inference_qwen2.5_eval_agent.yaml
```
Or use the API:
```bash
llamafactory-cli api ../inference_qwen2.5_eval_agent.yaml
```
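The API server exposes an OpenAI-compatible chat endpoint. The sketch below only builds a request body; the model name and the default URL in the comment are assumptions — check the server's startup log for the actual port and served model name.

```python
import json

# Hypothetical request body for the OpenAI-compatible chat endpoint.
payload = {
    "model": "qwen2.5-3b-eval-agent",  # assumed served-model name
    "messages": [
        {"role": "user",
         "content": "Evaluate the aesthetic quality of the attached frames."},
    ],
    "temperature": 0.2,
}

body = json.dumps(payload)
# e.g. POST to http://localhost:8000/v1/chat/completions (default port assumed):
#   requests.post(url, data=body, headers={"Content-Type": "application/json"})
print(json.loads(body)["messages"][0]["role"])  # user
```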
## Model Merging
To merge the LoRA adapter into the base model:
```bash
llamafactory-cli export \
--model_name_or_path Qwen/Qwen2.5-3B-Instruct \
--adapter_name_or_path saves/qwen2.5-3b/lora/eval_agent_cot \
--template qwen \
--finetuning_type lora \
--export_dir models/qwen2.5-3b-eval-agent-merged \
--export_size 4 \
--export_legacy_format false
```
## Monitoring Training
### TensorBoard
```bash
tensorboard --logdir saves/qwen2.5-3b/lora/eval_agent_cot
```
### Loss Plots
Training loss plots are automatically saved to the output directory.
## Evaluation
The model will be evaluated on:
- CoT reasoning quality
- Evaluation accuracy
- Response coherence
- Format consistency
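Format consistency can be checked mechanically. The sketch below verifies that a response ends with a parseable final score; the `Final score: X/5` convention is an assumption about the CoT output format, not something the dataset is known to enforce.

```python
import re

# Assumed verdict convention: responses end with "Final score: X/5".
SCORE_RE = re.compile(r"Final score:\s*([0-5])/5\s*$")

def extract_score(response: str):
    """Return the final score as an int, or None if the format is violated."""
    match = SCORE_RE.search(response.strip())
    return int(match.group(1)) if match else None

print(extract_score("Step 1: ... Step 2: ... Final score: 4/5"))  # 4
print(extract_score("No verdict given."))                         # None
```

Running such a check over model outputs gives a simple format-consistency rate to track across checkpoints.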
## Directory Structure
```
evaluation_agent_dev/
├── ea-data/agent/ # Raw evaluation data
│ ├── vbench_results/
│ ├── t2i_results/
│ └── open_results/
├── LLaMA-Factory/ # Training framework
│ └── data/
│ ├── evaluation_agent_cot_dataset.json # Processed dataset
│ └── dataset_info.json
├── clean_and_convert_data.py # Data processing script
├── train_qwen2.5_eval_agent.yaml # LoRA training config
├── train_qwen2.5_eval_agent_full.yaml # Full training config
├── inference_qwen2.5_eval_agent.yaml # Inference config
└── train_qwen2.5_eval_agent.sh # Training script
```
## Requirements
- Python 3.9+
- PyTorch 2.0+
- CUDA 11.6+
- LLaMA-Factory (installed)
- 16GB+ VRAM for LoRA, 32GB+ for full fine-tuning
## Tips
1. **Memory Management**: Use gradient checkpointing and DeepSpeed for larger batch sizes
2. **Learning Rate**: Start with 5e-5 for LoRA, 2e-5 for full fine-tuning
3. **Data Quality**: Review generated dataset for quality before training
4. **Checkpointing**: Save checkpoints frequently (e.g., every 200 steps)
5. **Mixed Precision**: Use bf16 for faster training and lower memory usage
## Troubleshooting
- **OOM Errors**: Reduce batch size or enable gradient checkpointing
- **Slow Training**: Enable Flash Attention 2 if available
- **Poor Results**: Increase training epochs or adjust learning rate
- **Data Issues**: Check JSON parsing in data cleaning script
## Next Steps
1. Expand dataset with more evaluation examples
2. Implement custom evaluation metrics
3. Fine-tune on specific evaluation dimensions
4. Deploy model for production use
## License
Follow the licenses of:
- Qwen2.5 model
- LLaMA-Factory framework
- Original evaluation datasets