evalstate committed on
Commit
83fea62
·
1 Parent(s): 92990b2

feat(trl): improve eval dataset handling and documentation


- Add train/eval split to all training examples (SFT, DPO)
- Add eval_strategy configuration with proper dataset requirements
- Add critical troubleshooting section for training hangs
- Improve documentation with examples of correct vs incorrect patterns
- Make utility scripts executable (convert_to_gguf, estimate_cost, validate_dataset)
- Update references to point to example scripts for production training
- Remove outdated 'Quick Demo' and 'Production with Checkpoints' sections

FIXES: Training jobs hanging when eval_strategy is set without eval_dataset
IMPROVES: User guidance on best practices for monitoring and evaluation
ADDS: Comprehensive troubleshooting guide for common issues
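The invariant this commit enforces — an enabled `eval_strategy` must come with an `eval_dataset`, otherwise the job hangs — can be sketched as a standalone guard. This is a minimal illustrative helper, not part of TRL; the name `check_eval_config` is hypothetical.

```python
def check_eval_config(eval_strategy, eval_dataset):
    """Hypothetical guard mirroring the fix: if evaluation is enabled
    ("steps" or "epoch"), an eval_dataset must be provided; otherwise
    fail fast instead of hanging at training time."""
    if eval_strategy in ("steps", "epoch") and eval_dataset is None:
        raise ValueError(
            f"eval_strategy={eval_strategy!r} requires an eval_dataset; "
            'provide one or set eval_strategy="no"'
        )

# A 90/10 split like the one added in these examples satisfies the guard:
# dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
check_eval_config("no", None)           # fine: evaluation disabled
check_eval_config("steps", ["sample"])  # fine: eval data provided
```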

trl/SKILL.md CHANGED
@@ -156,15 +156,21 @@ trackio.init(project="my-training", space_id="username/my-dashboard")
 
 dataset = load_dataset("trl-lib/Capybara", split="train")
 
+# Create train/eval split for monitoring
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+
 trainer = SFTTrainer(
     model="Qwen/Qwen2.5-0.5B",
-    train_dataset=dataset,
+    train_dataset=dataset_split["train"],
+    eval_dataset=dataset_split["test"],
     peft_config=LoraConfig(r=16, lora_alpha=32),
     args=SFTConfig(
         output_dir="my-model",
         push_to_hub=True,
         hub_model_id="username/my-model",
         num_train_epochs=3,
+        eval_strategy="steps",
+        eval_steps=50,
         report_to="trackio",
     )
 )
trl/references/training_methods.md CHANGED
@@ -24,11 +24,14 @@ trainer = SFTTrainer(
         output_dir="my-model",
         push_to_hub=True,
         hub_model_id="username/my-model",
+        eval_strategy="no",  # Disable eval for simple example
     )
 )
 trainer.train()
 ```
 
+**Note:** For production training with evaluation monitoring, see `scripts/train_sft_example.py`
+
 **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")`
 
 ## Direct Preference Optimization (DPO)
@@ -52,11 +55,14 @@ trainer = DPOTrainer(
     args=DPOConfig(
         output_dir="dpo-model",
         beta=0.1,  # KL penalty coefficient
+        eval_strategy="no",  # Disable eval for simple example
     )
 )
 trainer.train()
 ```
 
+**Note:** For production training with evaluation monitoring, see `scripts/train_dpo_example.py`
+
 **Documentation:** `hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")`
 
 ## Group Relative Policy Optimization (GRPO)
trl/references/training_patterns.md CHANGED
@@ -2,79 +2,6 @@
 
 This guide provides common training patterns and use cases for TRL on Hugging Face Jobs.
 
-## Quick Demo (5-10 minutes)
-
-Test setup with minimal training:
-
-```python
-hf_jobs("uv", {
-    "script": "https://raw.githubusercontent.com/huggingface/trl/main/examples/scripts/sft.py",
-    "script_args": [
-        "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
-        "--dataset_name", "trl-lib/Capybara",
-        "--dataset_train_split", "train[:50]",  # Only 50 examples
-        "--max_steps", "10",
-        "--output_dir", "demo",
-        "--push_to_hub",
-        "--hub_model_id", "username/demo"
-    ],
-    "flavor": "t4-small",
-    "timeout": "15m",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
-**Note:** The TRL maintained script above doesn't include Trackio. For production training with monitoring, see `scripts/train_sft_example.py` for a complete template with Trackio integration.
-
-## Production with Checkpoints
-
-Full training with intermediate saves. Use this pattern for long training runs where you want to save progress:
-
-```python
-hf_jobs("uv", {
-    "script": """
-# /// script
-# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
-# ///
-
-from datasets import load_dataset
-from peft import LoraConfig
-from trl import SFTTrainer, SFTConfig
-import trackio
-
-trackio.init(project="production-training", space_id="username/my-dashboard")
-
-dataset = load_dataset("trl-lib/Capybara", split="train")
-
-config = SFTConfig(
-    output_dir="my-model",
-    push_to_hub=True,
-    hub_model_id="username/my-model",
-    hub_strategy="every_save",  # Push each checkpoint
-    save_strategy="steps",
-    save_steps=100,
-    save_total_limit=3,
-    num_train_epochs=3,
-    report_to="trackio",
-)
-
-trainer = SFTTrainer(
-    model="Qwen/Qwen2.5-0.5B",
-    train_dataset=dataset,
-    args=config,
-    peft_config=LoraConfig(r=16, lora_alpha=32),
-)
-
-trainer.train()
-trainer.push_to_hub()
-trackio.finish()
-""",
-    "flavor": "a10g-large",
-    "timeout": "6h",
-    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
-})
-```
-
 ## Multi-GPU Training
 
 Automatic distributed training across multiple GPUs. TRL/Accelerate handles distribution automatically:
@@ -116,18 +43,24 @@ trackio.init(project="dpo-training", space_id="username/my-dashboard")
 
 dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
 
+# Create train/eval split
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+
 config = DPOConfig(
     output_dir="dpo-model",
     push_to_hub=True,
     hub_model_id="username/dpo-model",
     num_train_epochs=1,
     beta=0.1,  # KL penalty coefficient
+    eval_strategy="steps",
+    eval_steps=50,
     report_to="trackio",
 )
 
 trainer = DPOTrainer(
     model="Qwen/Qwen2.5-0.5B-Instruct",  # Use instruct model as base
-    train_dataset=dataset,
+    train_dataset=dataset_split["train"],
+    eval_dataset=dataset_split["test"],  # IMPORTANT: Provide eval_dataset when eval_strategy is enabled
     args=config,
 )
 
@@ -169,26 +102,66 @@ hf_jobs("uv", {
 
 | Use Case | Pattern | Hardware | Time |
 |----------|---------|----------|------|
-| Test setup | Quick Demo | t4-small | 5-10 min |
-| Small dataset (<1K) | Production w/ Checkpoints | t4-medium | 30-60 min |
-| Medium dataset (1-10K) | Production w/ Checkpoints | a10g-large | 2-6 hours |
+| SFT training | `scripts/train_sft_example.py` | a10g-large | 2-6 hours |
 | Large dataset (>10K) | Multi-GPU | a10g-largex2 | 4-12 hours |
 | Preference learning | DPO Training | a10g-large | 2-4 hours |
 | Online RL | GRPO Training | a10g-large | 3-6 hours |
 
+## Critical: Evaluation Dataset Requirements
+
+**⚠️ IMPORTANT**: If you set `eval_strategy="steps"` or `eval_strategy="epoch"`, you **MUST** provide an `eval_dataset` to the trainer, or the training will hang.
+
+### ✅ CORRECT - With eval dataset:
+```python
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+
+trainer = SFTTrainer(
+    model="Qwen/Qwen2.5-0.5B",
+    train_dataset=dataset_split["train"],
+    eval_dataset=dataset_split["test"],  # ← MUST provide when eval_strategy is enabled
+    args=SFTConfig(eval_strategy="steps", ...),
+)
+```
+
+### ❌ WRONG - Will hang:
+```python
+trainer = SFTTrainer(
+    model="Qwen/Qwen2.5-0.5B",
+    train_dataset=dataset,
+    # NO eval_dataset but eval_strategy="steps" ← WILL HANG
+    args=SFTConfig(eval_strategy="steps", ...),
+)
+```
+
+### Option: Disable evaluation if no eval dataset
+```python
+config = SFTConfig(
+    eval_strategy="no",  # ← Explicitly disable evaluation
+    # ... other config
+)
+
+trainer = SFTTrainer(
+    model="Qwen/Qwen2.5-0.5B",
+    train_dataset=dataset,
+    # No eval_dataset needed
+    args=config,
+)
+```
+
 ## Best Practices
 
-1. **Always start with Quick Demo** - Verify setup before long runs
-2. **Use checkpoints for runs >1 hour** - Protect against failures
-3. **Enable Trackio** - Monitor progress in real-time
-4. **Add 20-30% buffer to timeout** - Account for loading/saving overhead
-5. **Test with small dataset slice first** - Use `"train[:100]"` to verify code
+1. **Use train/eval splits** - Create an evaluation split for monitoring progress
+2. **Enable Trackio** - Monitor progress in real-time
+3. **Add 20-30% buffer to timeout** - Account for loading/saving overhead
+4. **Test with TRL official scripts first** - Use maintained examples before custom code
+5. **Always provide eval_dataset** - When using eval_strategy, or set it to "no"
 6. **Use multi-GPU for large models** - 7B+ models benefit significantly
 
 ## See Also
 
-- `scripts/train_sft_example.py` - Complete SFT template with Trackio
+- `scripts/train_sft_example.py` - Complete SFT template with Trackio and eval split
 - `scripts/train_dpo_example.py` - Complete DPO template
 - `scripts/train_grpo_example.py` - Complete GRPO template
 - `references/hardware_guide.md` - Detailed hardware specifications
 - `references/training_methods.md` - Overview of all TRL training methods
+- `references/troubleshooting.md` - Common issues and solutions
trl/references/troubleshooting.md CHANGED
@@ -2,6 +2,49 @@
 
 Common issues and solutions when training with TRL on Hugging Face Jobs.
 
+## Training Hangs at "Starting training..." Step
+
+**Problem:** Job starts but hangs at the training step - never progresses, never times out, just sits there.
+
+**Root Cause:** Using `eval_strategy="steps"` or `eval_strategy="epoch"` without providing an `eval_dataset` to the trainer.
+
+**Solution:**
+
+**Option A: Provide eval_dataset (recommended)**
+```python
+# Create train/eval split
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+
+trainer = SFTTrainer(
+    model="Qwen/Qwen2.5-0.5B",
+    train_dataset=dataset_split["train"],
+    eval_dataset=dataset_split["test"],  # ← MUST provide when eval_strategy is enabled
+    args=SFTConfig(
+        eval_strategy="steps",
+        eval_steps=50,
+        ...
+    ),
+)
+```
+
+**Option B: Disable evaluation**
+```python
+trainer = SFTTrainer(
+    model="Qwen/Qwen2.5-0.5B",
+    train_dataset=dataset,
+    # No eval_dataset
+    args=SFTConfig(
+        eval_strategy="no",  # ← Explicitly disable
+        ...
+    ),
+)
+```
+
+**Prevention:**
+- Always create a train/eval split for better monitoring
+- Use `dataset.train_test_split(test_size=0.1, seed=42)`
+- Check example scripts: `scripts/train_sft_example.py` includes proper eval setup
+
 ## Job Times Out
 
 **Problem:** Job terminates before training completes, all progress lost.
@@ -208,5 +251,6 @@ If issues persist:
 - `references/hub_saving.md` - Hub authentication issues
 - `references/hardware_guide.md` - Hardware selection and specs
 - `references/uv_scripts_guide.md` - UV script format issues
+- `references/training_patterns.md` - Eval dataset requirements
 
 4. **Ask in HF forums:** https://discuss.huggingface.co/
trl/scripts/convert_to_gguf.py CHANGED
File without changes
trl/scripts/estimate_cost.py CHANGED
File without changes
trl/scripts/train_dpo_example.py CHANGED
@@ -43,9 +43,18 @@ trackio.init(
 )
 
 # Load preference dataset
+print("📦 Loading dataset...")
 dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
 print(f"✅ Dataset loaded: {len(dataset)} preference pairs")
 
+# Create train/eval split
+print("🔀 Creating train/eval split...")
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+train_dataset = dataset_split["train"]
+eval_dataset = dataset_split["test"]
+print(f"   Train: {len(train_dataset)} pairs")
+print(f"   Eval: {len(eval_dataset)} pairs")
+
 # Training configuration
 config = DPOConfig(
     # CRITICAL: Hub settings
@@ -69,6 +78,10 @@ config = DPOConfig(
     save_steps=100,
     save_total_limit=2,
 
+    # Evaluation - IMPORTANT: Only enable if eval_dataset provided
+    eval_strategy="steps",
+    eval_steps=100,
+
     # Optimization
     warmup_ratio=0.1,
     lr_scheduler_type="cosine",
@@ -79,9 +92,11 @@ config = DPOConfig(
 
 # Initialize and train
 # Note: DPO requires an instruct-tuned model as the base
+print("🎯 Initializing trainer...")
 trainer = DPOTrainer(
     model="Qwen/Qwen2.5-0.5B-Instruct",  # Use instruct model, not base model
-    train_dataset=dataset,
+    train_dataset=train_dataset,
+    eval_dataset=eval_dataset,  # CRITICAL: Must provide eval_dataset when eval_strategy is enabled
     args=config,
 )
 
trl/scripts/train_sft_example.py CHANGED
@@ -16,6 +16,7 @@ This script demonstrates:
 - Trackio integration for real-time monitoring
 - LoRA/PEFT for efficient training
 - Proper Hub saving configuration
+- Train/eval split for monitoring
 - Checkpoint management
 - Optimized training parameters
 
@@ -48,10 +49,19 @@ trackio.init(
     }
 )
 
-# Load and validate
+# Load dataset
+print("📦 Loading dataset...")
 dataset = load_dataset("trl-lib/Capybara", split="train")
 print(f"✅ Dataset loaded: {len(dataset)} examples")
 
+# Create train/eval split
+print("🔀 Creating train/eval split...")
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+train_dataset = dataset_split["train"]
+eval_dataset = dataset_split["test"]
+print(f"   Train: {len(train_dataset)} examples")
+print(f"   Eval: {len(eval_dataset)} examples")
+
 # Training configuration
 config = SFTConfig(
     # CRITICAL: Hub settings
@@ -72,6 +82,10 @@ config = SFTConfig(
     save_steps=100,
     save_total_limit=2,
 
+    # Evaluation - IMPORTANT: Only enable if eval_dataset provided
+    eval_strategy="steps",
+    eval_steps=100,
+
     # Optimization
     warmup_ratio=0.1,
     lr_scheduler_type="cosine",
@@ -91,9 +105,11 @@ peft_config = LoraConfig(
 )
 
 # Initialize and train
+print("🎯 Initializing trainer...")
 trainer = SFTTrainer(
     model="Qwen/Qwen2.5-0.5B",
-    train_dataset=dataset,
+    train_dataset=train_dataset,
+    eval_dataset=eval_dataset,  # CRITICAL: Must provide eval_dataset when eval_strategy is enabled
     args=config,
     peft_config=peft_config,
 )
trl/scripts/validate_dataset.py CHANGED
File without changes