evalstate committed
Commit d55cac6 · 1 parent: 0a40fad

aksel skill updates

Files changed (1): trl/SKILL.md (+65 −2)
trl/SKILL.md CHANGED
@@ -42,7 +42,7 @@ Use this skill when users want to:
 
 When assisting with training jobs:
 
-1. **Submit jobs directly with inline scripts** - The `script` parameter accepts Python code directly. Do NOT save to local files unless the user explicitly requests it. Pass the script content as a string to `hf_jobs()`.
+1. **Submit jobs directly with inline scripts** - The `script` parameter accepts Python code directly. Do NOT save to local files unless the user explicitly requests it. Pass the script content as a string to `hf_jobs()`. If the user asks to "train a model", "fine-tune", or makes a similar request, you MUST create the training script AND submit the job immediately.
 
 2. **Always include Trackio** - Every training script should include Trackio for real-time monitoring. Use example scripts in `scripts/` as templates.
 
@@ -50,6 +50,13 @@ When assisting with training jobs:
 
 4. **Use example scripts as templates** - Reference `scripts/train_sft_example.py`, `scripts/train_dpo_example.py`, etc. as starting points.
 
+## Local Script Dependencies
+
+To run scripts locally (such as `validate_dataset.py` or `estimate_cost.py`), install their dependencies:
+```bash
+pip install -r requirements.txt
+```
+
 ## Prerequisites Checklist
 
 Before starting any training job, verify:
@@ -149,15 +156,21 @@ trackio.init(project="my-training", space_id="username/my-dashboard")
 
 dataset = load_dataset("trl-lib/Capybara", split="train")
 
+# Create a train/eval split for monitoring
+dataset_split = dataset.train_test_split(test_size=0.1, seed=42)
+
 trainer = SFTTrainer(
     model="Qwen/Qwen2.5-0.5B",
-    train_dataset=dataset,
+    train_dataset=dataset_split["train"],
+    eval_dataset=dataset_split["test"],
     peft_config=LoraConfig(r=16, lora_alpha=32),
     args=SFTConfig(
         output_dir="my-model",
         push_to_hub=True,
         hub_model_id="username/my-model",
         num_train_epochs=3,
+        eval_strategy="steps",
+        eval_steps=50,
         report_to="trackio",
     ),
 )
@@ -388,6 +401,56 @@ See `references/training_patterns.md` for detailed examples including:
 - DPO training (preference learning)
 - GRPO training (online RL)
 
+## Common Failure Modes
+
+### Out of Memory (OOM)
+
+**Fix (try in order):**
+1. Reduce the batch size: set `per_device_train_batch_size=1` and increase `gradient_accumulation_steps=8`. The effective batch size is `per_device_train_batch_size * gradient_accumulation_steps`; for best performance, keep it close to 128.
+2. Enable `gradient_checkpointing=True`.
+3. Upgrade hardware: t4-small → l4x1, a10g-small → a10g-large, etc.
+
+### Dataset Misformatted
+
+**Fix:**
+1. Validate first: `python scripts/validate_dataset.py --dataset name --method sft`
+2. Check required columns:
+   - SFT: `messages` OR `text` OR `prompt` + `completion`
+   - DPO: `prompt`, `chosen`, `rejected`
+   - GRPO: `prompt` only
+3. Apply formatting if needed:
+```python
+dataset = dataset.map(lambda x: {"text": f"User: {x['input']}\nBot: {x['output']}"})
+```
+
+### Job Timeout
+
+**Fix:**
+1. Check the logs for the actual runtime: `hf_jobs("logs", {"job_id": "..."})`
+2. Increase the timeout with a buffer: `"timeout": "3h"` (add ~30% to the estimated time)
+3. Or reduce training: lower `num_train_epochs`, use a smaller dataset, or set `max_steps`
+4. Save checkpoints: `save_strategy="steps"`, `save_steps=500`, `hub_strategy="every_save"`
+
+**Note:** The default 30-minute timeout is insufficient for real training. Allow a minimum of 1-2 hours.
+
+### Hub Push Failures
+
+**Fix:**
+1. Add to the job: `secrets={"HF_TOKEN": "$HF_TOKEN"}`
+2. Add to the config: `push_to_hub=True`, `hub_model_id="username/model-name"`
+3. Verify auth: `mcp__huggingface__hf_whoami()`
+4. Check that the token has write permissions and that the repo exists (or set `hub_private_repo=True`)
+
+### Missing Dependencies
+
+**Fix:**
+Add the missing package to the PEP 723 header:
+```python
+# /// script
+# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio", "missing-package"]
+# ///
+```
+
 ## Troubleshooting
 
 **Common issues:**
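The updated guidance passes the training script as a string to `hf_jobs()` and declares its dependencies in a PEP 723 header. A hedged sketch of assembling such a string follows; the `"run"` command name and every parameter other than `script`, `timeout`, and `secrets` are assumptions, since only `hf_jobs("logs", {...})`, `"timeout": "3h"`, and `secrets={"HF_TOKEN": "$HF_TOKEN"}` appear in the diff:

```python
# Build the inline training script as a string. The PEP 723 header at the
# top declares the job's dependencies, as in the "Missing Dependencies" fix.
script = """\
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///
from trl import SFTTrainer  # ...training code as in the SFT example...
"""

# Hypothetical submission call -- the command name and parameter layout
# beyond `script` are NOT a documented hf_jobs() signature:
# hf_jobs("run", {"script": script, "timeout": "2h",
#                 "secrets": {"HF_TOKEN": "$HF_TOKEN"}})

assert script.startswith("# /// script")
```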
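The revised example calls `dataset.train_test_split(test_size=0.1, seed=42)`. A stdlib-only sketch of a seeded shuffle-then-split shows why fixing the seed makes the split reproducible; this mimics the idea conceptually and is not the `datasets` library's implementation:

```python
import random

def split_rows(rows, test_size=0.1, seed=42):
    """Seeded shuffle-then-split, in the spirit of train_test_split(test_size=0.1, seed=42)."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)          # same seed -> same permutation
    n_test = max(1, int(len(rows) * test_size))
    test = [rows[i] for i in idx[:n_test]]
    train = [rows[i] for i in idx[n_test:]]
    return train, test

rows = list(range(100))
train, test = split_rows(rows)
assert len(train) == 90 and len(test) == 10
# The fixed seed reproduces the identical split on every call:
assert split_rows(rows) == split_rows(rows)
```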
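The OOM fix trades per-device batch size against gradient accumulation while holding the effective batch size steady. A minimal sketch of that arithmetic; the optional multi-GPU factor is an assumption not stated in the text:

```python
def effective_batch_size(per_device: int, grad_accum: int, num_gpus: int = 1) -> int:
    """Effective batch size = per_device_train_batch_size * gradient_accumulation_steps (* GPUs)."""
    return per_device * grad_accum * num_gpus

# The suggested OOM settings (batch size 1, 8 accumulation steps):
assert effective_batch_size(per_device=1, grad_accum=8) == 8
# To stay near the recommended effective size of 128 at batch size 1,
# raise accumulation instead of the per-device batch:
assert effective_batch_size(per_device=1, grad_accum=128) == 128
```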
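The required-columns list in "Dataset Misformatted" maps naturally onto a membership check. Since `scripts/validate_dataset.py` is not quoted in the diff, the helper below is a hypothetical approximation of what such validation might do, not the script's actual interface:

```python
# Column sets accepted per training method, per the fix above.
REQUIRED_COLUMNS = {
    "sft": [{"messages"}, {"text"}, {"prompt", "completion"}],
    "dpo": [{"prompt", "chosen", "rejected"}],
    "grpo": [{"prompt"}],
}

def check_columns(columns, method):
    """Return True if `columns` contains at least one accepted column set for `method`."""
    have = set(columns)
    return any(need <= have for need in REQUIRED_COLUMNS[method])

assert check_columns(["messages"], "sft")
assert check_columns(["prompt", "completion", "source"], "sft")
assert not check_columns(["prompt"], "sft")   # `prompt` alone needs `completion`
assert check_columns(["prompt", "chosen", "rejected"], "dpo")
assert check_columns(["prompt"], "grpo")
```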
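The formatting fix applies a one-line `dataset.map(...)` with `input`/`output` columns. The per-row transform can be exercised without the `datasets` library, since `map` simply applies it to each row:

```python
def to_text(row: dict) -> dict:
    """Row transform equivalent to the lambda passed to dataset.map() in the fix above."""
    return {"text": f"User: {row['input']}\nBot: {row['output']}"}

row = {"input": "What is TRL?", "output": "A library for post-training transformers."}
assert to_text(row)["text"] == "User: What is TRL?\nBot: A library for post-training transformers."
```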
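The timeout fix recommends adding roughly 30% headroom to the estimated runtime. As simple arithmetic (the helper name and minute units are illustrative):

```python
def timeout_with_buffer(estimated_minutes: float, buffer: float = 0.30) -> int:
    """Estimated runtime plus the ~30% buffer recommended for job timeouts, in minutes."""
    return int(round(estimated_minutes * (1 + buffer)))

# A job estimated at 100 minutes should be submitted with ~130 minutes:
assert timeout_with_buffer(100) == 130
```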