2025-11-29 21:58:52 - train_codegen - INFO - Logging to: logs/codegen/train_codegen_20251129_215852.log
2025-11-29 21:58:52 - train_codegen - INFO - Monitor progress: tail -f logs/codegen/train_codegen_20251129_215852.log
2025-11-29 21:58:52 - train_codegen - INFO - ============================================================
2025-11-29 21:58:52 - train_codegen - INFO - CodeGen Training
2025-11-29 21:58:52 - train_codegen - INFO - ============================================================
2025-11-29 21:58:52 - train_codegen - INFO - Using CUDA device: 0
2025-11-29 21:58:52 - train_codegen - INFO - GPU: NVIDIA GeForce RTX 5090
2025-11-29 21:58:52 - train_codegen - INFO - Configuration:
2025-11-29 21:58:52 - train_codegen - INFO -   model: Salesforce/codegen-350M-mono
2025-11-29 21:58:52 - train_codegen - INFO -   data: datasets/python
2025-11-29 21:58:52 - train_codegen - INFO -   output: model/checkpoints/run1-python-codegen
2025-11-29 21:58:52 - train_codegen - INFO -   batch_size: 10
2025-11-29 21:58:52 - train_codegen - INFO -   gradient_accumulation_steps: 4
2025-11-29 21:58:52 - train_codegen - INFO -   effective_batch_size: 40
2025-11-29 21:58:52 - train_codegen - INFO -   learning_rate: 5e-05
2025-11-29 21:58:52 - train_codegen - INFO -   epochs: 5
2025-11-29 21:58:52 - train_codegen - INFO -   max_length: 1024
2025-11-29 21:58:52 - train_codegen - INFO -   max_steps: -1
2025-11-29 21:58:52 - train_codegen - INFO -   fp16: True
2025-11-29 21:58:52 - train_codegen - INFO -   gradient_checkpointing: True
2025-11-29 21:58:52 - train_codegen - INFO -   seed: 42
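The effective batch size of 40 reported above follows from the per-device batch size and gradient accumulation. A minimal sketch of that relationship, with the config values taken from the log (the dict keys are illustrative, not necessarily the training script's actual argument names):

```python
# Configuration values as reported in the log above.
config = {
    "model": "Salesforce/codegen-350M-mono",
    "batch_size": 10,                    # per-device train batch size
    "gradient_accumulation_steps": 4,
    "learning_rate": 5e-5,
    "epochs": 5,
    "max_length": 1024,
    "fp16": True,
    "gradient_checkpointing": True,
    "seed": 42,
}

# Gradients are accumulated over 4 micro-batches of 10 samples each
# before every optimizer update, so each update sees 40 samples.
effective_batch_size = config["batch_size"] * config["gradient_accumulation_steps"]
print(effective_batch_size)  # 40
```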
2025-11-29 21:58:52 - train_codegen - INFO - Loading tokenizer and model: Salesforce/codegen-350M-mono
2025-11-29 21:59:04 - train_codegen - INFO - Loading model with gradient checkpointing enabled
2025-11-29 21:59:04 - train_codegen - INFO - Loading dataset...
2025-11-29 21:59:04 - train_codegen - INFO - Loading dataset from datasets/python
2025-11-29 21:59:05 - train_codegen - INFO - Train samples: 155411
2025-11-29 21:59:05 - train_codegen - INFO - Validation samples: 19426
2025-11-29 21:59:05 - train_codegen - INFO - ============================================================
2025-11-29 21:59:05 - train_codegen - INFO - Dataset Preprocessing
2025-11-29 21:59:05 - train_codegen - INFO - ============================================================
2025-11-29 21:59:05 - train_codegen - INFO - Preprocessing 155411 samples (optimized eager loading)...
2025-11-29 21:59:09 - train_codegen - INFO - Preprocessed 10000/155411 samples
2025-11-29 21:59:14 - train_codegen - INFO - Preprocessed 20000/155411 samples
2025-11-29 21:59:19 - train_codegen - INFO - Preprocessed 30000/155411 samples
2025-11-29 21:59:24 - train_codegen - INFO - Preprocessed 40000/155411 samples
2025-11-29 21:59:29 - train_codegen - INFO - Preprocessed 50000/155411 samples
2025-11-29 21:59:33 - train_codegen - INFO - Preprocessed 60000/155411 samples
2025-11-29 21:59:39 - train_codegen - INFO - Preprocessed 70000/155411 samples
2025-11-29 21:59:43 - train_codegen - INFO - Preprocessed 80000/155411 samples
2025-11-29 21:59:48 - train_codegen - INFO - Preprocessed 90000/155411 samples
2025-11-29 21:59:53 - train_codegen - INFO - Preprocessed 100000/155411 samples
2025-11-29 21:59:57 - train_codegen - INFO - Preprocessed 110000/155411 samples
2025-11-29 22:00:02 - train_codegen - INFO - Preprocessed 120000/155411 samples
2025-11-29 22:00:06 - train_codegen - INFO - Preprocessed 130000/155411 samples
2025-11-29 22:00:12 - train_codegen - INFO - Preprocessed 140000/155411 samples
2025-11-29 22:00:16 - train_codegen - INFO - Preprocessed 150000/155411 samples
2025-11-29 22:00:19 - train_codegen - INFO - Preprocessed 155411/155411 samples
2025-11-29 22:00:19 - train_codegen - INFO - Preprocessing complete: 155411 samples ready
2025-11-29 22:00:19 - train_codegen - INFO - Preprocessing 19426 samples (optimized eager loading)...
2025-11-29 22:00:23 - train_codegen - INFO - Preprocessed 10000/19426 samples
2025-11-29 22:00:28 - train_codegen - INFO - Preprocessed 19426/19426 samples
2025-11-29 22:00:28 - train_codegen - INFO - Preprocessing complete: 19426 samples ready
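The "optimized eager loading" pass above tokenizes every sample up front and reports progress in 10,000-sample chunks, which is what produces the `Preprocessed N/M samples` lines. A minimal sketch of that progress-logging pattern, using a whitespace-split stand-in for the real CodeGen tokenizer (both functions here are illustrative, not the script's actual code):

```python
def tokenize(text, max_length=1024):
    # Stand-in for a real call like tokenizer(text, truncation=True,
    # max_length=max_length); whitespace split keeps the sketch self-contained.
    return text.split()[:max_length]

def preprocess_eager(samples, log_every=10_000):
    """Tokenize all samples up front, logging progress in fixed-size chunks."""
    processed = []
    for i, text in enumerate(samples, start=1):
        processed.append(tokenize(text))
        # Log every `log_every` samples, plus a final line at completion.
        if i % log_every == 0 or i == len(samples):
            print(f"Preprocessed {i}/{len(samples)} samples")
    return processed

encoded = preprocess_eager(["def f(): pass"] * 25_000)
```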
2025-11-29 22:00:28 - train_codegen - INFO - ============================================================
2025-11-29 22:00:28 - train_codegen - INFO - Training Arguments
2025-11-29 22:00:28 - train_codegen - INFO - ============================================================
2025-11-29 22:00:28 - train_codegen - INFO - Training log will be saved to: model/checkpoints/run1-python-codegen/training_log.csv
2025-11-29 22:00:28 - train_codegen - INFO - ============================================================
2025-11-29 22:00:28 - train_codegen - INFO - Training Strategy
2025-11-29 22:00:28 - train_codegen - INFO - ============================================================
2025-11-29 22:00:28 - train_codegen - INFO - Evaluation every 1000 steps (optimized for speed)
2025-11-29 22:00:28 - train_codegen - INFO - Eval batch size: 20 (2x train batch)
2025-11-29 22:00:28 - train_codegen - INFO - Eval accumulation steps: 4
2025-11-29 22:00:28 - train_codegen - INFO - Save checkpoint every 2000 steps
2025-11-29 22:00:28 - train_codegen - INFO - Gradient checkpointing: ENABLED (saves VRAM, slower training)
2025-11-29 22:00:28 - train_codegen - INFO - FP16 mixed precision enabled
2025-11-29 22:00:28 - train_codegen - INFO - Dynamic padding per batch (10-20x faster than max_length padding)
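Dynamic padding pads each batch only to the longest sequence in that batch, rather than always to `max_length` (1024), which is where the logged speedup comes from. A minimal pure-Python sketch of such a collator (the real script presumably uses a Hugging Face data collator; `pad_id` and the returned dict layout are illustrative):

```python
def collate_dynamic(batch, pad_id=0):
    """Pad a batch of token-id lists to the longest sequence in the batch."""
    max_len = max(len(seq) for seq in batch)  # per-batch max, not global max_length
    input_ids, attention_mask = [], []
    for seq in batch:
        pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * pad)          # right-pad token ids
        attention_mask.append([1] * len(seq) + [0] * pad)  # mask out padding
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = collate_dynamic([[5, 6, 7], [8, 9]])
```

With mostly short samples, batches padded this way carry far fewer wasted pad tokens than padding everything to 1024.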
2025-11-29 22:00:28 - train_codegen - INFO - ============================================================
2025-11-29 22:00:28 - train_codegen - INFO - Starting Training
2025-11-29 22:00:28 - train_codegen - INFO - ============================================================
2025-11-29 22:00:28 - train_codegen - INFO - Total training samples: 155411
2025-11-29 22:00:28 - train_codegen - INFO - Total validation samples: 19426
2025-11-29 22:00:28 - train_codegen - INFO - Starting training from scratch
2025-11-30 15:11:50 - train_codegen - INFO - Training completed successfully
2025-11-30 15:11:50 - train_codegen - INFO - ============================================================
2025-11-30 15:11:50 - train_codegen - INFO - Saving Final Model
2025-11-30 15:11:50 - train_codegen - INFO - ============================================================
2025-11-30 15:11:50 - train_codegen - INFO - Model and tokenizer saved to model/checkpoints/run1-python-codegen
2025-11-30 15:11:50 - train_codegen - INFO - ============================================================
2025-11-30 15:11:50 - train_codegen - INFO - Training Summary
2025-11-30 15:11:50 - train_codegen - INFO - ============================================================
2025-11-30 15:11:50 - train_codegen - INFO - Total steps: 19425
2025-11-30 15:11:50 - train_codegen - INFO - Best model checkpoint: model/checkpoints/run1-python-codegen/checkpoint-10000
2025-11-30 15:11:50 - train_codegen - INFO - Best eval loss: 0.7813047170639038
2025-11-30 15:11:50 - train_codegen - INFO - Done.
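The summary numbers are internally consistent: 155,411 training samples at an effective batch of 40, with the incomplete final batch of each epoch dropped, give exactly 19,425 optimizer steps over 5 epochs, and the best eval loss corresponds to a perplexity of roughly 2.18. A quick arithmetic check (the drop-last assumption is inferred from the step count, not stated in the log; ceiling rounding would give 19,430):

```python
import math

train_samples = 155_411
effective_batch = 40   # 10 per device x 4 accumulation steps
epochs = 5

# Floor division models dropping the incomplete final batch each epoch,
# which reproduces the logged total of 19425 steps.
steps_per_epoch = train_samples // effective_batch
total_steps = steps_per_epoch * epochs
print(total_steps)  # 19425

# Perplexity is exp(cross-entropy loss) for a causal language model.
best_eval_loss = 0.7813047170639038
perplexity = math.exp(best_eval_loss)  # ~2.18
```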