| timestamp | end_timestamp | stage_name | stage_number | level | message | stdout_content | stderr_content | experiment_name | elapsed_time_seconds | stage_complete |
|---|---|---|---|---|---|---|---|---|---|---|
2025-09-23T05:28:53.581797 | 2025-09-23T05:29:47.090644 | evaluation_eval_rl | 1 | INFO | Complete log capture for stage: evaluation_eval_rl | [INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-rl_1e_v2__pv_v2-rl__150
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_splits-v1-7_13_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
Loaded 250 samples from countdown_2arg/test
Loaded 250 samples from countdown_3arg/test
Loaded 250 samples from countdown_4arg/test
Loaded 250 samples from countdown_5arg/test
Loaded 250 samples from countdown_6arg/test
Loaded 100 samples from commonsenseQA/test
Loaded 100 samples from gsm8k/test
Loaded 250 samples from longmult_2dig/test
Loaded 250 samples from longmult_3dig/test
Loaded 250 samples from longmult_4dig/test
Loaded 250 samples from longmult_5dig/test
Loaded 144 samples from acronym_5o/test
Loaded 197 samples from acronym_4o/test
Loaded 250 samples from letter_countdown_5o/test
Loaded 250 samples from letter_countdown_4o/test
Evaluation failed: The features can't be aligned because the key model_responses of features {'question': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'task_config': Value(dtype='string', id=None), 'task_source': Value(dtype='string', id=None), 'prompt': [{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}], 'model_responses': [Value(dtype='string', id=None)], 'model_responses__eval_is_correct': [Value(dtype='bool', id=None)], 'all_other_columns': Value(dtype='string', id=None), 'original_split': Value(dtype='string', id=None)} has unexpected type - [Value(dtype='string', id=None)] (expected either Sequence(feature=Value(dtype='string', id=None), length=-1, id=None) or Value("null").
[ERROR] Stage error: ValueError: The features can't be aligned because the key model_responses of features {'question': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'task_config': Value(dtype='string', id=None), 'task_source': Value(dtype='string', id=None), 'prompt': [{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}], 'model_responses': [Value(dtype='string', id=None)], 'model_responses__eval_is_correct': [Value(dtype='bool', id=None)], 'all_other_columns': Value(dtype='string', id=None), 'original_split': Value(dtype='string', id=None)} has unexpected type - [Value(dtype='string', id=None)] (expected either Sequence(feature=Value(dtype='string', id=None), length=-1, id=None) or Value("null").
|
README.md: 23.2kB [00:00, 177MB/s]
countdown_2arg/sft_train-00000-of-00001.(…): 100%| 635k/635k [00:00<00:00, 964kB/s]
countdown_2arg/rl_train-00000-of-00001.p(…): 100%| 174k/174k [00:00<00:00, 535kB/s]
countdown_2arg/val-00000-of-00001.parque(…): 100%| 54.0k/54.0k [00:00<00:00, 179kB/s]
countdown_2arg/test-00000-of-00001.parqu(…): 100%| 55.1k/55.1k [00:00<00:00, 111kB/s]
Generating sft_train split: 100%| 3713/3713 [00:00<00:00, 320593.09 examples/s]
Generating rl_train split: 100%| 985/985 [00:00<00:00, 231269.00 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 72792.50 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 70535.18 examples/s]
countdown_3arg/sft_train-00000-of-00001.(…): 100%| 93.5M/93.5M [00:01<00:00, 58.5MB/s]
countdown_3arg/rl_train-00000-of-00001.p(…): 100%| 23.2M/23.2M [00:00<00:00, 30.3MB/s]
countdown_3arg/val-00000-of-00001.parque(…): 100%| 4.38M/4.38M [00:00<00:00, 8.42MB/s]
countdown_3arg/old_test-00000-of-00001.p(…): 100%| 17.1M/17.1M [00:00<00:00, 27.9MB/s]
countdown_3arg/test-00000-of-00001.parqu(…): 100%| 67.3k/67.3k [00:00<00:00, 190kB/s]
Generating sft_train split: 100%| 3998/3998 [00:00<00:00, 16122.51 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 14723.21 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 15260.23 examples/s]
Generating old_test split: 100%| 1000/1000 [00:00<00:00, 19137.22 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 64282.49 examples/s]
countdown_4arg/sft_train-00000-of-00001.(…): 100%| 864k/864k [00:00<00:00, 1.81MB/s]
countdown_4arg/rl_train-00000-of-00001.p(…): 100%| 218k/218k [00:00<00:00, 738kB/s]
countdown_4arg/val-00000-of-00001.parque(…): 100%| 64.7k/64.7k [00:00<00:00, 119kB/s]
countdown_4arg/test-00000-of-00001.parqu(…): 100%| 70.6k/70.6k [00:00<00:00, 220kB/s]
Generating sft_train split: 100%| 4000/4000 [00:00<00:00, 505566.25 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 239141.57 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 76360.03 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 75453.41 examples/s]
countdown_5arg/sft_train-00000-of-00001.(…): 100%| 925k/925k [00:00<00:00, 2.13MB/s]
countdown_5arg/rl_train-00000-of-00001.p(…): 100%| 233k/233k [00:00<00:00, 714kB/s]
countdown_5arg/val-00000-of-00001.parque(…): 100%| 68.3k/68.3k [00:00<00:00, 222kB/s]
countdown_5arg/test-00000-of-00001.parqu(…): 100%| 75.0k/75.0k [00:00<00:00, 184kB/s]
Generating sft_train split: 100%| 4000/4000 [00:00<00:00, 494976.13 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 241204.44 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 78710.10 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 75627.55 examples/s]
countdown_6arg/sft_train-00000-of-00001.(…): 100%| 983k/983k [00:00<00:00, 2.02MB/s]
countdown_6arg/rl_train-00000-of-00001.p(…): 100%| 248k/248k [00:00<00:00, 450kB/s]
countdown_6arg/val-00000-of-00001.parque(…): 100%| 71.9k/71.9k [00:00<00:00, 220kB/s]
countdown_6arg/test-00000-of-00001.parqu(…): 100%| 72.2k/72.2k [00:00<00:00, 171kB/s]
Generating sft_train split: 100%| 4000/4000 [00:00<00:00, 500513.60 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 236872.65 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 73765.46 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 72052.22 examples/s]
commonsenseQA/sft_train-00000-of-00001.p(…): 100%| 2.34M/2.34M [00:00<00:00, 2.96MB/s]
commonsenseQA/rl_train-00000-of-00001.pa(…): 100%| 272k/272k [00:00<00:00, 826kB/s]
commonsenseQA/val-00000-of-00001.parquet: 100%| 73.5k/73.5k [00:00<00:00, 244kB/s]
commonsenseQA/test-00000-of-00001.parque(…): 100%| 92.7k/92.7k [00:00<00:00, 329kB/s]
commonsenseQA/full_test-00000-of-00001.p(…): 100%| 331k/331k [00:00<00:00, 493kB/s]
Generating sft_train split: 100%| 8741/8741 [00:00<00:00, 796974.29 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 249765.02 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 81474.44 examples/s]
Generating test split: 100%| 100/100 [00:00<00:00, 27550.60 examples/s]
Generating full_test split: 100%| 1221/1221 [00:00<00:00, 269595.98 examples/s]
gsm8k/sft_train-00000-of-00001.parquet: 100%| 3.31M/3.31M [00:00<00:00, 3.63MB/s]
gsm8k/rl_train-00000-of-00001.parquet: 100%| 519k/519k [00:00<00:00, 1.59MB/s]
gsm8k/val-00000-of-00001.parquet: 100%| 140k/140k [00:00<00:00, 430kB/s]
gsm8k/test-00000-of-00001.parquet: 100%| 111k/111k [00:00<00:00, 312kB/s]
gsm8k/full_test-00000-of-00001.parquet: 100%| 693k/693k [00:00<00:00, 974kB/s]
Generating sft_train split: 100%| 6473/6473 [00:00<00:00, 464256.67 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 183149.38 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 66529.79 examples/s]
Generating test split: 100%| 100/100 [00:00<00:00, 26632.19 examples/s]
Generating full_test split: 100%| 1319/1319 [00:00<00:00, 234468.62 examples/s]
longmult_2dig/sft_train-00000-of-00001.p(…): 100%| 15.5M/15.5M [00:00<00:00, 21.8MB/s]
longmult_2dig/rl_train-00000-of-00001.pa(…): 100%| 45.8k/45.8k [00:00<00:00, 162kB/s]
longmult_2dig/val-00000-of-00001.parquet: 100%| 10.3k/10.3k [00:00<00:00, 31.4kB/s]
longmult_2dig/test-00000-of-00001.parque(…): 100%| 26.6k/26.6k [00:00<00:00, 86.7kB/s]
Generating sft_train split: 100%| 4125/4125 [00:00<00:00, 96041.03 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 260904.70 examples/s]
Generating val split: 100%| 100/100 [00:00<00:00, 31645.57 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 69451.32 examples/s]
longmult_3dig/sft_train-00000-of-00001.p(…): 100%| 290k/290k [00:00<00:00, 881kB/s]
longmult_3dig/rl_train-00000-of-00001.pa(…): 100%| 74.6k/74.6k [00:00<00:00, 240kB/s]
longmult_3dig/val-00000-of-00001.parquet: 100%| 24.0k/24.0k [00:00<00:00, 77.1kB/s]
longmult_3dig/test-00000-of-00001.parque(…): 100%| 29.6k/29.6k [00:00<00:00, 97.2kB/s]
Generating sft_train split: 100%| 4000/4000 [00:00<00:00, 652809.96 examples/s]
Generating rl_train split: 100%| 1000/1000 [00:00<00:00, 253524.18 examples/s]
Generating val split: 100%| 250/250 [00:00<00:00, 69867.80 examples/s]
Generating test split: 100%| 250/250 [00:00<00:00, 64215.57 examples/s]
longmult_4dig/sft_train-00000-of-00001.p(…): 100%| 335k/335k [00:00<00:00, 1.08MB/s]
longmult_4dig/rl_train-00000-of-00001.pa(β¦): 0%| | 0.00/85.9k [00:00<?, ?B/s]
longmult_4dig/rl_train-00000-of-00001.pa(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 85.9k/85.9k [00:00<00:00, 272kB/s]
longmult_4dig/rl_train-00000-of-00001.pa(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 85.9k/85.9k [00:00<00:00, 272kB/s]
longmult_4dig/val-00000-of-00001.parquet: 0%| | 0.00/26.6k [00:00<?, ?B/s]
longmult_4dig/val-00000-of-00001.parquet: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 26.6k/26.6k [00:00<00:00, 83.3kB/s]
longmult_4dig/val-00000-of-00001.parquet: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 26.6k/26.6k [00:00<00:00, 83.2kB/s]
longmult_4dig/test-00000-of-00001.parque(β¦): 0%| | 0.00/32.3k [00:00<?, ?B/s]
longmult_4dig/test-00000-of-00001.parque(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 32.3k/32.3k [00:00<00:00, 96.7kB/s]
longmult_4dig/test-00000-of-00001.parque(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 32.3k/32.3k [00:00<00:00, 96.6kB/s]
Generating sft_train split: 0%| | 0/4000 [00:00<?, ? examples/s]
Generating sft_train split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4000/4000 [00:00<00:00, 671814.20 examples/s]
Generating rl_train split: 0%| | 0/1000 [00:00<?, ? examples/s]
Generating rl_train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 252319.32 examples/s]
Generating val split: 0%| | 0/250 [00:00<?, ? examples/s]
Generating val split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 73132.65 examples/s]
Generating test split: 0%| | 0/250 [00:00<?, ? examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 63488.50 examples/s]
longmult_5dig/sft_train-00000-of-00001.p(β¦): 0%| | 0.00/374k [00:00<?, ?B/s]
longmult_5dig/sft_train-00000-of-00001.p(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 374k/374k [00:00<00:00, 1.13MB/s]
longmult_5dig/sft_train-00000-of-00001.p(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 374k/374k [00:00<00:00, 1.13MB/s]
longmult_5dig/rl_train-00000-of-00001.pa(β¦): 0%| | 0.00/95.6k [00:00<?, ?B/s]
longmult_5dig/rl_train-00000-of-00001.pa(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 95.6k/95.6k [00:00<00:00, 237kB/s]
longmult_5dig/rl_train-00000-of-00001.pa(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 95.6k/95.6k [00:00<00:00, 237kB/s]
longmult_5dig/val-00000-of-00001.parquet: 0%| | 0.00/29.1k [00:00<?, ?B/s]
longmult_5dig/val-00000-of-00001.parquet: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 29.1k/29.1k [00:00<00:00, 88.1kB/s]
longmult_5dig/val-00000-of-00001.parquet: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 29.1k/29.1k [00:00<00:00, 88.0kB/s]
longmult_5dig/test-00000-of-00001.parque(β¦): 0%| | 0.00/29.1k [00:00<?, ?B/s]
longmult_5dig/test-00000-of-00001.parque(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 29.1k/29.1k [00:00<00:00, 95.4kB/s]
longmult_5dig/test-00000-of-00001.parque(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 29.1k/29.1k [00:00<00:00, 95.3kB/s]
Generating sft_train split: 0%| | 0/4000 [00:00<?, ? examples/s]
Generating sft_train split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4000/4000 [00:00<00:00, 685848.09 examples/s]
Generating rl_train split: 0%| | 0/1000 [00:00<?, ? examples/s]
Generating rl_train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 267750.02 examples/s]
Generating val split: 0%| | 0/250 [00:00<?, ? examples/s]
Generating val split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 71424.02 examples/s]
Generating test split: 0%| | 0/250 [00:00<?, ? examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 71805.52 examples/s]
acronym_5o/test-00000-of-00001.parquet: 0%| | 0.00/44.4k [00:00<?, ?B/s]
acronym_5o/test-00000-of-00001.parquet: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 44.4k/44.4k [00:00<00:00, 86.3kB/s]
acronym_5o/test-00000-of-00001.parquet: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 44.4k/44.4k [00:00<00:00, 86.2kB/s]
acronym_5o/sft_train-00000-of-00001.parq(β¦): 0%| | 0.00/984k [00:00<?, ?B/s]
acronym_5o/sft_train-00000-of-00001.parq(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 984k/984k [00:00<00:00, 1.66MB/s]
acronym_5o/sft_train-00000-of-00001.parq(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 984k/984k [00:00<00:00, 1.66MB/s]
acronym_5o/rl_train-00000-of-00001.parqu(β¦): 0%| | 0.00/195k [00:00<?, ?B/s]
acronym_5o/rl_train-00000-of-00001.parqu(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 195k/195k [00:00<00:00, 371kB/s]
acronym_5o/rl_train-00000-of-00001.parqu(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 195k/195k [00:00<00:00, 371kB/s]
Generating test split: 0%| | 0/144 [00:00<?, ? examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 144/144 [00:00<00:00, 37042.61 examples/s]
Generating sft_train split: 0%| | 0/4000 [00:00<?, ? examples/s]
Generating sft_train split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4000/4000 [00:00<00:00, 349059.92 examples/s]
Generating rl_train split: 0%| | 0/782 [00:00<?, ? examples/s]
Generating rl_train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 782/782 [00:00<00:00, 187543.35 examples/s]
acronym_4o/test-00000-of-00001.parquet: 0%| | 0.00/54.4k [00:00<?, ?B/s]
acronym_4o/test-00000-of-00001.parquet: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 54.4k/54.4k [00:00<00:00, 135kB/s]
acronym_4o/test-00000-of-00001.parquet: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 54.4k/54.4k [00:00<00:00, 134kB/s]
acronym_4o/sft_train-00000-of-00001.parq(β¦): 0%| | 0.00/718k [00:00<?, ?B/s]
acronym_4o/sft_train-00000-of-00001.parq(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 718k/718k [00:00<00:00, 1.58MB/s]
acronym_4o/sft_train-00000-of-00001.parq(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 718k/718k [00:00<00:00, 1.58MB/s]
acronym_4o/rl_train-00000-of-00001.parqu(β¦): 0%| | 0.00/213k [00:00<?, ?B/s]
acronym_4o/rl_train-00000-of-00001.parqu(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 213k/213k [00:00<00:00, 343kB/s]
acronym_4o/rl_train-00000-of-00001.parqu(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 213k/213k [00:00<00:00, 343kB/s]
Generating test split: 0%| | 0/197 [00:00<?, ? examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 197/197 [00:00<00:00, 41554.91 examples/s]
Generating sft_train split: 0%| | 0/3103 [00:00<?, ? examples/s]
Generating sft_train split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3103/3103 [00:00<00:00, 416158.00 examples/s]
Generating rl_train split: 0%| | 0/926 [00:00<?, ? examples/s]
Generating rl_train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 926/926 [00:00<00:00, 217599.05 examples/s]
letter_countdown_5o/test-00000-of-00001.(β¦): 0%| | 0.00/42.7k [00:00<?, ?B/s]
letter_countdown_5o/test-00000-of-00001.(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 42.7k/42.7k [00:00<00:00, 83.7kB/s]
letter_countdown_5o/test-00000-of-00001.(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 42.7k/42.7k [00:00<00:00, 83.6kB/s]
letter_countdown_5o/sft_train-00000-of-0(β¦): 0%| | 0.00/552k [00:00<?, ?B/s]
letter_countdown_5o/sft_train-00000-of-0(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 552k/552k [00:00<00:00, 843kB/s]
letter_countdown_5o/sft_train-00000-of-0(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 552k/552k [00:00<00:00, 842kB/s]
letter_countdown_5o/rl_train-00000-of-00(β¦): 0%| | 0.00/139k [00:00<?, ?B/s]
letter_countdown_5o/rl_train-00000-of-00(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 139k/139k [00:00<00:00, 302kB/s]
letter_countdown_5o/rl_train-00000-of-00(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 139k/139k [00:00<00:00, 302kB/s]
Generating test split: 0%| | 0/250 [00:00<?, ? examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 59322.02 examples/s]
Generating sft_train split: 0%| | 0/4000 [00:00<?, ? examples/s]
Generating sft_train split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4000/4000 [00:00<00:00, 569452.72 examples/s]
Generating rl_train split: 0%| | 0/1000 [00:00<?, ? examples/s]
Generating rl_train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 237947.69 examples/s]
letter_countdown_4o/test-00000-of-00001.(β¦): 0%| | 0.00/41.7k [00:00<?, ?B/s]
letter_countdown_4o/test-00000-of-00001.(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 41.7k/41.7k [00:00<00:00, 75.3kB/s]
letter_countdown_4o/test-00000-of-00001.(β¦): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 41.7k/41.7k [00:00<00:00, 75.2kB/s]
letter_countdown_4o/sft_train-00000-of-0(β¦): 0%| | 0.00/521k [00:00<?, ?B/s]
letter_countdown_4o/sft_train-00000-of-0(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 521k/521k [00:00<00:00, 914kB/s]
letter_countdown_4o/sft_train-00000-of-0(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 521k/521k [00:00<00:00, 913kB/s]
letter_countdown_4o/rl_train-00000-of-00(β¦): 0%| | 0.00/135k [00:00<?, ?B/s]
letter_countdown_4o/rl_train-00000-of-00(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 135k/135k [00:00<00:00, 270kB/s]
letter_countdown_4o/rl_train-00000-of-00(β¦): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 135k/135k [00:00<00:00, 269kB/s]
Generating test split: 0%| | 0/250 [00:00<?, ? examples/s]
Generating test split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:00<00:00, 68655.54 examples/s]
Generating sft_train split: 0%| | 0/3895 [00:00<?, ? examples/s]
Generating sft_train split: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3895/3895 [00:00<00:00, 590330.78 examples/s]
Generating rl_train split: 0%| | 0/1000 [00:00<?, ? examples/s]
Generating rl_train split: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 259099.58 examples/s]
| testing_new_setup | 53.508847 | true |
2025-09-23T05:34:32.148084 | 2025-09-23T05:34:58.552446 | evaluation_eval_rl | 1 | INFO | Complete log capture for stage: evaluation_eval_rl | [INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-rl_1e_v2__pv_v2-rl__150
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_splits-v1-7_13_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
Loaded 250 samples from countdown_2arg/test
Loaded 250 samples from countdown_3arg/test
Loaded 250 samples from countdown_4arg/test
Loaded 250 samples from countdown_5arg/test
Loaded 250 samples from countdown_6arg/test
Loaded 100 samples from commonsenseQA/test
Loaded 100 samples from gsm8k/test
Loaded 250 samples from longmult_2dig/test
Loaded 250 samples from longmult_3dig/test
Loaded 250 samples from longmult_4dig/test
Loaded 250 samples from longmult_5dig/test
Loaded 144 samples from acronym_5o/test
Loaded 197 samples from acronym_4o/test
Loaded 250 samples from letter_countdown_5o/test
Loaded 250 samples from letter_countdown_4o/test
Total dataset size: 3291 samples
Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'prompt__few_shot', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 22 extra columns to dataset
No GPU restrictions specified for zaynes/gvista - allowing all available GPUs
Using runtime override: host_type=slurm (config default: local)
Discovering SLURM nodes and GPUs...
Found 1 nodes with 1 GPU(s) each
   Nodes: ['c620-001']
Warning: Requested 2 servers but only 1 GPUs available
Starting 1 servers instead
No GPU restrictions specified for zaynes/gvista - allowing all available GPUs
Registered SLURM cleanup signal handlers
Starting 1 servers across 1 nodes...
Starting servers concurrently...
   Server 1/1: c620-001 GPU 0, Port 8000
DEBUG: Starting server in SLURM environment
SLURM_JOB_ID: 374292
SLURM_PROCID: 0
SLURM_LOCALID: 0
SLURM_NODELIST: c620-001
TTY: True
TERM: xterm-256color
==ENV VARS==
{'TOKENIZERS_PARALLELISM': 'false', 'SKILLFACTORY_PROJECT_ROOT': '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory', 'OMP_NUM_THREADS': '1', 'HF_HOME': '/scratch/10286/georgetsoukalas/hf_cache', 'TRANSFORMERS_CACHE': '/scratch/10286/georgetsoukalas/hf_cache/transformers', 'HF_DATASETS_CACHE': '/scratch/10286/georgetsoukalas/hf_cache/datasets', 'TORCH_HOME': '/scratch/10286/georgetsoukalas/torch_home', 'XDG_CACHE_HOME': '/scratch/10286/georgetsoukalas/.cache', 'TORCH_COMPILE_CACHE_DIR': '/scratch/10286/georgetsoukalas/torch_compile_cache', 'TORCHDYNAMO_CACHE_DIR': '/scratch/10286/georgetsoukalas/torch_dynamo_cache', 'TRITON_CACHE_DIR': '/scratch/10286/georgetsoukalas/.cache/triton', 'OUTLINES_CACHE_DIR': '/scratch/10286/georgetsoukalas/.cache/outlines', 'PYTHONPATH': '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory', 'CUDA_LAUNCH_BLOCKING': '0', 'DISABLE_VERSION_CHECK': '1', 'CC': 'gcc', 'CXX': 'g++', 'FORCE_TORCHRUN': '1', 'NCCL_PROTO': 'simple', 'FI_EFA_FORK_SAFE': '1', 'FI_LOG_LEVEL': '1', 'FI_EFA_USE_DEVICE_RDMA': '1', 'NCCL_NET_GDR_LEVEL': 'SYS', 'NCCL_NET_GDR_READ': '1', 'PYTHONFAULTHANDLER': '1', 'OMPI_MCA_mtl_base_verbose': '1', 'FI_EFA_ENABLE_SHM_TRANSFER': '0', 'FI_PROVIDER': 'efa', 'FI_EFA_TX_MIN_CREDITS': '64', 'NCCL_TREE_THRESHOLD': '0', 'NCCL_DEBUG': 'INFO'}
==ENV SETUP==
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python)
vLLM command will be: python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
Provider args: {'port': 8000}
Starting server with environment setup: vllm
==BASH SCRIPT==
set -e
set -x
echo "=== SLURM Server Startup Debug ===" > /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Node: c620-001, GPU: 0, Port: 8000" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Date: $(date)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "User: $(whoami)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Working directory: $(pwd)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623688 HF_HOME=/tmp/hf_home_vllm_1758623688 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "TTY: $(tty 2>/dev/null || echo 'No TTY')" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_JOB_ID: $SLURM_JOB_ID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_PROCID: $SLURM_PROCID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "=================================" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
# Start the server with proper process management for sbatch
exec > >(tee -a /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log) 2>&1
echo "Starting vLLM server on c620-001 GPU 0 at $(date)"
# Export all environment variables
export CUDA_VISIBLE_DEVICES=0
export TOKENIZERS_PARALLELISM="false"
export SKILLFACTORY_PROJECT_ROOT="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export OMP_NUM_THREADS="1"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRANSFORMERS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/transformers"
export HF_DATASETS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/datasets"
export TORCH_HOME="/scratch/10286/georgetsoukalas/torch_home"
export XDG_CACHE_HOME="/scratch/10286/georgetsoukalas/.cache"
export TORCH_COMPILE_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_compile_cache"
export TORCHDYNAMO_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_dynamo_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
# Check if we should use backgrounding (may not work in sbatch)
USE_BACKGROUNDING=false
#"${USE_BACKGROUNDING:-true}"
if [ "$USE_BACKGROUNDING" = "true" ]; then
# Use nohup and proper backgrounding for sbatch compatibility
nohup bash -c 'module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623688 HF_HOME=/tmp/hf_home_vllm_1758623688 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing' &
SERVER_PID=$!
echo "Server started with PID $SERVER_PID"
# Write PID to file for cleanup - use temp directory from config
TEMP_DIR="/scratch/10286/georgetsoukalas/skill_factory_tmp"
echo $SERVER_PID > $TEMP_DIR/vllm_slurm_c620-001_gpu0.pid
# Wait a bit to ensure server starts
sleep 5
# Check if process is still running
if kill -0 $SERVER_PID 2>/dev/null; then
echo "Server process $SERVER_PID is running"
exit 0
else
echo "Server process $SERVER_PID failed to start"
exit 1
fi
else
# Direct execution for sbatch - no backgrounding
echo "Starting server without backgrounding for sbatch compatibility..."
# Execute the command directly
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623688 HF_HOME=/tmp/hf_home_vllm_1758623688 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
fi
===============
Starting server on c620-001 GPU 0 (port 8000)
Log file: /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
srun command: srun --nodes=1 --ntasks=1 --nodelist=c620-001 --job-name=vllm_server --unbuffered bash -c
[script body identical to the bash script shown above]
SERVER_PID=$!
echo "Server started with PID $SERVER_PID"
# Write PID to file for cleanup - use temp directory from config
TEMP_DIR="/scratch/10286/georgetsoukalas/skill_factory_tmp"
echo $SERVER_PID > $TEMP_DIR/vllm_slurm_c620-001_gpu0.pid
# Wait a bit to ensure server starts
sleep 5
# Check if process is still running
if kill -0 $SERVER_PID 2>/dev/null; then
echo "Server process $SERVER_PID is running"
exit 0
else
echo "Server process $SERVER_PID failed to start"
exit 1
fi
else
# Direct execution for sbatch - no backgrounding
echo "Starting server without backgrounding for sbatch compatibility..."
# Execute the command directly
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623688 HF_HOME=/tmp/hf_home_vllm_1758623688 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
fi
Waiting 10s for srun to start...
srun command failed for c620-001 GPU 0
Command: srun --nodes=1 --ntasks=1 --nodelist=c620-001 --job-name=vllm_server --unbuffered... (truncated)
Return code: 1
Output: + echo '=== SLURM Server Startup Debug ==='
/usr/bin/bash: line 4: /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log: No such file or directory
srun: error: c620-001: task 0: Exited with exit code 1
Server 1 failed to start
Waiting for 0 servers to be ready...
Evaluation failed: No servers became ready
[ERROR] Stage error: RuntimeError: No servers became ready
(experiment_name: testing_new_setup | elapsed_time_seconds: 26.404362 | stage_complete: true)

2025-09-23T05:37:25.948417 | 2025-09-23T05:37:52.083668 | evaluation_eval_rl | 1 | INFO | Complete log capture for stage: evaluation_eval_rl
[INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-rl_1e_v2__pv_v2-rl__150
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_splits-v1-7_13_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
Loaded 250 samples from countdown_2arg/test
Loaded 250 samples from countdown_3arg/test
Loaded 250 samples from countdown_4arg/test
Loaded 250 samples from countdown_5arg/test
Loaded 250 samples from countdown_6arg/test
Loaded 100 samples from commonsenseQA/test
Loaded 100 samples from gsm8k/test
Loaded 250 samples from longmult_2dig/test
Loaded 250 samples from longmult_3dig/test
Loaded 250 samples from longmult_4dig/test
Loaded 250 samples from longmult_5dig/test
Loaded 144 samples from acronym_5o/test
Loaded 197 samples from acronym_4o/test
Loaded 250 samples from letter_countdown_5o/test
Loaded 250 samples from letter_countdown_4o/test
Total dataset size: 3291 samples
Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'prompt__few_shot', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 22 extra columns to dataset
No GPU restrictions specified for zaynes/gvista - allowing all available GPUs
Using runtime override: host_type=slurm (config default: local)
Discovering SLURM nodes and GPUs...
Found 1 nodes with 1 GPU(s) each
   Nodes: ['c620-001']
Requested 2 servers but only 1 GPUs available
Starting 1 servers instead
No GPU restrictions specified for zaynes/gvista - allowing all available GPUs
Registered SLURM cleanup signal handlers
Starting 1 servers across 1 nodes...
Starting servers concurrently...
   Server 1/1: c620-001 GPU 0, Port 8000
DEBUG: Starting server in SLURM environment
SLURM_JOB_ID: 374292
SLURM_PROCID: 0
SLURM_LOCALID: 0
SLURM_NODELIST: c620-001
TTY: True
TERM: xterm-256color
==ENV VARS==
{'TOKENIZERS_PARALLELISM': 'false', 'SKILLFACTORY_PROJECT_ROOT': '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory', 'OMP_NUM_THREADS': '1', 'HF_HOME': '/scratch/10286/georgetsoukalas/hf_cache', 'TRANSFORMERS_CACHE': '/scratch/10286/georgetsoukalas/hf_cache/transformers', 'HF_DATASETS_CACHE': '/scratch/10286/georgetsoukalas/hf_cache/datasets', 'TORCH_HOME': '/scratch/10286/georgetsoukalas/torch_home', 'XDG_CACHE_HOME': '/scratch/10286/georgetsoukalas/.cache', 'TORCH_COMPILE_CACHE_DIR': '/scratch/10286/georgetsoukalas/torch_compile_cache', 'TORCHDYNAMO_CACHE_DIR': '/scratch/10286/georgetsoukalas/torch_dynamo_cache', 'TRITON_CACHE_DIR': '/scratch/10286/georgetsoukalas/.cache/triton', 'OUTLINES_CACHE_DIR': '/scratch/10286/georgetsoukalas/.cache/outlines', 'PYTHONPATH': '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory', 'CUDA_LAUNCH_BLOCKING': '0', 'DISABLE_VERSION_CHECK': '1', 'CC': 'gcc', 'CXX': 'g++', 'FORCE_TORCHRUN': '1', 'NCCL_PROTO': 'simple', 'FI_EFA_FORK_SAFE': '1', 'FI_LOG_LEVEL': '1', 'FI_EFA_USE_DEVICE_RDMA': '1', 'NCCL_NET_GDR_LEVEL': 'SYS', 'NCCL_NET_GDR_READ': '1', 'PYTHONFAULTHANDLER': '1', 'OMPI_MCA_mtl_base_verbose': '1', 'FI_EFA_ENABLE_SHM_TRANSFER': '0', 'FI_PROVIDER': 'efa', 'FI_EFA_TX_MIN_CREDITS': '64', 'NCCL_TREE_THRESHOLD': '0', 'NCCL_DEBUG': 'INFO'}
==ENV SETUP==
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python)
vLLM command will be: python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
Provider args: {'port': 8000}
Starting server with environment setup: vllm
==BASH SCRIPT==
set -e
set -x
echo "=== SLURM Server Startup Debug ===" > /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Node: c620-001, GPU: 0, Port: 8000" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Date: $(date)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "User: $(whoami)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Working directory: $(pwd)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "TTY: $(tty 2>/dev/null || echo 'No TTY')" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_JOB_ID: $SLURM_JOB_ID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_PROCID: $SLURM_PROCID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "=================================" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
# Start the server with proper process management for sbatch
exec > >(tee -a /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log) 2>&1
echo "Starting vLLM server on c620-001 GPU 0 at $(date)"
# Export all environment variables
export CUDA_VISIBLE_DEVICES=0
export TOKENIZERS_PARALLELISM="false"
export SKILLFACTORY_PROJECT_ROOT="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export OMP_NUM_THREADS="1"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRANSFORMERS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/transformers"
export HF_DATASETS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/datasets"
export TORCH_HOME="/scratch/10286/georgetsoukalas/torch_home"
export XDG_CACHE_HOME="/scratch/10286/georgetsoukalas/.cache"
export TORCH_COMPILE_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_compile_cache"
export TORCHDYNAMO_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_dynamo_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
# Check if we should use backgrounding (may not work in sbatch)
USE_BACKGROUNDING=false
#"${USE_BACKGROUNDING:-true}"
if [ "$USE_BACKGROUNDING" = "true" ]; then
# Use nohup and proper backgrounding for sbatch compatibility
nohup bash -c 'module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing' &
SERVER_PID=$!
echo "Server started with PID $SERVER_PID"
# Write PID to file for cleanup - use temp directory from config
TEMP_DIR="/scratch/10286/georgetsoukalas/skill_factory_tmp"
echo $SERVER_PID > $TEMP_DIR/vllm_slurm_c620-001_gpu0.pid
# Wait a bit to ensure server starts
sleep 5
# Check if process is still running
if kill -0 $SERVER_PID 2>/dev/null; then
echo "Server process $SERVER_PID is running"
exit 0
else
echo "Server process $SERVER_PID failed to start"
exit 1
fi
else
# Direct execution for sbatch - no backgrounding
echo "Starting server without backgrounding for sbatch compatibility..."
# Execute the command directly
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
fi
===============
Starting server on c620-001 GPU 0 (port 8000)
Log file: /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
srun command: srun --nodes=1 --ntasks=1 --nodelist=c620-001 --job-name=vllm_server --unbuffered bash -c
set -e
set -x
echo "=== SLURM Server Startup Debug ===" > /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Node: c620-001, GPU: 0, Port: 8000" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Date: $(date)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "User: $(whoami)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Working directory: $(pwd)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "TTY: $(tty 2>/dev/null || echo 'No TTY')" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_JOB_ID: $SLURM_JOB_ID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_PROCID: $SLURM_PROCID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "=================================" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
# Start the server with proper process management for sbatch
exec > >(tee -a /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log) 2>&1
echo "Starting vLLM server on c620-001 GPU 0 at $(date)"
# Export all environment variables
export CUDA_VISIBLE_DEVICES=0
export TOKENIZERS_PARALLELISM="false"
export SKILLFACTORY_PROJECT_ROOT="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export OMP_NUM_THREADS="1"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRANSFORMERS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/transformers"
export HF_DATASETS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/datasets"
export TORCH_HOME="/scratch/10286/georgetsoukalas/torch_home"
export XDG_CACHE_HOME="/scratch/10286/georgetsoukalas/.cache"
export TORCH_COMPILE_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_compile_cache"
export TORCHDYNAMO_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_dynamo_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
# Check if we should use backgrounding (may not work in sbatch)
USE_BACKGROUNDING=false
#"${USE_BACKGROUNDING:-true}"
if [ "$USE_BACKGROUNDING" = "true" ]; then
# Use nohup and proper backgrounding for sbatch compatibility
nohup bash -c 'module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing' &
SERVER_PID=$!
echo "Server started with PID $SERVER_PID"
# Write PID to file for cleanup - use temp directory from config
TEMP_DIR="/scratch/10286/georgetsoukalas/skill_factory_tmp"
echo $SERVER_PID > $TEMP_DIR/vllm_slurm_c620-001_gpu0.pid
# Wait a bit to ensure server starts
sleep 5
# Check if process is still running
if kill -0 $SERVER_PID 2>/dev/null; then
echo "Server process $SERVER_PID is running"
exit 0
else
echo "Server process $SERVER_PID failed to start"
exit 1
fi
else
# Direct execution for sbatch - no backgrounding
echo "Starting server without backgrounding for sbatch compatibility..."
# Execute the command directly
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
fi
Waiting 10s for srun to start...
srun command failed for c620-001 GPU 0
Command: srun --nodes=1 --ntasks=1 --nodelist=c620-001 --job-name=vllm_server --unbuffered... (truncated)
Return code: 1
Output: + echo '=== SLURM Server Startup Debug ==='
+ echo 'Node: c620-001, GPU: 0, Port: 8000'
++ date
+ echo 'Date: Tue Sep 23 05:37:42 AM CDT 2025'
++ whoami
+ echo 'User: georgetsoukalas'
++ pwd
+ echo 'Working directory: /scratch/10286/georgetsoukalas/skillfactory/skill-factory/skill_factory/analysis/scripts'
++ which python
++ alias
++ eval declare -f
+++ declare -f
++ /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot python
+ echo 'Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python'
+ echo 'Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing'
++ tty
++ echo 'No TTY'
+ echo 'TTY: not a tty
No TTY'
+ echo 'SLURM_JOB_ID: 374292'
+ echo 'SLURM_PROCID: 0'
+ echo =================================
+ exec
++ tee -a /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
++ date
+ echo 'Starting vLLM server on c620-001 GPU 0 at Tue Sep 23 05:37:42 AM CDT 2025'
Starting vLLM server on c620-001 GPU 0 at Tue Sep 23 05:37:42 AM CDT 2025
+ export CUDA_VISIBLE_DEVICES=0
+ CUDA_VISIBLE_DEVICES=0
+ export TOKENIZERS_PARALLELISM=false
+ TOKENIZERS_PARALLELISM=false
+ export SKILLFACTORY_PROJECT_ROOT=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ SKILLFACTORY_PROJECT_ROOT=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ export OMP_NUM_THREADS=1
+ OMP_NUM_THREADS=1
+ export HF_HOME=/scratch/10286/georgetsoukalas/hf_cache
+ HF_HOME=/scratch/10286/georgetsoukalas/hf_cache
+ export TRANSFORMERS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/transformers
+ TRANSFORMERS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/transformers
+ export HF_DATASETS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/datasets
+ HF_DATASETS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/datasets
+ export TORCH_HOME=/scratch/10286/georgetsoukalas/torch_home
+ TORCH_HOME=/scratch/10286/georgetsoukalas/torch_home
+ export XDG_CACHE_HOME=/scratch/10286/georgetsoukalas/.cache
+ XDG_CACHE_HOME=/scratch/10286/georgetsoukalas/.cache
+ export TORCH_COMPILE_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_compile_cache
+ TORCH_COMPILE_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_compile_cache
+ export TORCHDYNAMO_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_dynamo_cache
+ TORCHDYNAMO_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_dynamo_cache
+ export TRITON_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/triton
+ TRITON_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/triton
+ export OUTLINES_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/outlines
+ OUTLINES_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/outlines
+ export PYTHONPATH=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ PYTHONPATH=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ export CUDA_LAUNCH_BLOCKING=0
+ CUDA_LAUNCH_BLOCKING=0
+ export DISABLE_VERSION_CHECK=1
+ DISABLE_VERSION_CHECK=1
+ export CC=gcc
+ CC=gcc
+ export CXX=g++
+ CXX=g++
+ export FORCE_TORCHRUN=1
+ FORCE_TORCHRUN=1
+ export NCCL_PROTO=simple
+ NCCL_PROTO=simple
+ export FI_EFA_FORK_SAFE=1
+ FI_EFA_FORK_SAFE=1
+ export FI_LOG_LEVEL=1
+ FI_LOG_LEVEL=1
+ export FI_EFA_USE_DEVICE_RDMA=1
+ FI_EFA_USE_DEVICE_RDMA=1
+ export NCCL_NET_GDR_LEVEL=SYS
+ NCCL_NET_GDR_LEVEL=SYS
+ export NCCL_NET_GDR_READ=1
+ NCCL_NET_GDR_READ=1
+ export PYTHONFAULTHANDLER=1
+ PYTHONFAULTHANDLER=1
+ export OMPI_MCA_mtl_base_verbose=1
+ OMPI_MCA_mtl_base_verbose=1
+ export FI_EFA_ENABLE_SHM_TRANSFER=0
+ FI_EFA_ENABLE_SHM_TRANSFER=0
+ export FI_PROVIDER=efa
+ FI_PROVIDER=efa
+ export FI_EFA_TX_MIN_CREDITS=64
+ FI_EFA_TX_MIN_CREDITS=64
+ export NCCL_TREE_THRESHOLD=0
+ NCCL_TREE_THRESHOLD=0
+ export NCCL_DEBUG=INFO
+ NCCL_DEBUG=INFO
+ USE_BACKGROUNDING=false
+ '[' false = true ']'
+ echo 'Starting server without backgrounding for sbatch compatibility...'
Starting server without backgrounding for sbatch compatibility...
+ module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4
+ '[' -z '' ']'
+ case "$-" in
+ __lmod_sh_dbg=x
+ '[' -n x ']'
+ set +x
Shell debugging temporarily silenced: export LMOD_SH_DBG_ON=1 for Lmod's output
Shell debugging restarted
+ unset __lmod_sh_dbg
+ return 0
+ conda activate vllm
CondaError: Run 'conda init' before 'conda activate'
srun: error: c620-001: task 0: Exited with exit code 1
Log file content:
=== SLURM Server Startup Debug ===
Node: c620-001, GPU: 0, Port: 8000
Date: Tue Sep 23 05:37:42 AM CDT 2025
User: georgetsoukalas
Working directory: /scratch/10286/georgetsoukalas/skillfactory/skill-factory/skill_factory/analysis/scripts
Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda activate vllm && echo PYTHON: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758623862 HF_HOME=/tmp/hf_home_vllm_1758623862 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
TTY: not a tty
No TTY
SLURM_JOB_ID: 374292
SLURM_PROCID: 0
=================================
++ date
+ echo 'Starting vLLM server on c620-001 GPU 0 at Tue Sep 23 05:37:42 AM CDT 2025'
Starting vLLM server on c620-001 GPU 0 at Tue Sep 23 05:37:42 AM CDT 2025
+ export CUDA_VISIBLE_DEVICES=0
+ CUDA_VISIBLE_DEVICES=0
+ export TOKENIZERS_PARALLELISM=false
+ TOKENIZERS_PARALLELISM=false
+ export SKILLFACTORY_PROJECT_ROOT=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ SKILLFACTORY_PROJECT_ROOT=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ export OMP_NUM_THREADS=1
+ OMP_NUM_THREADS=1
+ export HF_HOME=/scratch/10286/georgetsoukalas/hf_cache
+ HF_HOME=/scratch/10286/georgetsoukalas/hf_cache
+ export TRANSFORMERS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/transformers
+ TRANSFORMERS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/transformers
+ export HF_DATASETS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/datasets
+ HF_DATASETS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/datasets
+ export TORCH_HOME=/scratch/10286/georgetsoukalas/torch_home
+ TORCH_HOME=/scratch/10286/georgetsoukalas/torch_home
+ export XDG_CACHE_HOME=/scratch/10286/georgetsoukalas/.cache
+ XDG_CACHE_HOME=/scratch/10286/georgetsoukalas/.cache
+ export TORCH_COMPILE_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_compile_cache
+ TORCH_COMPILE_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_compile_cache
+ export TORCHDYNAMO_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_dynamo_cache
+ TORCHDYNAMO_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_dynamo_cache
+ export TRITON_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/triton
+ TRITON_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/triton
+ export OUTLINES_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/outlines
+ OUTLINES_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/outlines
+ export PYTHONPATH=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ PYTHONPATH=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ export CUDA_LAUNCH_BLOCKING=0
+ CUDA_LAUNCH_BLOCKING=0
+ export DISABLE_VERSION_CHECK=1
+ DISABLE_VERSION_CHECK=1
+ export CC=gcc
+ CC=gcc
+ export CXX=g++
+ CXX=g++
+ export FORCE_TORCHRUN=1
+ FORCE_TORCHRUN=1
+ export NCCL_PROTO=simple
+ NCCL_PROTO=simple
+ export FI_EFA_FORK_SAFE=1
+ FI_EFA_FORK_SAFE=1
+ export FI_LOG_LEVEL=1
+ FI_LOG_LEVEL=1
+ export FI_EFA_USE_DEVICE_RDMA=1
+ FI_EFA_USE_DEVICE_RDMA=1
+ export NCCL_NET_GDR_LEVEL=SYS
+ NCCL_NET_GDR_LEVEL=SYS
+ export NCCL_NET_GDR_READ=1
+ NCCL_NET_GDR_READ=1
+ export PYTHONFAULTHANDLER=1
+ PYTHONFAULTHANDLER=1
+ export OMPI_MCA_mtl_base_verbose=1
+ OMPI_MCA_mtl_base_verbose=1
+ export FI_EFA_ENABLE_SHM_TRANSFER=0
+ FI_EFA_ENABLE_SHM_TRANSFER=0
+ export FI_PROVIDER=efa
+ FI_PROVIDER=efa
+ export FI_EFA_TX_MIN_CREDITS=64
+ FI_EFA_TX_MIN_CREDITS=64
+ export NCCL_TREE_THRESHOLD=0
+ NCCL_TREE_THRESHOLD=0
+ export NCCL_DEBUG=INFO
+ NCCL_DEBUG=INFO
+ USE_BACKGROUNDING=false
+ '[' false = true ']'
+ echo 'Starting server without backgrounding for sbatch compatibility...'
Starting server without backgrounding for sbatch compatibility...
+ module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4
+ '[' -z '' ']'
+ case "$-" in
+ __lmod_sh_dbg=x
+ '[' -n x ']'
+ set +x
Shell debugging temporarily silenced: export LMOD_SH_DBG_ON=1 for Lmod's output
Shell debugging restarted
+ unset __lmod_sh_dbg
+ return 0
+ conda activate vllm
CondaError: Run 'conda init' before 'conda activate'
Server 1 failed to start
Waiting for 0 servers to be ready...
Evaluation failed: No servers became ready
[ERROR] Stage error: RuntimeError: No servers became ready
| testing_new_setup | 26.135251 | true | |
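The `CondaError` above is the standard failure mode of `conda activate` in non-interactive shells: `conda init` installs its hook into `~/.bashrc`, which `bash -c` under srun/sbatch never reads. A minimal environment-setup fragment that avoids it is to source conda's profile script directly; the miniconda path below is the one visible later in this log, and the env name `vllm` is taken from the server command.

```shell
# ~/.bashrc (where `conda init` puts its hook) is only read by
# interactive shells; srun/sbatch run `bash -c '...'` non-interactively,
# so `conda activate` finds no hook and fails as logged above.
# Load the hook explicitly before activating:
source /home1/10286/georgetsoukalas/miniconda3/etc/profile.d/conda.sh
conda activate vllm
# Portable alternative when `conda` is already on PATH:
#   eval "$(conda shell.bash hook)" && conda activate vllm
```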
2025-09-23T05:39:48.691918 | 2025-09-23T05:40:14.472854 | evaluation_eval_rl | 1 | INFO | Complete log capture for stage: evaluation_eval_rl | [INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-rl_1e_v2__pv_v2-rl__150
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_splits-v1-7_13_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
Loaded 250 samples from countdown_2arg/test
Loaded 250 samples from countdown_3arg/test
Loaded 250 samples from countdown_4arg/test
Loaded 250 samples from countdown_5arg/test
Loaded 250 samples from countdown_6arg/test
Loaded 100 samples from commonsenseQA/test
Loaded 100 samples from gsm8k/test
Loaded 250 samples from longmult_2dig/test
Loaded 250 samples from longmult_3dig/test
Loaded 250 samples from longmult_4dig/test
Loaded 250 samples from longmult_5dig/test
Loaded 144 samples from acronym_5o/test
Loaded 197 samples from acronym_4o/test
Loaded 250 samples from letter_countdown_5o/test
Loaded 250 samples from letter_countdown_4o/test
Total dataset size: 3291 samples
Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'prompt__few_shot', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 22 extra columns to dataset
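The unpacking step logged above (a packed JSON column flattened into 22 top-level columns) can be sketched as follows. The column name `all_other_columns` comes from the log; the function name and flattening behavior are assumptions, not the pipeline's actual code.

```python
import json

def unpack_all_other_columns(rows, packed_key="all_other_columns"):
    """Flatten a JSON-string column into top-level columns.

    `rows` is a list of dicts; each may carry a JSON object serialized
    under `packed_key`. Returns the flattened rows plus the sorted list
    of extra column names discovered across the dataset.
    """
    extra_cols = set()
    unpacked = []
    for row in rows:
        row = dict(row)  # copy so the caller's rows stay untouched
        packed = row.pop(packed_key, None)
        if packed:
            extras = json.loads(packed)
            extra_cols.update(extras)
            row.update(extras)  # promote each key to a top-level column
        unpacked.append(row)
    return unpacked, sorted(extra_cols)
```

Applied to the dataset above, this would report the 22 extra columns ('acronym', 'answer_index', ...) and add them to every row.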
No GPU restrictions specified for zaynes/gvista - allowing all available GPUs
Using runtime override: host_type=slurm (config default: local)
Discovering SLURM nodes and GPUs...
Found 1 node with 1 GPU each
Nodes: ['c620-001']
WARNING: Requested 2 servers but only 1 GPU available
Starting 1 server instead
No GPU restrictions specified for zaynes/gvista - allowing all available GPUs
Registered SLURM cleanup signal handlers
Starting 1 server across 1 node...
Starting servers concurrently...
Server 1/1: c620-001 GPU 0, Port 8000
DEBUG: Starting server in SLURM environment
SLURM_JOB_ID: 374292
SLURM_PROCID: 0
SLURM_LOCALID: 0
SLURM_NODELIST: c620-001
TTY: True
TERM: xterm-256color
==ENV VARS==
{'TOKENIZERS_PARALLELISM': 'false', 'SKILLFACTORY_PROJECT_ROOT': '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory', 'OMP_NUM_THREADS': '1', 'HF_HOME': '/scratch/10286/georgetsoukalas/hf_cache', 'TRANSFORMERS_CACHE': '/scratch/10286/georgetsoukalas/hf_cache/transformers', 'HF_DATASETS_CACHE': '/scratch/10286/georgetsoukalas/hf_cache/datasets', 'TORCH_HOME': '/scratch/10286/georgetsoukalas/torch_home', 'XDG_CACHE_HOME': '/scratch/10286/georgetsoukalas/.cache', 'TORCH_COMPILE_CACHE_DIR': '/scratch/10286/georgetsoukalas/torch_compile_cache', 'TORCHDYNAMO_CACHE_DIR': '/scratch/10286/georgetsoukalas/torch_dynamo_cache', 'TRITON_CACHE_DIR': '/scratch/10286/georgetsoukalas/.cache/triton', 'OUTLINES_CACHE_DIR': '/scratch/10286/georgetsoukalas/.cache/outlines', 'PYTHONPATH': '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory', 'CUDA_LAUNCH_BLOCKING': '0', 'DISABLE_VERSION_CHECK': '1', 'CC': 'gcc', 'CXX': 'g++', 'FORCE_TORCHRUN': '1', 'NCCL_PROTO': 'simple', 'FI_EFA_FORK_SAFE': '1', 'FI_LOG_LEVEL': '1', 'FI_EFA_USE_DEVICE_RDMA': '1', 'NCCL_NET_GDR_LEVEL': 'SYS', 'NCCL_NET_GDR_READ': '1', 'PYTHONFAULTHANDLER': '1', 'OMPI_MCA_mtl_base_verbose': '1', 'FI_EFA_ENABLE_SHM_TRANSFER': '0', 'FI_PROVIDER': 'efa', 'FI_EFA_TX_MIN_CREDITS': '64', 'NCCL_TREE_THRESHOLD': '0', 'NCCL_DEBUG': 'INFO'}
==ENV SETUP==
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda init && conda activate vllm && echo PYTHON: $(which python)
vLLM command will be: python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
Provider args: {'port': 8000}
Starting server with environment setup: vllm
==BASH SCRIPT==
set -e
set -x
echo "=== SLURM Server Startup Debug ===" > /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Node: c620-001, GPU: 0, Port: 8000" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Date: $(date)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "User: $(whoami)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Working directory: $(pwd)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda init && conda activate vllm && echo PYTHON: $(which python)" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758624004 HF_HOME=/tmp/hf_home_vllm_1758624004 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "TTY: $(tty 2>/dev/null || echo 'No TTY')" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_JOB_ID: $SLURM_JOB_ID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "SLURM_PROCID: $SLURM_PROCID" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
echo "=================================" >> /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
# Start the server with proper process management for sbatch
exec > >(tee -a /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log) 2>&1
echo "Starting vLLM server on c620-001 GPU 0 at $(date)"
# Export all environment variables
export CUDA_VISIBLE_DEVICES=0
export TOKENIZERS_PARALLELISM="false"
export SKILLFACTORY_PROJECT_ROOT="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export OMP_NUM_THREADS="1"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRANSFORMERS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/transformers"
export HF_DATASETS_CACHE="/scratch/10286/georgetsoukalas/hf_cache/datasets"
export TORCH_HOME="/scratch/10286/georgetsoukalas/torch_home"
export XDG_CACHE_HOME="/scratch/10286/georgetsoukalas/.cache"
export TORCH_COMPILE_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_compile_cache"
export TORCHDYNAMO_CACHE_DIR="/scratch/10286/georgetsoukalas/torch_dynamo_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
# Check if we should use backgrounding (may not work in sbatch)
USE_BACKGROUNDING=false
#"${USE_BACKGROUNDING:-true}"
if [ "$USE_BACKGROUNDING" = "true" ]; then
# Use nohup and proper backgrounding for sbatch compatibility
nohup bash -c 'module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda init && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758624004 HF_HOME=/tmp/hf_home_vllm_1758624004 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing' &
SERVER_PID=$!
echo "Server started with PID $SERVER_PID"
# Write PID to file for cleanup - use temp directory from config
TEMP_DIR="/scratch/10286/georgetsoukalas/skill_factory_tmp"
echo $SERVER_PID > $TEMP_DIR/vllm_slurm_c620-001_gpu0.pid
# Wait a bit to ensure server starts
sleep 5
# Check if process is still running
if kill -0 $SERVER_PID 2>/dev/null; then
echo "Server process $SERVER_PID is running"
exit 0
else
echo "Server process $SERVER_PID failed to start"
exit 1
fi
else
# Direct execution for sbatch - no backgrounding
echo "Starting server without backgrounding for sbatch compatibility..."
# Execute the command directly
module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda init && conda activate vllm && echo PYTHON: $(which python) && TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758624004 HF_HOME=/tmp/hf_home_vllm_1758624004 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
fi
===============
Starting server on c620-001 GPU 0 (port 8000)
Log file: /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
srun command: srun --nodes=1 --ntasks=1 --nodelist=c620-001 --job-name=vllm_server --unbuffered bash -c
(same bash script as printed under ==BASH SCRIPT== above)
Waiting 10s for srun to start...
srun command failed for c620-001 GPU 0
Command: srun --nodes=1 --ntasks=1 --nodelist=c620-001 --job-name=vllm_server --unbuffered... (truncated)
Return code: 1
Output: + echo '=== SLURM Server Startup Debug ==='
+ echo 'Node: c620-001, GPU: 0, Port: 8000'
++ date
+ echo 'Date: Tue Sep 23 05:40:04 AM CDT 2025'
++ whoami
+ echo 'User: georgetsoukalas'
++ pwd
+ echo 'Working directory: /scratch/10286/georgetsoukalas/skillfactory/skill-factory/skill_factory/analysis/scripts'
++ which python
++ alias
++ eval declare -f
+++ declare -f
++ /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot python
+ echo 'Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda init && conda activate vllm && echo PYTHON: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python'
+ echo 'Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758624004 HF_HOME=/tmp/hf_home_vllm_1758624004 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing'
++ tty
++ echo 'No TTY'
+ echo 'TTY: not a tty
No TTY'
+ echo 'SLURM_JOB_ID: 374292'
+ echo 'SLURM_PROCID: 0'
+ echo =================================
+ exec
++ tee -a /scratch/10286/georgetsoukalas/skill_factory_tmp/vllm_slurm_c620-001_gpu0.log
++ date
+ echo 'Starting vLLM server on c620-001 GPU 0 at Tue Sep 23 05:40:04 AM CDT 2025'
Starting vLLM server on c620-001 GPU 0 at Tue Sep 23 05:40:04 AM CDT 2025
+ export CUDA_VISIBLE_DEVICES=0
+ CUDA_VISIBLE_DEVICES=0
+ export TOKENIZERS_PARALLELISM=false
+ TOKENIZERS_PARALLELISM=false
+ export SKILLFACTORY_PROJECT_ROOT=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ SKILLFACTORY_PROJECT_ROOT=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ export OMP_NUM_THREADS=1
+ OMP_NUM_THREADS=1
+ export HF_HOME=/scratch/10286/georgetsoukalas/hf_cache
+ HF_HOME=/scratch/10286/georgetsoukalas/hf_cache
+ export TRANSFORMERS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/transformers
+ TRANSFORMERS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/transformers
+ export HF_DATASETS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/datasets
+ HF_DATASETS_CACHE=/scratch/10286/georgetsoukalas/hf_cache/datasets
+ export TORCH_HOME=/scratch/10286/georgetsoukalas/torch_home
+ TORCH_HOME=/scratch/10286/georgetsoukalas/torch_home
+ export XDG_CACHE_HOME=/scratch/10286/georgetsoukalas/.cache
+ XDG_CACHE_HOME=/scratch/10286/georgetsoukalas/.cache
+ export TORCH_COMPILE_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_compile_cache
+ TORCH_COMPILE_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_compile_cache
+ export TORCHDYNAMO_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_dynamo_cache
+ TORCHDYNAMO_CACHE_DIR=/scratch/10286/georgetsoukalas/torch_dynamo_cache
+ export TRITON_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/triton
+ TRITON_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/triton
+ export OUTLINES_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/outlines
+ OUTLINES_CACHE_DIR=/scratch/10286/georgetsoukalas/.cache/outlines
+ export PYTHONPATH=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ PYTHONPATH=/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory
+ export CUDA_LAUNCH_BLOCKING=0
+ CUDA_LAUNCH_BLOCKING=0
+ export DISABLE_VERSION_CHECK=1
+ DISABLE_VERSION_CHECK=1
+ export CC=gcc
+ CC=gcc
+ export CXX=g++
+ CXX=g++
+ export FORCE_TORCHRUN=1
+ FORCE_TORCHRUN=1
+ export NCCL_PROTO=simple
+ NCCL_PROTO=simple
+ export FI_EFA_FORK_SAFE=1
+ FI_EFA_FORK_SAFE=1
+ export FI_LOG_LEVEL=1
+ FI_LOG_LEVEL=1
+ export FI_EFA_USE_DEVICE_RDMA=1
+ FI_EFA_USE_DEVICE_RDMA=1
+ export NCCL_NET_GDR_LEVEL=SYS
+ NCCL_NET_GDR_LEVEL=SYS
+ export NCCL_NET_GDR_READ=1
+ NCCL_NET_GDR_READ=1
+ export PYTHONFAULTHANDLER=1
+ PYTHONFAULTHANDLER=1
+ export OMPI_MCA_mtl_base_verbose=1
+ OMPI_MCA_mtl_base_verbose=1
+ export FI_EFA_ENABLE_SHM_TRANSFER=0
+ FI_EFA_ENABLE_SHM_TRANSFER=0
+ export FI_PROVIDER=efa
+ FI_PROVIDER=efa
+ export FI_EFA_TX_MIN_CREDITS=64
+ FI_EFA_TX_MIN_CREDITS=64
+ export NCCL_TREE_THRESHOLD=0
+ NCCL_TREE_THRESHOLD=0
+ export NCCL_DEBUG=INFO
+ NCCL_DEBUG=INFO
+ USE_BACKGROUNDING=false
+ '[' false = true ']'
+ echo 'Starting server without backgrounding for sbatch compatibility...'
Starting server without backgrounding for sbatch compatibility...
+ module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4
+ '[' -z '' ']'
+ case "$-" in
+ __lmod_sh_dbg=x
+ '[' -n x ']'
+ set +x
Shell debugging temporarily silenced: export LMOD_SH_DBG_ON=1 for Lmod's output
Shell debugging restarted
+ unset __lmod_sh_dbg
+ return 0
+ conda init
no change /home1/10286/georgetsoukalas/miniconda3/condabin/conda
no change /home1/10286/georgetsoukalas/miniconda3/bin/conda
no change /home1/10286/georgetsoukalas/miniconda3/bin/conda-env
no change /home1/10286/georgetsoukalas/miniconda3/bin/activate
no change /home1/10286/georgetsoukalas/miniconda3/bin/deactivate
no change /home1/10286/georgetsoukalas/miniconda3/etc/profile.d/conda.sh
no change /home1/10286/georgetsoukalas/miniconda3/etc/fish/conf.d/conda.fish
no change /home1/10286/georgetsoukalas/miniconda3/shell/condabin/Conda.psm1
no change /home1/10286/georgetsoukalas/miniconda3/shell/condabin/conda-hook.ps1
no change /home1/10286/georgetsoukalas/miniconda3/lib/python3.12/site-packages/xontrib/conda.xsh
no change /home1/10286/georgetsoukalas/miniconda3/etc/profile.d/conda.csh
no change /home1/10286/georgetsoukalas/.bashrc
No action taken.
+ conda activate vllm
CondaError: Run 'conda init' before 'conda activate'
srun: error: c620-001: task 0: Exited with exit code 1
Log file content:
=== SLURM Server Startup Debug ===
Node: c620-001, GPU: 0, Port: 8000
Date: Tue Sep 23 05:40:04 AM CDT 2025
User: georgetsoukalas
Working directory: /scratch/10286/georgetsoukalas/skillfactory/skill-factory/skill_factory/analysis/scripts
Environment setup: module load gcc/14 cuda/12.8 nvidia_math/12.4 nccl/12.4 && conda init && conda activate vllm && echo PYTHON: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Server command: TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1758624004 HF_HOME=/tmp/hf_home_vllm_1758624004 TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name TAUR-dev/M-rl_1e_v2__pv_v2-rl__150 --disable-frontend-multiprocessing
TTY: not a tty
No TTY
SLURM_JOB_ID: 374292
SLURM_PROCID: 0
=================================
(remainder of log file is the same trace shown in the Output above, duplicated via `tee -a`)
no change /home1/10286/georgetsoukalas/miniconda3/shell/condabin/conda-hook.ps1
no change /home1/10286/georgetsoukalas/miniconda3/lib/python3.12/site-packages/xontrib/conda.xsh
no change /home1/10286/georgetsoukalas/miniconda3/etc/profile.d/conda.csh
no change /home1/10286/georgetsoukalas/.bashrc
No action taken.
+ conda activate vllm
CondaError: Run 'conda init' before 'conda activate'
❌ Server 1 failed to start
⏳ Waiting for 0 servers to be ready...
❌ Evaluation failed: No servers became ready
[ERROR] Stage error: RuntimeError: No servers became ready
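The root cause above is the `CondaError: Run 'conda init' before 'conda activate'` line: in a non-interactive sbatch shell, the hook that `conda init` writes into `~/.bashrc` is never sourced, so `conda activate` is undefined even after `conda init` reports "no change". A minimal sketch of the usual workaround is to source the `conda.sh` hook directly from the install prefix (the `miniconda3` path matches the log; the helper name `activate_env` is hypothetical):

```shell
# Sketch, not the pipeline's actual code: make `conda activate` work in a
# non-interactive batch shell by sourcing the hook script directly instead
# of relying on `conda init` having modified ~/.bashrc.
activate_env() {
    base="$1"   # conda install prefix, e.g. "$HOME/miniconda3"
    env="$2"    # environment name, e.g. vllm
    if [ -f "$base/etc/profile.d/conda.sh" ]; then
        # Source the activation hook, then activate the requested environment.
        . "$base/etc/profile.d/conda.sh" && conda activate "$env"
    else
        echo "conda.sh not found under $base" >&2
        return 1
    fi
}

# Usage in the sbatch script, replacing `conda init` + `conda activate vllm`:
# activate_env "$HOME/miniconda3" vllm
```

With the environment activated this way, the server launch command above would proceed instead of aborting before any server becomes ready.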