| timestamp | end_timestamp | stage_name | stage_number | level | message | stdout_content | stderr_content | experiment_name | elapsed_time_seconds | stage_complete |
|---|---|---|---|---|---|---|---|---|---|---|
2025-10-25T19:33:08.506626 | 2025-10-25T19:33:29.739818 | evaluation_eval_rl | 1 | INFO | Complete log capture for stage: evaluation_eval_rl | [INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-0903_rl_reflect__0epoch_3args__grpo_minibs32_lr1e-6_rollout16-rl
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
Loaded 982 samples from countdown_2arg/test
Loaded 1000 samples from countdown_3arg/test
Loaded 1000 samples from countdown_4arg/test
Loaded 1000 samples from countdown_5arg/test
Loaded 1000 samples from countdown_6arg/test
Loaded 1221 samples from commonsenseQA/test
Loaded 1319 samples from gsm8k/test
Loaded 1000 samples from longmult_2dig/test
Loaded 1000 samples from longmult_3dig/test
Loaded 1000 samples from longmult_4dig/test
Loaded 1000 samples from longmult_5dig/test
Loaded 144 samples from acronym_5o/test
Loaded 197 samples from acronym_4o/test
Loaded 300 samples from letter_countdown_5o/test
Loaded 300 samples from letter_countdown_4o/test
Total dataset size: 12463 samples
Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 21 extra columns to dataset
Building annotation pipeline with types: ['best_of_n_atags']
Adding best_of_n_atags annotation (n=4)
Executing annotations...
Executing annotation pipeline with 1 annotator
Dataset size: 12463 examples
Model URL: hosted_vllm/TAUR-dev/M-0903_rl_reflect__0epoch_3args__grpo_minibs32_lr1e-6_rollout16-rl<api_base>http://10.32.37.6:8080/v1
Global backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
Running step 1/1: best_of_n
Response column: model_responses__best_of_n_atags
Global Generation params: {}
Generation params: {'temperature': 0.7, 'repetition_penalty': 1.1, 'top_p': 0.8, 'top_k': 20, 'n': 1, 'max_tokens': 16384}
Global Backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
Annotator params: {'n_samples': 4}
Final backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
Final generation params (merged): {'temperature': 0.7, 'repetition_penalty': 1.1, 'top_p': 0.8, 'top_k': 20, 'n': 1, 'max_tokens': 16384}
Annotator (best_of_n_atags): 12463 prompts × 4 samples
[ERROR] Stage error: KeyboardInterrupt
|
README.md: 15.3kB [00:00, 106MB/s]
countdown_2arg/test-00000-of-00001.parqu(…): 100% 179k/179k [00:00<00:00, 498kB/s]
Generating test split: 100% 982/982 [00:00<00:00, 58239.40 examples/s]
countdown_3arg/test-00000-of-00001.parqu(…): 100% 199k/199k [00:00<00:00, 451kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 66791.47 examples/s]
countdown_4arg/test-00000-of-00001.parqu(…): 100% 215k/215k [00:00<00:00, 355kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 60192.94 examples/s]
countdown_5arg/test-00000-of-00001.parqu(…): 100% 231k/231k [00:00<00:00, 638kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 67156.14 examples/s]
countdown_6arg/test-00000-of-00001.parqu(…): 100% 245k/245k [00:00<00:00, 560kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 71961.98 examples/s]
commonsenseQA/test-00000-of-00001.parque(…): 100% 331k/331k [00:00<00:00, 914kB/s]
Generating test split: 100% 1221/1221 [00:00<00:00, 109498.51 examples/s]
gsm8k/test-00000-of-00001.parquet: 100% 693k/693k [00:00<00:00, 1.60MB/s]
Generating test split: 100% 1319/1319 [00:00<00:00, 85686.87 examples/s]
longmult_2dig/test-00000-of-00001.parque(…): 100% 62.3k/62.3k [00:00<00:00, 238kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 130622.98 examples/s]
longmult_3dig/test-00000-of-00001.parque(…): 100% 74.8k/74.8k [00:00<00:00, 228kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 114000.43 examples/s]
longmult_4dig/test-00000-of-00001.parque(…): 100% 86.1k/86.1k [00:00<00:00, 223kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 116015.38 examples/s]
longmult_5dig/test-00000-of-00001.parque(…): 100% 95.7k/95.7k [00:00<00:00, 266kB/s]
Generating test split: 100% 1000/1000 [00:00<00:00, 119244.44 examples/s]
acronym_5o/test-00000-of-00001.parquet: 100% 44.2k/44.2k [00:00<00:00, 152kB/s]
Generating test split: 100% 144/144 [00:00<00:00, 22597.27 examples/s]
acronym_4o/test-00000-of-00001.parquet: 100% 54.0k/54.0k [00:00<00:00, 192kB/s]
Generating test split: 100% 197/197 [00:00<00:00, 27961.08 examples/s]
letter_countdown_5o/test-00000-of-00001.(…): 100% 49.0k/49.0k [00:00<00:00, 177kB/s]
Generating test split: 100% 300/300 [00:00<00:00, 41982.22 examples/s]
letter_countdown_4o/test-00000-of-00001.(…): 100% 47.7k/47.7k [00:00<00:00, 150kB/s]
Generating test split: 100% 300/300 [00:00<00:00, 35234.41 examples/s]
| FinEval_16k_fulleval_RLOnly | 21.233192 | true |
2025-10-25T19:35:23.554502 | 2025-10-25T19:35:35.730011 | evaluation_eval_rl | 1 | INFO | Complete log capture for stage: evaluation_eval_rl | [INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-0903_rl_reflect__0epoch_3args__grpo_minibs32_lr1e-6_rollout16-rl
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
Loaded 982 samples from countdown_2arg/test
Loaded 1000 samples from countdown_3arg/test
Loaded 1000 samples from countdown_4arg/test
Loaded 1000 samples from countdown_5arg/test
Loaded 1000 samples from countdown_6arg/test
Loaded 1221 samples from commonsenseQA/test
Loaded 1319 samples from gsm8k/test
Loaded 1000 samples from longmult_2dig/test
Loaded 1000 samples from longmult_3dig/test
Loaded 1000 samples from longmult_4dig/test
Loaded 1000 samples from longmult_5dig/test
Loaded 144 samples from acronym_5o/test
Loaded 197 samples from acronym_4o/test
Loaded 300 samples from letter_countdown_5o/test
Loaded 300 samples from letter_countdown_4o/test
Total dataset size: 12463 samples
Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 21 extra columns to dataset
Building annotation pipeline with types: ['best_of_n_atags']
Adding best_of_n_atags annotation (n=4)
Executing annotations...
Executing annotation pipeline with 1 annotator
Dataset size: 12463 examples
Model URL: hosted_vllm/TAUR-dev/M-0903_rl_reflect__0epoch_3args__grpo_minibs32_lr1e-6_rollout16-rl<api_base>http://10.32.37.6:8080/v1
Global backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
Running step 1/1: best_of_n
Response column: model_responses__best_of_n_atags
Global Generation params: {}
Generation params: {'temperature': 0.7, 'repetition_penalty': 1.1, 'top_p': 0.8, 'top_k': 20, 'n': 1, 'max_tokens': 16384}
Global Backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
Annotator params: {'n_samples': 4}
Final backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
Final generation params (merged): {'temperature': 0.7, 'repetition_penalty': 1.1, 'top_p': 0.8, 'top_k': 20, 'n': 1, 'max_tokens': 16384}
Annotator (best_of_n_atags): 12463 prompts × 4 samples
Failed: litellm.InternalServerError: InternalServerError: Hosted_vllmException - Connection error.
Evaluation failed: litellm.InternalServerError: InternalServerError: Hosted_vllmException - Connection error.
[ERROR] Stage error: InternalServerError: litellm.InternalServerError: InternalServerError: Hosted_vllmException - Connection error.
| [10/25/25 19:35:34] INFO Getting rate limits for model: hosted_vllm/TAUR-dev/M-0903_rl_reflect__0epoch_3args__grpo_minibs32_lr1e-6_rollout16-rl (litellm_online_request_processor.py:219)
| FinEval_16k_fulleval_RLOnly | 12.175509 | true |
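The parameter dumps in both log rows show an empty "Global Generation params" dict being overlaid by step-level settings to produce the "Final generation params (merged)" line. A minimal sketch of that layered merge, assuming a simple later-layer-wins override (`merge_params` is a hypothetical helper, not the actual skill-factory implementation):

```python
def merge_params(*layers):
    """Merge parameter dicts left to right; later layers override earlier keys."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

# Values copied from the log lines above.
global_generation = {}  # "Global Generation params: {}"
step_generation = {
    "temperature": 0.7,
    "repetition_penalty": 1.1,
    "top_p": 0.8,
    "top_k": 20,
    "n": 1,
    "max_tokens": 16384,
}

final = merge_params(global_generation, step_generation)
print(final)  # with an empty global layer, this equals the step-level params
```

Since the global layer is empty here, the merged result is identical to the step-level generation params, which is consistent with the "Final generation params (merged)" line in the logs.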