Dataset Viewer
Auto-converted to Parquet
Columns:

| Column | Type | Values / range |
|---|---|---|
| timestamp | string (date) | 2025-10-25 19:34:35 – 2025-10-25 19:56:56 |
| end_timestamp | string (date) | 2025-10-25 19:34:51 – 2025-10-25 20:02:20 |
| stage_name | string (classes) | 1 value |
| stage_number | int64 | 1 – 1 |
| level | string (classes) | 1 value |
| message | string (classes) | 1 value |
| stdout_content | string (classes) | 3 values |
| stderr_content | string (classes) | 3 values |
| experiment_name | string (classes) | 1 value |
| elapsed_time_seconds | float64 | 15.6 – 324 |
| stage_complete | bool | 1 class |
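The `elapsed_time_seconds` column is simply the difference between the two timestamp columns. A minimal sketch of the derivation (the helper name is an assumption, not part of the pipeline):

```python
from datetime import datetime

def elapsed_seconds(start: str, end: str) -> float:
    """Difference between two ISO-8601 timestamps, in seconds."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds()

# First row of the dataset: start/end timestamps give its elapsed_time_seconds value.
print(elapsed_seconds("2025-10-25T19:34:35.465316",
                      "2025-10-25T19:34:51.072661"))  # → 15.607345
```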
Example rows:

Row 1:
timestamp: 2025-10-25T19:34:35.465316
end_timestamp: 2025-10-25T19:34:51.072661
stage_name: evaluation_eval_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: evaluation_eval_rl
stdout_content:
[INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO] Evaluating model: TAUR-dev/M-rl_1e_v2__pv_v2-rl__150
[INFO] Tasks: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
🚀 Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25 with configs: ['countdown_2arg', 'countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
✅ Loaded 982 samples from countdown_2arg/test
✅ Loaded 1000 samples from countdown_3arg/test
✅ Loaded 1000 samples from countdown_4arg/test
✅ Loaded 1000 samples from countdown_5arg/test
✅ Loaded 1000 samples from countdown_6arg/test
✅ Loaded 1221 samples from commonsenseQA/test
✅ Loaded 1319 samples from gsm8k/test
✅ Loaded 1000 samples from longmult_2dig/test
✅ Loaded 1000 samples from longmult_3dig/test
✅ Loaded 1000 samples from longmult_4dig/test
✅ Loaded 1000 samples from longmult_5dig/test
✅ Loaded 144 samples from acronym_5o/test
✅ Loaded 197 samples from acronym_4o/test
✅ Loaded 300 samples from letter_countdown_5o/test
✅ Loaded 300 samples from letter_countdown_4o/test
📊 Total dataset size: 12463 samples
🔧 Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 21 extra columns to dataset
🎯 Building annotation pipeline with types: ['best_of_n_atags']
Adding best_of_n_atags annotation (n=4)
🎯 Executing annotations...
🚀 Executing annotation pipeline with 1 annotators
📊 Dataset size: 12463 examples
🔗 Model URL: hosted_vllm/TAUR-dev/M-rl_1e_v2__pv_v2-rl__150<api_base>http://10.32.37.6:8080/v1
⚙️ Global backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
🔄 Running step 1/1: best_of_n
📝 Response column: model_responses__best_of_n_atags
🎛️ Global Generation params: {}
🎛️ Generation params: {'temperature': 0.7, 'repetition_penalty': 1.1, 'top_p': 0.8, 'top_k': 20, 'n': 1, 'max_tokens': 16384}
🎛️ Global Backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
🎛️ Annotator params: {'n_samples': 4}
🚀 Final backend params: {'max_requests_per_minute': 50, 'max_tokens_per_minute': 30000000, 'request_timeout': 60000, 'max_retries': 5, 'require_all_responses': False, 'max_inflight_requests_upper_multiplier': 1000000000, 'max_inflight_requests_lower_multiplier': 1}
🔧 Final generation params (merged): {'temperature': 0.7, 'repetition_penalty': 1.1, 'top_p': 0.8, 'top_k': 20, 'n': 1, 'max_tokens': 16384}
🔄 Annotator (best_of_n_atags): 12463 prompts × 4 samples
❌ Failed: litellm.InternalServerError: InternalServerError: Hosted_vllmException - Connection error.
❌ Evaluation failed: litellm.InternalServerError: InternalServerError: Hosted_vllmException - Connection error.
[ERROR] Stage error: InternalServerError: litellm.InternalServerError: InternalServerError: Hosted_vllmException - Connection error.
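The run above died on a transient connection error to the hosted vLLM endpoint even though the backend params request `max_retries: 5`. A caller-side retry wrapper with exponential backoff is a common safety net for this failure mode; the sketch below is generic and not litellm's actual API — the function names, the exception type to catch, and the backoff schedule are all assumptions:

```python
import time

def with_retries(fn, max_attempts=5, base_delay=0.01, retry_on=(ConnectionError,)):
    """Call fn(), retrying with exponential backoff on transient errors.

    Re-raises the last error once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Demo: a stand-in for an annotator call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection error.")
    return "ok"

print(with_retries(flaky))  # → ok (after two retried failures)
```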
stderr_content:
[10/25/25 19:34:49] INFO  Getting rate limits for model: hosted_vllm/TAUR-dev/M-rl_1e_v2__pv_v2-rl__150  (litellm_online_request_processor.py:219)
experiment_name: FinEval_16k_fulleval_3args_ours
elapsed_time_seconds: 15.607345
stage_complete: true
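The "Unpacking 'all_other_columns' JSON data" step in the log above takes a per-row JSON blob and promotes its keys to top-level dataset columns (21 of them, per the log). A rough sketch of the idea — the helper name and row shape are illustrative assumptions, not the pipeline's actual code:

```python
import json

def unpack_extra_columns(rows, packed_col="all_other_columns"):
    """Parse the packed JSON column and merge its keys into each row."""
    unpacked = []
    for row in rows:
        extra = json.loads(row.get(packed_col, "{}"))
        merged = {k: v for k, v in row.items() if k != packed_col}
        merged.update(extra)  # extra keys become top-level columns
        unpacked.append(merged)
    return unpacked

rows = [{"question": "2+2?", "all_other_columns": '{"source": "gsm8k", "difficulty": 1}'}]
print(unpack_extra_columns(rows))
# → [{'question': '2+2?', 'source': 'gsm8k', 'difficulty': 1}]
```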
Row 2:
timestamp: 2025-10-25T19:38:30.146440
end_timestamp: 2025-10-25T19:40:49.937877
stage_name: evaluation_eval_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: evaluation_eval_rl
stdout_content (truncated):
[INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO(...TRUNCATED)
stderr_content (truncated): [10/25/25 19:38:39] INFO  Getting r(...TRUNCATED)
experiment_name: FinEval_16k_fulleval_3args_ours
elapsed_time_seconds: 139.791437
stage_complete: true
Row 3:
timestamp: 2025-10-25T19:56:57.011461
end_timestamp: 2025-10-25T20:02:20.751758
stage_name: evaluation_eval_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: evaluation_eval_rl
stdout_content (truncated):
[INFO] Starting stage: Evaluation - eval_rl
[INFO] Starting evaluation pipeline for eval_rl
[INFO(...TRUNCATED)
stderr_content (truncated): [10/25/25 19:57:07] INFO  Getting r(...TRUNCATED)
experiment_name: FinEval_16k_fulleval_3args_ours
elapsed_time_seconds: 323.740297
stage_complete: true
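The annotator in these logs expands 12463 prompts × 4 samples (`n_samples: 4`) and keeps a best response per prompt. The selection step of best-of-n sampling can be sketched as follows; the generator and scoring function here are toy stand-ins, not the pipeline's actual annotator:

```python
import itertools

def best_of_n(prompt, generate, score, n=4):
    """Draw n candidate responses for a prompt and keep the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Demo with a toy generator/scorer: later candidates score higher, so the last wins.
counter = itertools.count(1)
pick = best_of_n("2+2?",
                 generate=lambda p: f"answer {next(counter)}",
                 score=lambda s: int(s.split()[-1]))
print(pick)  # → answer 4
```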
Downloads last month: 9