| experiment_name | start_time | description | base_org | stage_number | stage_type | status |
|---|---|---|---|---|---|---|
FinEval_16k_fulleval_AT_rlonly | 2025-11-01T22:13:59.246223 | Simple test experiment for Skill Factory workflows. | TAUR-dev | 0 | initialization | initialized |
FinEval_16k_fulleval_AT_rlonly | 2025-11-01T22:13:59.246223 | {"stage_name": "eval_rl", "stage_number": "1", "stage_type": "evaluation", "model_repo_id": "TAUR-dev/M-rl_rlonly_AT_fixed-rl@checkpoint-step-360", "eval_repo_id": "TAUR-dev/D-EVAL__standard_eval_v3__FinEval_16k_fulleval_AT_rlonly-eval_rl", "evaluation_config": {"model": "TAUR-dev/M-rl_rlonly_AT_fixed-rl@checkpoint-step-360", "tasks": ["countdown_3arg", "countdown_4arg", "countdown_5arg", "countdown_6arg", "commonsenseQA", "gsm8k", "longmult_2dig", "longmult_3dig", "longmult_4dig", "longmult_5dig", "acronym_5o", "acronym_4o", "letter_countdown_5o", "letter_countdown_4o"], "annotators": ["best_of_n_atags"], "splits": ["test"], "dataset_url": "TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25", "stage_name": "eval_rl", "upload_to_separate_repo": true, "mutate_prompt_for_answer_tags": false, "checkpoints": false, "skip_cleanup": false, "sample_based_bf_n_samples": 3, "sample_based_bf_max_tokens": 4096, "sample_based_bf_think_close_tag": "</think>", "sample_based_bf_starting_message": "<think>\n<sample>", "sample_based_bf_round_partial_end_sequence": "</sample>", "sample_based_bf_round_finish_response_sequence": "</sample>\n\n<reflect>\n\nWell now that I have multiple answers, maybe I should vote and see which I like best.\n\n</reflect>\n\n<vote>", "sample_based_bf_round_continuation_sequence": "</sample>\n\n<reflect>\n\nHmm... maybe this is correct, but maybe not, let me double check\n\n</reflect>\n\n<sample>", "max_requests_per_minute": 50, "max_retries": 5, "request_timeout": 60000, "api_url": null, "temperature": 0.7, "repetition_penalty": 1.1, "top_p": 0.8, "top_k": 20, "bon_atags_max_tokens": 16384, "bon_atags_n_size": 4, "greedy_max_tokens": 16384, "n": 1}, "mutate_prompt_for_answer_tags": false, "checkpoints": false, "status": "pending", "experiment_name": "FinEval_16k_fulleval_AT_rlonly", "start_time": "2025-11-01T22:13:59.246223"} | TAUR-dev | 1 | evaluation | pending |
FinEval_16k_fulleval_AT_rlonly | 2025-11-01T22:15:57.792066 | Simple test experiment for Skill Factory workflows. | TAUR-dev | 0 | initialization | initialized |
FinEval_16k_fulleval_AT_rlonly | 2025-11-01T22:15:57.792066 | {"stage_name": "eval_rl", "stage_number": "1", "stage_type": "evaluation", "model_repo_id": "TAUR-dev/M-rl_ours_AT_fixed-rl@checkpoint-step-300", "eval_repo_id": "TAUR-dev/D-EVAL__standard_eval_v3__FinEval_16k_fulleval_AT_rlonly-eval_rl", "evaluation_config": {"model": "TAUR-dev/M-rl_ours_AT_fixed-rl@checkpoint-step-300", "tasks": ["countdown_3arg", "countdown_4arg", "countdown_5arg", "countdown_6arg", "commonsenseQA", "gsm8k", "longmult_2dig", "longmult_3dig", "longmult_4dig", "longmult_5dig", "acronym_5o", "acronym_4o", "letter_countdown_5o", "letter_countdown_4o"], "annotators": ["best_of_n_atags"], "splits": ["test"], "dataset_url": "TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25", "stage_name": "eval_rl", "upload_to_separate_repo": true, "mutate_prompt_for_answer_tags": false, "checkpoints": false, "skip_cleanup": false, "sample_based_bf_n_samples": 3, "sample_based_bf_max_tokens": 4096, "sample_based_bf_think_close_tag": "</think>", "sample_based_bf_starting_message": "<think>\n<sample>", "sample_based_bf_round_partial_end_sequence": "</sample>", "sample_based_bf_round_finish_response_sequence": "</sample>\n\n<reflect>\n\nWell now that I have multiple answers, maybe I should vote and see which I like best.\n\n</reflect>\n\n<vote>", "sample_based_bf_round_continuation_sequence": "</sample>\n\n<reflect>\n\nHmm... maybe this is correct, but maybe not, let me double check\n\n</reflect>\n\n<sample>", "max_requests_per_minute": 50, "max_retries": 5, "request_timeout": 60000, "api_url": null, "temperature": 0.7, "repetition_penalty": 1.1, "top_p": 0.8, "top_k": 20, "bon_atags_max_tokens": 16384, "bon_atags_n_size": 4, "greedy_max_tokens": 16384, "n": 1}, "mutate_prompt_for_answer_tags": false, "checkpoints": false, "status": "pending", "experiment_name": "FinEval_16k_fulleval_AT_rlonly", "start_time": "2025-11-01T22:15:57.792066"} | TAUR-dev | 1 | evaluation | pending |
FinEval_16k_fulleval_AT_rlonly | 2025-11-01T23:35:11.624495 | Simple test experiment for Skill Factory workflows. | TAUR-dev | 0 | initialization | initialized |
FinEval_16k_fulleval_AT_rlonly | 2025-11-01T23:35:11.624495 | {"stage_name": "eval_rl", "stage_number": "1", "stage_type": "evaluation", "model_repo_id": "TAUR-dev/M-rl_rlonly_AT_fixed-rl@checkpoint-step-360", "eval_repo_id": "TAUR-dev/D-EVAL__standard_eval_v3__FinEval_16k_fulleval_AT_rlonly-eval_rl", "evaluation_config": {"model": "TAUR-dev/M-rl_rlonly_AT_fixed-rl@checkpoint-step-360", "tasks": ["countdown_3arg", "countdown_4arg", "countdown_5arg", "countdown_6arg", "commonsenseQA", "gsm8k", "longmult_2dig", "longmult_3dig", "longmult_4dig", "longmult_5dig", "acronym_5o", "acronym_4o", "letter_countdown_5o", "letter_countdown_4o"], "annotators": ["best_of_n_atags"], "splits": ["test"], "dataset_url": "TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25", "stage_name": "eval_rl", "upload_to_separate_repo": true, "mutate_prompt_for_answer_tags": false, "checkpoints": false, "skip_cleanup": false, "sample_based_bf_n_samples": 3, "sample_based_bf_max_tokens": 4096, "sample_based_bf_think_close_tag": "</think>", "sample_based_bf_starting_message": "<think>\n<sample>", "sample_based_bf_round_partial_end_sequence": "</sample>", "sample_based_bf_round_finish_response_sequence": "</sample>\n\n<reflect>\n\nWell now that I have multiple answers, maybe I should vote and see which I like best.\n\n</reflect>\n\n<vote>", "sample_based_bf_round_continuation_sequence": "</sample>\n\n<reflect>\n\nHmm... maybe this is correct, but maybe not, let me double check\n\n</reflect>\n\n<sample>", "max_requests_per_minute": 500, "max_retries": 5, "request_timeout": 60000, "api_url": null, "temperature": 0.7, "repetition_penalty": 1.1, "top_p": 0.8, "top_k": 20, "bon_atags_max_tokens": 16384, "bon_atags_n_size": 4, "greedy_max_tokens": 16384, "n": 1}, "mutate_prompt_for_answer_tags": false, "checkpoints": false, "status": "pending", "experiment_name": "FinEval_16k_fulleval_AT_rlonly", "start_time": "2025-11-01T23:35:11.624495"} | TAUR-dev | 1 | evaluation | pending |
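The `description` column above mixes two shapes: plain text for stage-0 initialization rows and a serialized JSON stage record (with a nested `evaluation_config`) for stage-1 evaluation rows. A minimal sketch of a tolerant parser for that column, using an abbreviated copy of one stage-1 cell from the table (the real cells carry the full config):

```python
import json

# Abbreviated "description" cell from a stage-1 row above; only a few
# representative keys are kept here for illustration.
cell = (
    '{"stage_name": "eval_rl", "stage_number": "1", "stage_type": "evaluation", '
    '"model_repo_id": "TAUR-dev/M-rl_rlonly_AT_fixed-rl@checkpoint-step-360", '
    '"evaluation_config": {"temperature": 0.7, "top_p": 0.8, "top_k": 20, '
    '"bon_atags_n_size": 4, "greedy_max_tokens": 16384}, '
    '"status": "pending"}'
)

def parse_description(text):
    """Parse a description cell: JSON stage records become dicts,
    plain-text stage-0 descriptions are wrapped under a "raw" key."""
    try:
        return json.loads(text)
    except (json.JSONDecodeError, TypeError):
        return {"raw": text}

stage = parse_description(cell)
print(stage["stage_type"], stage["evaluation_config"]["temperature"])
# -> evaluation 0.7

init = parse_description("Simple test experiment for Skill Factory workflows.")
print(init["raw"])
```

The fallback matters because stage-0 rows would otherwise raise `JSONDecodeError` and abort a bulk load of the table.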