# Description
`./test/0-1k`, `./merge_bench/`, and `./merge_bench1/` contain the same eval data.
The data split includes `math_tasks` and `mcq_tasks`:
```
math_tasks = ["mm|aime24|0", "mm|math_500|0", "mm|gsm8k|0"]
mcq_tasks = ["mm|mmlu_pro|0", "mm|truthfulqa|0", "mm|commonsenseqa|0", "mm|arc_easy|0", "mm|arc_challenge|0", "mm|gpqa_diamond|0"]
```
These directories contain only samples whose generation length from the respective reasoning model (e.g. DS-R1-Llama3 or Phi4-mini-reasoning) is under 1k. Currently, however, all samples come from Phi4-mini-reasoning.
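The length filter above can be sketched as follows; note this is an illustrative sketch, not the actual pipeline — the `gen_len` field name, the sample layout, and the exact cutoff (1000 vs. 1024 tokens) are all assumptions:

```python
# Minimal sketch of the "< 1k generation length" filter.
# ASSUMPTIONS: each sample stores the reasoning model's generation
# length under a "gen_len" key, and "1k" means 1000 tokens; neither
# is confirmed by the actual data schema.
MAX_GEN_LEN = 1000

def filter_short_samples(samples, max_len=MAX_GEN_LEN):
    """Keep only samples whose generation length is under the cutoff."""
    return [s for s in samples if s["gen_len"] < max_len]

samples = [
    {"id": 0, "gen_len": 512},
    {"id": 1, "gen_len": 2048},  # dropped: too long
    {"id": 2, "gen_len": 900},
]
short = filter_short_samples(samples)  # keeps ids 0 and 2
```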
The difference between `./merge_bench/` and `./merge_bench1/` is that `./merge_bench1/` merged all layers of Phi4, while `./merge_bench/` missed `lm_head`.
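A missing `lm_head` can be confirmed by diffing the parameter key sets of the two merged checkpoints. The sketch below uses toy key sets as stand-ins; in practice the keys would come from loading the actual checkpoints (e.g. via `safetensors` or a model's `state_dict()`), and the key names shown are illustrative:

```python
# Sketch: find parameter names present in one merge but absent from
# another, e.g. to confirm ./merge_bench/ skipped lm_head.
# The key names below are hypothetical stand-ins for real state dicts.
def missing_keys(reference_keys, merged_keys):
    """Return parameter names in the reference merge but not in the other."""
    return sorted(set(reference_keys) - set(merged_keys))

full_merge = {
    "model.embed_tokens.weight",
    "model.layers.0.mlp.up_proj.weight",
    "lm_head.weight",
}
partial_merge = {
    "model.embed_tokens.weight",
    "model.layers.0.mlp.up_proj.weight",
}
gap = missing_keys(full_merge, partial_merge)  # ["lm_head.weight"]
```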
Note that the Llama series in `./merge_bench/` is reasonable, since those models were merged by `mergekit`.