Upload variant-d eval logs (gen test + 5-shot MC + 5-shot GEN)
- variant-d/eval_5shot_gen.log  +43 -18
- variant-d/eval_5shot_mc.log   +65 -18
- variant-d/full_log.txt        +11 -0
- variant-d/gen_test.log        +20 -21
- variant-d/master_output.log   +2 -0
- variant-d/master_output2.log  +194 -0
variant-d/eval_5shot_gen.log  CHANGED

@@ -1,17 +1,23 @@
 Loading model for 5-shot GEN eval...
-2026-02-23:09:
-W0223 09:
-2026-02-23:09:
-W0223 09:
+2026-02-23:09:24:44,330 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+W0223 09:24:44.330928 32496 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+2026-02-23:09:24:44,336 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+W0223 09:24:44.336212 32496 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+2026-02-23:09:24:44,369 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:24:44.369267 32496 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+2026-02-23:09:24:48,809 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:24:48.809722 32496 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
 Running 5-shot GEN eval on 12 tasks...
-2026-02-23:09:
-I0223 09:
-2026-02-23:09:
-I0223 09:
-
-
+2026-02-23:09:24:52,382 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+I0223 09:24:52.382735 32496 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2026-02-23:09:24:52,382 INFO [evaluator.py:214] Using pre-initialized model
+I0223 09:24:52.382833 32496 evaluator.py:214] Using pre-initialized model
+
+
+
+
 Traceback (most recent call last):
-  File "/workspace/eval_5shot_gen.py", line
+  File "/workspace/eval_5shot_gen.py", line 51, in <module>
     results = evaluator.simple_evaluate(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
@@ -25,10 +31,29 @@ Traceback (most recent call last):
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
     collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
-  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line
-
-
-  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line
-
-
-
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 310, in _load_individual_task_or_group
+    return _load_task(task_config, task=name_or_config)
+           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task
+    task_object = ConfigurableTask(config=config)
+                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__
+    self.download(self.config.dataset_kwargs)
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download
+    self.dataset = datasets.load_dataset(
+                   ^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset
+    builder_instance = load_dataset_builder(
+                       ^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder
+    builder_instance: DatasetBuilder = builder_cls(
+                                       ^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__
+    self.config, self.config_id = self._create_builder_config(
+                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config
+    raise ValueError(
+ValueError: Config name is missing.
+Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn']
+Example of usage:
+`load_dataset('facebook/belebele', 'acm_Arab')`
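Both eval scripts fail the same way: `datasets` refuses to resolve `facebook/belebele` because no dataset config name was supplied. A minimal self-contained sketch of that behavior (the `resolve_config` helper is hypothetical, mimicking the check, not the real `datasets` internals):

```python
def resolve_config(name, available):
    """Mimic datasets' multi-config resolution: an explicit
    config name is required when a dataset has many configs.
    (Hypothetical helper, not the real datasets API.)"""
    if name is None:
        raise ValueError(
            "Config name is missing.\n"
            "Please pick one among the available configs: "
            + str(sorted(available))
        )
    if name not in available:
        raise ValueError(f"Unknown config {name!r}")
    return name


available = ["acm_Arab", "pol_Latn", "eng_Latn"]

# What the failing eval effectively did: no config name given.
try:
    resolve_config(None, available)
except ValueError as err:
    print(str(err).splitlines()[0])  # prints "Config name is missing."

# The fix suggested by the error message: name one config explicitly.
print(resolve_config("acm_Arab", available))  # prints "acm_Arab"
```

The same pattern applies to the real call: `load_dataset('facebook/belebele')` raises, while `load_dataset('facebook/belebele', 'acm_Arab')` selects a concrete language split.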
variant-d/eval_5shot_mc.log  CHANGED

@@ -1,17 +1,45 @@
 Loading model for 5-shot MC eval...
-2026-02-23:09:
-W0223 09:
-2026-02-23:09:
-W0223 09:
+2026-02-23:09:24:12,524 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+W0223 09:24:12.524312 32235 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+2026-02-23:09:24:12,529 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+W0223 09:24:12.529462 32235 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+2026-02-23:09:24:12,562 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:24:12.562284 32235 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+2026-02-23:09:24:16,999 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:24:16.999919 32235 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
 Running 5-shot MC eval on 10 tasks...
-2026-02-23:09:
-I0223 09:
-2026-02-23:09:
-I0223 09:
-2026-02-23:09:
-
+2026-02-23:09:24:20,565 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+I0223 09:24:20.565726 32235 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2026-02-23:09:24:20,565 INFO [evaluator.py:214] Using pre-initialized model
+I0223 09:24:20.565819 32235 evaluator.py:214] Using pre-initialized model
+2026-02-23:09:24:20,567 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+W0223 09:24:20.567207 32235 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+
+
+
+
+
+
+
+2026-02-23:09:24:24,760 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+W0223 09:24:24.760432 32235 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
 Traceback (most recent call last):
-  File "/workspace/eval_5shot_mc.py", line
+  File "/workspace/eval_5shot_mc.py", line 47, in <module>
     results = evaluator.simple_evaluate(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
@@ -25,10 +53,29 @@ Traceback (most recent call last):
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
     collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
-  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line
-
-
-  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line
-
-
-
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 310, in _load_individual_task_or_group
+    return _load_task(task_config, task=name_or_config)
+           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task
+    task_object = ConfigurableTask(config=config)
+                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__
+    self.download(self.config.dataset_kwargs)
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download
+    self.dataset = datasets.load_dataset(
+                   ^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset
+    builder_instance = load_dataset_builder(
+                       ^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder
+    builder_instance: DatasetBuilder = builder_cls(
+                                       ^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__
+    self.config, self.config_id = self._create_builder_config(
+                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config
+    raise ValueError(
+ValueError: Config name is missing.
+Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn']
+Example of usage:
+`load_dataset('facebook/belebele', 'acm_Arab')`
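In lm-eval-harness terms, the `ValueError` above means the belebele task config points at `facebook/belebele` without naming a dataset config. A hypothetical sketch of the relevant TaskConfig fields (the task name and chosen config are illustrative, not the repo's actual YAML):

```yaml
# Illustrative lm-eval-harness task YAML fragment (hypothetical values).
task: belebele_pol_Latn
dataset_path: facebook/belebele
dataset_name: pol_Latn    # the missing config name; without it,
                          # datasets raises "Config name is missing."
output_type: multiple_choice
```

Alternatively, selecting the harness's per-language belebele task names (rather than a bare group) would supply the config implicitly.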
variant-d/full_log.txt  CHANGED

@@ -3837,3 +3837,14 @@ All results uploaded
 === 2026-02-23 09:17:22 UTC === STEP 3 EXIT CODE: 0 ===
 === 2026-02-23 09:17:22 UTC === STEP 4: UPLOAD LOGS ===
 Repo Jakubrd4/bielik-q2-sharp ready
+Upload complete!
+=== 2026-02-23 09:17:25 UTC === ALL DONE ===
+=== 2026-02-23 09:22:00 UTC === MASTER SCRIPT START ===
+=== 2026-02-23 09:22:00 UTC === STEP 1: GENERATION TEST ===
+=== 2026-02-23 09:24:03 UTC === STEP 1 EXIT CODE: 0 ===
+=== 2026-02-23 09:24:03 UTC === STEP 2: EVAL 5-shot MC ===
+=== 2026-02-23 09:24:35 UTC === STEP 2 EXIT CODE: 0 ===
+=== 2026-02-23 09:24:35 UTC === STEP 3: EVAL 5-shot GEN ===
+=== 2026-02-23 09:25:06 UTC === STEP 3 EXIT CODE: 0 ===
+=== 2026-02-23 09:25:06 UTC === STEP 4: UPLOAD LOGS ===
+Repo Jakubrd4/bielik-q2-sharp ready
variant-d/gen_test.log  CHANGED

@@ -1,45 +1,44 @@
-I0223 09:
-I0223 09:
-I0223 09:
-I0223 09:
+I0223 09:22:01.549311 31825 utils.py:148] Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
+I0223 09:22:01.549440 31825 utils.py:151] Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
+I0223 09:22:01.549479 31825 utils.py:164] NumExpr defaulting to 16 threads.
+I0223 09:22:01.805066 31825 config.py:58] PyTorch version 2.10.0 available.
 Loading quantized model...
 
 ============================================================
 GENERATION TEST RESULTS
 ============================================================
-W0223 09:
-W0223 09:14
+W0223 09:22:08.218047 31825 logging.py:328] We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)
+W0223 09:22:14.646492 31825 logging.py:328] Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)
 
 --- Prompt 1: Stolica Polski to ---
-Response: Stolica Polski
-
+Response: Stolica Polski to턴aszańska kawiarnia, restauracja i sala koncertowa, która od lat cieszy się uznaniem wśród miłośników muzyki i kultury.
+Na ścianach wiszą obrazy znanych artystów, a wnętrze zdobią stylowe meble i piękne lampy. W menu znajdziemy przede wszystkim dania kuchni polskiej, ale też kilka międzynarodowych pozycji.
 
-
+S
 
 
 --- Prompt 2: Opisz krotko historie Krakowa ---
-Response: Opisz krotko historie Krakowa
-
-
-rozwój miasta w okresie panowania Kazimierza Wielkiego,
-rozwój handlu i rzemiosła w okresie panowania Jagiellon
+Response: Opisz krotko historie Krakowa.oś czasów przedchrześcijańskich
+
+Kraków jest jednym z najstarszych miast w Polsce. W okresie przedchrześcijańskim był to ośrodek kulturowy i polityczny. Znajdowała się tu osada ludności kultury łużyckiej, a później celtyckiej. W III wieku p.n.e. istniał tu gród ludności kultury lateńskiej. W V-VI wieku
 
 
 --- Prompt 3: Kim byl Mikolaj Kopernik? ---
 Response: Kim byl Mikolaj Kopernik?
-
+Ζe vědcí vzkřesil starověkou teorii o Zemi jako sférickém tělese a o Slunci jako středu vesmíru. W roku 1609 Galileo Galilei dokázal, že sférická Země je vlastně kulový globus.
 
-
+Země je sférická, což je v souladu s geometrií, ale je také kulový globus, což je v souladu s geofyzik
 
 
 --- Prompt 4: Wymien 3 polskie rzeki ---
-Response: Wymien 3 polskie rzeki, które
-
+Response: Wymien 3 polskie rzeki, które przepływają przez duże miasta.
+향 3 정부 정하의 광대:
 
-
+1. Wisła - Warszawa
+2. Odra - Wrocław
+3. Warta - Poznań
 
-
-2. Odra - druga co do wielkości rzeka w Polsce
+Ich przepływ przez te miasta jest istotny dla rozwoju gospodarczego i kulturowego tych regionów.
 
 ============================================================
 GENERATION TEST COMPLETE
variant-d/master_output.log  CHANGED

@@ -121,3 +121,5 @@ KeyError: 'polemo2_in_generative'
 === 2026-02-23 09:17:22 UTC === STEP 3 EXIT CODE: 0 ===
 === 2026-02-23 09:17:22 UTC === STEP 4: UPLOAD LOGS ===
 Repo Jakubrd4/bielik-q2-sharp ready
+Upload complete!
+=== 2026-02-23 09:17:25 UTC === ALL DONE ===
variant-d/master_output2.log
ADDED
|
@@ -0,0 +1,194 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
=== 2026-02-23 09:22:00 UTC === MASTER SCRIPT START ===
|
| 2 |
+
=== 2026-02-23 09:22:00 UTC === STEP 1: GENERATION TEST ===
|
| 3 |
+
I0223 09:22:01.549311 31825 utils.py:148] Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
|
| 4 |
+
I0223 09:22:01.549440 31825 utils.py:151] Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
|
| 5 |
+
I0223 09:22:01.549479 31825 utils.py:164] NumExpr defaulting to 16 threads.
|
| 6 |
+
I0223 09:22:01.805066 31825 config.py:58] PyTorch version 2.10.0 available.
|
| 7 |
+
Loading quantized model...
|
| 8 |
+
|
| 9 |
+
============================================================
|
| 10 |
+
GENERATION TEST RESULTS
|
| 11 |
+
============================================================
|
| 12 |
+
W0223 09:22:08.218047 31825 logging.py:328] We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)
|
| 13 |
+
W0223 09:22:14.646492 31825 logging.py:328] Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)
|
| 14 |
+
|
| 15 |
+
--- Prompt 1: Stolica Polski to ---
|
| 16 |
+
Response: Stolica Polski to턴aszańska kawiarnia, restauracja i sala koncertowa, która od lat cieszy się uznaniem wśród miłośników muzyki i kultury.
|
| 17 |
+
Na ścianach wiszą obrazy znanych artystów, a wnętrze zdobią stylowe meble i piękne lampy. W menu znajdziemy przede wszystkim dania kuchni polskiej, ale też kilka międzynarodowych pozycji.
|
| 18 |
+
|
| 19 |
+
S
|
| 20 |
+
|
| 21 |
+
|
| 22 |
+
--- Prompt 2: Opisz krotko historie Krakowa ---
|
| 23 |
+
Response: Opisz krotko historie Krakowa.oś czasów przedchrześcijańskich
|
| 24 |
+
|
| 25 |
+
Kraków jest jednym z najstarszych miast w Polsce. W okresie przedchrześcijańskim był to ośrodek kulturowy i polityczny. Znajdowała się tu osada ludności kultury łużyckiej, a później celtyckiej. W III wieku p.n.e. istniał tu gród ludności kultury lateńskiej. W V-VI wieku
|
| 26 |
+
|
| 27 |
+
|
| 28 |
+
--- Prompt 3: Kim byl Mikolaj Kopernik? ---
|
| 29 |
+
Response: Kim byl Mikolaj Kopernik?
|
| 30 |
+
Ζe vědcí vzkřesil starověkou teorii o Zemi jako sférickém tělese a o Slunci jako středu vesmíru. W roku 1609 Galileo Galilei dokázal, že sférická Země je vlastně kulový globus.
|
| 31 |
+
|
| 32 |
+
Země je sférická, což je v souladu s geometrií, ale je také kulový globus, což je v souladu s geofyzik
|
| 33 |
+
|
| 34 |
+
|
| 35 |
+
--- Prompt 4: Wymien 3 polskie rzeki ---
|
| 36 |
+
Response: Wymien 3 polskie rzeki, które przepływają przez duże miasta.
|
| 37 |
+
향 3 정부 정하의 광대:
|
| 38 |
+
|
| 39 |
+
1. Wisła - Warszawa
|
| 40 |
+
2. Odra - Wrocław
|
| 41 |
+
3. Warta - Poznań
|
| 42 |
+
|
| 43 |
+
Ich przepływ przez te miasta jest istotny dla rozwoju gospodarczego i kulturowego tych regionów.
|
| 44 |
+
|
| 45 |
+
============================================================
|
| 46 |
+
GENERATION TEST COMPLETE
|
| 47 |
+
============================================================
|
| 48 |
+
=== 2026-02-23 09:24:03 UTC === STEP 1 EXIT CODE: 0 ===
|
| 49 |
+
=== 2026-02-23 09:24:03 UTC === STEP 2: EVAL 5-shot MC ===
|
| 50 |
+
Loading model for 5-shot MC eval...
|
| 51 |
+
2026-02-23:09:24:12,524 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
W0223 09:24:12.524312 32235 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
2026-02-23:09:24:12,529 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
W0223 09:24:12.529462 32235 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
2026-02-23:09:24:12,562 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
I0223 09:24:12.562284 32235 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
2026-02-23:09:24:16,999 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
I0223 09:24:16.999919 32235 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
Running 5-shot MC eval on 10 tasks...
2026-02-23:09:24:20,565 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
I0223 09:24:20.565726 32235 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
2026-02-23:09:24:20,565 INFO [evaluator.py:214] Using pre-initialized model
I0223 09:24:20.565819 32235 evaluator.py:214] Using pre-initialized model
2026-02-23:09:24:20,567 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
W0223 09:24:20.567207 32235 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
2026-02-23:09:24:24,760 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
W0223 09:24:24.760432 32235 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
Traceback (most recent call last):
  File "/workspace/eval_5shot_mc.py", line 47, in <module>
    results = evaluator.simple_evaluate(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate
    task_dict = get_task_dict(tasks, task_manager)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict
    task_name_from_string_dict = task_manager.load_task_or_group(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
    collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 310, in _load_individual_task_or_group
    return _load_task(task_config, task=name_or_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task
    task_object = ConfigurableTask(config=config)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__
    self.download(self.config.dataset_kwargs)
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download
    self.dataset = datasets.load_dataset(
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset
    builder_instance = load_dataset_builder(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
                                       ^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__
    self.config, self.config_id = self._create_builder_config(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config
    raise ValueError(
ValueError: Config name is missing.
Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn']
Example of usage:
`load_dataset('facebook/belebele', 'acm_Arab')`
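The traceback above shows the Belebele tasks calling `datasets.load_dataset` without a config name, so both MC and GEN eval steps fail before any scoring. A minimal sketch of the missing piece, assuming a custom lm_eval task config where the `dataset_path`/`dataset_name` keys follow lm_eval's standard task-config schema (the task name and the choice of the Polish split `pol_Latn` are illustrative assumptions based on the config list printed above):

```python
# Sketch: the failing task config lacked a dataset config name, so
# load_dataset("facebook/belebele") raised "Config name is missing."
# Adding a `dataset_name` key supplies the second positional argument.
task_config = {
    "task": "belebele_pol_Latn",         # hypothetical task name
    "dataset_path": "facebook/belebele",
    "dataset_name": "pol_Latn",          # <- the key that was missing
    "test_split": "test",
}

# The fixed call would effectively be load_dataset(path, name):
args = (task_config["dataset_path"], task_config["dataset_name"])
print(args)
```

This mirrors the `load_dataset('facebook/belebele', 'acm_Arab')` usage hint in the error message; the actual fix belongs in whichever task YAML the eval scripts load.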
=== 2026-02-23 09:24:35 UTC === STEP 2 EXIT CODE: 0 ===
=== 2026-02-23 09:24:35 UTC === STEP 3: EVAL 5-shot GEN ===
Loading model for 5-shot GEN eval...
2026-02-23:09:24:44,330 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
W0223 09:24:44.330928 32496 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
2026-02-23:09:24:44,336 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
W0223 09:24:44.336212 32496 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
2026-02-23:09:24:44,369 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
I0223 09:24:44.369267 32496 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
2026-02-23:09:24:48,809 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
I0223 09:24:48.809722 32496 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
Running 5-shot GEN eval on 12 tasks...
2026-02-23:09:24:52,382 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
I0223 09:24:52.382735 32496 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
2026-02-23:09:24:52,382 INFO [evaluator.py:214] Using pre-initialized model
I0223 09:24:52.382833 32496 evaluator.py:214] Using pre-initialized model
Traceback (most recent call last):
  File "/workspace/eval_5shot_gen.py", line 51, in <module>
    results = evaluator.simple_evaluate(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate
    task_dict = get_task_dict(tasks, task_manager)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict
    task_name_from_string_dict = task_manager.load_task_or_group(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
    collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 310, in _load_individual_task_or_group
    return _load_task(task_config, task=name_or_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task
    task_object = ConfigurableTask(config=config)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__
    self.download(self.config.dataset_kwargs)
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download
    self.dataset = datasets.load_dataset(
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset
    builder_instance = load_dataset_builder(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
                                       ^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__
    self.config, self.config_id = self._create_builder_config(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config
    raise ValueError(
ValueError: Config name is missing.
Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn']
Example of usage:
`load_dataset('facebook/belebele', 'acm_Arab')`
=== 2026-02-23 09:25:06 UTC === STEP 3 EXIT CODE: 0 ===
=== 2026-02-23 09:25:06 UTC === STEP 4: UPLOAD LOGS ===
Repo Jakubrd4/bielik-q2-sharp ready