Jakubrd4 committed
Commit de020da · verified · 1 Parent(s): adaa2b1

Upload variant-d eval logs (gen test + 5-shot MC + 5-shot GEN)
variant-d/eval_5shot_gen.log CHANGED
@@ -1,21 +1,19 @@
  Loading model for 5-shot GEN eval...
- 2026-02-23:09:24:44,330 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
- W0223 09:24:44.330928 32496 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
- 2026-02-23:09:24:44,336 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
- W0223 09:24:44.336212 32496 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
- 2026-02-23:09:24:44,369 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
- I0223 09:24:44.369267 32496 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
- 2026-02-23:09:24:48,809 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
- I0223 09:24:48.809722 32496 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ 2026-02-23:09:26:25,320 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+ W0223 09:26:25.320180 33098 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+ 2026-02-23:09:26:25,325 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+ W0223 09:26:25.325494 33098 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+ 2026-02-23:09:26:25,359 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ I0223 09:26:25.359172 33098 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ 2026-02-23:09:26:29,801 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ I0223 09:26:29.801751 33098 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
  Running 5-shot GEN eval on 12 tasks...
- 2026-02-23:09:24:52,382 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
- I0223 09:24:52.382735 32496 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
- 2026-02-23:09:24:52,382 INFO [evaluator.py:214] Using pre-initialized model
- I0223 09:24:52.382833 32496 evaluator.py:214] Using pre-initialized model
-
-
-
-
+ 2026-02-23:09:26:33,384 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+ I0223 09:26:33.384130 33098 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+ 2026-02-23:09:26:33,384 INFO [evaluator.py:214] Using pre-initialized model
+ I0223 09:26:33.384235 33098 evaluator.py:214] Using pre-initialized model
+ 2026-02-23:09:26:45,387 WARNING [task.py:337] [Task: polish_belebele_regex] has_training_docs and has_validation_docs are False, using test_docs as fewshot_docs but this is not recommended.
+ W0223 09:26:45.387838 33098 task.py:337] [Task: polish_belebele_regex] has_training_docs and has_validation_docs are False, using test_docs as fewshot_docs but this is not recommended.
  Traceback (most recent call last):
  File "/workspace/eval_5shot_gen.py", line 51, in <module>
  results = evaluator.simple_evaluate(
@@ -37,23 +35,19 @@ Traceback (most recent call last):
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task
  task_object = ConfigurableTask(config=config)
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__
- self.download(self.config.dataset_kwargs)
- File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download
- self.dataset = datasets.load_dataset(
- ^^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset
- builder_instance = load_dataset_builder(
- ^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder
- builder_instance: DatasetBuilder = builder_cls(
- ^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__
- self.config, self.config_id = self._create_builder_config(
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config
- raise ValueError(
- ValueError: Config name is missing.
- Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn']
- Example of usage:
- `load_dataset('facebook/belebele', 'acm_Arab')`
+ File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 850, in __init__
+ if self.fewshot_docs() is not None:
+ ^^^^^^^^^^^^^^^^^^^
+ File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 999, in fewshot_docs
+ return super().fewshot_docs()
+ ^^^^^^^^^^^^^^^^^^^^^^
+ File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 341, in fewshot_docs
+ return self.test_docs()
+ ^^^^^^^^^^^^^^^^
+ File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 973, in test_docs
+ return self.dataset[self.config.test_split]
+ ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/workspace/venv/lib/python3.12/site-packages/datasets/dataset_dict.py", line 75, in __getitem__
+ return super().__getitem__(k)
+ ^^^^^^^^^^^^^^^^^^^^^^
+ KeyError: 'pol_Latn'
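Side note on the failure pair: the first run crashed because `datasets.load_dataset("facebook/belebele")` was called without a config name (the `ValueError` above lists the per-language configs), and the re-run crashes because the language code is now used as a *split* key, while a single-config Belebele load only exposes a `test` split, hence `KeyError: 'pol_Latn'`. A minimal sketch of a task YAML that avoids both errors, assuming standard lm-evaluation-harness `TaskConfig` keys (the actual task file for `polish_belebele_regex` is not part of this commit):

```yaml
# Hypothetical task-config sketch, not the committed file.
task: polish_belebele_regex
dataset_path: facebook/belebele
dataset_name: pol_Latn   # dataset *config* name, forwarded to load_dataset(...)
test_split: test         # Belebele ships only a "test" split
```

Here `pol_Latn` selects the dataset config while `test` names the split, matching the usage example printed by `datasets` itself.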
 
 
 
 
variant-d/eval_5shot_mc.log CHANGED
@@ -1,43 +1,23 @@
  Loading model for 5-shot MC eval...
- 2026-02-23:09:24:12,524 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
- W0223 09:24:12.524312 32235 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
- 2026-02-23:09:24:12,529 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
- W0223 09:24:12.529462 32235 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
- 2026-02-23:09:24:12,562 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
- I0223 09:24:12.562284 32235 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
- 2026-02-23:09:24:16,999 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
- I0223 09:24:16.999919 32235 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ 2026-02-23:09:25:57,811 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+ W0223 09:25:57.811420 32827 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+ 2026-02-23:09:25:57,816 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+ W0223 09:25:57.816669 32827 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+ 2026-02-23:09:25:57,849 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ I0223 09:25:57.849538 32827 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ 2026-02-23:09:26:02,329 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+ I0223 09:26:02.329556 32827 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
  Running 5-shot MC eval on 10 tasks...
- 2026-02-23:09:24:20,565 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
- I0223 09:24:20.565726 32235 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
- 2026-02-23:09:24:20,565 INFO [evaluator.py:214] Using pre-initialized model
- I0223 09:24:20.565819 32235 evaluator.py:214] Using pre-initialized model
- 2026-02-23:09:24:20,567 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
- W0223 09:24:20.567207 32235 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
-
-
-
-
-
-
-
- 2026-02-23:09:24:24,760 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
- W0223 09:24:24.760432 32235 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+ 2026-02-23:09:26:05,914 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+ I0223 09:26:05.914060 32827 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+ 2026-02-23:09:26:05,914 INFO [evaluator.py:214] Using pre-initialized model
+ I0223 09:26:05.914158 32827 evaluator.py:214] Using pre-initialized model
+ 2026-02-23:09:26:05,915 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+ W0223 09:26:05.915537 32827 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+ 2026-02-23:09:26:08,086 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+ W0223 09:26:08.086309 32827 task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
+
+
  Traceback (most recent call last):
  File "/workspace/eval_5shot_mc.py", line 47, in <module>
  results = evaluator.simple_evaluate(
@@ -59,23 +39,13 @@ Traceback (most recent call last):
  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task
  task_object = ConfigurableTask(config=config)
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__
- self.download(self.config.dataset_kwargs)
- File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download
- self.dataset = datasets.load_dataset(
- ^^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset
- builder_instance = load_dataset_builder(
- ^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder
- builder_instance: DatasetBuilder = builder_cls(
- ^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__
- self.config, self.config_id = self._create_builder_config(
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config
- raise ValueError(
- ValueError: Config name is missing.
- Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn']
- Example of usage:
- `load_dataset('facebook/belebele', 'acm_Arab')`
+ File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 850, in __init__
+ if self.fewshot_docs() is not None:
+ ^^^^^^^^^^^^^^^^^^^
+ File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 979, in fewshot_docs
+ return self.dataset[self.config.fewshot_split]
+ ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/workspace/venv/lib/python3.12/site-packages/datasets/dataset_dict.py", line 75, in __getitem__
+ return super().__getitem__(k)
+ ^^^^^^^^^^^^^^^^^^^^^^
+ KeyError: 'pol_Latn'
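The MC variant trips one step earlier, in `fewshot_docs`: the traceback shows `self.dataset[self.config.fewshot_split]` raising `KeyError: 'pol_Latn'`, i.e. the few-shot split is set to a dataset *config* name rather than a split name. A hedged sketch of the relevant keys, assuming lm-evaluation-harness `TaskConfig` conventions (the task file itself is not in this commit):

```yaml
# Hypothetical sketch, not the committed file: Belebele has no train/validation
# split, so few-shot examples have to be drawn from "test" as well.
dataset_path: facebook/belebele
dataset_name: pol_Latn
test_split: test
fewshot_split: test   # a split name, not the "pol_Latn" config name
```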
 
 
 
 
 
 
 
 
 
 
variant-d/full_log.txt CHANGED
@@ -3848,3 +3848,5 @@ Upload complete!
  === 2026-02-23 09:25:06 UTC === STEP 3 EXIT CODE: 0 ===
  === 2026-02-23 09:25:06 UTC === STEP 4: UPLOAD LOGS ===
  Repo Jakubrd4/bielik-q2-sharp ready
+ Upload complete!
+ === 2026-02-23 09:25:09 UTC === ALL DONE ===
variant-d/master_output2.log CHANGED
@@ -192,3 +192,5 @@ Example of usage:
  === 2026-02-23 09:25:06 UTC === STEP 3 EXIT CODE: 0 ===
  === 2026-02-23 09:25:06 UTC === STEP 4: UPLOAD LOGS ===
  Repo Jakubrd4/bielik-q2-sharp ready
+ Upload complete!
+ === 2026-02-23 09:25:09 UTC === ALL DONE ===