| === 2026-02-23 09:22:00 UTC === MASTER SCRIPT START === |
| === 2026-02-23 09:22:00 UTC === STEP 1: GENERATION TEST === |
| I0223 09:22:01.549311 31825 utils.py:148] Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable. |
| I0223 09:22:01.549440 31825 utils.py:151] Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16. |
| I0223 09:22:01.549479 31825 utils.py:164] NumExpr defaulting to 16 threads. |
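The NumExpr notice above means the 16-thread safe limit kicked in because `NUMEXPR_MAX_THREADS` was unset. A minimal sketch of silencing it, assuming the master script is free to set the variable before anything imports `numexpr` (the value 64 matches the cap mentioned in the log, not a measured optimum):

```python
import os

# Must be set before the first `import numexpr` anywhere in the process,
# otherwise NumExpr has already applied its default limit of 16 threads.
os.environ["NUMEXPR_MAX_THREADS"] = "64"
```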
| I0223 09:22:01.805066 31825 config.py:58] PyTorch version 2.10.0 available. |
| Loading quantized model... |
|
|
| ============================================================ |
| GENERATION TEST RESULTS |
| ============================================================ |
| W0223 09:22:08.218047 31825 logging.py:328] We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format) |
| W0223 09:22:14.646492 31825 logging.py:328] Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32) |
|
|
| --- Prompt 1: Stolica Polski to --- |
| Response: Stolica Polski to턴aszańska kawiarnia, restauracja i sala koncertowa, która od lat cieszy się uznaniem wśród miłośników muzyki i kultury. |
| Na ścianach wiszą obrazy znanych artystów, a wnętrze zdobią stylowe meble i piękne lampy. W menu znajdziemy przede wszystkim dania kuchni polskiej, ale też kilka międzynarodowych pozycji. |
|
|
| S |
|
|
|
|
| --- Prompt 2: Opisz krotko historie Krakowa --- |
| Response: Opisz krotko historie Krakowa.oś czasów przedchrześcijańskich |
|
|
| Kraków jest jednym z najstarszych miast w Polsce. W okresie przedchrześcijańskim był to ośrodek kulturowy i polityczny. Znajdowała się tu osada ludności kultury łużyckiej, a później celtyckiej. W III wieku p.n.e. istniał tu gród ludności kultury lateńskiej. W V-VI wieku |
|
|
|
|
| --- Prompt 3: Kim byl Mikolaj Kopernik? --- |
| Response: Kim byl Mikolaj Kopernik? |
| Ζe vědcí vzkřesil starověkou teorii o Zemi jako sférickém tělese a o Slunci jako středu vesmíru. W roku 1609 Galileo Galilei dokázal, že sférická Země je vlastně kulový globus. |
|
|
| Země je sférická, což je v souladu s geometrií, ale je také kulový globus, což je v souladu s geofyzik |
|
|
|
|
| --- Prompt 4: Wymien 3 polskie rzeki --- |
| Response: Wymien 3 polskie rzeki, które przepływają przez duże miasta. |
| 향 3 정부 정하의 광대: |
|
|
| 1. Wisła - Warszawa |
| 2. Odra - Wrocław |
| 3. Warta - Poznań |
|
|
| Ich przepływ przez te miasta jest istotny dla rozwoju gospodarczego i kulturowego tych regionów. |
|
|
| ============================================================ |
| GENERATION TEST COMPLETE |
| ============================================================ |
| === 2026-02-23 09:24:03 UTC === STEP 1 EXIT CODE: 0 === |
| === 2026-02-23 09:24:03 UTC === STEP 2: EVAL 5-shot MC === |
| Loading model for 5-shot MC eval... |
| 2026-02-23:09:24:12,524 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way. |
| 2026-02-23:09:24:12,529 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration |
| 2026-02-23:09:24:12,562 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information. |
| Running 5-shot MC eval on 10 tasks... |
| 2026-02-23:09:24:20,565 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 |
| 2026-02-23:09:24:20,565 INFO [evaluator.py:214] Using pre-initialized model |
| 2026-02-23:09:24:20,567 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information. |
|
Downloading readme: 6.01kB [00:00, 25.7MB/s]
Downloading data: 100%|██████████| 4.88M/4.88M [00:00<00:00, 5.86MB/s]
Downloading data: 100%|██████████| 602k/602k [00:00<00:00, 1.91MB/s]
Downloading data: 100%|██████████| 591k/591k [00:00<00:00, 2.03MB/s]
Generating train split: 100%|██████████| 5783/5783 [00:00<00:00, 80494.00 examples/s]
Generating validation split: 100%|██████████| 723/723 [00:00<00:00, 73991.85 examples/s]
Generating test split: 100%|██████████| 722/722 [00:00<00:00, 83695.97 examples/s]
| 2026-02-23:09:24:24,760 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information. |
|
Downloading readme: 6.29kB [00:00, 21.3MB/s]
Downloading data: 100%|██████████| 4.88M/4.88M [00:00<00:00, 8.06MB/s]
Downloading data: 100%|██████████| 317k/317k [00:00<00:00, 1.13MB/s]
Downloading data: 100%|██████████| 315k/315k [00:00<00:00, 1.19MB/s]
Generating train split: 100%|██████████| 5783/5783 [00:00<00:00, 71138.48 examples/s]
Generating validation split: 100%|██████████| 494/494 [00:00<00:00, 47588.11 examples/s]
Generating test split: 100%|██████████| 494/494 [00:00<00:00, 77988.04 examples/s]
Downloading readme: 2.32kB [00:00, 12.7MB/s]
Downloading data: 100%|██████████| 4.53M/4.53M [00:00<00:00, 13.0MB/s]
Downloading data: 100%|██████████| 563k/563k [00:00<00:00, 2.27MB/s]
Downloading data: 100%|██████████| 500k/500k [00:00<00:00, 1.35MB/s]
Generating train split: 100%|██████████| 40001/40001 [00:00<00:00, 663218.42 examples/s]
Generating validation split: 100%|██████████| 5000/5000 [00:00<00:00, 730714.98 examples/s]
Generating test split: 100%|██████████| 4372/4372 [00:00<00:00, 714856.43 examples/s]
Downloading readme: 25.5kB [00:00, 60.8MB/s]
| Traceback (most recent call last): |
| File "/workspace/eval_5shot_mc.py", line 47, in <module> |
| results = evaluator.simple_evaluate( |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper |
| return fn(*args, **kwargs) |
| ^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate |
| task_dict = get_task_dict(tasks, task_manager) |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict |
| task_name_from_string_dict = task_manager.load_task_or_group( |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group |
| collections.ChainMap(*map(self._load_individual_task_or_group, task_list)) |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 310, in _load_individual_task_or_group |
| return _load_task(task_config, task=name_or_config) |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task |
| task_object = ConfigurableTask(config=config) |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__ |
| self.download(self.config.dataset_kwargs) |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download |
| self.dataset = datasets.load_dataset( |
| ^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset |
| builder_instance = load_dataset_builder( |
| ^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder |
| builder_instance: DatasetBuilder = builder_cls( |
| ^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__ |
| self.config, self.config_id = self._create_builder_config( |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config |
| raise ValueError( |
| ValueError: Config name is missing. |
| Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn'] |
| Example of usage: |
| `load_dataset('facebook/belebele', 'acm_Arab')` |
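The traceback above means the belebele task called `datasets.load_dataset("facebook/belebele")` without a dataset config name. Since this run evaluates Polish, `pol_Latn` is presumably the intended config (an assumption, not stated in the log). A minimal sketch of selecting it from the list the error printed; `pick_config` is a hypothetical helper, and the actual fix would go in the task's dataset kwargs:

```python
# Abridged from the config list in the ValueError above.
AVAILABLE_CONFIGS = ["acm_Arab", "eng_Latn", "pol_Latn", "deu_Latn"]

def pick_config(configs, lang_prefix="pol"):
    """Return the first belebele config whose language code matches the prefix."""
    return next(c for c in configs if c.startswith(lang_prefix + "_"))

config_name = pick_config(AVAILABLE_CONFIGS)
# Then pass the config name explicitly, as the error message suggests:
# datasets.load_dataset("facebook/belebele", config_name)
```

In lm-evaluation-harness terms this likely translates to setting the dataset config (e.g. `dataset_name: pol_Latn`) in the belebele task YAML, though the exact key depends on the harness version in use.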
| === 2026-02-23 09:24:35 UTC === STEP 2 EXIT CODE: 0 === |
| === 2026-02-23 09:24:35 UTC === STEP 3: EVAL 5-shot GEN === |
| Loading model for 5-shot GEN eval... |
| 2026-02-23:09:24:44,330 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way. |
| 2026-02-23:09:24:44,336 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration |
| 2026-02-23:09:24:44,369 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information. |
| Running 5-shot GEN eval on 12 tasks... |
| 2026-02-23:09:24:52,382 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 |
| 2026-02-23:09:24:52,382 INFO [evaluator.py:214] Using pre-initialized model |
|
Downloading builder script: 6.79kB [00:00, 17.7MB/s]
Downloading builder script: 4.20kB [00:00, 10.5MB/s]
Downloading readme: 2.32kB [00:00, 12.5MB/s]
Downloading readme: 25.5kB [00:00, 51.7MB/s]
| Traceback (most recent call last): |
| File "/workspace/eval_5shot_gen.py", line 51, in <module> |
| results = evaluator.simple_evaluate( |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper |
| return fn(*args, **kwargs) |
| ^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate |
| task_dict = get_task_dict(tasks, task_manager) |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict |
| task_name_from_string_dict = task_manager.load_task_or_group( |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group |
| collections.ChainMap(*map(self._load_individual_task_or_group, task_list)) |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 310, in _load_individual_task_or_group |
| return _load_task(task_config, task=name_or_config) |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 276, in _load_task |
| task_object = ConfigurableTask(config=config) |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 822, in __init__ |
| self.download(self.config.dataset_kwargs) |
| File "/workspace/venv/lib/python3.12/site-packages/lm_eval/api/task.py", line 929, in download |
| self.dataset = datasets.load_dataset( |
| ^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2594, in load_dataset |
| builder_instance = load_dataset_builder( |
| ^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/load.py", line 2303, in load_dataset_builder |
| builder_instance: DatasetBuilder = builder_cls( |
| ^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 374, in __init__ |
| self.config, self.config_id = self._create_builder_config( |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| File "/workspace/venv/lib/python3.12/site-packages/datasets/builder.py", line 586, in _create_builder_config |
| raise ValueError( |
| ValueError: Config name is missing. |
| Please pick one among the available configs: ['acm_Arab', 'arz_Arab', 'ceb_Latn', 'fin_Latn', 'hin_Deva', 'ita_Latn', 'khm_Khmr', 'lvs_Latn', 'npi_Deva', 'pol_Latn', 'slv_Latn', 'swe_Latn', 'tso_Latn', 'xho_Latn', 'afr_Latn', 'asm_Beng', 'ces_Latn', 'fra_Latn', 'hin_Latn', 'jav_Latn', 'kin_Latn', 'mal_Mlym', 'npi_Latn', 'por_Latn', 'sna_Latn', 'swh_Latn', 'tur_Latn', 'yor_Latn', 'als_Latn', 'azj_Latn', 'ckb_Arab', 'fuv_Latn', 'hrv_Latn', 'jpn_Jpan', 'kir_Cyrl', 'mar_Deva', 'nso_Latn', 'snd_Arab', 'tam_Taml', 'ukr_Cyrl', 'zho_Hans', 'amh_Ethi', 'bam_Latn', 'dan_Latn', 'gaz_Latn', 'hun_Latn', 'kac_Latn', 'kor_Hang', 'mkd_Cyrl', 'nya_Latn', 'ron_Latn', 'som_Latn', 'tel_Telu', 'urd_Arab', 'zho_Hant', 'apc_Arab', 'ben_Beng', 'deu_Latn', 'grn_Latn', 'hye_Armn', 'kan_Knda', 'lao_Laoo', 'mlt_Latn', 'ory_Orya', 'rus_Cyrl', 'sot_Latn', 'tgk_Cyrl', 'urd_Latn', 'zsm_Latn', 'arb_Arab', 'ben_Latn', 'ell_Grek', 'guj_Gujr', 'ibo_Latn', 'kat_Geor', 'lin_Latn', 'mri_Latn', 'pan_Guru', 'shn_Mymr', 'spa_Latn', 'tgl_Latn', 'uzn_Latn', 'zul_Latn', 'arb_Latn', 'bod_Tibt', 'eng_Latn', 'hat_Latn', 'ilo_Latn', 'kaz_Cyrl', 'lit_Latn', 'mya_Mymr', 'pbt_Arab', 'sin_Latn', 'srp_Cyrl', 'tha_Thai', 'vie_Latn', 'ars_Arab', 'bul_Cyrl', 'est_Latn', 'hau_Latn', 'ind_Latn', 'kea_Latn', 'lug_Latn', 'nld_Latn', 'pes_Arab', 'sin_Sinh', 'ssw_Latn', 'tir_Ethi', 'war_Latn', 'ary_Arab', 'cat_Latn', 'eus_Latn', 'heb_Hebr', 'isl_Latn', 'khk_Cyrl', 'luo_Latn', 'nob_Latn', 'plt_Latn', 'slk_Latn', 'sun_Latn', 'tsn_Latn', 'wol_Latn'] |
| Example of usage: |
| `load_dataset('facebook/belebele', 'acm_Arab')` |
| === 2026-02-23 09:25:06 UTC === STEP 3 EXIT CODE: 0 === |
| === 2026-02-23 09:25:06 UTC === STEP 4: UPLOAD LOGS === |
| Repo Jakubrd4/bielik-q2-sharp ready |
| Upload complete! |
| === 2026-02-23 09:25:09 UTC === ALL DONE === |
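Note that steps 2 and 3 both ended in an uncaught `ValueError`, yet the master script logged `EXIT CODE: 0` for each. That pattern suggests the real return code is being lost, for example a shell pipeline through `tee` reporting the last command's status instead of the Python script's. A sketch of propagating the true exit code via `subprocess` (the failing one-liner here stands in for the hypothetical step script):

```python
import subprocess
import sys

def run_step(cmd):
    """Run one pipeline step and surface its actual exit code."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"step failed with exit code {result.returncode}")
    return result.returncode

# Stand-in for a step script that dies with a nonzero status.
rc = run_step([sys.executable, "-c", "raise SystemExit(3)"])
```

If the steps must stay as shell pipelines, `set -o pipefail` in the master script would achieve the same effect.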
|
|