Upload variant-d eval logs (gen test + 5-shot MC + 5-shot GEN)

Browse files
- variant-d/eval_5shot_gen.log +33 -11
- variant-d/eval_5shot_mc.log +33 -11
- variant-d/full_log.txt +11 -0
- variant-d/gen_test.log +46 -0
- variant-d/master_output.log +123 -0
variant-d/eval_5shot_gen.log
CHANGED
@@ -1,12 +1,34 @@
-
-2026-02-23:09:
-
-
+Loading model for 5-shot GEN eval...
+2026-02-23:09:17:17,047 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+W0223 09:17:17.047090 31003 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+2026-02-23:09:17:17,052 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+W0223 09:17:17.052284 31003 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+Running 5-shot GEN eval on 12 tasks...
+2026-02-23:09:17:17,053 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+I0223 09:17:17.053538 31003 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2026-02-23:09:17:17,053 INFO [evaluator.py:214] Using pre-initialized model
+I0223 09:17:17.053598 31003 evaluator.py:214] Using pre-initialized model
+2026-02-23:09:17:17,086 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:17:17.086985 31003 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
 Traceback (most recent call last):
-File "
-
-
-
-
-
-
+  File "/workspace/eval_5shot_gen.py", line 44, in <module>
+    results = evaluator.simple_evaluate(
+              ^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
+    return fn(*args, **kwargs)
+           ^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate
+    task_dict = get_task_dict(tasks, task_manager)
+                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict
+    task_name_from_string_dict = task_manager.load_task_or_group(
+                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
+    collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 312, in _load_individual_task_or_group
+    subtask_list = self._get_tasklist(name_or_config)
+                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 232, in _get_tasklist
+    return self.task_index[name]["task"]
+           ~~~~~~~~~~~~~~~^^^^^^
+KeyError: 'polemo2_in_generative'
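Both eval logs end the same way: the harness resolves each requested task name through `task_index[name]`, finds no entry for the polemo2 task, and aborts the whole run with a KeyError before any scoring happens (most likely the installed lm_eval build simply does not register these Polish tasks). A minimal defensive sketch, with a stand-in registry dict mimicking the shape of lm_eval's task index; `some_registered_task` is purely illustrative, not a real task name:

```python
# Stand-in for lm_eval's task registry; only the dict shape matches.
task_index = {"some_registered_task": {"task": "some_registered_task"}}

requested = ["some_registered_task", "polemo2_in_generative"]

# Filter before evaluating so one unregistered name surfaces up front
# instead of killing the run mid-flight with a KeyError.
missing = [name for name in requested if name not in task_index]
runnable = [name for name in requested if name in task_index]

print("missing:", missing)    # missing: ['polemo2_in_generative']
print("runnable:", runnable)  # runnable: ['some_registered_task']
```

The same check against the real registry would also let the script evaluate the runnable subset instead of failing on all 10-12 tasks at once.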
variant-d/eval_5shot_mc.log
CHANGED
@@ -1,12 +1,34 @@
-
-2026-02-23:09:
-
-
+Loading model for 5-shot MC eval...
+2026-02-23:09:17:02,835 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+W0223 09:17:02.835658 30784 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+2026-02-23:09:17:02,840 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+W0223 09:17:02.840851 30784 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+Running 5-shot MC eval on 10 tasks...
+2026-02-23:09:17:02,842 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+I0223 09:17:02.842089 30784 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2026-02-23:09:17:02,842 INFO [evaluator.py:214] Using pre-initialized model
+I0223 09:17:02.842154 30784 evaluator.py:214] Using pre-initialized model
+2026-02-23:09:17:02,874 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:17:02.874746 30784 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
 Traceback (most recent call last):
-File "
-
-
-
-
-
-
+  File "/workspace/eval_5shot_mc.py", line 42, in <module>
+    results = evaluator.simple_evaluate(
+              ^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
+    return fn(*args, **kwargs)
+           ^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate
+    task_dict = get_task_dict(tasks, task_manager)
+                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict
+    task_name_from_string_dict = task_manager.load_task_or_group(
+                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
+    collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 312, in _load_individual_task_or_group
+    subtask_list = self._get_tasklist(name_or_config)
+                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 232, in _get_tasklist
+    return self.task_index[name]["task"]
+           ~~~~~~~~~~~~~~~^^^^^^
+KeyError: 'polemo2_in_multiple_choice'
variant-d/full_log.txt
CHANGED
@@ -3826,3 +3826,14 @@ Logs uploaded
 === 2026-02-23T09:11:36 UTC === WATCHDOG: EVAL RUN 2/2: 5-shot GEN ===
 === 2026-02-23T09:11:44 UTC === WATCHDOG: EVAL RUN 2/2 DONE ===
 === 2026-02-23T09:11:44 UTC === WATCHDOG: Final upload of all results ===
+All results uploaded
+=== 2026-02-23T09:11:47 UTC === WATCHDOG: ALL TASKS COMPLETE - WAITING FOR USER (machine NOT stopped) ===
+=== 2026-02-23 09:14:37 UTC === MASTER SCRIPT START ===
+=== 2026-02-23 09:14:37 UTC === STEP 1: GENERATION TEST ===
+=== 2026-02-23 09:16:53 UTC === STEP 1 EXIT CODE: 0 ===
+=== 2026-02-23 09:16:53 UTC === STEP 2: EVAL 5-shot MC ===
+=== 2026-02-23 09:17:08 UTC === STEP 2 EXIT CODE: 0 ===
+=== 2026-02-23 09:17:08 UTC === STEP 3: EVAL 5-shot GEN ===
+=== 2026-02-23 09:17:22 UTC === STEP 3 EXIT CODE: 0 ===
+=== 2026-02-23 09:17:22 UTC === STEP 4: UPLOAD LOGS ===
+Repo Jakubrd4/bielik-q2-sharp ready
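Note that STEP 2 and STEP 3 each report EXIT CODE: 0 even though both eval scripts died with a KeyError (see master_output.log below). The master script itself is not in this commit, so the cause is an assumption, but one common way this happens in bash is logging each step through a pipeline such as `python eval_5shot_mc.py 2>&1 | tee step.log`: the pipeline's exit status is that of its last command (`tee`), which masks the Python failure. A sketch of that behavior:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction, not the actual master script.
# A bash pipeline reports the status of its LAST command, so a crashed
# producer piped into a successful consumer still yields status 0.
false | cat
echo "without pipefail: $?"   # prints 0: cat succeeded, failure masked

set -o pipefail               # propagate the first failing status instead
false | cat
echo "with pipefail: $?"      # prints 1
```

If the master script does log this way, `set -o pipefail` (or checking `PIPESTATUS[0]`) would make the STEP lines report the real failure.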
variant-d/gen_test.log
ADDED
@@ -0,0 +1,46 @@
+I0223 09:14:39.256926 30373 utils.py:148] Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
+I0223 09:14:39.257061 30373 utils.py:151] Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
+I0223 09:14:39.257103 30373 utils.py:164] NumExpr defaulting to 16 threads.
+I0223 09:14:39.517254 30373 config.py:58] PyTorch version 2.10.0 available.
+Loading quantized model...
+
+============================================================
+GENERATION TEST RESULTS
+============================================================
+W0223 09:14:45.684412 30373 logging.py:328] We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)
+W0223 09:14:52.370406 30373 logging.py:328] Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)
+
+--- Prompt 1: Stolica Polski to ---
+Response: Stolica Polski toķe 2010 r. posiadała tylko 33% udziałów w kapitale spółki, podczas gdy obecnie ma ich 55%. Wartość akcji spółki wzrosła ze 137 mln zł do 610 mln zł w 2009 r. i 796 mln zł w 2010 r.
+ódło: raport zarządu spółki.
+
+W 2010 r. spółka
+
+
+--- Prompt 2: Opisz krotko historie Krakowa ---
+Response: Opisz krotko historie Krakowa.. W czasie swojego istnienia Kraków przeżył wiele wydarzeń i zmian. Najważniejsze z nich można streścić w następujący sposób:
+opponenci i zwolennicy lokacji miasta na Wawelu,
+powstanie Akademii Krakowskiej,
+rozwój miasta w okresie panowania Kazimierza Wielkiego,
+rozwój handlu i rzemiosła w okresie panowania Jagiellon
+
+
+--- Prompt 3: Kim byl Mikolaj Kopernik? ---
+Response: Kim byl Mikolaj Kopernik?
+typen: vědecký pracovník
+
+Mikolaj Kopernik (asi 1474–1543) byl polský astronom, matematik a lékař. Je považován za jednoho z nejvýznamnějších vědců renesance. Jeho teorie heliocentrické soustavy, vytvořená v polovině 16. století, odrážela revoluční převrat v astronomii a stala se
+
+
+--- Prompt 4: Wymien 3 polskie rzeki ---
+Response: Wymien 3 polskie rzeki, które płyną do Morza Bałtyckiego.
+ibrary. Wymień 3 polskie rzeki, które płyną do Morza Bałtyckiego.
+
+Odpowiedź:
+
+1. Wisła - największa rzeka w Polsce, płynąca przez wiele miast, w tym Warszawę i Gdańsk, zanim wpada do Morza Bałtyckiego.
+2. Odra - druga co do wielkości rzeka w Polsce
+
+============================================================
+GENERATION TEST COMPLETE
+============================================================
variant-d/master_output.log
ADDED
@@ -0,0 +1,123 @@
+=== 2026-02-23 09:14:37 UTC === MASTER SCRIPT START ===
+=== 2026-02-23 09:14:37 UTC === STEP 1: GENERATION TEST ===
+I0223 09:14:39.256926 30373 utils.py:148] Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
+I0223 09:14:39.257061 30373 utils.py:151] Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
+I0223 09:14:39.257103 30373 utils.py:164] NumExpr defaulting to 16 threads.
+I0223 09:14:39.517254 30373 config.py:58] PyTorch version 2.10.0 available.
+Loading quantized model...
+
+============================================================
+GENERATION TEST RESULTS
+============================================================
+W0223 09:14:45.684412 30373 logging.py:328] We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class (https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)
+W0223 09:14:52.370406 30373 logging.py:328] Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)
+
+--- Prompt 1: Stolica Polski to ---
+Response: Stolica Polski toķe 2010 r. posiadała tylko 33% udziałów w kapitale spółki, podczas gdy obecnie ma ich 55%. Wartość akcji spółki wzrosła ze 137 mln zł do 610 mln zł w 2009 r. i 796 mln zł w 2010 r.
+ódło: raport zarządu spółki.
+
+W 2010 r. spółka
+
+
+--- Prompt 2: Opisz krotko historie Krakowa ---
+Response: Opisz krotko historie Krakowa.. W czasie swojego istnienia Kraków przeżył wiele wydarzeń i zmian. Najważniejsze z nich można streścić w następujący sposób:
+opponenci i zwolennicy lokacji miasta na Wawelu,
+powstanie Akademii Krakowskiej,
+rozwój miasta w okresie panowania Kazimierza Wielkiego,
+rozwój handlu i rzemiosła w okresie panowania Jagiellon
+
+
+--- Prompt 3: Kim byl Mikolaj Kopernik? ---
+Response: Kim byl Mikolaj Kopernik?
+typen: vědecký pracovník
+
+Mikolaj Kopernik (asi 1474–1543) byl polský astronom, matematik a lékař. Je považován za jednoho z nejvýznamnějších vědců renesance. Jeho teorie heliocentrické soustavy, vytvořená v polovině 16. století, odrážela revoluční převrat v astronomii a stala se
+
+
+--- Prompt 4: Wymien 3 polskie rzeki ---
+Response: Wymien 3 polskie rzeki, które płyną do Morza Bałtyckiego.
+ibrary. Wymień 3 polskie rzeki, które płyną do Morza Bałtyckiego.
+
+Odpowiedź:
+
+1. Wisła - największa rzeka w Polsce, płynąca przez wiele miast, w tym Warszawę i Gdańsk, zanim wpada do Morza Bałtyckiego.
+2. Odra - druga co do wielkości rzeka w Polsce
+
+============================================================
+GENERATION TEST COMPLETE
+============================================================
+=== 2026-02-23 09:16:53 UTC === STEP 1 EXIT CODE: 0 ===
+=== 2026-02-23 09:16:53 UTC === STEP 2: EVAL 5-shot MC ===
+Loading model for 5-shot MC eval...
+2026-02-23:09:17:02,835 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+W0223 09:17:02.835658 30784 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+2026-02-23:09:17:02,840 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+W0223 09:17:02.840851 30784 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+Running 5-shot MC eval on 10 tasks...
+2026-02-23:09:17:02,842 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+I0223 09:17:02.842089 30784 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2026-02-23:09:17:02,842 INFO [evaluator.py:214] Using pre-initialized model
+I0223 09:17:02.842154 30784 evaluator.py:214] Using pre-initialized model
+2026-02-23:09:17:02,874 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:17:02.874746 30784 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+Traceback (most recent call last):
+  File "/workspace/eval_5shot_mc.py", line 42, in <module>
+    results = evaluator.simple_evaluate(
+              ^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
+    return fn(*args, **kwargs)
+           ^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate
+    task_dict = get_task_dict(tasks, task_manager)
+                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict
+    task_name_from_string_dict = task_manager.load_task_or_group(
+                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
+    collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 312, in _load_individual_task_or_group
+    subtask_list = self._get_tasklist(name_or_config)
+                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 232, in _get_tasklist
+    return self.task_index[name]["task"]
+           ~~~~~~~~~~~~~~~^^^^^^
+KeyError: 'polemo2_in_multiple_choice'
+=== 2026-02-23 09:17:08 UTC === STEP 2 EXIT CODE: 0 ===
+=== 2026-02-23 09:17:08 UTC === STEP 3: EVAL 5-shot GEN ===
+Loading model for 5-shot GEN eval...
+2026-02-23:09:17:17,047 WARNING [huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+W0223 09:17:17.047090 31003 huggingface.py:96] `pretrained` model kwarg is not of type `str`. Many other model arguments may be ignored. Please do not launch via accelerate or use `parallelize=True` if passing an existing model this way.
+2026-02-23:09:17:17,052 WARNING [huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+W0223 09:17:17.052284 31003 huggingface.py:276] Passed an already-initialized model through `pretrained`, assuming single-process call to evaluate() or custom distributed integration
+Running 5-shot GEN eval on 12 tasks...
+2026-02-23:09:17:17,053 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+I0223 09:17:17.053538 31003 evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
+2026-02-23:09:17:17,053 INFO [evaluator.py:214] Using pre-initialized model
+I0223 09:17:17.053598 31003 evaluator.py:214] Using pre-initialized model
+2026-02-23:09:17:17,086 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+I0223 09:17:17.086985 31003 __init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
+Traceback (most recent call last):
+  File "/workspace/eval_5shot_gen.py", line 44, in <module>
+    results = evaluator.simple_evaluate(
+              ^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/utils.py", line 397, in _wrapper
+    return fn(*args, **kwargs)
+           ^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/evaluator.py", line 232, in simple_evaluate
+    task_dict = get_task_dict(tasks, task_manager)
+                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 618, in get_task_dict
+    task_name_from_string_dict = task_manager.load_task_or_group(
+                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 410, in load_task_or_group
+    collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 312, in _load_individual_task_or_group
+    subtask_list = self._get_tasklist(name_or_config)
+                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/workspace/venv/lib/python3.12/site-packages/lm_eval/tasks/__init__.py", line 232, in _get_tasklist
+    return self.task_index[name]["task"]
+           ~~~~~~~~~~~~~~~^^^^^^
+KeyError: 'polemo2_in_generative'
+=== 2026-02-23 09:17:22 UTC === STEP 3 EXIT CODE: 0 ===
+=== 2026-02-23 09:17:22 UTC === STEP 4: UPLOAD LOGS ===
+Repo Jakubrd4/bielik-q2-sharp ready