/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
2026-02-23:16:26:56,052 INFO [__main__.py:279] Verbosity set to INFO
2026-02-23:16:26:56,053 INFO [__main__.py:303] Including path: /workspace/repos/speakleash-tasks/lm_eval/tasks
2026-02-23:16:26:56,080 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
2026-02-23:16:26:59,815 INFO [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
2026-02-23:16:27:01,628 INFO [__main__.py:376] Selected Tasks: ['polemo2_in_multiple_choice', 'polemo2_out_multiple_choice', 'polish_8tags_multiple_choice', 'polish_belebele_mc', 'polish_cbd_multiple_choice', 'polish_dyk_multiple_choice', 'polish_klej_ner_multiple_choice', 'polish_polqa_reranking_multiple_choice', 'polish_ppc_multiple_choice', 'polish_psc_multiple_choice']
2026-02-23:16:27:01,632 INFO [evaluator.py:161] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
2026-02-23:16:27:01,632 INFO [evaluator.py:198] Initializing hf model, with arguments: {'pretrained': '/workspace/packed_model/', 'trust_remote_code': True}
2026-02-23:16:27:01,999 INFO [huggingface.py:130] Using device 'cuda'
2026-02-23:16:27:02,261 INFO [huggingface.py:366] Model parallel was set to False, max memory was not set, and device map was set to {'': 'cuda'}
Successfully loaded VPTQ CUDA kernels.
2026-02-23:16:27:04,597 WARNING [task.py:101] A task YAML file was found to contain a `group` key. Groups which provide aggregate scores over several subtasks now require a separate config file--if not aggregating, you may want to use the `tag` config option instead within your config. Setting `group` within a TaskConfig will be deprecated in v0.4.4. Please see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md for more information.
Downloading readme: 0%| | 0.00/6.01k [00:00
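For reference, the log above is consistent with an lm-evaluation-harness invocation along the following lines. This is a hedged reconstruction built from values that actually appear in the log (model args, include path, task names, device); the exact flags and their combination are an assumption, not recovered from the log itself.

```python
# Reconstructed (hypothetical) lm_eval command line matching the logged run.
# Values are taken from the log entries; the flag layout is an assumption.
tasks = [
    "polemo2_in_multiple_choice",
    "polemo2_out_multiple_choice",
    "polish_8tags_multiple_choice",
    "polish_belebele_mc",
    "polish_cbd_multiple_choice",
    "polish_dyk_multiple_choice",
    "polish_klej_ner_multiple_choice",
    "polish_polqa_reranking_multiple_choice",
    "polish_ppc_multiple_choice",
    "polish_psc_multiple_choice",
]

cmd = [
    "lm_eval",
    "--model", "hf",
    "--model_args", "pretrained=/workspace/packed_model/,trust_remote_code=True",
    "--tasks", ",".join(tasks),
    "--include_path", "/workspace/repos/speakleash-tasks/lm_eval/tasks",
    "--device", "cuda",
    "--verbosity", "INFO",
]
print(" ".join(cmd))
```

Printing the joined command gives a single shell-ready line; the `--include_path` flag is what produces the "Including path" log entry, and the `--model_args` string is parsed into the dict shown in the `evaluator.py:198` entry.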