Upload variant_a/logs/auto_next.log with huggingface_hub
[2026-02-21 14:32:38] Waiting for current eval (PID 7303) to finish...
[2026-02-21 16:04:39] Current eval finished!
[2026-02-21 16:04:39] ============================================
[2026-02-21 16:04:39] RESULTS FROM RUN 1 (core tasks)
[2026-02-21 16:04:39] ============================================
Traceback (most recent call last):
  File "/dev/shm/eval/print_results.py", line 6, in <module>
    with open(path) as f:
         ^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/dev/shm/eval/results_a/full_results.json'
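The traceback above shows print_results.py crashing because run 1 never wrote its results file. A minimal defensive sketch (hypothetical, not the actual print_results.py) that degrades gracefully instead of raising:

```python
import json
from pathlib import Path

def load_results(path):
    """Return parsed results, or None (with a warning) if the file is missing."""
    p = Path(path)
    if not p.exists():
        print(f"[warn] results file not found: {p}")
        return None
    return json.loads(p.read_text())
```

With this guard, a missing results file from a failed run produces a warning line in the log rather than aborting the results-printing step.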
[2026-02-21 16:04:40] ============================================
[2026-02-21 16:04:40] STARTING RUN 2 (remaining tasks)
[2026-02-21 16:04:40] ============================================
I0221 16:04:41.885763 9305 utils.py:148] Note: detected 192 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
I0221 16:04:41.885838 9305 utils.py:151] Note: NumExpr detected 192 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
I0221 16:04:41.885877 9305 utils.py:164] NumExpr defaulting to 16 threads.
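The NumExpr notes above mean it detected 192 virtual cores but fell back to a safe limit of 16 threads because `NUMEXPR_MAX_THREADS` was unset. One way to silence the notice and use more threads, assuming the eval process is allowed to, is to set the variable before NumExpr is first imported:

```python
import os

# Must run before the first `import numexpr` (including indirect imports
# via pandas etc.); NumExpr reads the variable once at import time.
# "64" matches NumExpr's own stated maximum from the log; adjust to taste.
os.environ["NUMEXPR_MAX_THREADS"] = "64"
```

Exporting the variable in the launcher script's environment works just as well and avoids ordering concerns.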
I0221 16:04:41.966932 9305 config.py:58] PyTorch version 2.10.0+cu126 available.
W0221 16:04:42.093261 9305 warnings.py:112] /dev/shm/eval/quip-sharp/lib/codebook/__init__.py:6: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("quip_lib::decode_matvec_e8p")
W0221 16:04:42.130918 9305 warnings.py:112] /dev/shm/eval/quip-sharp/lib/codebook/__init__.py:25: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("quip_lib::decompress_packed_e8p")
W0221 16:04:42.294118 9305 warnings.py:112] /dev/shm/eval/quip-sharp/lib/utils/matmul_had.py:96: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("quip_lib::hadamard")
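The FutureWarnings above indicate quip-sharp still registers its fake (meta) kernels with the deprecated `torch.library.impl_abstract`. Assuming PyTorch ≥ 2.4, the migration is a mechanical rename (non-runnable fragment, since `quip_lib` is a compiled extension from the quip-sharp repo; the signature below is hypothetical):

```python
import torch

# Old, deprecated spelling scheduled for removal:
#   @torch.library.impl_abstract("quip_lib::decode_matvec_e8p")

# New spelling, same semantics:
@torch.library.register_fake("quip_lib::decode_matvec_e8p")
def _(x, codebook):
    ...
```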
W0221 16:05:06.883764 9305 warnings.py:112] /dev/shm/eval/lm-evaluation-harness/lm_eval/filters/extraction.py:98: SyntaxWarning: invalid escape sequence '\s'
- step 2 : We parse the choice with regex :[\s]*([A-?]), where ? varies by number of choices.
W0221 16:05:06.884044 9305 warnings.py:112] /dev/shm/eval/lm-evaluation-harness/lm_eval/filters/extraction.py:168: SyntaxWarning: invalid escape sequence '\s'
f":[\s]*({without_paren_fallback_regex})"
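The SyntaxWarnings above come from `\s` appearing inside ordinary (non-raw) string literals in extraction.py; recent Python versions flag such invalid escape sequences at compile time. Raw strings fix the warning without changing the regex. A minimal sketch, assuming a four-choice (A-D) task:

```python
import re

# r"..." keeps the backslash literal, so re sees the intended \s class
# and the interpreter emits no SyntaxWarning.
choice_regex = re.compile(r":[\s]*([A-D])")  # hypothetical 4-choice case

m = choice_regex.search("The answer is: C")
```

The same change applies to the f-string on extraction.py line 168: prefix it as `rf"..."`.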
[16:05:06] ============================================================
[16:05:06] Polish LLM Leaderboard Eval
[16:05:06] Model: QuIP# Bielik-Q2-Sharp Variant A
[16:05:06] lm-eval API: new
[16:05:06] Tasks: ['polish_belebele_mc', 'polish_belebele_regex', 'polish_dyk_multiple_choice', 'polish_dyk_regex', 'polish_klej_ner_multiple_choice', 'polish_klej_ner_regex', 'polish_polqa_reranking_multiple_choice', 'polish_polqa_open_book', 'polish_polqa_closed_book', 'polish_poquad_open_book', 'polish_eq_bench', 'polish_eq_bench_first_turn', 'polish_poleval2018_task3_test_10k']
[16:05:06] Few-shot: 5
[16:05:06] ============================================================
[16:05:06] Loading QuIP# model from /dev/shm/eval/model...
I0221 16:05:08.792412 9305 modeling.py:987] We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
[16:05:10] Model loaded in 3.7s
Terminated
[2026-02-21 16:05:12] ============================================
[2026-02-21 16:05:12] ALL DONE - both runs complete
[2026-02-21 16:05:12] ============================================
[2026-02-21 16:05:12] Results 1: /dev/shm/eval/results_a/full_results.json
[2026-02-21 16:05:12] Results 2: /dev/shm/eval/results_a_remaining/full_results.json