bielik-q2-sharp / variant-e / environment.txt
Upload variant-e full results (quant logs + eval 5-shot/0-shot MC/GEN), commit acc64cf (verified)
=== VARIANT E: VPTQ Quantization Environment ===
Date: 2026-02-22 19:39:48 UTC
GPU Info:
Sun Feb 22 19:39:48 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.211.01 Driver Version: 570.211.01 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H200 On | 00000000:CB:00.0 Off | 0 |
| N/A 30C P0 78W / 700W | 0MiB / 143771MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
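The GPU table above is plain `nvidia-smi` output. How this log was actually captured is not documented in the upload; a minimal sketch of one way to capture it from Python:

```python
# Sketch: capture `nvidia-smi` output for an environment log.
# Assumption: the real capture method for this file is unknown; this is
# only an illustrative equivalent.
import shutil
import subprocess

def capture_nvidia_smi() -> str:
    """Return full `nvidia-smi` output, or a note when no driver is present."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found (no NVIDIA driver on this machine)"
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    print("GPU Info:")
    print(capture_nvidia_smi())
```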
Python: Python 3.12.3
PyTorch: 2.6.0+cu124
  (warning emitted on import) /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
    import pynvml  # type: ignore[import]
cuML: 26.02.000
Transformers: 5.2.0
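The version lines above appear to come from importing each package and printing its version. A hedged sketch of such a capture step (the actual script is not part of this upload, and the module names below are just the ones mentioned in this log):

```python
# Sketch: report package versions for an environment log.
# Assumption: illustrative only; the real capture script is not included here.
import importlib

def package_version(module_name: str) -> str:
    """Import a module and report its __version__, degrading gracefully."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return "not installed"
    return getattr(module, "__version__", "unknown")

if __name__ == "__main__":
    for name in ("torch", "transformers", "cuml"):
        print(f"{name}: {package_version(name)}")
```

Querying versions this way triggers any import-time warnings (such as the pynvml FutureWarning above), which is why they appear interleaved with the version strings in the log.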
Model FP16 size:
21G /workspace/models/bielik-11b-instruct/
Hessians size:
24G /workspace/hessians/quip-format/hessians/
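The size figures above look like `du -sh` output. An equivalent check can be sketched in Python; the two workspace paths are taken from this log and exist only on the original machine:

```python
# Sketch: report on-disk size of a directory tree, similar to `du -sh`.
# Assumption: the paths below are from the log above and will not exist
# elsewhere, so they are only printed when present.
import os

def tree_size_bytes(path: str) -> int:
    """Sum file sizes under `path`, skipping dangling symlinks."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):  # False for broken symlinks
                total += os.path.getsize(full)
    return total

def human_readable(num_bytes: int) -> str:
    """Format a byte count with 1024-based units, as `du -h` does."""
    size = float(num_bytes)
    for unit in ("B", "K", "M", "G", "T"):
        if size < 1024 or unit == "T":
            return f"{size:.0f}{unit}"
        size /= 1024
    return f"{size:.0f}T"

if __name__ == "__main__":
    for path in ("/workspace/models/bielik-11b-instruct/",
                 "/workspace/hessians/quip-format/hessians/"):
        if os.path.isdir(path):
            print(human_readable(tree_size_bytes(path)), path)
```

Note that `du` reports allocated blocks rather than apparent file sizes, so this sketch can differ slightly from the 21G/24G figures in the log.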