dongwookkwon committed
Commit 3594a47 · verified · 1 Parent(s): 94362c1

Upload evaluation_results.json with huggingface_hub

Files changed (1)
  1. evaluation_results.json +8 -8
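
The commit message says the file was pushed with huggingface_hub. Below is a minimal sketch of how such an upload is typically done with the `HfApi.upload_file` call; the `repo_id` and `repo_type` values are placeholders, not taken from this commit.

```python
from huggingface_hub import HfApi

# Uses the token from `huggingface-cli login` or the HF_TOKEN env var by default.
api = HfApi()

# Push the local results file to the repository root.
# repo_id / repo_type are placeholders; substitute the actual repo for this commit.
api.upload_file(
    path_or_fileobj="evaluation_results.json",
    path_in_repo="evaluation_results.json",
    repo_id="your-namespace/your-repo",
    repo_type="model",
    commit_message="Upload evaluation_results.json with huggingface_hub",
)
```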
evaluation_results.json CHANGED
@@ -2,10 +2,10 @@
   "results": {
     "gsm8k": {
       "alias": "gsm8k",
-      "exact_match,strict-match": 0.33434420015163,
-      "exact_match_stderr,strict-match": 0.012994634003332766,
-      "exact_match,flexible-extract": 0.3366186504927976,
-      "exact_match_stderr,flexible-extract": 0.01301646367998336
+      "exact_match,strict-match": 0.34950720242608035,
+      "exact_match_stderr,strict-match": 0.013133836511705991,
+      "exact_match,flexible-extract": 0.3555724033358605,
+      "exact_match_stderr,flexible-extract": 0.013185402252713849
     }
   },
   "group_subtasks": {
@@ -125,7 +125,7 @@
     "fewshot_seed": 1234
   },
   "git_hash": "3c710e0",
-  "date": 1762166557.6260943,
+  "date": 1762181122.5749974,
  "pretty_env_info": "PyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version: Could not collect\nCMake version: version 3.22.1\nLibc version: glibc-2.35\n\nPython version: 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)\nPython platform: Linux-6.8.0-79-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 11.5.119\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA RTX 6000 Ada Generation\nNvidia driver version: 580.82.07\ncuDNN version: Probably one of the following:\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.7\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7\n/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7\n/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: AuthenticAMD\nModel name: AMD Ryzen Threadripper PRO 5955WX 16-Cores\nCPU family: 25\nModel: 8\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 2\nFrequency boost: enabled\nCPU max MHz: 4000.0000\nCPU min MHz: 1800.0000\nBogoMIPS: 7985.05\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap\nVirtualization: AMD-V\nL1d cache: 512 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 8 MiB (16 instances)\nL3 cache: 64 MiB (2 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not 
affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-nccl-cu12==2.27.3\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] torch==2.8.0\n[pip3] torchaudio==2.8.0\n[pip3] torchvision==0.23.0\n[pip3] triton==3.4.0\n[conda] numpy 2.2.6 pypi_0 pypi\n[conda] nvidia-cublas-cu12 12.8.4.1 pypi_0 pypi\n[conda] nvidia-cuda-cupti-cu12 12.8.90 pypi_0 pypi\n[conda] nvidia-cuda-nvrtc-cu12 12.8.93 pypi_0 pypi\n[conda] nvidia-cuda-runtime-cu12 12.8.90 pypi_0 pypi\n[conda] nvidia-cudnn-cu12 9.10.2.21 pypi_0 pypi\n[conda] nvidia-cufft-cu12 11.3.3.83 pypi_0 pypi\n[conda] nvidia-curand-cu12 10.3.9.90 pypi_0 pypi\n[conda] nvidia-cusolver-cu12 11.7.3.90 pypi_0 pypi\n[conda] nvidia-cusparse-cu12 12.5.8.93 pypi_0 pypi\n[conda] nvidia-cusparselt-cu12 0.7.1 pypi_0 pypi\n[conda] nvidia-nccl-cu12 2.27.3 pypi_0 pypi\n[conda] nvidia-nvjitlink-cu12 12.8.93 pypi_0 pypi\n[conda] nvidia-nvtx-cu12 12.8.90 pypi_0 pypi\n[conda] torch 2.8.0 pypi_0 pypi\n[conda] torchaudio 2.8.0 pypi_0 pypi\n[conda] torchvision 0.23.0 pypi_0 pypi\n[conda] triton 3.4.0 pypi_0 pypi",
  "transformers_version": "4.57.1",
  "lm_eval_version": "0.4.9.1",
@@ -153,7 +153,7 @@
   "fewshot_as_multiturn": false,
   "chat_template": null,
   "chat_template_sha": null,
-  "start_time": 4828492.248598479,
-  "end_time": 4828605.615266261,
-  "total_evaluation_time_seconds": "113.3666677819565"
+  "start_time": 4843057.177305116,
+  "end_time": 4843164.274402437,
+  "total_evaluation_time_seconds": "107.09709732048213"
   }
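
For quick reference, the hunks above change only the GSM8K scores, the run timestamp, and the timing fields: strict-match exact_match moves from about 0.334 to 0.350 and flexible-extract from about 0.337 to 0.356, gains of roughly 1.5 and 1.9 points against reported standard errors of about 0.013. Below is a minimal sketch of reading these fields back from the uploaded file; the local filename and key names simply mirror the structure shown in the diff.

```python
import json

# Load the uploaded results file (assumed to be downloaded to the working directory).
with open("evaluation_results.json") as f:
    data = json.load(f)

gsm8k = data["results"]["gsm8k"]

# Keys follow the lm-evaluation-harness "metric,filter" naming visible in the diff.
for key in (
    "exact_match,strict-match",
    "exact_match_stderr,strict-match",
    "exact_match,flexible-extract",
    "exact_match_stderr,flexible-extract",
):
    print(f"{key}: {gsm8k[key]}")
```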