
Mac Coding Bench Results v1 — Speed + Code Quality Benchmarks on Apple Silicon

Speed and code-quality benchmarks for quantized LLMs running locally on Apple Silicon Macs. The dataset pairs inference-speed measurements (tokens/sec) with HumanEval+ functional-correctness scores: 123 benchmark results across three hardware configurations (M1, M2 Max, M5), with code-quality scores for 21 of the models.

Key Highlights

  • Qwen 3.6 35B-A3B achieves 89.6% HumanEval+ pass@1 at 16.7 tok/s — best quality score in the dataset, and fast thanks to MoE architecture
  • Qwen 2.5 Coder 7B hits 84.2% at 11.3 tok/s — best quality-to-speed ratio for a dense model
  • Phi 4 Mini 3.8B reaches 70.7% at 19.6 tok/s — strong for its size
  • Gemma 4 family scores 9-31% on HumanEval+, significantly underperforming Gemma 3 (34-79%)
  • 21 models evaluated for code quality, 123 total speed benchmark results across 3 chips
  • All models quantized to Q4_K_M (GGUF) or 4-bit (MLX)

Dataset Description

Each row represents one model benchmarked on one hardware configuration. Speed metrics come from llama-bench (GGUF) or mlx_lm.benchmark (MLX). Code quality metrics come from EvalPlus HumanEval+ evaluation (164 problems). Quality scores are available for 21 models on the M5 configuration; the remaining 102 rows have speed-only data across M1, M2 Max, and M5.
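The quality/speed split can be recovered by checking `humaneval_plus_pass1` for null. A minimal sketch in plain Python, using two sample rows copied from the dataset (in practice you would load the full table, e.g. with the `datasets` library):

```python
# Two sample rows copied from the dataset; the full table would normally be
# fetched via: datasets.load_dataset("enescingoz/humaneval-apple-silicon")
rows = [
    {"model_id": "qwen2.5-coder-7b", "chip": "Apple M5",
     "tg128_toks": 11.31, "humaneval_plus_pass1": 0.8415},
    {"model_id": "qwen3-8b", "chip": "Apple M1",
     "tg128_toks": 11.02, "humaneval_plus_pass1": None},  # speed-only row
]

# Quality rows are exactly those with a non-null HumanEval+ score;
# the remaining rows carry speed data only.
quality_rows = [r for r in rows if r["humaneval_plus_pass1"] is not None]
speed_only = [r for r in rows if r["humaneval_plus_pass1"] is None]

print(len(quality_rows), len(speed_only))  # → 1 1
```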

Data Fields

| Field | Type | Description |
|---|---|---|
| model_name | string | Human-readable model name |
| model_id | string | Slug identifier for the model |
| params | string | Parameter count (e.g., "7B", "35B") |
| quant | string | Quantization method (Q4_K_M or 4bit) |
| runtime | string | Inference runtime (llama.cpp or mlx-lm) |
| chip | string | Apple Silicon chip (M1, M2 Max, M5) |
| cpu_cores | int | Number of CPU cores |
| gpu_cores | int | Number of GPU cores |
| ram_gb | int | Total system RAM in GB |
| os_version | string | macOS kernel version |
| pp128_toks | float | Prompt processing speed, 128 tokens (tok/s) |
| pp256_toks | float | Prompt processing speed, 256 tokens (tok/s) |
| pp512_toks | float | Prompt processing speed, 512 tokens (tok/s) |
| tg128_toks | float | Text generation speed, 128 tokens (tok/s) |
| tg256_toks | float | Text generation speed, 256 tokens (tok/s) |
| peak_memory_gb | float | Peak RSS memory usage in GB |
| humaneval_plus_pass1 | float | HumanEval+ pass@1 score (0-1), null if not evaluated |
| humaneval_base_pass1 | float | HumanEval base pass@1 score (0-1), null if not evaluated |
| perplexity | float | Perplexity score, null if not evaluated |
| eval_framework_version | string | EvalPlus version used |
| timestamp | string | ISO 8601 timestamp of the benchmark run |

Hardware Configurations

| Chip | CPU Cores | GPU Cores | RAM | Rows |
|---|---|---|---|---|
| Apple M1 | 8 | 7 | 16 GB | 20 |
| Apple M2 Max | 12 | 30 | 32 GB | 39 |
| Apple M5 | 10 | 10 | 32 GB | 64 |

Code quality evaluations (HumanEval+) were run on the M5 configuration only.

Benchmark Methodology

Speed benchmarks:

  • GGUF models: llama-bench with flash attention enabled, all layers offloaded to GPU (-ngl 99)
  • MLX models: mlx_lm.benchmark
  • Prompt processing measured at 128, 256, and 512 input tokens
  • Text generation measured at 128 and 256 output tokens
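The GGUF bullets above translate into a llama-bench invocation along these lines (the model path is a placeholder; the flags mirror the stated settings):

```shell
# Sketch of the llama-bench run described above. MODEL is a placeholder
# path (assumption). Flags:
#   -fa 1            flash attention enabled
#   -ngl 99          offload all layers to the GPU
#   -p 128,256,512   prompt-processing sizes (pp128/pp256/pp512 columns)
#   -n 128,256       generation lengths (tg128/tg256 columns)
MODEL="models/example-q4_k_m.gguf"
ARGS="-m $MODEL -fa 1 -ngl 99 -p 128,256,512 -n 128,256"

if command -v llama-bench >/dev/null 2>&1; then
  llama-bench $ARGS
else
  echo "dry run: llama-bench $ARGS"   # llama.cpp not installed
fi
```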

Code quality benchmarks:

  • Framework: EvalPlus HumanEval+
  • 164 problems with 80x test coverage over the original HumanEval test suite
  • Greedy decoding (temperature=0)
  • Reasoning models evaluated with --no-think flag
  • Quantization: Q4_K_M (GGUF), 4-bit (MLX)
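Because decoding is greedy, there is exactly one completion per problem, so pass@1 reduces to the solve rate over the 164 problems. Illustrative arithmetic (not an EvalPlus API):

```python
def pass_at_1_greedy(num_passed: int, num_problems: int = 164) -> float:
    """pass@1 under greedy decoding: one sample per problem,
    so the score is simply the fraction of problems solved."""
    return num_passed / num_problems

# 147 of 164 problems solved rounds to the 0.8963 reported
# for the top-scoring model in this dataset.
print(round(pass_at_1_greedy(147), 4))  # → 0.8963
```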

Models with Code Quality Scores

All results on Apple M5, 32 GB, Q4_K_M quantization.

| Model | Params | HumanEval+ pass@1 | tg128 (tok/s) |
|---|---|---|---|
| Qwen 3.6 35B-A3B | 35B | 89.6% | 16.7 |
| Qwen 2.5 Coder 32B | 32B | 87.2% | 2.5 |
| Qwen 2.5 Coder 14B | 14B | 86.6% | 5.9 |
| Qwen 2.5 Coder 7B | 7B | 84.2% | 11.3 |
| Phi 4 14B | 14B | 82.3% | 5.3 |
| Devstral Small 24B | 24B | 81.7% | 3.6 |
| Gemma 3 27B | 27B | 78.7% | 3.1 |
| Gemma 3 12B | 12B | 75.6% | 5.7 |
| Mistral Small 3.1 24B | 24B | 75.6% | 3.6 |
| Phi 4 Mini 3.8B | 3.8B | 70.7% | 19.6 |
| Mistral Nemo 12B | 12B | 64.6% | 6.9 |
| Gemma 3 4B | 4B | 64.6% | 16.5 |
| Llama 3.1 8B Instruct | 8B | 61.0% | 10.8 |
| Llama 3.2 3B Instruct | 3B | 60.4% | 24.1 |
| Mistral 7B Instruct v0.3 | 7B | 37.2% | 11.5 |
| Gemma 3 1B | 1B | 34.2% | 46.6 |
| Llama 3.2 1B Instruct | 1B | 32.9% | 59.4 |
| Gemma 4 31B | 31B | 31.1% | 5.5 |
| Gemma 4 E4B | 4B | 14.6% | 36.7 |
| Gemma 4 26B-A4B MoE | 26B | 12.2% | 16.2 |
| Gemma 4 E2B | 2B | 9.2% | 29.2 |

Limitations

  • Single quantization level: All GGUF models use Q4_K_M and all MLX models use 4-bit quantization. Results may differ at other quantization levels.
  • Limited hardware configs: Three Apple Silicon chips (M1 16GB, M2 Max 32GB, M5 32GB). No M3/M4 Pro/Ultra data.
  • Greedy decoding only: All code quality evaluations use temperature=0. Sampling-based pass@k scores would differ.
  • Reasoning models: Models like DeepSeek R1 Distill were tested with --no-think, which disables their chain-of-thought reasoning and may understate their capability.
  • Quality scores on M5 only: HumanEval+ evaluations were run on a single hardware configuration. Scores should be hardware-independent, but low-level numerical differences between chips could in principle alter generated outputs.
  • Gemma 4 scores: The Gemma 4 models score unusually low. This may reflect early quantization issues, prompt template incompatibilities, or model behavior at Q4_K_M precision.

Citation

@dataset{mac_coding_bench_v1,
  title   = {Mac Coding Bench Results v1},
  author  = {Enes Cingoz},
  year    = {2026},
  url     = {https://huggingface.co/datasets/enescingoz/humaneval-apple-silicon},
  note    = {Speed and code quality benchmarks for quantized LLMs on Apple Silicon}
}

License

MIT
