Snider Virgil committed on
Commit 0d3a2ba · 1 Parent(s): 2751c6b

feat: add quant field — (name, type, quant) is the composite target key


With the LEM-Eval schema settling, the next axis we need to benchmark
is quantization: does Q4 give the same LEK delta as BF16? Q8_0 vs Q4_K_M?
These are first-class variants that need their own canons so per-round
stats don't blend across precision levels.

targets.yaml now carries a 'quant' field on each row. Lemer expands to
four working entries:

lemer / mlx / Q4 → LetheanNetwork/lemer-mlx vs lthn/lemer-mlx
lemer / mlx / 8bit → LetheanNetwork/lemer-mlx-8bit vs lthn/lemer-mlx-8bit
lemer / mlx / BF16 → LetheanNetwork/lemer-mlx-bf16 vs lthn/lemer-mlx-bf16
lemer / gguf / Q4_K_M → hf.co/LetheanNetwork/lemer:Q4_K_M vs lthn/lemer

Q8_0 / BF16 gguf entries pending LetheanNetwork/lemer having the matching
base gguf quants uploaded (llama-quantize + HfApi follow-up).

eval.py changes:

resolve_target()
takes quant_filter; (name, type, quant) is now the unique key.
Multiple matches → error asks for --type / --quant.

--quant CLI flag
Optional. Auto-picked when only one match exists for (name, type);
required when multiple quants share a (name, type).
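The flag's plumbing is plain argparse — a minimal sketch (help text paraphrased;
default None is the auto-pick path):

```python
import argparse

# --quant is optional: None means "let resolve_target auto-pick when the
# (name, type) pair has exactly one quant variant".
parser = argparse.ArgumentParser()
parser.add_argument("--quant", default=None,
                    help="Disambiguate targets by quant identifier "
                         "(e.g. Q4, 8bit, BF16, Q4_K_M, Q8_0).")
```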

_canon_stem(task, type, quant) helper
builds the filename stem consistently. Examples:
mmlu_pro.mlx.Q4.parquet
mmlu_pro.mlx.BF16.parquet
mmlu_pro.gguf.Q4_K_M.parquet
The quant suffix is mandatory on the gguf side (all quants share one
repo), kept on mlx for uniformity.

_compute_next_offset, append_to_canon
both thread target_quant through to _canon_stem, so each (type, quant)
combo has its own progression and its own yaml+md views.
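Independent progression means each stem reads only its own canon's max
question_index. A toy sketch — a dict stands in for the parquet canons and the
values are illustrative; the real _compute_next_offset reads <stem>.parquet
with pandas:

```python
# Toy: stem -> highest question_index seen in that canon (illustrative values).
CANON_MAX = {"mmlu_pro.mlx.Q4": 799, "mmlu_pro.gguf.Q4_K_M": 99}

def next_offset(stem):
    # no canon yet -> start at 0; otherwise resume after the last row
    return CANON_MAX[stem] + 1 if stem in CANON_MAX else 0
```

So the mlx-Q4 canon resumes at 800 while a fresh mlx-BF16 canon starts at 0.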

_run_once, main()
propagate target_quant; the run banner shows target/quant together.

derive_repo_id() helper strips 'hf.co/' and ':<tag>' from a this: field to
yield the canonical HF repo id — used later by lem-eval.sh / install.sh
for workspace clone deduplication (gguf quants share a repo).
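The stripping logic, mirroring the helper in the diff below:

```python
def derive_repo_id(this_ref):
    # hf.co/<repo>[:<tag>] -> <repo>; plain repo ids pass through untouched
    if this_ref.startswith("hf.co/"):
        base = this_ref[len("hf.co/"):]
        return base.split(":", 1)[0] if ":" in base else base
    return this_ref
```

Both hf.co/lthn/lemer:Q4_K_M and hf.co/lthn/lemer:Q8_0 map to lthn/lemer, which
is what lets the install scripts clone the shared gguf repo once.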

_print_target_table gains a quant column.

Smoke-tested locally on all three mlx quants (Q4, 8bit, BF16) —
each ran end-to-end with its own canon file:
/tmp/lemer_Q4/mmlu_pro.mlx.Q4.parquet
/tmp/lemer_8bit/mmlu_pro.mlx.8bit.parquet
/tmp/lemer_BF16/mmlu_pro.mlx.BF16.parquet

Each returned different stochastic answers, confirming that each separate
HF repo was actually loaded rather than served from a shared cache.

Co-Authored-By: Virgil <virgil@lethean.io>

Files changed (2)
  1. eval.py +105 -37
  2. targets.yaml +31 -9
eval.py CHANGED
@@ -92,16 +92,16 @@ def load_targets():
92
  return _yaml.safe_load(TARGETS_YAML_PATH.read_text())
93
 
94
 
95
- def resolve_target(name, cfg=None, type_filter=None):
96
- """Look up a target by (name, type) in targets.yaml.
97
 
98
- Multiple entries can share a name if they have different types — e.g.
99
- the same model family evaluated via mlx and via gguf. `type_filter`
100
- disambiguates:
101
  - None and one match → return the match
102
- - None and multiple matches → error, ask for --type
103
- - set/iterable and one matching entry → return it
104
- - set/iterable and zero/multiple → error
105
  """
106
  if cfg is None:
107
  cfg = load_targets()
@@ -113,20 +113,45 @@ def resolve_target(name, cfg=None, type_filter=None):
113
  if type_filter is not None:
114
  allowed = set(type_filter) if not isinstance(type_filter, str) else {type_filter}
115
  candidates = [t for t in candidates if t.get("type") in allowed]
116
-
117
  if not candidates:
118
  raise KeyError(
119
  f"target {name!r} has no entry matching type filter {type_filter!r}. "
120
  f"Use --type to pick one."
121
  )
122
  if len(candidates) > 1:
123
- types = [t.get("type", "?") for t in candidates]
124
  raise KeyError(
125
- f"target {name!r} has multiple entries ({types}). Pass --type to disambiguate."
 
126
  )
127
  return candidates[0]
128
 
129
 
130
  # --- Wrapper routing --------------------------------------------------------
131
  #
132
  # Lighteval's custom-model loader wants to be pointed at a single file that
@@ -575,17 +600,36 @@ def _render_canon_md(stats, task, dataset_id="TIGER-Lab/MMLU-Pro"):
575
  return "\n".join(lines)
576
 
577
 
578
- def append_to_canon(task, eval_results_dir, new_rows, target_type=None):
579
- """Append new rows to .eval_results/<task>[.<type>].parquet (the canon).
580
-
581
- When target_type is given, the canon filename is task-and-type-scoped
582
- (e.g. mmlu_pro.mlx.parquet, mmlu_pro.gguf.parquet) so runs on the same
583
- model family via different inference backends live in disjoint canons
584
- and their stats don't conflate.
585
 
586
- Reads the existing canon (if any), concatenates the new rows, dedupes on
587
- the composite key, writes back the parquet, and regenerates the yaml + md
588
- views. Returns the merged DataFrame.
589
  """
590
  import io
591
  import pandas as pd
@@ -594,7 +638,7 @@ def append_to_canon(task, eval_results_dir, new_rows, target_type=None):
594
  eval_results_dir = Path(eval_results_dir).resolve()
595
  eval_results_dir.mkdir(parents=True, exist_ok=True)
596
 
597
- stem = f"{task}.{target_type}" if target_type else task
598
  canon_path = eval_results_dir / f"{stem}.parquet"
599
 
600
  new_df = pd.DataFrame(new_rows)
@@ -637,7 +681,7 @@ def append_to_canon(task, eval_results_dir, new_rows, target_type=None):
637
 
638
  # --- Main -------------------------------------------------------------------
639
 
640
- def _compute_next_offset(task, eval_results_dir, target_type=None):
641
  """Derive the next samples_start offset from the existing canonical parquet.
642
 
643
  Returns max(canon.question_index) + 1 if the canon exists and has rows,
@@ -645,11 +689,11 @@ def _compute_next_offset(task, eval_results_dir, target_type=None):
645
  run picks up where the last one finished without the caller tracking
646
  state externally.
647
 
648
- When target_type is given, reads the type-scoped canon so mlx runs
649
- progress independently from gguf runs on the same model family.
650
  """
651
  import pandas as pd
652
- stem = f"{task}.{target_type}" if target_type else task
653
  canon_path = Path(eval_results_dir) / f"{stem}.parquet"
654
  if not canon_path.exists():
655
  return 0
@@ -670,6 +714,7 @@ def _run_once(
670
  eval_results_dir,
671
  tmp_dir,
672
  target_name=None,
 
673
  lem_benchmarks_dir=None,
674
  wrapper_file=None,
675
  ):
@@ -689,7 +734,7 @@ def _run_once(
689
  dedup against its own existing state).
690
  """
691
  print(f"\n{'='*78}")
692
- print(f" LEM-Eval 8-PAC run — target: {target_name}")
693
  print(f" this model: {THIS_MODEL}")
694
  print(f" base model: {BASE_MODEL}")
695
  print(f" task: {task}")
@@ -725,12 +770,12 @@ def _run_once(
725
 
726
  print(f"\n[4/4] appending {len(rows)} rows to canon(s)...")
727
  print(" primary (model repo):")
728
- append_to_canon(task, eval_results_dir, rows, target_type=target_type)
729
 
730
  if lem_benchmarks_dir and target_name:
731
  agg_dir = lem_benchmarks_dir / "results" / target_name
732
  print(f" aggregator (lthn/LEM-benchmarks):")
733
- append_to_canon(task, agg_dir, rows, target_type=target_type)
734
 
735
  # Clean up per-run lighteval scratch — the canons now have everything we need
736
  shutil.rmtree(tmp_dir, ignore_errors=True)
@@ -780,11 +825,17 @@ def detect_default_types():
780
 
781
  def _print_target_table(targets, highlight_types=None):
782
  highlight_types = set(highlight_types or [])
783
- print(f"{'name':<18} {'type':<6} {'base':<42} {'this':<24}")
784
- print("-" * 94)
785
  for t in targets:
786
  mark = " *" if (t.get("type") in highlight_types) else ""
787
- print(f"{t['name']:<18} {t.get('type', '?'):<6} {t['base']:<42} {t['this']:<24}{mark}")
788
 
789
 
790
  def main():
@@ -798,6 +849,10 @@ def main():
798
  parser.add_argument("--type", default=None,
799
  help="Restrict to targets of this type (mlx|gguf). "
800
  "Defaults to capability detection (mlx on Apple Silicon).")
801
  parser.add_argument("--n-questions", type=int, default=DEFAULT_N_QUESTIONS)
802
  parser.add_argument("--rounds", type=int, default=DEFAULT_ROUNDS)
803
  parser.add_argument("--task", default=None,
@@ -852,13 +907,19 @@ def main():
852
  if not args.target:
853
  parser.error("--target is required (or use --list-targets / --my-targets)")
854
 
855
- # Let --type / LEM_TYPES / capability detection disambiguate when the
856
- # same target name exists with multiple types in targets.yaml.
857
  try:
858
- target = resolve_target(args.target, cfg, type_filter=allowed_types)
859
  except KeyError as e:
860
  parser.error(str(e))
861
  target_type = target.get("type")
 
862
  if target_type not in SUPPORTED_TYPES:
863
  parser.error(f"target {args.target!r} has unknown type {target_type!r}")
864
 
@@ -890,8 +951,14 @@ def main():
890
  lem_benchmarks_dir = Path(args.lem_benchmarks_dir).resolve() if args.lem_benchmarks_dir else None
891
 
892
  if args.samples_start == "auto":
893
- samples_start = _compute_next_offset(task, eval_results_dir, target_type=target_type)
894
- print(f"[auto] canon progression ({target_type}) → samples_start = {samples_start}", flush=True)
895
  else:
896
  try:
897
  samples_start = int(args.samples_start)
@@ -912,6 +979,7 @@ def main():
912
  eval_results_dir=eval_results_dir,
913
  tmp_dir=tmp_dir,
914
  target_name=args.target,
 
915
  lem_benchmarks_dir=lem_benchmarks_dir,
916
  wrapper_file=wrapper_file,
917
  )
 
92
  return _yaml.safe_load(TARGETS_YAML_PATH.read_text())
93
 
94
 
95
+ def resolve_target(name, cfg=None, type_filter=None, quant_filter=None):
96
+ """Look up a target by (name, type, quant) in targets.yaml.
97
 
98
+ Multiple entries can share a name if they differ by type or quant — e.g.
99
+ the same model family evaluated via mlx at Q4, 8bit, and BF16. Filters
100
+ narrow the candidate set:
101
  - None and one match → return the match
102
+ - None and multiple matches → error, ask for --type / --quant
103
+ - filter matches zero → error
104
+ - filter matches one → return it
105
  """
106
  if cfg is None:
107
  cfg = load_targets()
 
113
  if type_filter is not None:
114
  allowed = set(type_filter) if not isinstance(type_filter, str) else {type_filter}
115
  candidates = [t for t in candidates if t.get("type") in allowed]
 
116
  if not candidates:
117
  raise KeyError(
118
  f"target {name!r} has no entry matching type filter {type_filter!r}. "
119
  f"Use --type to pick one."
120
  )
121
+
122
+ if quant_filter is not None:
123
+ candidates = [t for t in candidates if t.get("quant") == quant_filter]
124
+ if not candidates:
125
+ raise KeyError(
126
+ f"target {name!r} has no entry matching quant {quant_filter!r}. "
127
+ f"Check targets.yaml or use --list-targets to see what exists."
128
+ )
129
+
130
  if len(candidates) > 1:
131
+ combos = [(t.get("type", "?"), t.get("quant", "?")) for t in candidates]
132
  raise KeyError(
133
+ f"target {name!r} has multiple entries {combos}. "
134
+ f"Pass --type and/or --quant to disambiguate."
135
  )
136
  return candidates[0]
137
 
138
 
139
+ def derive_repo_id(this_ref):
140
+ """Strip Ollama / transport prefixes from a `this:` reference to yield
141
+ the underlying HF repo id.
142
+
143
+ Examples:
144
+ lthn/lemer-mlx → lthn/lemer-mlx
145
+ hf.co/lthn/lemer:Q4_K_M → lthn/lemer
146
+ """
147
+ if this_ref.startswith("hf.co/"):
148
+ base = this_ref[len("hf.co/"):]
149
+ if ":" in base:
150
+ base = base.split(":", 1)[0]
151
+ return base
152
+ return this_ref
153
+
154
+
155
  # --- Wrapper routing --------------------------------------------------------
156
  #
157
  # Lighteval's custom-model loader wants to be pointed at a single file that
 
600
  return "\n".join(lines)
601
 
602
 
603
+ def _canon_stem(task, target_type=None, target_quant=None):
604
+ """Canon filename stem: task, then optional type, then optional quant.
605
 
606
+ Examples:
607
+ mmlu_pro (neither type nor quant)
608
+ mmlu_pro.mlx (type only)
609
+ mmlu_pro.mlx.BF16 (type + quant)
610
+ mmlu_pro.gguf.Q4_K_M (gguf case — quant required because
611
+ multiple quants share the same repo)
612
+ """
613
+ parts = [task]
614
+ if target_type:
615
+ parts.append(target_type)
616
+ if target_quant:
617
+ parts.append(target_quant)
618
+ return ".".join(parts)
619
+
620
+
621
+ def append_to_canon(task, eval_results_dir, new_rows, target_type=None, target_quant=None):
622
+ """Append new rows to .eval_results/<task>[.<type>[.<quant>]].parquet (the canon).
623
+
624
+ Type and quant scope the canon filename so variants of the same model
625
+ family don't conflate — different mlx quants live in different HF repos
626
+ already, but gguf variants share one repo and NEED the quant suffix to
627
+ stay separate. Uniform naming across both backends keeps the filesystem
628
+ layout predictable.
629
+
630
+ Reads the existing canon (if any), concatenates new rows, dedupes on the
631
+ composite key, writes back the parquet, regenerates the yaml + md views.
632
+ Returns the merged DataFrame.
633
  """
634
  import io
635
  import pandas as pd
 
638
  eval_results_dir = Path(eval_results_dir).resolve()
639
  eval_results_dir.mkdir(parents=True, exist_ok=True)
640
 
641
+ stem = _canon_stem(task, target_type, target_quant)
642
  canon_path = eval_results_dir / f"{stem}.parquet"
643
 
644
  new_df = pd.DataFrame(new_rows)
 
681
 
682
  # --- Main -------------------------------------------------------------------
683
 
684
+ def _compute_next_offset(task, eval_results_dir, target_type=None, target_quant=None):
685
  """Derive the next samples_start offset from the existing canonical parquet.
686
 
687
  Returns max(canon.question_index) + 1 if the canon exists and has rows,
 
689
  run picks up where the last one finished without the caller tracking
690
  state externally.
691
 
692
+ Type + quant scoping means each (type, quant) combination has its own
693
+ progression, so the mlx-BF16 canon advances independently from gguf-Q4_K_M.
694
  """
695
  import pandas as pd
696
+ stem = _canon_stem(task, target_type, target_quant)
697
  canon_path = Path(eval_results_dir) / f"{stem}.parquet"
698
  if not canon_path.exists():
699
  return 0
 
714
  eval_results_dir,
715
  tmp_dir,
716
  target_name=None,
717
+ target_quant=None,
718
  lem_benchmarks_dir=None,
719
  wrapper_file=None,
720
  ):
 
734
  dedup against its own existing state).
735
  """
736
  print(f"\n{'='*78}")
737
+ print(f" LEM-Eval 8-PAC run — target: {target_name} ({target_quant or '?'})")
738
  print(f" this model: {THIS_MODEL}")
739
  print(f" base model: {BASE_MODEL}")
740
  print(f" task: {task}")
 
770
 
771
  print(f"\n[4/4] appending {len(rows)} rows to canon(s)...")
772
  print(" primary (model repo):")
773
+ append_to_canon(task, eval_results_dir, rows, target_type=target_type, target_quant=target_quant)
774
 
775
  if lem_benchmarks_dir and target_name:
776
  agg_dir = lem_benchmarks_dir / "results" / target_name
777
  print(f" aggregator (lthn/LEM-benchmarks):")
778
+ append_to_canon(task, agg_dir, rows, target_type=target_type, target_quant=target_quant)
779
 
780
  # Clean up per-run lighteval scratch — the canons now have everything we need
781
  shutil.rmtree(tmp_dir, ignore_errors=True)
 
825
 
826
  def _print_target_table(targets, highlight_types=None):
827
  highlight_types = set(highlight_types or [])
828
+ print(f"{'name':<14} {'type':<6} {'quant':<8} {'base':<42} {'this':<28}")
829
+ print("-" * 106)
830
  for t in targets:
831
  mark = " *" if (t.get("type") in highlight_types) else ""
832
+ print(
833
+ f"{t['name']:<14} "
834
+ f"{t.get('type', '?'):<6} "
835
+ f"{t.get('quant', '?'):<8} "
836
+ f"{t['base']:<42} "
837
+ f"{t['this']:<28}{mark}"
838
+ )
839
 
840
 
841
  def main():
 
849
  parser.add_argument("--type", default=None,
850
  help="Restrict to targets of this type (mlx|gguf). "
851
  "Defaults to capability detection (mlx on Apple Silicon).")
852
+ parser.add_argument("--quant", default=None,
853
+ help="Disambiguate targets by quant identifier "
854
+ "(e.g. Q4, 8bit, BF16, Q4_K_M, Q8_0). Required when "
855
+ "a (name, type) pair has multiple quant variants.")
856
  parser.add_argument("--n-questions", type=int, default=DEFAULT_N_QUESTIONS)
857
  parser.add_argument("--rounds", type=int, default=DEFAULT_ROUNDS)
858
  parser.add_argument("--task", default=None,
 
907
  if not args.target:
908
  parser.error("--target is required (or use --list-targets / --my-targets)")
909
 
910
+ # Let --type / LEM_TYPES / capability detection + --quant disambiguate
911
+ # when the same target name exists with multiple (type, quant) combos.
912
  try:
913
+ target = resolve_target(
914
+ args.target,
915
+ cfg,
916
+ type_filter=allowed_types,
917
+ quant_filter=args.quant,
918
+ )
919
  except KeyError as e:
920
  parser.error(str(e))
921
  target_type = target.get("type")
922
+ target_quant = target.get("quant")
923
  if target_type not in SUPPORTED_TYPES:
924
  parser.error(f"target {args.target!r} has unknown type {target_type!r}")
925
 
 
951
  lem_benchmarks_dir = Path(args.lem_benchmarks_dir).resolve() if args.lem_benchmarks_dir else None
952
 
953
  if args.samples_start == "auto":
954
+ samples_start = _compute_next_offset(
955
+ task, eval_results_dir, target_type=target_type, target_quant=target_quant
956
+ )
957
+ print(
958
+ f"[auto] canon progression ({target_type}/{target_quant}) → "
959
+ f"samples_start = {samples_start}",
960
+ flush=True,
961
+ )
962
  else:
963
  try:
964
  samples_start = int(args.samples_start)
 
979
  eval_results_dir=eval_results_dir,
980
  tmp_dir=tmp_dir,
981
  target_name=args.target,
982
+ target_quant=target_quant,
983
  lem_benchmarks_dir=lem_benchmarks_dir,
984
  wrapper_file=wrapper_file,
985
  )
targets.yaml CHANGED
@@ -23,32 +23,54 @@ defaults:
23
 
24
  targets:
25
 
26
  - name: lemer
27
  type: mlx
 
28
  base: LetheanNetwork/lemer-mlx
29
- this: lthn/lemer
30
- notes: Gemma 4 E2B — paired mlx via LetheanNetwork/lemer-mlx (4bit, our own quant)
31
 
32
  - name: lemer
33
  type: gguf
 
34
  base: hf.co/LetheanNetwork/lemer:Q4_K_M
35
  this: hf.co/lthn/lemer:Q4_K_M
36
- notes: Gemma 4 E2B — Q4_K_M via Ollama
37
 
38
  - name: lemma
39
  type: mlx
40
- base: mlx-community/gemma-4-e4b-it-4bit
 
41
  this: lthn/lemma
42
- notes: Gemma 4 E4B
43
 
44
  - name: lemmy
45
  type: gguf
46
- base: mlx-community/gemma-4-26b-a4b-it-4bit
 
47
  this: lthn/lemmy
48
- notes: Gemma 4 26B A4B MoE — runs via GGUF on charon (Ollama endpoint)
49
 
50
  - name: lemrd
51
  type: gguf
52
- base: mlx-community/gemma-4-31b-it-4bit
 
53
  this: lthn/lemrd
54
- notes: Gemma 4 31B — runs via GGUF on charon (Ollama endpoint)
 
23
 
24
  targets:
25
 
26
+ # Lemer — Gemma 4 E2B. Three mlx quants + gguf Q4_K_M live and paired
27
+ # end-to-end. Q8_0 / BF16 gguf will be added once LetheanNetwork/lemer
28
+ # has the matching base quants (TODO: llama-quantize + upload).
29
+
30
  - name: lemer
31
  type: mlx
32
+ quant: Q4
33
  base: LetheanNetwork/lemer-mlx
34
+ this: lthn/lemer-mlx
35
+
36
+ - name: lemer
37
+ type: mlx
38
+ quant: 8bit
39
+ base: LetheanNetwork/lemer-mlx-8bit
40
+ this: lthn/lemer-mlx-8bit
41
+
42
+ - name: lemer
43
+ type: mlx
44
+ quant: BF16
45
+ base: LetheanNetwork/lemer-mlx-bf16
46
+ this: lthn/lemer-mlx-bf16
47
 
48
  - name: lemer
49
  type: gguf
50
+ quant: Q4_K_M
51
  base: hf.co/LetheanNetwork/lemer:Q4_K_M
52
  this: hf.co/lthn/lemer:Q4_K_M
53
+
54
+ # Lemma / Lemmy / Lemrd — pending the same treatment as lemer.
55
+ # (LetheanNetwork bases in matching formats, naming alignment, etc.)
56
 
57
  - name: lemma
58
  type: mlx
59
+ quant: Q4
60
+ base: mlx-community/gemma-4-e4b-it-4bit # TODO: LetheanNetwork/lemma-mlx
61
  this: lthn/lemma
62
+ notes: Gemma 4 E4B — pending family rollout
63
 
64
  - name: lemmy
65
  type: gguf
66
+ quant: Q4_K_M
67
+ base: mlx-community/gemma-4-26b-a4b-it-4bit # TODO: LetheanNetwork/lemmy gguf
68
  this: lthn/lemmy
69
+ notes: Gemma 4 26B A4B MoE — pending family rollout
70
 
71
  - name: lemrd
72
  type: gguf
73
+ quant: Q4_K_M
74
+ base: mlx-community/gemma-4-31b-it-4bit # TODO: LetheanNetwork/lemrd gguf
75
  this: lthn/lemrd
76
+ notes: Gemma 4 31B — pending family rollout