# LEM-Eval

The 8-PAC benchmark runner for the Lemma model family.
A Hugging Face dataset repo used as a tool-shaped "github": the entire scorer lives here. Anyone clones it, installs once, and the worker machines chug along, advancing per-model canons in lockstep. Multiple workers farm different targets in parallel: each target declares a type (`mlx` or `gguf`) in `targets.yaml`, and workers filter by the backends they can actually run (capability probe, or the `LEM_TYPES` env var). Partitioning falls out of what the hardware can do, not hostnames.
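A minimal sketch of what such a capability probe could look like. These helper names and the state layout are illustrative assumptions, not the real `eval.py` internals; only the `LEM_TYPES` override and the mlx/gguf split come from the description above.

```python
# Hypothetical sketch of the backend capability probe described above.
# The helper names here are assumptions; only LEM_TYPES and the
# mlx/gguf type split come from the README.
import os


def probe_backends() -> set[str]:
    """Return the backend types this machine can run.

    LEM_TYPES, if set, overrides the hardware probe entirely.
    """
    env = os.environ.get("LEM_TYPES")
    if env:
        return {t.strip() for t in env.split(",") if t.strip()}
    types = set()
    try:
        import mlx.core  # noqa: F401 (imports only on Apple Silicon)
        types.add("mlx")
    except ImportError:
        pass
    # gguf would additionally require a reachable Ollama/llama.cpp
    # endpoint; that network probe is omitted in this sketch.
    return types


def my_targets(targets: dict, runnable: set[str]) -> list[str]:
    """Filter targets.yaml-style entries down to runnable backends."""
    return [name for name, spec in targets.items() if spec["type"] in runnable]


targets = {"lemer": {"type": "mlx"}, "lemmy": {"type": "gguf"}}
print(my_targets(targets, {"mlx"}))  # -> ['lemer']
```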
## What it does
For each declared target (a base + LEK-merged model pair), it runs a paired 8-PAC benchmark:

- 8 independent rounds per question using Google-calibrated Gemma 4 sampling (`temp=1.0`, `top_p=0.95`, `top_k=64`, `enable_thinking=True`)
- Both models see the exact same question set from the seed-42 shuffled test split; the only variable is the weights
- Auto-offset progression: each run advances the canon by `n_questions` via lighteval's `--samples-start` flag, so consecutive runs naturally cover contiguous, non-overlapping windows
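The auto-offset progression is just window arithmetic over the shuffled split. A sketch of the idea, assuming a per-target state file; the real runner's bookkeeping may differ:

```python
# Sketch of the auto-offset window arithmetic: run k covers questions
# [k * n_questions, (k + 1) * n_questions). The state-file scheme is an
# assumption for illustration, not the real runner's bookkeeping.
import json
from pathlib import Path


def next_offset(state_file: Path, n_questions: int) -> int:
    """Return the --samples-start value for this run and advance the canon."""
    if state_file.exists():
        state = json.loads(state_file.read_text())
    else:
        state = {"runs": 0}
    offset = state["runs"] * n_questions
    state["runs"] += 1
    state_file.write_text(json.dumps(state))
    return offset


state = Path("/tmp/lemer.canon.json")
state.unlink(missing_ok=True)
# Three consecutive runs cover contiguous, non-overlapping windows:
print(next_offset(state, 8), next_offset(state, 8), next_offset(state, 8))  # 0 8 16
```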
Results are written to two canonical destinations per run:

- The target model repo's `.eval_results/<task>.parquet`: primary, per-model scorecard; drives the HF model-card eval_results rendering
- `lthn/LEM-benchmarks/results/<target>/<task>.parquet`: aggregated, fleet-wide; grows as more machines contribute observations
Same row data, two locations. Both canons dedup on `(machine, iter_timestamp, question_index, round, model_side)`, so the same machine re-running the same slice is idempotent, while different machines contribute additive rows to the aggregator.
## Layout
```
LEM-Eval/
├── eval.py             # target-driven runner (PEP 723 — uv run it)
├── mlx_lm_wrapper.py   # lighteval custom model backend
├── targets.yaml        # declarative fleet spec (base, this, type)
├── install.sh          # bootstrap: clone model repos + lem-benchmarks
├── lem-eval.sh         # service script (once | maintain | loop)
├── cron/
│   ├── submit.cron     # */30 * * * * lem-eval.sh once
│   └── maintain.cron   # 15 * * * * lem-eval.sh maintain
└── workspaces/         # local clones of target model repos (gitignored)
    ├── lemer/
    ├── lemma/
    └── ...
```
## Quick start (worker)
```bash
export HF_TOKEN=hf_...   # or: huggingface-cli login
git clone https://huggingface.co/datasets/lthn/LEM-Eval
cd LEM-Eval
./install.sh         # clones lem-benchmarks + your owned model repos
./lem-eval.sh once   # run one pass manually to verify

# Install the continuous cron
crontab -l | cat - cron/submit.cron cron/maintain.cron | crontab -
```
To add a new machine, install LEM-Eval on it; the worker's backend probe decides which targets it can run (`mlx` on Apple Silicon, `gguf` where an Ollama endpoint is reachable). Override with `LEM_TYPES=mlx,gguf` in the cron env if you want explicit control. Workers pick up `targets.yaml` edits via the maintain cron's hourly `git pull`.
**gguf wrapper status: not yet implemented.** gguf targets (`lemmy`, `lemrd`) sit in `targets.yaml` waiting for `gguf_wrapper.py`, which will be an OpenAI-SDK wrapper pointing at a local Ollama/llama.cpp server. Until then, gguf targets list but don't run.
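Since `gguf_wrapper.py` does not exist yet, everything below is a hypothetical sketch of its shape: an OpenAI-SDK client aimed at a local Ollama endpoint, reusing the Gemma sampling settings from above. None of these names are in the repo:

```python
# Hypothetical shape of the future gguf_wrapper.py. Nothing here exists
# in the repo yet; it only illustrates "OpenAI-SDK wrapper pointing at a
# local Ollama/llama.cpp server" with the benchmark's sampling settings.

def build_request(model: str, prompt: str) -> dict:
    """Assemble chat-completion kwargs with the benchmark's sampling config."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1.0,
        "top_p": 0.95,
        # top_k is not part of the OpenAI API proper; Ollama's
        # OpenAI-compatible endpoint accepts it via extra_body.
        "extra_body": {"top_k": 64},
    }


if __name__ == "__main__":
    from openai import OpenAI

    # Ollama's default OpenAI-compatible endpoint; the api_key is unused
    # by Ollama but required by the SDK.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    resp = client.chat.completions.create(**build_request("lemmy", "2+2?"))
    print(resp.choices[0].message.content)
```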
## Quick start (manual / dev)
```bash
uv run eval.py --list-targets   # show the fleet
uv run eval.py --my-targets     # show targets owned by $(hostname)
uv run eval.py --target lemer --n-questions 1 --rounds 8
uv run eval.py --target lemer --loop 8   # 8 back-to-back advances
```
PEP 723 inline metadata declares all dependencies, so no venv setup is needed: `uv` creates one automatically, caches it, and pulls lighteval from our fork, which carries benchmark-stability patches (MMLU-Pro template fix, `--samples-start` offset).
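For reference, a PEP 723 script header has this general shape. The dependency list and fork URL below are assumptions for illustration, not the actual contents of `eval.py`:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = [
#   # assumed example entries; see eval.py for the real list
#   "pyyaml",
#   "lighteval @ git+https://github.com/LetheanNetwork/lighteval",
# ]
# ///
```

`uv run eval.py` reads this block, resolves the dependencies into a cached ephemeral environment, and runs the script in it.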
## Related

- `lthn/LEM-benchmarks`: aggregated results store
- `LetheanNetwork/lighteval`: benchmark-stability fork
- `lthn/lemer`, `lemma`, `lemmy`, `lemrd`: target models
## License

EUPL-1.2.