# Poor Paul's Benchmark Results
This dataset is the open community results database for Poor Paul's Benchmark (PPB), a benchmarking framework focused on cost-conscious AI infrastructure, local LLM inference, homelabs, and small business deployments.
The goal of this dataset is to provide an open, reproducible, and community-driven record of benchmark results across different hardware, models, runtimes, and settings.
## What this dataset contains
Each row is a normalized benchmark result submitted by a PPB user. Rows may include:
- Benchmark configuration, for example runner type, context size, batch size, and concurrency.
- Hardware metadata, for example CPU model, GPU model, VRAM, RAM, OS, and Python version.
- Performance metrics, for example throughput in tokens/sec, TTFT, ITL, and related latency summaries.
- Submission provenance fields, for example schema version, benchmark version, timestamps, and deterministic fingerprints.
The dataset is intended to support:
- Interactive filtering and sorting on Hugging Face.
- Downloading into pandas, DuckDB, Excel, or other analysis tools.
- Community dashboards such as poorpaul.dev.
- Long-term comparison of inference tradeoffs across real-world systems.
## Important policy: raw ledger, not final leaderboard
This dataset should be understood as a raw append-only submission ledger, not a pre-deduplicated leaderboard.
We intentionally preserve raw submissions for transparency and auditability. As a result:
- Duplicate submissions may exist.
- Accidental re-uploads may exist.
- Multiple runs from the same machine and same config may exist.
- Many users with nearly identical hardware may submit similar results.
This is expected behavior.
Consumers of this dataset should apply deduplication and aggregation rules appropriate to their use case.
## How duplicates are handled
PPB does not currently reject uploads at submission time. Instead, each uploaded row includes provenance and fingerprint fields to support downstream deduplication.
Relevant fields may include:
- `submission_id`: unique identifier for one upload batch.
- `row_id`: unique identifier for one row.
- `submitted_at`: upload timestamp in UTC.
- `schema_version`: normalized schema version.
- `benchmark_version`: PPB version or code version.
- `machine_fingerprint`: anonymous deterministic hash of stable hardware identity fields.
- `run_fingerprint`: deterministic hash of the benchmark identity, used to group repeated runs of the same setup.
- `result_fingerprint`: deterministic hash including measured metric values, used to detect exact duplicate result rows.
- `source_file_sha256`: hash of the uploaded normalized results file.
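PPB's exact hashing recipe is not documented in this card, so the following is only a sketch of how a deterministic fingerprint such as `run_fingerprint` could be derived: serialize the identity fields to canonical JSON (sorted keys, fixed separators) and take a SHA-256 digest. The field names in `run_identity` are illustrative, not the real schema.

```python
import hashlib
import json

def fingerprint(fields: dict) -> str:
    """Deterministic hash of a field dict: canonical JSON (sorted keys,
    fixed separators) encoded as UTF-8, then SHA-256 hex digest."""
    canonical = json.dumps(fields, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Identity fields for one benchmark setup (illustrative names and values).
run_identity = {
    "gpu_model": "RTX 3060 12GB",
    "model": "llama-3-8b-q4_k_m",
    "context_size": 4096,
    "batch_size": 512,
}
run_fp = fingerprint(run_identity)

# Key order does not matter: the same fields always hash to the same
# value, so repeated runs of the same setup group together.
assert run_fp == fingerprint(dict(reversed(list(run_identity.items()))))
```

Canonical serialization is the important design point: any whitespace or key-order variation would change the digest and silently break grouping.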
## Recommended interpretation
When analyzing this dataset:
- Use `result_fingerprint` to identify exact duplicate rows.
- Use `run_fingerprint` to group repeated runs of the same hardware + model + settings.
- Treat multiple submissions from different users with the same consumer hardware as useful replication, not as spam.
- Prefer median or percentile summaries over single best-case rows when building rankings.
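The rules above can be sketched with pandas. The toy rows and values below are hypothetical; only the `result_fingerprint`, `run_fingerprint`, and `throughput_tok_s` field names come from this card.

```python
import pandas as pd

# Toy ledger rows (hypothetical values).
rows = pd.DataFrame([
    {"run_fingerprint": "run-a", "result_fingerprint": "r1", "throughput_tok_s": 42.0},
    {"run_fingerprint": "run-a", "result_fingerprint": "r1", "throughput_tok_s": 42.0},  # exact re-upload
    {"run_fingerprint": "run-a", "result_fingerprint": "r2", "throughput_tok_s": 40.0},  # legitimate rerun
    {"run_fingerprint": "run-b", "result_fingerprint": "r3", "throughput_tok_s": 88.0},
])

# 1. Drop exact duplicate result rows.
deduped = rows.drop_duplicates(subset="result_fingerprint")

# 2. Summarize repeated runs of the same setup with the median,
#    rather than keeping only the single best row.
summary = deduped.groupby("run_fingerprint")["throughput_tok_s"].median()

print(summary.to_dict())  # {'run-a': 41.0, 'run-b': 88.0}
```

Note that the rerun (`r2`) survives deduplication by design: it is replication, not a duplicate, and the median absorbs it.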
## Schema notes
PPB normalizes uploaded results into a flat schema so they can be indexed by Hugging Face and easily consumed by spreadsheet tools and analytics systems.
Not every field applies to every runner.
For example:
- `llama-bench` rows may contain throughput-oriented fields such as `n_prompt`, `n_gen`, `backends`, and `throughput_tok_s`.
- `llama-server` rows may contain interactive serving metrics such as `avg_ttft_ms`, `p50_ttft_ms`, `p99_ttft_ms`, `avg_itl_ms`, `p50_itl_ms`, and `p99_itl_ms`.
Fields that do not apply to a specific row are set to null.
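Because non-applicable fields are null, a typical consumer filters to one runner type first and then drops the columns that never apply to it. A minimal sketch, assuming a `runner` discriminator column (the column name is an assumption, not confirmed by this card):

```python
import pandas as pd

# Mixed-runner rows; "runner" as the discriminator column is an assumption.
rows = pd.DataFrame([
    {"runner": "llama-bench", "throughput_tok_s": 42.0, "p50_ttft_ms": None},
    {"runner": "llama-server", "throughput_tok_s": None, "p50_ttft_ms": 180.0},
])

# Keep only interactive-serving rows, then drop fields that are
# entirely null for that runner type.
serving = rows[rows["runner"] == "llama-server"].dropna(axis="columns", how="all")

print(list(serving.columns))  # ['runner', 'p50_ttft_ms']
```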
## Data quality and trust model
This dataset mixes:
- Machine-generated benchmark metrics, which are the primary source of truth.
- User-submitted uploads, which may contain repeated runs, mislabeled display names, or unusual local setups.
PPB aims to maximize openness while preserving reproducibility. For that reason:
- Raw evidence is retained.
- Benchmark rows are normalized before upload.
- Provenance fields are included for future validation and curation.
- Final public dashboards may apply stricter filtering, deduplication, or quality scoring than this raw dataset.
## Privacy
This dataset is designed to avoid collecting personally identifying information by default.
Hardware identity is represented using normalized benchmark fields and anonymous fingerprints rather than direct personal identity. Optional public-facing fields such as submitter display names may be included if a contributor chooses to provide them.
Contributors should avoid uploading secrets, API keys, private file paths they do not want disclosed, or other personal information.
## Intended use
Good use cases:
- Comparing throughput and latency across hardware.
- Studying context-length degradation.
- Exploring tradeoffs between quantizations and runtime settings.
- Building open dashboards and analytics tools.
- Supporting hardware buying and deployment decisions.
Less suitable use cases:
- Treating raw rows as a final ranked leaderboard without deduplication.
- Using one-off best results as definitive truth without considering reruns and variance.
- Inferring financial cost, ownership cost, or electricity cost directly from this dataset.
## Loading the dataset
Example with pandas. Note that `pandas.read_json` does not expand `*` wildcards over HTTP, so the wildcard below must be replaced with the name of an actual results file in the repository:

```python
import pandas as pd

# Replace "results_*.jsonl" with a concrete filename from the data/
# directory; pd.read_json cannot expand wildcards in remote URLs.
df = pd.read_json(
    "https://huggingface.co/datasets/paulplee/ppb-results/resolve/main/data/results_*.jsonl",
    lines=True,
)
```

Alternatively, the `datasets` library can load every data file in the repository at once with `datasets.load_dataset("paulplee/ppb-results")`.