---
license: apache-2.0
language:
  - en
tags:
  - cuda
  - gpu-acceleration
  - code-generation
  - benchmark
  - llm-evaluation
  - sample
size_categories:
  - n<1K
configs:
  - config_name: tasks
    data_files: tasks.parquet
    default: true
---

# AccelEval — Reviewer Sample (small split only)

A representative sample of AccelEval, provided per the NeurIPS 2026 dataset-hosting guidelines for reviewers inspecting a >4 GB dataset.

⚠️ This dataset is uploaded anonymously to support a double-blind conference submission. Author information is intentionally omitted.

## Relationship to the full dataset

|  | Sample (this repo) | Full dataset |
|---|---|---|
| Tasks | All 42 tasks | All 42 tasks |
| Scales | `small` only | `small`, `medium`, `large` |
| Packed size | ~30 MB | ~12 GB |
| Manifest | `tasks.parquet`, `tasks.csv` (full) | identical |
| URL | this repo | `Accel-Eval/AccelEval-data` |

The two repos share an identical 42-row `tasks.parquet` (the manifest is not affected by which scales ship), so reviewers can browse every task's metadata, full `cpu_reference.c` source, and L1/L2/L3 `prompt_template.yaml` directly here without needing the larger tarballs.
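
For example, a single task's reference source can be pulled straight out of the manifest. A minimal sketch; the column names `task_id`, `cpu_reference_c`, and `prompt_template_yaml` are assumptions, so check the actual schema first:

```python
import pandas as pd

# Load the 42-row manifest directly from the Hub.
tasks = pd.read_parquet("hf://datasets/Accel-Eval/AccelEval-sample/tasks.parquet")
print(tasks.columns)  # inspect the real schema before relying on column names

# Column names below are assumptions based on the files the README describes.
row = tasks[tasks["task_id"] == "black_scholes"].iloc[0]
print(row["cpu_reference_c"])       # full CPU reference source
print(row["prompt_template_yaml"])  # L1/L2/L3 prompt template
```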

## How the sample was created

`small.tar.gz` is the unmodified small-scale tarball from the full dataset: same SHA256, same byte layout. Specifically, every task's `gen_data.py` is run with the parameters listed under `input_sizes.small` in `tasks/<task_id>/task.json`, using the deterministic seed (`seed=42`). The resulting `input.bin`, `expected_output.txt`, `cpu_time_ms.txt`, and `requests.txt` files for all 42 tasks are then packed under `tasks/<task_id>/data/small/...` into a single gzipped tar. The same code path produces `medium.tar.gz` and `large.tar.gz` in the full dataset; this sample simply omits those two tarballs.
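
A schematic of that generate-then-pack step. The real pipeline lives in the companion code repo; the `gen_data.py` command-line flags below are hypothetical, only the paths and the seed come from this README:

```python
import json
import subprocess
import tarfile
from pathlib import Path

# Run each task's generator with its small-scale parameters.
for task_dir in sorted(Path("tasks").iterdir()):
    task = json.loads((task_dir / "task.json").read_text())
    small_params = task["input_sizes"]["small"]  # per-task generation parameters
    # Hypothetical invocation: the actual gen_data.py interface may differ.
    subprocess.run(
        ["python3", str(task_dir / "gen_data.py"),
         "--params", json.dumps(small_params), "--seed", "42"],
        check=True,
    )

# Pack all 42 tasks' small-scale outputs into one gzipped tar.
with tarfile.open("small.tar.gz", "w:gz") as tar:
    for data_dir in sorted(Path("tasks").glob("*/data/small")):
        tar.add(str(data_dir))
```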

The `manifest.json` shipped here contains the SHA256 of `small.tar.gz`, which matches the entry for `small.tar.gz` in the full dataset's `manifest.json`.
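
A download can be checked against the manifest with something like the following; the key path into `manifest.json` is an assumption, so adjust it to the file's real layout:

```python
import hashlib
import json

# Stream the tarball through SHA256 so memory stays flat.
h = hashlib.sha256()
with open("small.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

with open("manifest.json") as f:
    manifest = json.load(f)

# The key lookup below is an assumption -- inspect manifest.json for its layout.
expected = manifest["small.tar.gz"]["sha256"]
print("OK" if h.hexdigest() == expected else "MISMATCH")
```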

## What's in this repo

| File | Role |
|---|---|
| `tasks.parquet` | The 42-task manifest (browseable above), 13 columns. Includes the full `cpu_reference.c` source and the LLM `prompt_template.yaml` for every task, so the benchmark is self-contained at the metadata level. |
| `tasks.csv` | Same content as `tasks.parquet`, plain text. |
| `small.tar.gz` | All 42 tasks at the small scale (~30 MB packed). |
| `manifest.json` | SHA256 of `small.tar.gz` plus per-task metadata. |

`small.tar.gz` expands to `tasks/<task_id>/data/small/{input.bin, expected_output.txt, cpu_time_ms.txt, requests.txt}`.
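
Once extracted, a task's fixture files can be read directly. A minimal sketch using one example task id; that `cpu_time_ms.txt` holds a single number is an assumption from its name:

```python
from pathlib import Path

# Any of the 42 task ids works here; black_scholes appears in the table below.
data = Path("tasks/black_scholes/data/small")

raw_input = (data / "input.bin").read_bytes()           # binary problem input
expected = (data / "expected_output.txt").read_text()   # reference output
cpu_ms = float((data / "cpu_time_ms.txt").read_text())  # assumed: single number
print(f"{len(raw_input)} input bytes, CPU baseline {cpu_ms} ms")
```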

## Quick load (manifest table)

```python
import pandas as pd

tasks = pd.read_parquet("hf://datasets/Accel-Eval/AccelEval-sample/tasks.parquet")
print(tasks.head())
```
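
Since the frontmatter declares `tasks` as the default config, the `datasets` loader should also work; this is the standard pattern for a parquet-backed config, not something verified against this repo:

```python
from datasets import load_dataset

# The YAML "configs" block maps the default "tasks" config to tasks.parquet;
# unsplit data_files are exposed as a single "train" split.
ds = load_dataset("Accel-Eval/AccelEval-sample", split="train")
print(ds[0].keys())
```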

## Pulling the small inputs

```bash
huggingface-cli download Accel-Eval/AccelEval-sample small.tar.gz \
    --repo-type dataset --local-dir .
tar xzf small.tar.gz   # populates tasks/<task>/data/small/
```

Or via the companion code repo's helper:

```bash
ACCELEVAL_HF_REPO=Accel-Eval/AccelEval-sample \
    python3 scripts/download_data.py small
```

## Categories (same as the full dataset)

| Category | Count | Examples |
|---|---|---|
| Operations research | 12 | `crew_pairing`, `network_rm_dp`, `pdlp`, `gittins_index`, ... |
| Graph algorithms | 7 | `bellman_ford`, `held_karp_tsp`, `gapbs_pagerank_pullgs`, ... |
| Spatial–temporal | 7 | `dbscan`, `dtw_distance`, `euclidean_distance_matrix`, ... |
| HPC reference | 7 | `hpcg_spmv_27pt`, `npb_lu_ssor_structured`, `miniWeather`, ... |
| Financial computing | 5 | `black_scholes`, `bonds_pricing`, `monte_carlo`, ... |
| Scientific simulation | 4 | `sph_cell_index`, `sph_forces`, `sph_position`, `hotspot_2d` |

The full row-by-row description is in `tasks.parquet`.
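
To reproduce the counts above from the manifest (the column name `category` is an assumption; check `tasks.columns` for the real one):

```python
import pandas as pd

tasks = pd.read_parquet("hf://datasets/Accel-Eval/AccelEval-sample/tasks.parquet")
# "category" is an assumed column name; adjust after inspecting tasks.columns.
print(tasks["category"].value_counts())
```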

## Determinism

Every task generator uses a fixed seed (`seed=42` in `gen_data.py`), so the tarball in this repo is byte-reproducible from the code repo's `tasks/<task>/gen_data.py`. The `manifest.json` ships SHA256 hashes so an independent regeneration can be verified bit-exactly.

## License

Released under Apache-2.0. Each individual task adapts code from a publicly available source (a GitHub repository or a published research paper); the original sources retain their respective licenses, and `tasks.parquet` records each source for attribution.