---
license: bigscience-openrail-m
size_categories:
  - 1K<n<10K
---

# rerandomization-benchmarks

Replication dataset for the benchmark and diagnostic analyses in
Goldstein, Jerzak, Kamat & Zhu (2025), “fastrerandomize: Fast Rerandomization Using Accelerated Computing”.


## Project & Paper Links


## What’s in this dataset?

The dataset contains simulation-based benchmark results used to compare:

- Different hardware backends
  - `M4-CPU` (Apple M4 CPU, via JAX/XLA)
  - `M4-GPU` (Apple M4 GPU / METAL)
  - `RTX4090` (NVIDIA CUDA GPU)
  - `BaseR` (non-accelerated R baseline)
  - `jumble` (the jumble package as an alternative rerandomization implementation)
- Different problem scales
  - Sample sizes: `n_units` ∈ {10, 100, 1000}
  - Covariate dimensions: `k_covars` ∈ {10, 100, 1000}
  - Monte Carlo draw budgets: `maxDraws` ∈ {1e5, 2e5}
  - Exact vs. approximate linear algebra: `approximate_inv` ∈ {TRUE, FALSE}
- Different rerandomization specifications
  - Acceptance probability targets (via `randomization_accept_prob`)
  - Use or non-use of fiducial intervals (`findFI`)

Each row corresponds to a particular Monte Carlo configuration and summarizes:

  1. Design & simulation settings (e.g., `n_units`, `k_covars`, `maxDraws`, `treatment_effect`)
  2. Performance metrics (e.g., runtime for randomization generation and testing)
  3. Statistical diagnostics (e.g., p-value behavior, coverage, FI width)
  4. Hardware & system metadata (CPU model, number of cores, OS, etc.)

These data were used to:

- Produce the runtime benchmark figures (CPU vs. GPU vs. baseline R / jumble)
- Compute speedup factors and time-reduction summaries
- Feed into macros such as `\FRRMaxSpeedupGPUvsBaselineOverall` and `\FRRGPUVsCPUTimeReductionDthousandPct`, which are then read from `./Figures/bench_macros.tex` in the paper.
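As an illustration of how such speedup factors can be computed from the timing columns, here is a minimal pandas sketch. The column names (`Hardware`, `n_units`, `t_GenerateRandomizations`) come from the schema documented below; the timings themselves are invented toy values, not numbers from the paper:

```python
import pandas as pd

# Toy stand-in for VaryNAndD_main.csv (real column names, invented timings).
df = pd.DataFrame({
    "Hardware": ["BaseR", "RTX4090", "BaseR", "RTX4090"],
    "n_units": [1000, 1000, 100, 100],
    "t_GenerateRandomizations": [120.0, 2.0, 10.0, 0.5],
})

# Mean generation time per hardware and problem size, reshaped wide.
wide = (df.groupby(["n_units", "Hardware"])["t_GenerateRandomizations"]
          .mean()
          .unstack("Hardware"))

# Speedup of the CUDA GPU over the non-accelerated R baseline.
speedup = wide["BaseR"] / wide["RTX4090"]
print(speedup)
```

The same pattern extends to `t_RandomizationTest` and to other hardware pairs.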

## Files & Structure

(Adjust this section to match exactly what you upload to Hugging Face; here is a suggested structure.)

- `VaryNAndD_main.csv` – Aggregated benchmark/simulation results across all configurations used in the paper.
- `VaryNAndD_main.parquet` (optional) – Parquet version of the same table (faster to load in many environments).
- `CODE/` (optional, if you choose to include):
  - `FastSRR_VaryNAndD.R`
  - `FastRR_PlotFigs.R`

  The exact R scripts used to generate the raw CSV files and figures.

## Main Columns (schema overview)

Below is an overview of the most important columns you will encounter in `VaryNAndD_main.*`.
Names are taken directly from the R code (especially the `res <- as.data.frame(cbind(...))` section in `FastSRR_VaryNAndD.R` and the subsequent processing in `FastRR_PlotFigs.R`).

### Core design variables

- `treatment_effect` – Constant treatment effect used in the simulation (e.g., 0.1).
- `SD_inherent` – Baseline SD of potential outcomes (`SD_inherent` in `GenerateCausalData`).
- `n_units` – Total number of experimental units.
- `k_covars` – Number of covariates.
- `maxDraws` – Maximum number of candidate randomizations drawn (e.g., 1e5, 2e5).
- `findFI` – Logical (TRUE/FALSE): whether fiducial intervals were computed.
- `approximate_inv` – Logical (TRUE/FALSE): whether approximate-inverse / stabilized linear algebra was used.
- `Hardware` – Hardware / implementation label, recoded in `FastRR_PlotFigs.R` to:
  - `"M4-CPU"` (was `"CPU"`)
  - `"M4-GPU"` (was `"METAL"`)
  - `"RTX4090"` (was `"NVIDIA"`)
  - `"jumble"` (was `"AltPackage"`)
  - `"BaseR"` (pure R baseline)
- `monte_i` – Monte Carlo replication index.

### Rerandomization configuration

- `prob_accept` – Target acceptance probability (`randomization_accept_prob`).
- `accept_prob` – Same or related acceptance-probability field (used within the plotting code).

### Randomization-test & FI summaries

These are typically aggregated across Monte Carlo replications and/or over covariate-dimension strata:

- `p_value` – Mean p-value across replications, by `k_covars` and acceptance probability.
- `p_value_se` – Standard error of the above p-value estimates.
- `min_p_value` – Average minimum achievable p-value (`1/(1 + n_accepted)`), reflecting how many accepted randomizations were available.
- `number_successes` – Average number of accepted randomizations (per configuration).
- `tau_hat_mean` – Mean estimated treatment effect across replications.
- `tau_hat_var` – Variance of the estimated treatment effect across replications.
- `FI_lower_vec`, `FI_upper_vec` – Mean lower/upper endpoints of fiducial intervals.
- `FI_width` – Median width of the fiducial interval (where available).
- `truth_covered` – Average indicator for whether the interval covered the true treatment effect.
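To make the relationship between `number_successes`, `min_p_value`, and `truth_covered` concrete, here is a small sketch with invented replication-level values (only the column names and the `1/(1 + n_accepted)` relationship come from the schema above):

```python
import pandas as pd

# Toy replication-level values (invented); column names follow the schema.
df = pd.DataFrame({
    "number_successes": [99, 199, 49],   # accepted randomizations per run
    "truth_covered": [1, 1, 0],          # did the interval cover the truth?
})

# Minimum achievable p-value for a randomization test with
# n_accepted accepted randomizations: 1 / (1 + n_accepted).
df["min_p_value"] = 1.0 / (1.0 + df["number_successes"])

# Average coverage indicator across replications.
coverage = df["truth_covered"].mean()
```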

### Estimator-selection diagnostics (acceptance-prob “minimization”)

These summarize how well different strategies for choosing the optimal acceptance probability perform:

- `colMeans_mean_p_value_matrix`, `colMeans_median_p_value_matrix`, `colMeans_modal_p_value_matrix` – Average p-value summaries used to define estimators of the “best” acceptance probability.
- `bias_select_p_via_mean`, `rmse_select_p_via_mean` – Bias and RMSE when selecting the acceptance probability based on the mean p-value.
- `bias_select_p_via_median`, `rmse_select_p_via_median` – Bias and RMSE when selecting the acceptance probability based on the median p-value.
- `bias_select_p_via_mode`, `rmse_select_p_via_mode` – Bias and RMSE when selecting the acceptance probability based on the modal p-value.
- `bias_select_p_via_baseline`, `rmse_select_p_via_baseline` – Bias and RMSE of a naive baseline strategy (e.g., choosing an acceptance probability at random), used as a comparison.
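A stylized sketch of what “selecting the acceptance probability via the mean p-value” can look like. The candidate grid, the p-value summaries, and the oracle optimum below are all invented for illustration; the exact selection rule used in the paper lives in the R scripts:

```python
import numpy as np

# Hypothetical grid of candidate acceptance probabilities and the
# corresponding column means of the p-value matrix (toy numbers).
accept_probs  = np.array([0.001, 0.01, 0.1, 0.5])
mean_p_values = np.array([0.30, 0.12, 0.18, 0.25])

# Select the candidate whose mean p-value is smallest, then compare the
# choice against an assumed oracle optimum to form a bias-style diagnostic.
chosen = accept_probs[np.argmin(mean_p_values)]
oracle = 0.01  # assumed "true best" acceptance probability for this toy
bias = chosen - oracle
```

The median- and mode-based variants substitute the corresponding p-value summary for `mean_p_values`.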

### Timing and hardware metadata

Timing quantities are used to produce the benchmark plots in the paper:

- `t_GenerateRandomizations` – Time (seconds) spent generating randomization pools.
- `t_RandomizationTest` – Time (seconds) spent on randomization-based inference.
- `randtest_time` – Duplicated / convenience version of `t_RandomizationTest` in some contexts.
- `sysname`, `machine`, `hardware_version` – OS and machine-level metadata (from `Sys.info()`).
- `nCores` – Number of CPU cores from `benchmarkme::get_cpu()`.
- `cpuModel` – CPU model name from `benchmarkme::get_cpu()`.

Note: Because the scripts were developed iteratively, some columns may appear duplicated or have slightly redundant names (e.g., multiple `randtest_time`-like fields). For replicating the paper’s figures, these are harmless; users may drop redundant columns as needed.


## How to use the dataset

### In Python (via `datasets`)

```python
from datasets import load_dataset

ds = load_dataset("YOUR_USERNAME/rerandomization-benchmarks", split="train")
print(ds)
print(ds.column_names)
```

Or directly with pandas:

```python
import pandas as pd

df = pd.read_csv("VaryNAndD_main.csv")
df.head()
```

### In R

```r
library(data.table)

bench <- fread("VaryNAndD_main.csv")
str(bench)

# Example: reproduce summaries by hardware and problem size
bench[, .(
  mean_t_generate = mean(t_GenerateRandomizations, na.rm = TRUE),
  mean_t_test     = mean(t_RandomizationTest, na.rm = TRUE)
), by = .(Hardware, n_units, k_covars, maxDraws, approximate_inv)]
```
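For Python users, the same per-configuration summary can be sketched in pandas. This is shown on a toy data frame with invented timings rather than the real CSV; the grouping keys are trimmed to two for brevity:

```python
import pandas as pd

# Toy stand-in for the loaded benchmark table (invented timings).
bench = pd.DataFrame({
    "Hardware": ["M4-GPU", "M4-GPU", "BaseR"],
    "n_units": [1000, 1000, 1000],
    "t_GenerateRandomizations": [1.0, 3.0, 50.0],
    "t_RandomizationTest": [0.5, 1.5, 20.0],
})

# Mirrors the data.table summary: mean times by configuration.
summary = (bench
           .groupby(["Hardware", "n_units"], as_index=False)
           .agg(mean_t_generate=("t_GenerateRandomizations", "mean"),
                mean_t_test=("t_RandomizationTest", "mean")))
```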

You can then:

- Recreate runtime comparisons across hardware platforms.
- Explore how acceptance probability, dimension, and sample size interact.
- Use the timing information as inputs for your own design/planning calculations.

## Citation

If you use this dataset, please cite the main paper:

```bibtex
@misc{goldstein2025fastrerandomizefastrerandomizationusing,
      title        = {fastrerandomize: Fast Rerandomization Using Accelerated Computing},
      author       = {Rebecca Goldstein and Connor T. Jerzak and Aniket Kamat and Fucheng Warren Zhu},
      year         = {2025},
      eprint       = {2501.07642},
      archivePrefix= {arXiv},
      primaryClass = {stat.CO},
      url          = {https://arxiv.org/abs/2501.07642}
}
```

## Contact

For questions about the paper, software, or dataset:

- Corresponding author: Connor T. Jerzak (connor.jerzak@austin.utexas.edu)
- Issues & contributions: please use the GitHub repository issues page for `fastrerandomize`.