---
license: bigscience-openrail-m
size_categories:
- 1K<n<10K
tags:
- synthetic
---

# rerandomization-benchmarks

Replication dataset for the benchmark and diagnostic analyses in  
**Goldstein, Jerzak, Kamat & Zhu (2025), _“fastrerandomize: Fast Rerandomization Using Accelerated Computing”_.**

---

## Project & Paper Links

- **Paper (preprint):** <https://arxiv.org/abs/2501.07642>  
- **Software repository:** <https://github.com/cjerzak/fastrerandomize-software>  
- **Package name:** `fastrerandomize` (R)

---

## What’s in this dataset?

The dataset contains **simulation-based benchmark results** used to compare:

- Different **hardware backends**  
  - `M4-CPU` (Apple M4 CPU, via JAX/XLA)  
  - `M4-GPU` (Apple M4 GPU / METAL)  
  - `RTX4090` (NVIDIA CUDA GPU)  
  - `BaseR` (non-accelerated R baseline)  
  - `jumble` (the `jumble` package as an alternative rerandomization implementation)

- Different **problem scales**  
  - Sample sizes: `n_units ∈ {10, 100, 1000}`  
  - Covariate dimensions: `k_covars ∈ {10, 100, 1000}`  
  - Monte Carlo draw budgets: `maxDraws ∈ {1e5, 2e5}`  
  - Exact vs approximate linear algebra: `approximate_inv ∈ {TRUE, FALSE}`  

- Different **rerandomization specifications**  
  - Acceptance probability targets (via `randomization_accept_prob`)  
  - Use or non-use of fiducial intervals (`findFI`)  

Each row corresponds to a particular Monte Carlo configuration and summarizes:

1. **Design & simulation settings** (e.g., `n_units`, `k_covars`, `maxDraws`, `treatment_effect`)  
2. **Performance metrics** (e.g., runtime for randomization generation and testing)  
3. **Statistical diagnostics** (e.g., p-value behavior, coverage, FI width)  
4. **Hardware & system metadata** (CPU model, number of cores, OS, etc.)

These data were used to:

- Produce the **runtime benchmark figures** (CPU vs GPU vs baseline R / `jumble`)  
- Compute **speedup factors** and **time-reduction summaries**  
- Feed into macros such as `\FRRMaxSpeedupGPUvsBaselineOverall`, `\FRRGPUVsCPUTimeReductionDthousandPct`, etc., which are then read from `./Figures/bench_macros.tex` in the paper.

---

## Files & Structure

*(Suggested structure; adjust this section to match the files actually uploaded to Hugging Face.)*

- `VaryNAndD_main.csv`  
  Aggregated benchmark/simulation results across all configurations used in the paper.

- `VaryNAndD_main.parquet` (optional)  
  Parquet version of the same table (faster to load in many environments).

---

## Main Columns (schema overview)

Below is an overview of the most important columns you will encounter in `VaryNAndD_main.*`.  

### Core design variables

- `treatment_effect` – Constant treatment effect used in the simulation (e.g., `0.1`).  
- `SD_inherent` – Baseline SD of potential outcomes (`SD_inherent` in `GenerateCausalData`).  
- `n_units` – Total number of experimental units.  
- `k_covars` – Number of covariates.
- `prob_accept` – Target acceptance probability (`randomization_accept_prob`).  
- `maxDraws` – Maximum number of candidate randomizations drawn (e.g., `1e5`, `2e5`).  
- `findFI` – Logical (`TRUE`/`FALSE`): whether fiducial intervals were computed.  
- `approximate_inv` – Logical (`TRUE`/`FALSE`): whether approximate inverse / stabilized linear algebra was used.  
- `Hardware` – Hardware / implementation label, recoded in `FastRR_PlotFigs.R` to:  
  - `"M4-CPU"`  (was `"CPU"`)  
  - `"M4-GPU"`  (was `"METAL"`)  
  - `"RTX4090"` (was `"NVIDIA"`)  
  - `"jumble"`  (was `"AltPackage"`)  
  - `"BaseR"`   (pure R baseline)  
- `monte_i` – Monte Carlo replication index.
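
If the raw `Hardware` labels ever appear in a downloaded table, the recoding performed in `FastRR_PlotFigs.R` can be restated as a simple lookup. This is a sketch; the R script is authoritative:

```python
# Recoding from raw Hardware labels to the display names used in the
# paper's figures (mirrors FastRR_PlotFigs.R; "BaseR" maps to itself).
HARDWARE_LABELS = {
    "CPU": "M4-CPU",
    "METAL": "M4-GPU",
    "NVIDIA": "RTX4090",
    "AltPackage": "jumble",
    "BaseR": "BaseR",
}

raw = ["METAL", "NVIDIA", "BaseR"]
print([HARDWARE_LABELS[h] for h in raw])  # ['M4-GPU', 'RTX4090', 'BaseR']
```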

### Timing and hardware metadata

Timing quantities are used to produce the benchmark plots in the paper:

- `t_GenerateRandomizations` – Time (seconds) spent generating randomization pools.  
- `t_RandomizationTest` – Time (seconds) spent on randomization-based inference.  
- `randtest_time` – Duplicated / convenience version of `t_RandomizationTest` in some contexts.  
- `sysname`, `machine`, `hardware_version` – OS and machine-level metadata (`Sys.info()`).  
- `nCores` – Number of CPU cores from `benchmarkme::get_cpu()`.  
- `cpuModel` – CPU model name from `benchmarkme::get_cpu()`.

---

## How to use the dataset

### In Python (via `datasets`)

```python
from datasets import load_dataset

ds = load_dataset("YOUR_USERNAME/rerandomization-benchmarks", split="train")
print(ds)
print(ds.column_names)
```

Or directly with `pandas`:

```python
import pandas as pd

df = pd.read_csv("VaryNAndD_main.csv")
df.head()
```

### In R

```r
library(data.table)

bench <- fread("VaryNAndD_main.csv")
str(bench)

# Example: reproduce summaries by hardware and problem size
bench[, .(
  mean_t_generate = mean(t_GenerateRandomizations, na.rm = TRUE),
  mean_t_test     = mean(t_RandomizationTest, na.rm = TRUE)
), by = .(Hardware, n_units, k_covars, maxDraws, approximate_inv)]
```

You can then:

* Recreate runtime comparisons across hardware platforms.
* Explore how acceptance probability, dimension, and sample size interact.
* Use the timing information as inputs for your own design/planning calculations.
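
For instance, a speedup factor relative to the pure-R baseline can be computed from the two timing columns. A minimal pandas sketch using synthetic stand-in rows (the numbers are illustrative, not results from the paper):

```python
import pandas as pd

# Synthetic stand-in rows (illustrative values only); the real table is
# VaryNAndD_main.csv with the schema documented above.
bench = pd.DataFrame({
    "Hardware": ["BaseR", "BaseR", "M4-GPU", "M4-GPU", "RTX4090", "RTX4090"],
    "t_GenerateRandomizations": [40.0, 44.0, 2.0, 2.2, 1.0, 1.1],
    "t_RandomizationTest": [10.0, 12.0, 0.5, 0.6, 0.3, 0.3],
})

# Total runtime per replication, then mean runtime by hardware backend
bench["t_total"] = bench["t_GenerateRandomizations"] + bench["t_RandomizationTest"]
mean_total = bench.groupby("Hardware")["t_total"].mean()

# Speedup factor of each backend relative to the pure-R baseline
speedup = mean_total["BaseR"] / mean_total
print(speedup.round(1))
```

With the real data, replace the synthetic frame with `pd.read_csv("VaryNAndD_main.csv")` and group additionally by `n_units`, `k_covars`, `maxDraws`, and `approximate_inv` to match the paper's summaries.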

---

## Citation

If you use this dataset, please cite the paper:

```bibtex
@misc{goldstein2025fastrerandomizefastrerandomizationusing,
      title        = {fastrerandomize: Fast Rerandomization Using Accelerated Computing},
      author       = {Rebecca Goldstein and Connor T. Jerzak and Aniket Kamat and Fucheng Warren Zhu},
      year         = {2025},
      eprint       = {2501.07642},
      archivePrefix= {arXiv},
      primaryClass = {stat.CO},
      url          = {https://arxiv.org/abs/2501.07642}
}
```

---

## Contact

For questions about the paper, software, or dataset:

* Corresponding author: **Connor T. Jerzak** – [connor.jerzak@austin.utexas.edu](mailto:connor.jerzak@austin.utexas.edu)
* Issues & contributions: please use the GitHub repository issues page for `fastrerandomize`.

---