v2.0 Release: Added C++ Source, Compact Configs, and Improved Schedule Schema (#2)

Commit df7dcc696ce2c6bdf61f38333b49933e8355ffe4: new schedule schema, add generators, add compact configs

Files changed:
- .gitattributes +1 -0
- README.md +176 -87
- data/{looperset_full.jsonl.gz → full/looperset_v2_full.jsonl.gz} +2 -2
- data/{pact25/looperset_pact25_train.jsonl.gz → full/looperset_v2_full_compact.jsonl.gz} +2 -2
- data/pact25/looperset_v2_pact_train.jsonl.gz +3 -0
- data/pact25/looperset_v2_pact_train_compact.jsonl.gz +3 -0
- data/pact25/{looperset_pact25_validation.jsonl.gz → looperset_v2_pact_validation.jsonl.gz} +2 -2
- data/pact25/looperset_v2_pact_validation_compact.jsonl.gz +3 -0
- data/source/looperset_v2_generators.tar.gz +3 -0
.gitattributes
CHANGED

@@ -58,3 +58,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
 *.jsonl.gz filter=lfs diff=lfs merge=lfs -text
+*.tar.gz filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED

Removed lines (old v1.x README), shown per hunk; lines cut off in the page capture are left as captured:

@@ -1,4 +1,3 @@
-

@@ -12,66 +11,117 @@ task_categories:
-    path: "data/
-    path: "data/
-- config_name:
-  data_files:
-    path: "data/
-[](https://
-`LOOPerSet` is a large-scale public dataset for machine learning-based compiler optimization. It provides labeled performance data for training and evaluating models that predict the effects of code transformations.
-The dataset contains over **28 million labeled data points** derived from approximately **220,000 unique, synthetically generated loop nests**. Each data point consists of a program, a specific sequence of applied loop transformations (e.g., fusion, tiling, skewing, parallelization), and its resulting ground-truth performance measurement.
-Transformation sequences were generated using a polyhedral compilation framework to ensure they were legal and semantics-preserving. `LOOPerSet` was originally created to train the cost model for the [LOOPer autoscheduler](https://tbd) (PACT '25). For a full description of the generation process and a diversity analysis, please see our [companion paper on arXiv](https://arxiv.org/abs/xxxx.xxxxx).
-###
-*
-The dataset is provided in two configurations
-* **`pact25`** split: A 10-million-point version used to train the LOOPer cost model, pre-split into `train` (90%) and `validation` (10%) sets for reproducibility. This

@@ -86,13 +136,9 @@ pip install huggingface-hub
-The dataset is available in
-| File | Compressed Size | Decompressed Size |
-| ----------------------------- | --------------- | ----------------- |
-| `looperset_full.jsonl.gz` | ~3.7 GB | ~34 GB |
-| `looperset_pact25_train.jsonl.gz` | ~1.2 GB | ~22 GB |
-| `looperset_pact25_validation.jsonl.gz` | ~146 MB | ~5.3 GB |

@@ -102,24 +148,24 @@ import os
-# --- Option 1: Download the
-filename="data/
-print(f"Full dataset downloaded to: {
-# --- Option 2: Download the PACT '25 splits ---
-filename="data/pact25/
-filename="data/pact25/

@@ -245,7 +291,7 @@
-best_schedule_info = schedule['

@@ -283,24 +329,22 @@ Each row in the dataset represents a single synthetic program and contains all o
-"
-"comp01": {
-"...": "..."
-}
-{

@@ -312,10 +356,13 @@
-* `program_name` (string): A unique identifier for the synthetic program (e.g., "function684979").

@@ -339,37 +386,67 @@ This object contains all the static information about the source program.
-*
-* `
-* `parallelized_dim` (string): The name of the loop that was parallelized (if applied).
-* `transformations_list` (list): Each element in the list is a vector representing one affine transformation (interchange, reversal, or skewing). The order of vectors defines the order of application.
-<details>
-<summary><b>`transformations_list` format</b></summary>
-Each element in the list is a fixed-length (16-element) integer vector representing one affine transformation. The order of vectors in the list determines the order of application.
-* `1`: Loop Interchange
-* `2`: Loop Reversal
-* `3`: Loop Skewing
-* `vector[3]` specifies the loop level (as an integer index) to be reversed. Other elements are unused.

@@ -395,7 +472,7 @@
-@misc{

@@ -409,15 +486,15 @@
-@

The updated README content (added and context lines) follows:
---
pretty_name: "LOOPerSet"
license: "cc-by-4.0"
size_categories:
- 10M<n<100M
configs:
- config_name: full
  data_files:
  - split: train
    path: "data/full/looperset_v2_full.jsonl.gz"

- config_name: full_compact
  data_files:
  - split: train
    path: "data/full/looperset_v2_full_compact.jsonl.gz"

- config_name: pact25_split
  data_files:
  - split: train
    path: "data/pact25/looperset_v2_pact_train.jsonl.gz"
  - split: validation
    path: "data/pact25/looperset_v2_pact_validation.jsonl.gz"

- config_name: pact25_split_compact
  data_files:
  - split: train
    path: "data/pact25/looperset_v2_pact_train_compact.jsonl.gz"
  - split: validation
    path: "data/pact25/looperset_v2_pact_validation_compact.jsonl.gz"
---
# LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Optimization

<div align="center">

[](https://arxiv.org/abs/2510.10209)
[](https://ieeexplore.ieee.org/document/11282943)
[](https://creativecommons.org/licenses/by/4.0/)

</div>

## Dataset at a Glance

`LOOPerSet` is a corpus of 28 million labeled compilation traces designed for machine learning research in compilers and systems. It maps synthetically generated loop nests and complex optimization sequences to ground-truth execution times measured on physical hardware. Transformation sequences were generated using a polyhedral compilation framework to ensure they were legal and semantics-preserving.

`LOOPerSet` was originally created to train the cost model for the [LOOPer autoscheduler](https://ieeexplore.ieee.org/document/11282943) (PACT '25). For a full description of the generation process and a diversity analysis, please see our [companion paper on arXiv](https://arxiv.org/abs/2510.10209).
### What is inside?

Each data point represents a **(Program, Schedule) $\rightarrow$ Performance** tuple containing:

* **Source Code & IR:** Raw Tiramisu (C++) generator code, lowered Halide IR, and ISL ASTs.
* **Structured Features:** JSON-based representation of the program structure (loop hierarchy, memory access patterns, arithmetic expressions) for feature engineering.
* **Optimization Schedules:** Sequences of code transformations (tiling, skewing, fusion, interchange, unrolling, parallelization, etc.) and the specific API commands used to apply them.
* **Ground Truth:** Execution time (ms) measured over many runs on physical hardware.
### Key Research Tasks

By exposing both low-level source code and high-level structural features, the dataset can be used for several research applications in machine learning and compilers:

* **Performance Prediction**: The dataset's primary use case. Train a model to map a program's features and a candidate optimization schedule to a predicted performance value (e.g., execution time or speedup). This forms the core of a learned cost model for guiding compiler optimization.
* **Schedule Ranking**: A learning-to-rank task where a model learns to order a set of candidate schedules for a given program based on their relative performance.
* **Compiler Heuristic Discovery**: A data analysis task to discover new optimization heuristics by finding correlations between program features and the effectiveness of transformation sequences.
* **Program Representation Learning**: Develop and evaluate novel methods for featurizing programs, computer code, and transformation schedules, such as learning dense vector embeddings.
* **Transfer Learning**: A general-purpose cost model can be pre-trained on `LOOPerSet` and then fine-tuned on a much smaller, target-specific dataset, significantly reducing the data collection cost for new architectures.
### Dataset Configurations

The dataset is provided in two structural variants (**Standard** and **Compact**) across two split configurations (**Full** and **PACT '25**), plus a supplementary source code archive.

#### Variants

* **Standard**: Contains complete program information including raw C++ code, lowered IRs, and compile commands. Ideal for source code analysis and NLP tasks.
* **Compact**: Optimized for speed and low memory usage. It retains all features needed for training performance models but excludes raw code strings and intermediate representations. Recommended for training cost models and performance prediction.

#### Splits

* **`full`**: The complete ~28 million point dataset (composed of ~220k programs), available as a single `train` split.
* **`pact25`**: A 10-million-point version used to train the LOOPer cost model, pre-split into `train` (90%) and `validation` (10%) sets for reproducibility. This is a subset of the full dataset.

#### Generators Archive

A compressed `tar.gz` containing the raw ~220k Tiramisu C++ generator files (`.cpp`). These match the `program_name` keys in the dataset and are useful for static analysis or if you wish to re-compile/re-execute the programs yourself.
#### Directory Structure

```text
data/
├── full/
│   ├── looperset_v2_full.jsonl.gz                    (Standard)
│   └── looperset_v2_full_compact.jsonl.gz            (Compact)
│
├── pact25/
│   ├── looperset_v2_pact_train.jsonl.gz              (Standard)
│   ├── looperset_v2_pact_train_compact.jsonl.gz      (Compact)
│   ├── looperset_v2_pact_validation.jsonl.gz         (Standard)
│   └── looperset_v2_pact_validation_compact.jsonl.gz (Compact)
│
└── source/
    └── looperset_v2_generators.tar.gz                (Raw C++ Generators)
```
#### File Sizes

| Configuration | File Path | Compressed Size | Decompressed Size |
| :--- | :--- | :--- | :--- |
| **Full (Standard)** | `data/full/looperset_v2_full.jsonl.gz` | 6.0 GB | 84 GB |
| **Full (Compact)** | `data/full/looperset_v2_full_compact.jsonl.gz` | 3.1 GB | 21 GB |
| **PACT Train (Standard)** | `data/pact25/looperset_v2_pact_train.jsonl.gz` | 2.0 GB | 28 GB |
| **PACT Train (Compact)** | `data/pact25/looperset_v2_pact_train_compact.jsonl.gz` | 1.1 GB | 6.8 GB |
| **PACT Val (Standard)** | `data/pact25/looperset_v2_pact_validation.jsonl.gz` | 236 MB | 3.3 GB |
| **PACT Val (Compact)** | `data/pact25/looperset_v2_pact_validation_compact.jsonl.gz` | 121 MB | 818 MB |
| **Generators Source** | `data/source/looperset_v2_generators.tar.gz` | 34 MB | 339 MB |
## How to Use

The dataset files are stored in `.jsonl.gz` format (gzipped JSON Lines), where each line is a complete JSON object representing one program.

Below we provide a simple method to download the files and stream the data in Python.

### Installation
### Step 1: Download the Data Files

The dataset is available in the configurations described in the previous section (Standard and Compact variants of the `full` and `pact25` splits, plus the generators archive).

First, use the `hf_hub_download` function to fetch the dataset files you need.
```python
REPO_ID = "Mascinissa/LOOPerSet"

# --- Option 1: Download the Full Compact (Recommended for Speed) ---
full_compact_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/full/looperset_v2_full_compact.jsonl.gz",
    repo_type="dataset",
)
print(f"Full Compact dataset downloaded to: {full_compact_path}")

# --- Option 2: Download the Standard PACT '25 splits ---
pact25_train_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/pact25/looperset_v2_pact_train.jsonl.gz",
    repo_type="dataset",
)
pact25_validation_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/pact25/looperset_v2_pact_validation.jsonl.gz",
    repo_type="dataset",
)
print(f"PACT'25 train split downloaded to: {pact25_train_path}")
```
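The later "Step 2" example in this README (mostly collapsed in this diff) iterates over programs streamed from the downloaded file. A minimal, self-contained sketch of such a streaming reader, our own illustration rather than the README's exact code, writes a tiny two-program file and streams it back:

```python
import gzip
import json
import os
import tempfile

# Hypothetical helper: stream one JSON object per line from a .jsonl.gz file
# without loading the whole dataset into memory.
def stream_programs(path):
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Demo payload: two minimal program records (invented for illustration).
payload = b"".join(
    (json.dumps({"program_name": name}) + "\n").encode()
    for name in ("function0", "function1")
)
with tempfile.NamedTemporaryFile(suffix=".jsonl.gz", delete=False) as tmp:
    tmp.write(gzip.compress(payload))

names = [p["program_name"] for p in stream_programs(tmp.name)]
print(names)  # → ['function0', 'function1']
os.unlink(tmp.name)
```

The same generator works unchanged on the real downloaded files; only the path differs.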
```python
    if current_time < min_time:
        min_time = current_time
        best_schedule_info = schedule['schedule_str']

speedup = initial_time / min_time if min_time > 0 else float('inf')
```
```json
    "computations": { "...": "..." },
    "buffers": { "...": "..." }
  },
  "Tiramisu_cpp": "// raw tiramisu generator source code ...",
  "initial_execution_time": 1393.751,
  "schedules_list": [
    {
      "transformations_list": [
        {"type": "interchange", "loop_levels": [0,1], "parameters": [], "computations": ["comp00"]},
        {"type": "tiling", "loop_levels": [1,2], "parameters": [32,32], "computations": ["comp01","comp02"]}
      ],
      "schedule_str": "I(L0,L1,comps=[comp00])|T2(L1,L2,32,32,comps=[comp01,comp02])",
      "legacy_schedule_str": "I({C0},L0,L1)T2({C1,C2},L2,L3,32,32)...",
      "ISL_AST": "...",
      "Halide_IR": "// lowered Halide IR ...",
      "Tiramisu_transform_commands": "comp01.tile(...); comp00.interchange(...); ...",
      "execution_times": [451.234, 465.112, 458.543, ...]
    },
    { /* ... another schedule object ... */ }
  ]
}
```
### Top-Level Fields

* `program_name` (string): A unique identifier for the synthetic program (e.g., "function684979"). This name is also used to locate the corresponding generator file in the source archive: `<program_name>_generator.cpp`.
* `program_annotation` (dict): A detailed, structured representation of the original, untransformed program. This serves as the primary source for program feature engineering.
* `Tiramisu_cpp` (string): Raw Tiramisu generator C++ source code of the program before any schedule transformations. **(Excluded in Compact version)**
* `initial_execution_time` (float): The median execution time (in ms) of the program before any optimizations.
* `schedules_list` (list of dicts): A list of all optimization sequences explored for this program. Each dictionary in the list details a unique schedule and its performance.
* `exploration_trace` (dict): Internal search logs. **(Excluded in Compact version)**

---
Each element in this list represents one complete optimization schedule applied to the program.

* `execution_times` (list of float): A list of 30 raw execution time measurements (in ms) for this specific schedule. The ground-truth label for ML models is typically derived from this list (e.g., by taking the median).
* `transformations_list` (list of dicts): A structured list where each element describes a specific transformation step (see format below).
* `schedule_str` (string): A human-readable summary string of the transformations applied in this schedule (see format below).
* `legacy_schedule_str` (string): The legacy schedule string found in older versions of the dataset. **(Excluded in Compact version)**
* `ISL_AST` (string): An ISL (Integer Set Library) abstract syntax tree representing the loop nest after the transformation is applied. **(Excluded in Compact version)**
* `Halide_IR` (string): Generated/lowered Halide IR after applying the transformations. **(Excluded in Compact version)**
* `Tiramisu_transform_commands` (string): The actual Tiramisu C++ API commands used to apply this schedule. **(Excluded in Compact version)**
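As a worked example of turning these fields into training targets, here is a minimal sketch; the field names follow the schema above, while the numeric values and the single-schedule record are invented for illustration:

```python
import statistics

# Minimal sketch: derive the ground-truth label for a schedule as the median
# of its raw execution_times measurements, plus its speedup relative to the
# untransformed program. Field names match the dataset schema; values are fake.
program = {
    "initial_execution_time": 1393.751,
    "schedules_list": [
        {
            "schedule_str": "I(L0,L1,comps=[comp00])",
            "execution_times": [451.234, 465.112, 458.543],
        }
    ],
}

for sched in program["schedules_list"]:
    label = statistics.median(sched["execution_times"])  # label in ms
    speedup = program["initial_execution_time"] / label
    print(sched["schedule_str"], round(label, 3), round(speedup, 2))
    # → I(L0,L1,comps=[comp00]) 458.543 3.04
```

In the real data each schedule has 30 measurements rather than three, but the labeling logic is identical.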
#### Transformation Object Format (`transformations_list`)

Each item in the `transformations_list` is a dictionary describing a single step:

```json
{
  "type": "String",          // e.g., "skewing", "interchange", "tiling", etc.
  "loop_levels": [Integers], // List of loop levels involved (e.g., [0,1])
  "parameters": [Integers],  // Numeric parameters (tiling factors, skewing coefficients)
  "computations": ["String"] // List of computation IDs affected
}
```
#### Schedule String Format (`schedule_str`)

A schedule is represented as a pipe-separated list of transformations:

`<T1>|<T2>|<T3>|...`

Supported transformations:

- `S(LX,LY,v1,v2,comps=[...])`: skewing of loop levels `LX` and `LY` with factors `v1`, `v2`
- `I(LX,LY,comps=[...])`: interchange of loop levels `LX` and `LY`
- `R(LX,comps=[...])`: reversal of loop level `LX`
- `P(LX,comps=[...])`: parallelization of loop level `LX`
- `T2(LX,LY,v1,v2,comps=[...])`: 2D tiling of loop levels `LX`,`LY` with factors `v1`,`v2`
- `T3(LX,LY,LZ,v1,v2,v3,comps=[...])`: 3D tiling of loop levels `LX`,`LY`,`LZ` with factors `v1`,`v2`,`v3`
- `U(LX,v,comps=[...])`: unrolling of loop level `LX` with factor `v`
- `F(LX,comps=[...])`: fusion at loop level `LX` for the computations listed in `comps`
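Under this grammar, a schedule string can be split mechanically into tokens. The following parsing sketch is our own illustration (the regex and function are not part of the dataset tooling), under the assumption that loop levels, factors, and computation IDs never contain `|`, `(`, `)`, or `]`:

```python
import re

# Hypothetical parser for the pipe-separated schedule_str format above.
# Captures the opcode (I, R, P, S, U, F, T2, T3), the positional arguments,
# and the comps=[...] computation list of each transformation token.
TOKEN = re.compile(r"(?P<op>[A-Z]\d?)\((?P<args>[^)]*?),?comps=\[(?P<comps>[^\]]*)\]\)")

def parse_schedule(schedule_str):
    steps = []
    for tok in schedule_str.split("|"):
        m = TOKEN.fullmatch(tok)
        if m is None:
            raise ValueError(f"unrecognized transformation: {tok!r}")
        args = [a for a in m.group("args").split(",") if a]
        steps.append((m.group("op"), args, m.group("comps").split(",")))
    return steps

print(parse_schedule("I(L0,L1,comps=[comp00])|T2(L1,L2,32,32,comps=[comp01,comp02])"))
```

For the example string this yields an interchange step over `L0`,`L1` for `comp00` followed by a 2D tiling step over `L1`,`L2` with factors 32,32 for `comp01` and `comp02`.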
## C++ Source Code Archive

This repository also includes a compressed archive containing the raw Tiramisu generator sources for all programs (`data/source/looperset_v2_generators.tar.gz`). These are provided to enable researchers to perform static program analysis, reproduce results by re-executing schedules on different hardware architectures, or extend the dataset by collecting completely new schedules.

* **Content:** Contains ~220,000 `.cpp` files.
* **Filename Format:** `<program_name>_generator.cpp` (e.g., `function12345_generator.cpp`).
* **Relation to JSON:** The content of these files is identical to the string found in the `Tiramisu_cpp` field within the JSON dataset. The archive is provided purely for convenience.

**How to extract specific files in Python:**

```python
import tarfile

# source_code_path is the local path of the downloaded archive
# (e.g., the result of hf_hub_download for data/source/looperset_v2_generators.tar.gz).
with tarfile.open(source_code_path, "r:gz") as tar:
    # Example: extract function12345_generator.cpp
    member = tar.getmember("looperset_generators/function12345_generator.cpp")
    f = tar.extractfile(member)
    content = f.read().decode('utf-8')
    print(content)
```
## Dataset Creation

If you use this dataset, please cite the following paper:

```bibtex
@misc{looperset,
      title={LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Compiler Optimization},
      author={Massinissa Merouani and Afif Boudaoud and Riyadh Baghdadi},
      year={2025},
}
```
If you are building upon or comparing against the `LOOPer` cost model, please cite our PACT '25 paper:

```bibtex
@INPROCEEDINGS{looper,
  author={Merouani, Massinissa and Boudaoud, Afif and Aouadj, Iheb Nassim and Tchoulak, Nassim and Bernou, Islem Kara and Benyamina, Hamza and Tayeb, Fatima Benbouzid-Si and Benatchba, Karima and Leather, Hugh and Baghdadi, Riyadh},
  booktitle={2025 34th International Conference on Parallel Architectures and Compilation Techniques (PACT)},
  title={LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers},
  year={2025},
  pages={201-215},
  keywords={Deep learning;Costs;Codes;Program processors;Predictive models;Space exploration;Parallel architectures;Optimization;Pluto;Faces;Compilers;Optimization;Program transformation;Machine learning;Modeling techniques},
  doi={10.1109/PACT65351.2025.00028}}
```
This dataset is licensed under the [Creative Commons Attribution 4.0 International (CC-BY 4.0) License](https://creativecommons.org/licenses/by/4.0/).

## Versioning / Changelog

**v2.0 Schema Update & Compact Splits**
* **Source Code Availability:** Added raw Tiramisu (C++) generator code both to the JSON dataset (`Tiramisu_cpp` field) and as a standalone downloadable archive.
* **Compact Mode:** Introduced "Compact" dataset configurations that strip out large text fields (Source, IR, AST) to optimize for speed when training standard performance models.
* **Schema Update:** Completely restructured the `schedules_list`. Replaced ad-hoc lists with a structured `transformations_list` dictionary format and standardized the schedule string representation.
* **PACT '25 Update:** Updated citation information for the published LOOPer paper.

**v1.0 (Initial Release)**
* Original dataset release containing ~220k programs and 28M schedules.
* Includes `full` and `pact25` split configurations.
data/{looperset_full.jsonl.gz → full/looperset_v2_full.jsonl.gz}
RENAMED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1f5cc4e2739018d12d01382abed7af8e07ea9d84daecd7cc3e94ed839db9151c
+size 6335196993

data/{pact25/looperset_pact25_train.jsonl.gz → full/looperset_v2_full_compact.jsonl.gz}
RENAMED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c881d4fe0f75ea7133e610e7d44e1f6091985e2fa5196712126e926fd306839e
+size 3239084097

data/pact25/looperset_v2_pact_train.jsonl.gz
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36919281f0e3ab633d34a08764d7efc48455f2c2ffd366799b22979d07defedb
+size 2107219002

data/pact25/looperset_v2_pact_train_compact.jsonl.gz
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4a057005df295d55c8b0dfc2dda5511ef268e2f9a48f74bc71f8bfcbcc70359
+size 1077751270

data/pact25/{looperset_pact25_validation.jsonl.gz → looperset_v2_pact_validation.jsonl.gz}
RENAMED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c17d040d7db1200c5e0936f3662a6420acbcc8acfbe42b4d468713cedaff09d8
+size 247065410

data/pact25/looperset_v2_pact_validation_compact.jsonl.gz
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7324884ff762f3a9bc17c7d7bbaae72aca2fc1f7633141b4a2f694cdaf0dc622
+size 126516105

data/source/looperset_v2_generators.tar.gz
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36b71a62b9816a5b823239e9d81b3e4db4a3003cf45539d821e69bd1d46926cc
+size 35512759