Update README.md
README.md
data_files:
- split: train
  path: data/train-*
---

# Dataset Card for IR-OptSet

**IR-OptSet** is a publicly available dataset designed to advance the use of large language models (LLMs) in compiler optimization. It is built on LLVM, one of the most widely adopted modern compilers, and uses its intermediate representation (IR) as the foundation of the dataset. IR-OptSet contains 170,000 IR samples curated from 1,704 GitHub repositories across diverse domains, providing a comprehensive resource for training and evaluating models on IR-level compiler optimizations.

------

### Source

- **Repositories**: 1,704 open-source GitHub repositories
- **Link**: [IR-OptSet on Hugging Face](https://huggingface.co/datasets/YangziResearch/IR-OptSet)

------

## Intended Uses

IR-OptSet is suitable for the following use cases:

1. **IR Understanding**: Train models to extract structural and semantic information from LLVM IR code.
2. **Optimization Behavior Analysis**: Evaluate a model's ability to capture and apply real-world compiler optimizations.
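As a minimal, self-contained illustration of use case 1 (IR understanding), the sketch below extracts one kind of structural fact, the names of defined functions, from a small LLVM IR snippet. The snippet and helper functions here are assumptions made purely for illustration; they are not part of IR-OptSet or its tooling.

```python
import re

# A tiny hand-written LLVM IR sample, standing in for an IR-OptSet record's text.
IR_SAMPLE = """\
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b
  ret i32 %sum
}

define void @noop() {
entry:
  ret void
}
"""

def function_names(ir_text: str) -> list[str]:
    """Return the names of functions defined in the IR (lines starting with `define`)."""
    return re.findall(r"^define\b[^@]*@([\w.]+)\(", ir_text, flags=re.MULTILINE)

def count_defined_functions(ir_text: str) -> int:
    """Count `define`d functions; declarations (`declare`) are deliberately excluded."""
    return len(function_names(ir_text))

print(function_names(IR_SAMPLE))           # ['add', 'noop']
print(count_defined_functions(IR_SAMPLE))  # 2
```

A model trained on IR-OptSet would be expected to recover this kind of structural information (and much richer semantic information) directly from IR text, without a hand-written parser.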

## Dataset Structure

Each record in IR-OptSet includes:

- `original_ir`: Unoptimized LLVM IR (via `clang/clang++ -Xclang -disable-llvm-passes`)
- `preprocessed_ir`: Cleaned version of the original IR
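The relationship between `original_ir` and `preprocessed_ir` can be sketched as a simple cleaning pass over the IR text. The function below is a hypothetical example of such a step (stripping `;` comments and blank lines); the dataset's actual preprocessing pipeline may differ.

```python
import re

def clean_ir(ir_text: str) -> str:
    """Drop `;` comments and blank lines from LLVM IR text.

    Illustrative only: a robust cleaner would also have to avoid stripping
    `;` characters that appear inside string constants.
    """
    lines = []
    for line in ir_text.splitlines():
        # `;` starts a comment in LLVM IR; remove it and any trailing whitespace.
        code = re.sub(r";.*$", "", line).rstrip()
        if code:
            lines.append(code)
    return "\n".join(lines)

raw = """\
; ModuleID = 'example.c'
define i32 @square(i32 %x) {  ; returns x*x
  %r = mul i32 %x, %x
  ret i32 %r
}
"""
print(clean_ir(raw))
```

Running this on the sample removes the `ModuleID` comment line and the trailing comment on the `define` line, leaving only the four lines of code.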

All transformations preserve the semantic and structural fidelity of the IR.

```
@misc{iropti2025,
  title={IROpti: Enhancing LLMs to Understand and Perform IR-level Optimizations in Compilers},
  author={YangZi},
  year={2025},
  url={https://huggingface.co/datasets/YangziResearch/IR-OptSet_Models}
}
```