---
license: apache-2.0
---

# HLO Feature Dataset for Deep Learning

[Dataset on Hugging Face](https://huggingface.co/datasets/your-username/hlo-feature-dataset)

## Dataset Summary

The **HLO Feature Dataset** is a collection of compiler-level graph features (HLO graphs) extracted from deep learning training workloads. Alongside detailed metadata (model configs, GPU stats), this dataset enables machine learning approaches for:

- ⏱️ **Training Time Prediction**
- 📉 **Resource Consumption Estimation**
- ⚡ **HPC and GPU Scheduling Optimization**
- 🧩 **Graph-based Neural Architecture Analysis**

This dataset is ideal for experimenting with regression models (e.g., XGBoost) and Graph Neural Networks (GNNs) using compiler features.

---

## Supported Tasks

- **⚙️ Runtime & Resource Prediction**: Predict training time (`fit_time`) based on HLO features.
- **📊 ML for Systems Optimization**: Use tabular + graph data for AI workload management.
- **🔗 Graph Representation Learning**: Apply GNNs on HLO graphs (`node_feat`, `edge_index`).

---

## Dataset Structure

Each entry includes:

- **Metadata**: From `dataset-new.csv` (model, optimizer, GPU specs, timing metrics, etc.)
- **HLO Graph Features**: `.npz` files containing:
  - `node_opcode`, `node_feat`, `edge_index`, `node_config_ids`, `node_splits`

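As a self-contained sketch of this layout, the snippet below writes and reads back a tiny synthetic `.npz` file using the documented field names. All shapes and values here are illustrative, not taken from the dataset:

```python
import os
import tempfile

import numpy as np

# Synthetic stand-in for one entry's .npz file; field names match the
# dataset card, but shapes and values are made up for illustration.
path = os.path.join(tempfile.mkdtemp(), "example.npz")
np.savez(
    path,
    node_opcode=np.array([3, 7, 7, 12]),          # one opcode id per node
    node_feat=np.zeros((4, 140)),                 # per-node feature vectors
    edge_index=np.array([[0, 1, 2], [1, 2, 3]]),  # source/destination node pairs
    node_config_ids=np.array([0, 1, 1, 2]),
    node_splits=np.array([[0, 4]]),
)

graph_data = np.load(path)
print(sorted(graph_data.files))
# ['edge_index', 'node_config_ids', 'node_feat', 'node_opcode', 'node_splits']
```
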
---

## Usage Example

This example demonstrates how to load metadata, preprocess features, and train an XGBoost model to predict training time (`fit_time`), as shown in the Colab notebook.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

# Load metadata CSV
df = pd.read_csv('dataset-new.csv')

# Example feature selection (categorical columns are dropped here;
# they would need encoding before use)
X = df[['batch', 'epochs', 'learn_rate', 'gpu_core_count', 'gpu_memory_size']]
y = df['fit_time']

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize and fit the XGBoost regressor
xgb_model = XGBRegressor(n_estimators=100, learning_rate=0.1, max_depth=6, random_state=42)
xgb_model.fit(X_train, y_train)

# Evaluate with RMSE (np.sqrt keeps this compatible with scikit-learn
# versions that no longer accept squared=False)
preds = xgb_model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, preds))
print(f"RMSE: {rmse:.3f}")
```
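The comment in the example notes that categorical metadata needs handling before XGBoost can use it. One common approach is one-hot encoding with `pandas.get_dummies`; the column names below (`model`, `optimizer`) are illustrative stand-ins for categorical fields in `dataset-new.csv`:

```python
import pandas as pd

# Illustrative rows; in practice these columns would come from dataset-new.csv.
df = pd.DataFrame({
    "model": ["resnet50", "vgg16", "resnet50"],
    "optimizer": ["adam", "sgd", "adam"],
    "batch": [32, 64, 32],
})

# One-hot encode the categorical columns; numeric columns pass through unchanged.
encoded = pd.get_dummies(df, columns=["model", "optimizer"])
print(encoded.columns.tolist())
# ['batch', 'model_resnet50', 'model_vgg16', 'optimizer_adam', 'optimizer_sgd']
```

The encoded frame can then be concatenated with the numeric features before the train-test split.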

---

### Loading HLO Graph Features

For graph-based ML tasks, load the `.npz` files:

```python
import numpy as np
import pandas as pd

# Load the graph feature file referenced by the first metadata row
df = pd.read_csv('dataset-new.csv')
npz_file = df.iloc[0]['npz_path']
graph_data = np.load(npz_file)

node_features = graph_data['node_feat']
edges = graph_data['edge_index']

print("Node Feature Shape:", node_features.shape)
print("Edge Index Shape:", edges.shape)
```
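Since `edge_index` pairs source and destination nodes, simple graph statistics fall out directly with NumPy. The sketch below uses a small synthetic graph in place of a loaded file (the arrays stand in for `graph_data['node_feat']` and `graph_data['edge_index']`):

```python
import numpy as np

# Synthetic stand-ins for a loaded graph; shapes are illustrative.
node_feat = np.zeros((5, 16))                # 5 nodes, 16 features each
edge_index = np.array([[0, 0, 1, 2, 3],      # source nodes
                       [1, 2, 3, 4, 4]])     # destination nodes

num_nodes = node_feat.shape[0]

# In-degree: how often each node appears as a destination.
in_degree = np.bincount(edge_index[1], minlength=num_nodes)
print("In-degrees:", in_degree.tolist())
# In-degrees: [0, 1, 1, 1, 2]
```

The same arrays map directly onto graph-learning libraries such as PyTorch Geometric, which expect exactly this node-feature matrix plus edge-index representation.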

---

## Citation
If you use this dataset, please cite:

```bibtex
@misc{hlofeatures2025,
  title={HLO Feature Dataset for AI Resource Estimation},
  author={Your Name},
  year={2025},
  url={https://huggingface.co/datasets/your-username/hlo-feature-dataset}
}
```

---

## License

This dataset is released under the Apache-2.0 license, as declared in the card metadata above.

---

## Contributions

Open to contributions! Feel free to suggest improvements or share your models trained on this dataset.