---
license: mit
tags:
- regression
- text regression
- NAS
- neural architecture search
---
# GraphArch-Regression

A unified regression dataset collated from multiple graph/architecture search sources (FBNet, Hiaml, Inception, NB101, NB201, NDS, OfaMB, OfaPN, OfaRN, SNAS, Twopath) for training and evaluating models that map **ONNX-readable graph strings** to a target metric.

## Schema
- **identifier** *(string)*: Source key for the example, e.g. `FBNet_0`, `SNAS_42`.
- **space** *(string)*: Logical dataset source (`FBNet`, `Hiaml`, `Inception`, `NB101`, `NB201`, `NDS`, `OfaMB`, `OfaPN`, `OfaRN`, `SNAS`, `Twopath`).
- **uid** *(string)*: Original UID, if provided by the source.
- **arch_str** *(string)*: Architecture identity; the first non-empty value among `arch_str`, `hash`, and `uid`.
- **input** *(string)*: ONNX-readable graph string (`onnx_readable`).
- **target_metric** *(string)*: Always `val_accuracy`.
- **val_accuracy** *(number | null)*: Primary regression target (accuracy).
- **flops** *(number | null)*: FLOPs for the architecture (if available).
- **params** *(number | null)*: Parameter count (if available).
- **metadata** *(string)*: Python-dict-like string including **only** keys that start with `zcp_` or `lat_` (e.g., zero-cost proxies and latency measurements). **Not populated for `SNAS`.** These can be used for multi-objective regression.
- **metainformation** *(string)*: Only for `SNAS`; a Python-dict-like string of the selected fields `{arch_str, macro, train_time_sec, steps_ran, precision, batch_size}`.

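For illustration, a row conforming to this schema might look like the following. Every value below is a made-up placeholder, not an actual dataset entry:

```python
# Hypothetical row matching the schema above; all values are
# placeholders for illustration, not real dataset contents.
example_row = {
    "identifier": "FBNet_0",            # source key
    "space": "FBNet",                   # logical dataset source
    "uid": "0",                         # original UID, if provided
    "arch_str": "0",                    # first non-empty of arch_str/hash/uid
    "input": "<ONNX-readable graph string>",
    "target_metric": "val_accuracy",
    "val_accuracy": 74.2,               # primary regression target (may be null)
    "flops": 375.0,                     # may be null
    "params": 4.5,                      # may be null
    "metadata": "{'zcp_synflow': 1.0, 'lat_gpu': 2.0}",  # zcp_/lat_ keys only
}
```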
## Dataset Size
With this dataset, we provide ONNX text for universal-NAS regression training over 611931 architectures:
- Amoeba: 4983
- DARTS: 5000
- DARTS_fix-w-d: 5000
- DARTS_lr-wd: 5000
- ENAS: 4999
- ENAS_fix-w-d: 5000
- FBNet: 5000
- Hiaml: 4629
- Inception: 580
- NASBench101: 423624
- NASBench201: 15625
- NASNet: 4846
- OfaMB: 7491
- OfaPN: 8206
- OfaRN: 10000
- PNAS: 4999
- PNAS_fix-w-d: 4559
- SNAS: 85500
- TwoPath: 6890

> Tip: turn `metadata` or `metainformation` back into a dict with:
> ```python
> from ast import literal_eval
> meta = literal_eval(row["metadata"])
> ```

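Since `metadata` holds only `zcp_` and `lat_` keys, the parsed dict can be split by prefix into proxy features and latency measurements. A minimal sketch (the keys in the example call are made up, not actual dataset values):

```python
from ast import literal_eval

# Split a parsed `metadata` string into zero-cost proxies (zcp_*)
# and latency measurements (lat_*) by key prefix.
def split_metadata(metadata_str):
    meta = literal_eval(metadata_str)
    zcps = {k: v for k, v in meta.items() if k.startswith("zcp_")}
    lats = {k: v for k, v in meta.items() if k.startswith("lat_")}
    return zcps, lats

# Example with made-up keys/values:
zcps, lats = split_metadata("{'zcp_synflow': 1.5, 'lat_gpu_ms': 3.2}")
```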
## How to load with 🤗 Datasets
```python
from datasets import load_dataset

# After you upload this folder to a dataset repo, e.g. your-username/GraphArch-Regression
ds = load_dataset("your-username/GraphArch-Regression")

# Or from a local clone:
# ds = load_dataset("json", data_files="GraphArch-Regression/data.jsonl", split="train")
```

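For a basic regression setup, one would typically keep only rows whose target is non-null. A minimal sketch on plain dicts (the row values are placeholders; with 🤗 Datasets the same filter could be applied via `ds.filter`):

```python
# Build (input, target) training pairs, dropping rows whose
# val_accuracy is missing. Rows here are made-up placeholders.
def make_pairs(rows):
    return [(r["input"], r["val_accuracy"])
            for r in rows
            if r["val_accuracy"] is not None]

rows = [
    {"input": "graph_a", "val_accuracy": 92.1},
    {"input": "graph_b", "val_accuracy": None},  # dropped
]
pairs = make_pairs(rows)
```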
## Credits

This dataset was collated from several graph/NAS sources, along with our own profiling where applicable. Please credit and cite the original datasets accordingly.

Inception, Hiaml, Ofa-MB/PN/RN, Twopath: `Mills, K. G., Han, F. X., Zhang, J., Chudak, F., Mamaghani, A. S., Salameh, M., Lu, W., Jui, S., & Niu, D. (2023). GENNAPE: Towards generalized neural architecture performance estimators. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9190–9199.`

NDS: `Radosavovic, Ilija, et al. "On network design spaces for visual recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.`

NB101: `Ying, Chris, et al. "NAS-Bench-101: Towards reproducible neural architecture search." International Conference on Machine Learning. PMLR, 2019.`

NB201: `Dong, Xuanyi, and Yi Yang. "NAS-Bench-201: Extending the scope of reproducible neural architecture search." arXiv preprint arXiv:2001.00326 (2020).`

FBNet: `Wu, Bichen, et al. "FBNet: Hardware-aware efficient convnet design via differentiable neural architecture search." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.`

Further, the multi-objective latency measurements and zero-cost proxies were sourced from:

```
Krishnakumar, Arjun, et al. "NAS-Bench-Suite-Zero: Accelerating research on zero cost proxies." Advances in Neural Information Processing Systems 35 (2022): 28037-28051.

Akhauri, Yash, and Mohamed S. Abdelfattah. "Encodings for prediction-based neural architecture search." arXiv preprint arXiv:2403.02484 (2024).

Akhauri, Yash, and Mohamed Abdelfattah. "On latency predictors for neural architecture search." Proceedings of Machine Learning and Systems 6 (2024): 512-523.

Lee, Hayeon, et al. "HELP: Hardware-adaptive efficient latency prediction for NAS via meta-learning." arXiv preprint arXiv:2106.08630 (2021).
```

## Citations

If you found this dataset useful for your research, please cite the original sources above as well as:

```bibtex
@article{akhauri2025performance,
  title={Performance Prediction for Large Systems via Text-to-Text Regression},
  author={Akhauri, Yash and Lewandowski, Bryan and Lin, Cheng-Hsi and Reyes, Adrian N and Forbes, Grant C and Wongpanich, Arissa and Yang, Bangding and Abdelfattah, Mohamed S and Perel, Sagi and Song, Xingyou},
  journal={arXiv preprint arXiv:2506.21718},
  year={2025}
}
```

(Original paper coming soon!)