|
|
--- |
|
|
license: mit |
|
|
tags: |
|
|
- regression |
|
|
- text regression |
|
|
- NAS |
|
|
- neural architecture search |
|
|
--- |
|
|
# GraphArch-Regression |
|
|
|
|
|
A unified regression dataset collated from multiple graph/architecture search sources (FBNet, Hiaml, Inception, NB101, NB201, NDS, OfaMB, OfaPN, OfaRN, SNAS, Twopath) for training and evaluating models that map **ONNX-readable graph strings** to a target metric. |
|
|
|
|
|
## Schema |
|
|
- **identifier** *(string)*: Source key for the example, e.g. `FBNet_0`, `SNAS_42`. |
|
|
- **space** *(string)*: Logical dataset source (`FBNet`, `Hiaml`, `Inception`, `NB101`, `NB201`, `NDS`, `OfaMB`, `OfaPN`, `OfaRN`, `SNAS`, `Twopath`). |
|
|
- **uid** *(string)*: Original UID, if provided by the source. |
|
|
- **arch_str** *(string)*: Architecture identity; first non-empty among `arch_str`, `hash`, `uid`. |
|
|
- **input** *(string)*: ONNX-readable graph string (`onnx_readable`). |
|
|
- **target_metric** *(string)*: Always `val_accuracy`. |
|
|
- **val_accuracy** *(number | null)*: Primary regression target (validation accuracy).
|
|
- **flops** *(number | null)*: FLOPs for the architecture (if available). |
|
|
- **params** *(number | null)*: Parameter count (if available). |
|
|
- **metadata** *(string)*: Python-dict-like string including **only** keys that start with `zcp_` or `lat_` (e.g., zero-cost proxies and latency measurements). **Not populated for `SNAS`.** These can be used for multi-objective regression. |
|
|
- **metainformation** *(string)*: Only for `SNAS`; Python-dict-like string of selected fields `{arch_str, macro, train_time_sec, steps_ran, precision, batch_size}`. |
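As a sketch of how the `metadata` field can feed multi-objective regression, the snippet below parses a dict-like string and splits it into zero-cost-proxy and latency groups. The key names (`zcp_synflow`, `zcp_grad_norm`, `lat_gpu_ms`) are illustrative; real rows carry whatever `zcp_`/`lat_` keys the source provides.

```python
from ast import literal_eval

# Hypothetical string in the same shape as the dataset's `metadata` column:
# a Python-dict-like string holding only `zcp_` and `lat_` keys.
raw = "{'zcp_synflow': 41.2, 'zcp_grad_norm': 3.7, 'lat_gpu_ms': 12.5}"

meta = literal_eval(raw)  # safe parse of the dict-like string
zcps = {k: v for k, v in meta.items() if k.startswith("zcp_")}  # zero-cost proxies
lats = {k: v for k, v in meta.items() if k.startswith("lat_")}  # latency measurements

print(sorted(zcps))  # ['zcp_grad_norm', 'zcp_synflow']
print(sorted(lats))  # ['lat_gpu_ms']
```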
|
|
|
|
|
## Dataset Size |
|
|
With this dataset, we provide ONNX text for universal-NAS regression training over 611,931 architectures:
|
|
- Amoeba: 4983 |
|
|
- DARTS: 5000 |
|
|
- DARTS_fix-w-d: 5000 |
|
|
- DARTS_lr-wd: 5000 |
|
|
- ENAS: 4999 |
|
|
- ENAS_fix-w-d: 5000 |
|
|
- FBNet: 5000 |
|
|
- Hiaml: 4629 |
|
|
- Inception: 580 |
|
|
- NASBench101 (NB101): 423624 |
|
|
- NASBench201 (NB201): 15625 |
|
|
- NASNet: 4846 |
|
|
- OfaMB: 7491 |
|
|
- OfaPN: 8206 |
|
|
- OfaRN: 10000 |
|
|
- PNAS: 4999 |
|
|
- PNAS_fix-w-d: 4559 |
|
|
- SNAS: 85500 |
|
|
- TwoPath: 6890 |
|
|
|
|
|
> Tip: turn `metadata` or `metainformation` back into a dict with: |
|
|
> ```python |
|
|
> from ast import literal_eval |
|
|
> meta = literal_eval(row["metadata"]) |
|
|
> ``` |
|
|
|
|
|
## How to load with 🤗 Datasets |
|
|
```python |
|
|
from datasets import load_dataset |
|
|
ds = load_dataset("akhauriyash/GraphArch-Regression") |
|
|
``` |
|
|
|
|
|
## Testing Graph Architecture Regression with a basic Gemma RLM model |
|
|
|
|
|
Use the code below as a reference for evaluating a basic RegressLM model (better models to come!).
|
|
|
|
|
**Note that the best practice is to fine-tune this base model on more NAS ONNX graph data** and then few-shot transfer to the target search space (say, NASNet).
|
|
To fine-tune on 16 examples from, say, ENAS, the best strategy we found was to build a small NAS dataset from e.g. DARTS, NASNet, Amoeba, and ENAS with ~(1024, 1024, 1024, 16) samples respectively, up-sampling (repeating) the 16 ENAS samples 8 times. Randomly shuffle the dataset and fine-tune the RLM with a 1e-4 learning rate (cosine decay) to avoid catastrophic forgetting.
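A minimal sketch of that mixing recipe, assuming you have already filtered per-space example lists; the function name, caps, and repeat factor are illustrative (following the numbers above), not a fixed API:

```python
import random

def build_finetune_mix(space_examples, caps, upsample_space=None, repeat=8, seed=0):
    """Mix per-space example lists: cap each space, repeat the few-shot
    target space, then shuffle so fine-tuning sees an interleaved stream."""
    mixed = []
    for space, examples in space_examples.items():
        take = examples[: caps.get(space, len(examples))]
        if space == upsample_space:
            take = take * repeat  # up-sample (repeat) the few-shot target space
        mixed.extend(take)
    random.Random(seed).shuffle(mixed)
    return mixed

# Toy stand-ins for the real (input, val_accuracy) pairs.
spaces = {s: [f"{s}_{i}" for i in range(2000)] for s in ["DARTS", "NASNet", "Amoeba"]}
spaces["ENAS"] = [f"ENAS_{i}" for i in range(16)]
mix = build_finetune_mix(spaces, caps={"DARTS": 1024, "NASNet": 1024, "Amoeba": 1024},
                         upsample_space="ENAS")
print(len(mix))  # 3 * 1024 + 16 * 8 = 3200
```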
|
|
The code below is illustrative and demonstrates non-trivial NAS performance: the model's training corpus was only 1% NAS data; the rest was code.
|
|
|
|
|
```python
|
|
import torch |
|
|
import numpy as np |
|
|
from datasets import load_dataset |
|
|
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM |
|
|
from scipy.stats import spearmanr |
|
|
from tqdm import tqdm |
|
|
|
|
|
REPO_ID = "akhauriyash/RLM-GemmaS-Code-v0" |
|
|
DATASET = "akhauriyash/GraphArch-Regression" |
|
|
dataset = load_dataset(DATASET, split="train") |
|
|
tok = AutoTokenizer.from_pretrained(REPO_ID, trust_remote_code=True) |
|
|
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") |
|
|
model = AutoModelForSeq2SeqLM.from_pretrained(REPO_ID, trust_remote_code=True).to(device).eval() |
|
|
MAX_ITEMS, BATCH_SIZE, spaces, results = 512, 4, ["NASBench101", "ENAS", "NASNet"], {} |
|
|
# Tokens the decoder emits per regression target (fall back to defaults if the config lacks these fields).
n_out_tokens = getattr(model.config, "num_tokens_per_obj", 8) * getattr(model.config, "max_num_objs", 1)
|
|
|
|
|
for SPACE in spaces:
    inputs, targets = [], []
    for row in tqdm(dataset, desc=f"Processing {SPACE} till {MAX_ITEMS} items"):
        if row.get("space") == SPACE and "input" in row and "val_accuracy" in row:
            try:
                targets.append(float(row["val_accuracy"]))
                inputs.append(f"{SPACE}\n\n{row['input']}")
            except (TypeError, ValueError):
                continue
        if len(inputs) >= MAX_ITEMS:
            break
    preds = []
    for i in tqdm(range(0, len(inputs), BATCH_SIZE)):
        enc = tok(inputs[i:i+BATCH_SIZE], return_tensors="pt", truncation=True, padding=True, max_length=4096).to(device)
        batch_preds = []
        for _ in range(8):  # sample 8 times, then take the per-item median
            out = model.generate(**enc, max_new_tokens=n_out_tokens, min_new_tokens=n_out_tokens, do_sample=True, top_p=0.95, temperature=1.0)
            decoded = [tok.token_ids_to_floats(seq.tolist()) for seq in out]
            decoded = [d[0] if isinstance(d, list) and d else float("nan") for d in decoded]
            batch_preds.append(decoded)
        preds.extend(torch.tensor(batch_preds).median(dim=0).values.tolist())
    spear, _ = spearmanr(np.array(targets), np.array(preds))
    results[SPACE] = spear
    print(f"Spearman ρ for {SPACE}: {spear:.3f}")
|
|
|
|
|
print("Spearman ρ | NASBench101 | ENAS | NASNet") |
|
|
print(f"{REPO_ID} | " + " | ".join(f"{results[s]:.3f}" for s in spaces)) |
|
|
``` |
|
|
|
|
|
|
|
|
We got the following results when testing on a random subset of the GraphArch-Regression dataset. |
|
|
|
|
|
``` |
|
|
Model ID | NASBench101 | ENAS | NASNet |
|
|
akhauriyash/RegressLM-gemma-s-RLM-table3 | 0.384 | 0.211 | 0.209 |
|
|
``` |
|
|
|
|
|
## Credits |
|
|
|
|
|
This dataset was collated from several graph/NAS sources, along with our own profiling where applicable. We generate and export ONNX descriptions for every architecture in the dataset. Please credit and cite the original datasets accordingly.
|
|
|
|
|
Inception, Hiaml, Ofa-MB/PN/RN, Twopath: `Mills, K. G., Han, F. X., Zhang, J., Chudak, F., Mamaghani, A. S., Salameh, M., Lu, W., Jui, S., & Niu, D. (2023). Gennape: Towards generalized neural architecture performance estimators. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9190–9199.`
|
|
|
|
|
NDS: `Radosavovic, Ilija, et al. "On network design spaces for visual recognition." Proceedings of the IEEE/CVF international conference on computer vision. 2019.` |
|
|
|
|
|
NB101: `Ying, Chris, et al. "Nas-bench-101: Towards reproducible neural architecture search." International conference on machine learning. PMLR, 2019.` |
|
|
|
|
|
NB201: `Dong, Xuanyi, and Yi Yang. "Nas-bench-201: Extending the scope of reproducible neural architecture search."` |
|
|
|
|
|
FBNet: `Wu, Bichen, et al. "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.` |
|
|
|
|
|
Further, multi-objective latency and zero-cost proxies were sourced from:
|
|
|
|
|
``` |
|
|
Krishnakumar, Arjun, et al. "Nas-bench-suite-zero: Accelerating research on zero cost proxies." Advances in Neural Information Processing Systems 35 (2022): 28037-28051. |
|
|
|
|
|
Akhauri, Yash, and Mohamed S. Abdelfattah. "Encodings for prediction-based neural architecture search." arXiv preprint arXiv:2403.02484 (2024). |
|
|
|
|
|
Akhauri, Yash, and Mohamed Abdelfattah. "On latency predictors for neural architecture search." Proceedings of Machine Learning and Systems 6 (2024): 512-523. |
|
|
|
|
|
Lee, Hayeon, et al. "Help: Hardware-adaptive efficient latency prediction for nas via meta-learning."
|
|
``` |
|
|
|
|
|
|
|
|
## Citations |
|
|
|
|
|
If you found this dataset useful for your research, please cite the original sources above as well as: |
|
|
|
|
|
``` |
|
|
@article{akhauri2025regressionlanguagemodelscode, |
|
|
title={Regression Language Models for Code}, |
|
|
author={Yash Akhauri and Xingyou Song and Arissa Wongpanich and Bryan Lewandowski and Mohamed S. Abdelfattah}, |
|
|
journal={arXiv preprint arXiv:2509.26476}, |
|
|
year={2025} |
|
|
} |
|
|
|
|
|
@article{akhauri2025performance, |
|
|
title={Performance Prediction for Large Systems via Text-to-Text Regression}, |
|
|
author={Akhauri, Yash and Lewandowski, Bryan and Lin, Cheng-Hsi and Reyes, Adrian N and Forbes, Grant C and Wongpanich, Arissa and Yang, Bangding and Abdelfattah, Mohamed S and Perel, Sagi and Song, Xingyou}, |
|
|
journal={arXiv preprint arXiv:2506.21718}, |
|
|
year={2025} |
|
|
} |
|
|
``` |