akhauriyash committed on
Commit d18c593 · verified · 1 Parent(s): 201c073

Update README.md

Files changed (1)
  1. README.md +61 -6
README.md CHANGED
@@ -34,8 +34,8 @@ With this dataset, we provide ONNX text for universal-NAS regression training ov
  - FBNet: 5000
  - Hiaml: 4629
  - Inception: 580
- - NASBench101: 423624
- - NASBench201: 15625
+ - NASBench101 (NB101): 423624
+ - NASBench201 (NB201): 15625
  - NASNet: 4846
  - OfaMB: 7491
  - OfaPN: 8206
@@ -54,12 +54,67 @@ With this dataset, we provide ONNX text for universal-NAS regression training ov
  ## How to load with 🤗 Datasets
  ```python
  from datasets import load_dataset
-
- # After you upload this folder to a dataset repo, e.g. your-username/GraphArch-Regression
- ds = load_dataset("your-username/GraphArch-Regression")
-
- # Or from a local clone:
- # ds = load_dataset("json", data_files="GraphArch-Regression/data.jsonl", split="train")
 
 
+ ds = load_dataset("akhauriyash/GraphArch-Regression")
+ ```
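
Per the evaluation snippet in this README, each row carries a `space` label, an ONNX-text `input`, and a `val_accuracy` target. A minimal sketch of the per-space filtering and prompt formatting, using toy in-memory rows in place of the real dataset (the `collect` helper is illustrative, not part of the library):

```python
# Toy stand-ins for dataset rows; the real rows come from
# load_dataset("akhauriyash/GraphArch-Regression", split="train").
rows = [
    {"space": "ENAS", "input": "graph(...)", "val_accuracy": 0.91},
    {"space": "NASNet", "input": "graph(...)", "val_accuracy": 0.88},
    {"space": "ENAS", "input": "graph(...)", "val_accuracy": "bad"},  # malformed target
]

def collect(rows, space, max_items=512):
    """Gather (prompt, target) pairs for one search space, skipping bad targets."""
    inputs, targets = [], []
    for row in rows:
        if row.get("space") != space:
            continue
        try:
            y = float(row["val_accuracy"])
        except (TypeError, ValueError):
            continue  # skip rows whose target does not parse
        targets.append(y)
        inputs.append(f"{space}\n\n{row['input']}")  # space name prefixed to the ONNX text
        if len(inputs) >= max_items:
            break
    return inputs, targets

inputs, targets = collect(rows, "ENAS")
print(len(inputs), targets)  # the malformed ENAS row is dropped
```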
+
+ ## Testing Graph Architecture Regression with a basic Gemma RLM model
+
+ Use the code below as a reference for evaluating a basic RegressLM model (better, more models to come! :) )
+
+ Note that the best practice is to fine-tune this base model on more NAS ONNX graph data, then few-shot transfer it to the target search space (say, NASNet).
+ If we want to fine-tune on 16 examples from, say, ENAS, the best strategy we found was to construct a small NAS dataset from e.g. DARTS, NASNet, and Amoeba alongside ENAS, take ~(1024, 1024, 1024, 16) samples from each, and up-sample (repeat) the 16 ENAS samples 8 times. Randomly shuffle the mixture and fine-tune the RLM with a 1e-4 learning rate (cosine decay) to avoid catastrophic forgetting.
+ The code below is illustrative, meant only to demonstrate non-trivial NAS performance.
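
The mixing recipe above (1024 samples from each source space, plus the 16 target-space samples repeated 8 times, then shuffled) can be sketched with toy records; the counts follow the README, while the pools and the `build_finetune_mix` helper are purely illustrative:

```python
import random

def build_finetune_mix(source_pools, target_pool, n_source=1024, n_target=16, repeat=8, seed=0):
    """Mix large source-space pools with a few up-sampled target-space examples."""
    rng = random.Random(seed)
    mix = []
    for pool in source_pools:
        mix.extend(pool[:n_source])              # 1024 samples per source space
    mix.extend(target_pool[:n_target] * repeat)  # repeat the 16 target samples 8x
    rng.shuffle(mix)                             # shuffle so spaces interleave
    return mix

# Toy pools standing in for DARTS / NASNet / Amoeba / ENAS examples.
darts  = [("DARTS", i) for i in range(2000)]
nasnet = [("NASNet", i) for i in range(2000)]
amoeba = [("Amoeba", i) for i in range(2000)]
enas   = [("ENAS", i) for i in range(16)]

mix = build_finetune_mix([darts, nasnet, amoeba], enas)
print(len(mix))  # 3 * 1024 + 16 * 8 = 3200
```

Fixing the shuffle seed keeps the mixture reproducible across fine-tuning runs.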
+
+ ```python
+ import torch
+ import numpy as np
+ from datasets import load_dataset
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+ from scipy.stats import spearmanr
+ from tqdm import tqdm
+
+ REPO_ID = "akhauriyash/RLM-GemmaS-Code-v0"
+ DATASET = "akhauriyash/GraphArch-Regression"
+ dataset = load_dataset(DATASET, split="train")
+ tok = AutoTokenizer.from_pretrained(REPO_ID, trust_remote_code=True)
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model = AutoModelForSeq2SeqLM.from_pretrained(REPO_ID, trust_remote_code=True).to(device).eval()
+ MAX_ITEMS, BATCH_SIZE, spaces, results = 512, 4, ["NASBench101", "ENAS", "NASNet"], {}
+ # Tokens the decoder must emit per prediction (fall back to 8 x 1 if the config lacks the fields).
+ n_out_tokens = getattr(model.config, "num_tokens_per_obj", 8) * getattr(model.config, "max_num_objs", 1)
+
+ for SPACE in spaces:
+     inputs, targets = [], []
+     for row in tqdm(dataset, desc=f"Processing {SPACE} till {MAX_ITEMS} items"):
+         if row.get("space") == SPACE and "input" in row and "val_accuracy" in row:
+             try:
+                 targets.append(float(row["val_accuracy"]))
+                 inputs.append(f"{SPACE}\n\n{row['input']}")
+             except (TypeError, ValueError):
+                 continue
+         if len(inputs) >= MAX_ITEMS:
+             break
+     preds = []
+     for i in tqdm(range(0, len(inputs), BATCH_SIZE)):
+         enc = tok(inputs[i:i+BATCH_SIZE], return_tensors="pt", truncation=True, padding=True, max_length=4096).to(device)
+         batch_preds = []
+         for _ in range(8):  # 8 stochastic decodes per batch
+             out = model.generate(**enc, max_new_tokens=n_out_tokens, min_new_tokens=n_out_tokens, do_sample=True, top_p=0.95, temperature=1.0)
+             decoded = [tok.token_ids_to_floats(seq.tolist()) for seq in out]
+             decoded = [d[0] if isinstance(d, list) and d else float("nan") for d in decoded]
+             batch_preds.append(decoded)
+         preds.extend(torch.tensor(batch_preds).median(dim=0).values.tolist())  # element-wise median over the 8 decodes
+     spear, _ = spearmanr(np.array(targets), np.array(preds))
+     results[SPACE] = spear
+     print(f"Spearman ρ for {SPACE}: {spear:.3f}")
+
+ print("Spearman ρ | NASBench101 | ENAS | NASNet")
+ print(f"{REPO_ID} | " + " | ".join(f"{results[s]:.3f}" for s in spaces))
+ ```
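
The loop above draws 8 stochastic decodes per architecture and keeps the element-wise median, which damps outlier decodes. A stdlib-only sketch of that aggregation step (note that `torch.Tensor.median` returns the lower of the two middle values for an even sample count, mirrored here; `columnwise_median` is an illustrative helper):

```python
def columnwise_median(samples):
    """samples: list of prediction lists, one inner list per stochastic decode.
    Returns the per-architecture median, taking the lower middle value
    for even counts (matching torch.Tensor.median)."""
    n_items = len(samples[0])
    medians = []
    for j in range(n_items):
        col = sorted(s[j] for s in samples)
        medians.append(col[(len(col) - 1) // 2])  # lower middle for even n
    return medians

# 4 decodes of 3 architectures; one decode is an outlier.
samples = [
    [0.90, 0.70, 0.50],
    [0.91, 0.71, 0.52],
    [0.10, 0.72, 0.51],  # outlier decode barely moves the median
    [0.92, 0.69, 0.49],
]
print(columnwise_median(samples))  # [0.9, 0.7, 0.5]
```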
111
 
 
 
+
+ We got the following results when testing on a random subset of the GraphArch-Regression dataset:
+
+ ```
+ Model ID | NASBench101 | ENAS | NASNet
+ akhauriyash/RegressLM-gemma-s-RLM-table3 | 0.384 | 0.211 | 0.209
  ```
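
The ranking metric reported above, Spearman ρ, is the Pearson correlation of the ranks; the evaluation snippet uses `scipy.stats.spearmanr`, but a stdlib sketch (assuming no exact ties among accuracies, and with `spearman_rho` as a hypothetical helper) shows what is being computed:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Predictions in the same order as the targets give rho = 1.0; reversed give -1.0.
print(spearman_rho([0.1, 0.2, 0.3], [10, 20, 30]))  # 1.0
print(spearman_rho([0.1, 0.2, 0.3], [30, 20, 10]))  # -1.0
```

Because only ranks matter, a predictor can be useful for architecture search even when its absolute accuracy predictions are miscalibrated.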
  Credits