---
license: apache-2.0
datasets:
- Ashenone3/LM-Searcher-Trajectory-228K
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
tags:
- nas
- optimization
- agent
---
# LM-Searcher: Cross-domain Neural Architecture Search with LLMs via Unified Numerical Encoding

Repo: [https://github.com/Ashone3/LM-Searcher](https://github.com/Ashone3/LM-Searcher)

## Introduction
We introduce LM-Searcher, a task-agnostic neural architecture search (NAS) framework powered by LLMs: candidates from heterogeneous search spaces are mapped into a unified numerical encoding, so a single model can search across domains.
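As an illustration of the encoding (hedged; the exact format is defined in the repository), the toy search space used in the example below has ten decision sites with five options each, and a candidate is one choice index per site:

```python
# Illustrative only: a candidate ("cell") is one choice index per decision site.
search_space = [5] * 10                # 5 options at each of 10 sites -> 5**10 candidates
cell = [3, 0, 4, 1, 2, 2, 0, 4, 1, 3] # one candidate in this encoding
assert all(0 <= c < n for c, n in zip(cell, search_space))
```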

## Usage

### Deployment
We use [vLLM](https://github.com/vllm-project/vllm) to serve LM-Searcher for inference.
After installing vLLM and its dependencies, you can deploy the checkpoint using `vllm_deploy.sh`:
```shell
vllm serve path-to-the-checkpoint --dtype auto --api-key token-abc123 --chat-template template.jinja
```
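The server exposes an OpenAI-compatible API (at `http://localhost:8000/v1` by default). As a quick sanity check you can query it with the `openai` client, reusing the `token-abc123` key from the command above; the model name assumes vLLM's default of registering the checkpoint path:

```python
from openai import OpenAI

# Point the client at the local vLLM server; adjust host/port if you
# launched `vllm serve` with different settings.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

response = client.chat.completions.create(
    model="path-to-the-checkpoint",  # name vLLM registered for the model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```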

### Inference
An example is provided to show how LM-Searcher can be used to search for the optimal solution to a given problem:
```python
import os
import json
import random
import argparse

from utils import generate_random_cell, sample_new_cell

# -----------------------------
# Argument parser configuration
# -----------------------------
parser = argparse.ArgumentParser()
parser.add_argument('--output_dir', type=str, default='history', help="Directory to save search results.")
parser.add_argument('--chat_model', type=str, default='path-to-the-checkpoint', help="LLM model used for sampling new cells.")
parser.add_argument('--trial_num', type=int, default=192, help="Number of search trials to run.")
args = parser.parse_args()
print(args)

# -----------------------------
# Define the search space here
# (Customize according to your task)
# -----------------------------
search_space = [5] * 10  # 10 decision sites with 5 options each -> 5**10 candidates

performance_history = []
trial_dict = {}

# -----------------------------
# Create output directory if it doesn't exist
# -----------------------------
os.makedirs(args.output_dir, exist_ok=True)

start_iter = 0  # raise this to resume an interrupted search
for iteration in range(start_iter, args.trial_num):
    # Cap the number of previous trials shown to the model at 200
    output_num = min(iteration, 200)

    # The first five trials are random, to seed the search history
    if iteration <= 4:
        cell = generate_random_cell(search_space, trial_dict)
    # Later trials are proposed by the LLM, conditioned on past trials
    else:
        cell = sample_new_cell(trial_dict, output_num, args.chat_model)
    
    # -----------------------------
    # Reward function: replace this random placeholder with your task's
    # evaluation metric.
    # -----------------------------
    val_acc = random.uniform(0, 100)
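    # In a real search this would decode `cell` into an architecture and
    # evaluate it before recording the score, e.g. (hypothetical helpers,
    # not part of this repo):
    #   model = build_model(cell)
    #   val_acc = train_and_eval(model)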

    # Record results for the current trial
    trial_dict[f"Trial{iteration+1}"] = {}
    trial_dict[f"Trial{iteration+1}"]["cell"] = cell
    trial_dict[f"Trial{iteration+1}"]["prediction"] = val_acc

    # Save all historical results to file
    with open(os.path.join(args.output_dir, 'historical_results.json'), 'w') as f:
        json.dump(trial_dict, f)
```
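`generate_random_cell` and `sample_new_cell` are provided by `utils.py` in the repository and are not reproduced here. Roughly, the former draws an unseen index vector from the search space, while the latter formats up to `output_num` past trials into a prompt for the deployed checkpoint and parses the proposed indices from its reply. A minimal sketch of the random sampler, under the index-vector encoding used above (the actual implementation lives in the repo):

```python
import random

def generate_random_cell(search_space, trial_dict):
    """Sample a random, not-yet-tried cell (hypothetical sketch of the repo helper)."""
    tried = {tuple(t["cell"]) for t in trial_dict.values()}
    while True:
        cell = [random.randrange(n) for n in search_space]
        if tuple(cell) not in tried:
            return cell
```

Assuming the example above is saved as `search.py` (name hypothetical), a full run against the deployed checkpoint looks like:

```shell
python search.py --chat_model path-to-the-checkpoint --trial_num 192
```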