SamChYe committed on
Commit
aa677e3
·
verified ·
1 Parent(s): b664b67

Publish EdgeEDA agent

CHANGELOG_FIXES.md ADDED
@@ -0,0 +1,244 @@
+ # Changelog: Immediate and Short-term Fixes
+
+ ## Summary
+
+ This document tracks the immediate and short-term fixes implemented based on the repository analysis.
+
+ ---
+
+ ## Immediate Fixes (Completed)
+
+ ### 1. ✅ Comprehensive Logging Added (`src/edgeeda/cli.py`)
+
+ **Changes:**
+ - Added `_setup_logging()` function to configure logging to both file and console
+ - Log file created at `{out_dir}/tuning.log`
+ - Added detailed logging throughout the tuning loop:
+   - Experiment start/configuration
+   - Each action proposal (variant, fidelity, knobs)
+   - Make command execution results
+   - Metadata extraction attempts
+   - Reward computation results
+   - Summary statistics at completion
+
+ **Benefits:**
+ - Full visibility into tuning process
+ - Easy debugging of failures
+ - Historical record of experiments
+
+ ### 2. ✅ SurrogateUCBAgent Knob Storage Fixed (`src/edgeeda/agents/surrogate_ucb.py`)
+
+ **Changes:**
+ - Initialize `_variant_knobs` dictionary in `__init__()` instead of lazy initialization
+ - Removed `hasattr()` checks - always use `self._variant_knobs`
+ - Ensures knob values are always available for promotion logic
+
+ **Benefits:**
+ - Prevents `AttributeError` when promoting variants
+ - More reliable multi-fidelity optimization
+ - Cleaner code without hasattr checks
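The pattern behind this fix is eager initialization of the per-variant knob store. A minimal sketch (a simplified stand-in, not the repo's actual class body):

```python
from typing import Any, Dict

class SurrogateUCBAgent:
    """Illustrative fragment: eager init of the per-variant knob store."""

    def __init__(self) -> None:
        # Initialized up front, so promotion logic can always read it
        # without hasattr() guards.
        self._variant_knobs: Dict[str, Dict[str, Any]] = {}

    def observe(self, variant: str, knobs: Dict[str, Any]) -> None:
        self._variant_knobs[variant] = dict(knobs)

    def knobs_for_promotion(self, variant: str) -> Dict[str, Any]:
        # Safe even for unseen variants: returns {} instead of raising
        # AttributeError on a missing attribute.
        return self._variant_knobs.get(variant, {})
```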
+
+ ### 3. ✅ Configuration Validation Added (`src/edgeeda/config.py`)
+
+ **Changes:**
+ - Added `_validate_config()` function with comprehensive checks:
+   - Budget validation (total_actions > 0, max_expensive >= 0, max_expensive <= total_actions)
+   - Fidelities validation (non-empty)
+   - Knobs validation (non-empty, min < max, valid types)
+   - Reward weights validation (non-empty)
+   - Reward candidates validation (at least one list non-empty)
+
+ **Benefits:**
+ - Catches configuration errors early
+ - Clear error messages for invalid configs
+ - Prevents runtime failures from bad configs
+
+ ### 4. ✅ Improved Error Messages (`src/edgeeda/orfs/runner.py`)
+
+ **Changes:**
+ - Added `RunResult.is_success()` method
+ - Added `RunResult.error_summary()` method that:
+   - Extracts error lines from stderr
+   - Falls back to last few lines if no error keywords found
+   - Provides concise error information
+
+ **Benefits:**
+ - Better error visibility
+ - Easier debugging of failed make commands
+ - Structured error information
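A minimal version of this keyword-then-tail summarization (the keyword list and the real `RunResult` fields are assumptions):

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    returncode: int
    stderr: str = ""

    def is_success(self) -> bool:
        return self.returncode == 0

    def error_summary(self, max_lines: int = 5) -> str:
        lines = self.stderr.splitlines()
        keywords = ("error", "fatal", "failed")
        hits = [ln for ln in lines if any(k in ln.lower() for k in keywords)]
        # Fall back to the tail of stderr when no keyword lines are found.
        return "\n".join((hits or lines[-max_lines:])[:max_lines])
```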
+
+ ### 5. ✅ Robust Metadata Extraction (`src/edgeeda/orfs/metrics.py`)
+
+ **Changes:**
+ - Added logging throughout metadata search process
+ - Improved `find_best_metadata_json()` with:
+   - Multiple pattern matching (exact matches first, then patterns)
+   - Better error handling for missing directories
+   - Debug logging for search process
+ - Enhanced `load_json()` with:
+   - Specific exception handling
+   - Error logging for different failure modes
+
+ **Benefits:**
+ - More reliable metadata discovery:
+   - Tries exact matches: `metadata.json`, `metrics.json`
+   - Then pattern matches: `*metadata*.json`, `*metrics*.json`
+   - Falls back to any JSON file
+ - Better debugging when metadata is missing
+ - Clear error messages for JSON parsing failures
+
+ ### 6. ✅ Retry Logic for Transient Failures (`src/edgeeda/orfs/runner.py`)
+
+ **Changes:**
+ - Added `max_retries` parameter to `run_make()` method
+ - Implements exponential backoff (2^attempt seconds)
+ - Handles:
+   - Subprocess failures (retries on non-zero return codes)
+   - Timeout exceptions
+   - General exceptions during execution
+
+ **Benefits:**
+ - Handles transient network/filesystem issues
+ - Reduces false failures from temporary problems
+ - Configurable retry behavior
+
+ ---
+
+ ## Short-term Fixes (Completed)
+
+ ### 7. ✅ Unit Tests for Agents (`tests/test_agents.py`)
+
+ **New Test File:**
+ - `test_random_search_proposes()` - Validates random search action proposals
+ - `test_random_search_observe()` - Tests observe method
+ - `test_successive_halving_initialization()` - Tests SH agent setup
+ - `test_successive_halving_propose()` - Tests action proposals
+ - `test_successive_halving_promotion()` - Tests multi-fidelity promotion
+ - `test_surrogate_ucb_initialization()` - Tests SurrogateUCB setup
+ - `test_surrogate_ucb_propose()` - Tests action proposals
+ - `test_surrogate_ucb_observe()` - Tests observation storage
+ - `test_surrogate_ucb_knob_storage()` - Tests knob storage for promotion
+ - `test_surrogate_ucb_surrogate_fitting()` - Tests surrogate model fitting
+ - `test_agent_action_consistency()` - Tests all agents produce valid actions
+
+ **Coverage:**
+ - All three agent types
+ - Key agent behaviors (propose, observe, promotion)
+ - Edge cases and error handling
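The first two listed tests can be illustrated with minimal pytest-style checks against a simplified stand-in agent (not the repo's actual `RandomSearchAgent`; the `propose`/`observe` interface here is assumed):

```python
import random

class RandomSearchAgent:
    """Tiny stand-in mirroring the propose/observe interface under test."""
    def __init__(self, knobs, seed=0):
        self.knobs = knobs                 # {name: (min, max)}
        self.rng = random.Random(seed)
        self.history = []

    def propose(self):
        return {k: self.rng.uniform(lo, hi) for k, (lo, hi) in self.knobs.items()}

    def observe(self, knobs, reward):
        self.history.append((knobs, reward))

def test_random_search_proposes():
    agent = RandomSearchAgent({"PLACE_DENSITY": (0.35, 0.75)})
    action = agent.propose()
    assert 0.35 <= action["PLACE_DENSITY"] <= 0.75

def test_random_search_observe():
    agent = RandomSearchAgent({"PLACE_DENSITY": (0.35, 0.75)})
    agent.observe({"PLACE_DENSITY": 0.5}, reward=-1.2)
    assert len(agent.history) == 1
```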
+
+ ### 8. ✅ Unit Tests for Metrics (`tests/test_metrics.py`)
+
+ **New Test File:**
+ - `test_flatten_metrics_*()` - Tests metric flattening (simple, complex, empty, leaf values)
+ - `test_coerce_float_*()` - Tests float coercion (int, float, string, invalid)
+ - `test_pick_first_*()` - Tests metric key selection (found, not found, case-insensitive, multiple candidates)
+ - `test_load_json_*()` - Tests JSON loading (valid, invalid, missing)
+
+ **Coverage:**
+ - All metrics utility functions
+ - Edge cases and error conditions
+ - Type coercion and matching logic
+
+ ### 9. ✅ Updated Dependencies (`requirements.txt`)
+
+ **Changes:**
+ - Added `pytest>=7.0` for running unit tests
+
+ ---
+
+ ## Testing
+
+ To run the new tests:
+
+ ```bash
+ # Install pytest if not already installed
+ pip install pytest
+
+ # Run all tests
+ pytest tests/ -v
+
+ # Run specific test file
+ pytest tests/test_agents.py -v
+ pytest tests/test_metrics.py -v
+
+ # Run with coverage
+ pytest tests/ --cov=edgeeda --cov-report=html
+ ```
+
+ ---
+
+ ## Usage Examples
+
+ ### Using Logging
+
+ Logs are automatically created when running `edgeeda tune`:
+ ```bash
+ edgeeda tune --config configs/gcd_nangate45.yaml --budget 24
+ # Logs written to: runs/tuning.log
+ ```
+
+ ### Using Retry Logic
+
+ Retry logic is available but defaults to 0 retries. To enable:
+ ```python
+ # In cli.py, modify run_make calls:
+ rr = runner.run_make(
+     target=make_target,
+     design_config=cfg.design.design_config,
+     flow_variant=action.variant,
+     overrides={k: str(v) for k, v in action.knobs.items()},
+     timeout_sec=args.timeout,
+     max_retries=2,  # Add this parameter
+ )
+ ```
+
+ ### Configuration Validation
+
+ Invalid configurations now fail early with clear messages:
+ ```python
+ # This will raise ValueError:
+ cfg = load_config("invalid_config.yaml")
+ # ValueError: total_actions must be > 0, got -5
+ ```
+
+ ---
+
+ ## Files Modified
+
+ 1. `src/edgeeda/cli.py` - Added logging throughout
+ 2. `src/edgeeda/agents/surrogate_ucb.py` - Fixed knob storage
+ 3. `src/edgeeda/config.py` - Added validation
+ 4. `src/edgeeda/orfs/runner.py` - Improved error messages, added retry logic
+ 5. `src/edgeeda/orfs/metrics.py` - Enhanced metadata extraction with logging
+ 6. `requirements.txt` - Added pytest
+ 7. `tests/test_agents.py` - New test file
+ 8. `tests/test_metrics.py` - New test file
+
+ ---
+
+ ## Next Steps
+
+ ### Recommended Follow-ups:
+
+ 1. **Run Tests**: Install pytest and verify all tests pass
+ 2. **Test Logging**: Run a small experiment and verify logs are created
+ 3. **Test Retry Logic**: Manually test retry behavior with transient failures
+ 4. **Validate Config**: Try invalid configs to see validation in action
+
+ ### Future Enhancements:
+
+ - Add integration tests with mock ORFS runner
+ - Add performance benchmarks
+ - Add more visualization options
+ - Implement parallel execution
+ - Add resume from checkpoint functionality
+
+ ---
+
+ ## Notes
+
+ - All changes maintain backward compatibility
+ - No breaking changes to existing APIs
+ - Logging can be disabled by setting log level to WARNING or ERROR
+ - Retry logic defaults to 0 (no retries) to maintain current behavior
+ - Tests require pytest but don't affect runtime dependencies
EXECUTE_IN_DOCKER.md ADDED
@@ -0,0 +1,117 @@
+ # Execute Tests and Plots in Docker
+
+ ## Quick Command
+
+ Since Docker is running, execute this inside your container:
+
+ ```bash
+ cd /workspace/edgeeda-agent
+ ./run_all.sh
+ ```
+
+ Or if you're on the host:
+
+ ```bash
+ docker exec -it <CONTAINER_ID> bash -c "cd /workspace/edgeeda-agent && ./run_all.sh"
+ ```
+
+ ## What It Does
+
+ 1. **Runs all agent tests** - Tests RandomSearch, SuccessiveHalving, and SurrogateUCB agents
+ 2. **Generates comprehensive plots** - Creates visualizations of your experiment data
+
+ ## Expected Output
+
+ ```
+ ==========================================
+ EdgeEDA-Agent: Tests and Plots
+ ==========================================
+
+ Step 1: Running Agent Tests...
+ ----------------------------------------
+ Running agent tests...
+
+ ✅ RandomSearchAgent
+ ✅ SuccessiveHalvingAgent
+ ✅ SurrogateUCBAgent
+ ✅ Metrics Flattening
+ ✅ Metrics Coercion
+
+ ==================================================
+ Results: 5/5 tests passed
+ ✅ All tests passed!
+
+
+ Step 2: Generating Plots...
+ ----------------------------------------
+ Loading data from runs/experiment.sqlite...
+ Found 11 trials
+ Successful: 3
+ With rewards: 0
+ Saved CSV to runs/plots/trials.csv
+ Generating plots...
+ Generating additional analysis plots...
+ ✓ Saved success_rate.png
+ ✓ Saved runtime_distribution.png
+ ✓ Saved return_code_distribution.png
+ ✓ Saved knob_PLACE_DENSITY.png
+ ✓ Saved knob_CORE_UTILIZATION.png
+
+ ✅ All plots generated in runs/plots
+
+ ==========================================
+ Summary
+ ==========================================
+ ✅ Tests: PASSED
+ ✅ Plots: Generated successfully
+
+ Generated files in runs/plots/:
+   - runs/plots/trials.csv (2.5K)
+   - runs/plots/success_rate.png (45K)
+   - runs/plots/runtime_distribution.png (38K)
+   - runs/plots/return_code_distribution.png (42K)
+   - runs/plots/knob_PLACE_DENSITY.png (35K)
+   - runs/plots/knob_CORE_UTILIZATION.png (36K)
+ ==========================================
+ ```
+
+ ## Manual Execution
+
+ If you prefer to run the steps separately:
+
+ ### Tests Only:
+ ```bash
+ python3 run_tests_simple.py
+ ```
+
+ ### Plots Only:
+ ```bash
+ python3 generate_plots.py --db runs/experiment.sqlite --out runs/plots
+ ```
+
+ ## View Results
+
+ ### Copy plots to host:
+ ```bash
+ docker cp <CONTAINER_ID>:/workspace/edgeeda-agent/runs/plots ./runs/
+ ```
+
+ ### View in container:
+ ```bash
+ # List generated files
+ ls -lh runs/plots/
+
+ # View CSV
+ head runs/plots/trials.csv
+ ```
+
+ ## Troubleshooting
+
+ **Issue**: "Permission denied" on run_all.sh
+ **Fix**: `chmod +x run_all.sh`
+
+ **Issue**: "No module named 'pandas'"
+ **Fix**: Make sure you're inside the Docker container, where dependencies are installed
+
+ **Issue**: "Database not found"
+ **Fix**: Check whether experiments have been run: `ls -la runs/experiment.sqlite`
LICENSE ADDED
@@ -0,0 +1,17 @@
+ MIT License
+
+ Copyright (c) 2026 Sam
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
QUICK_START.md ADDED
@@ -0,0 +1,114 @@
+ # Quick Start: Run Tests & Generate Plots
+
+ ## 🚀 One Command to Run Everything
+
+ Since Docker is running, execute this **inside your Docker container**:
+
+ ```bash
+ cd /workspace/edgeeda-agent
+ ./run_all.sh
+ ```
+
+ ## 📋 Step-by-Step (If you prefer)
+
+ ### 1. Enter Docker Container
+
+ If you're on the host machine:
+ ```bash
+ cd /Users/thalia/Desktop/EdgePPAgent/edgeeda-agent
+ docker run --rm -it -v "$(pwd)":/workspace/edgeeda-agent -w /workspace/edgeeda-agent edgeeda-agent bash
+ ```
+
+ Or if the container is already running:
+ ```bash
+ docker exec -it <CONTAINER_ID> bash
+ ```
+
+ ### 2. Run Tests
+ ```bash
+ cd /workspace/edgeeda-agent
+ python3 run_tests_simple.py
+ ```
+
+ ### 3. Generate Plots
+ ```bash
+ python3 generate_plots.py --db runs/experiment.sqlite --out runs/plots
+ ```
+
+ ### 4. View Results
+ ```bash
+ # List generated files
+ ls -lh runs/plots/
+
+ # View test results (already shown in console)
+ # View plots (copy to host if needed)
+ ```
+
+ ## 📊 Expected Output
+
+ ### Tests:
+ ```
+ Running agent tests...
+
+ ✅ RandomSearchAgent
+ ✅ SuccessiveHalvingAgent
+ ✅ SurrogateUCBAgent
+ ✅ Metrics Flattening
+ ✅ Metrics Coercion
+
+ ==================================================
+ Results: 5/5 tests passed
+ ✅ All tests passed!
+ ```
+
+ ### Plots:
+ ```
+ Loading data from runs/experiment.sqlite...
+ Found 11 trials
+ Successful: 3
+ With rewards: 0
+ Saved CSV to runs/plots/trials.csv
+ Generating plots...
+ ✓ Saved success_rate.png
+ ✓ Saved runtime_distribution.png
+ ✓ Saved return_code_distribution.png
+ ✓ Saved knob_PLACE_DENSITY.png
+ ✓ Saved knob_CORE_UTILIZATION.png
+
+ ✅ All plots generated in runs/plots
+ ```
+
+ ## 📁 Generated Files
+
+ After running, you'll have in `runs/plots/`:
+
+ - **trials.csv** - Complete trial data
+ - **success_rate.png** - Success rate over time
+ - **runtime_distribution.png** - Runtime histogram
+ - **return_code_distribution.png** - Error breakdown
+ - **knob_*.png** - Knob value analysis (one per knob)
+ - **learning_curve.png** - Best reward over time (if rewards exist)
+ - **area_vs_wns.png** - Pareto plot (if metrics exist)
+
+ ## 🔧 Troubleshooting
+
+ **"Permission denied" on run_all.sh:**
+ ```bash
+ chmod +x run_all.sh
+ ```
+
+ **"No module named 'pandas'":**
+ - Make sure you're inside the Docker container
+ - Run: `pip3 install -e .`
+
+ **"Database not found":**
+ - Check: `ls -la runs/experiment.sqlite`
+ - If missing, run experiments first: `edgeeda tune --config configs/gcd_nangate45.yaml --budget 5`
+
+ ## 💡 Pro Tip
+
+ Copy plots to the host machine:
+ ```bash
+ # From host (replace CONTAINER_ID)
+ docker cp <CONTAINER_ID>:/workspace/edgeeda-agent/runs/plots ./runs/
+ ```
README_RUN.md ADDED
@@ -0,0 +1,76 @@
+ # 🎯 READY TO RUN: Tests & Plots
+
+ ## ✅ Everything is Ready!
+
+ All scripts are created and ready to execute. Here's what to do:
+
+ ## 🐳 In Docker Container (Recommended)
+
+ ```bash
+ # Navigate to project
+ cd /workspace/edgeeda-agent
+
+ # Run everything at once
+ ./run_all.sh
+ ```
+
+ That's it! The script will:
+ 1. ✅ Run all agent tests
+ 2. ✅ Generate comprehensive plots
+ 3. ✅ Show a summary of the results
+
+ ## 📝 What Gets Generated
+
+ ### Test Results
+ - Console output with ✅/❌ for each test
+ - Summary: "X/Y tests passed"
+
+ ### Plots (in `runs/plots/`)
+ - **trials.csv** - All data in CSV format
+ - **success_rate.png** - How the success rate changes over time
+ - **runtime_distribution.png** - Histogram of trial runtimes
+ - **return_code_distribution.png** - Breakdown of success/failure codes
+ - **knob_*.png** - Analysis of each knob's values across trials
+ - **learning_curve.png** - Best reward progression (if rewards available)
+ - **area_vs_wns.png** - Pareto front visualization (if metrics available)
+
+ ## 🔍 Quick Check
+
+ Before running, verify:
+ ```bash
+ # Check that the database exists
+ ls -la runs/experiment.sqlite
+
+ # Check that the scripts are executable
+ ls -la run_all.sh run_tests_simple.py generate_plots.py
+ ```
+
+ ## 🚨 If Something Goes Wrong
+
+ 1. **"No such file or directory"**
+    - Make sure you're in `/workspace/edgeeda-agent`
+    - Check: `pwd`
+
+ 2. **"Permission denied"**
+    - Fix: `chmod +x run_all.sh`
+
+ 3. **"Module not found"**
+    - You're not in Docker; enter the container first
+    - Or install: `pip3 install -e .`
+
+ 4. **"Database not found"**
+    - You need to run experiments first
+    - Run: `edgeeda tune --config configs/gcd_nangate45.yaml --budget 5`
+
+ ## 📤 Getting Results Out
+
+ ```bash
+ # Copy plots to host (from host machine)
+ docker cp <CONTAINER_ID>:/workspace/edgeeda-agent/runs/plots ./runs/
+
+ # Or use a shared volume if configured
+ ```
+
+ ## 🎉 That's It!
+
+ Just run `./run_all.sh` in Docker and you're done!
configs/gcd_nangate45.yaml ADDED
@@ -0,0 +1,68 @@
+ experiment:
+   name: "gcd_nangate45_edgebudget"
+   seed: 42
+   db_path: "runs/experiment.sqlite"
+   out_dir: "runs"
+   # ORFS flow directory (OpenROAD-flow-scripts/flow). If not set, uses env ORFS_FLOW_DIR.
+   orfs_flow_dir: null
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   # ORFS "DESIGN_CONFIG" path relative to ORFS flow dir:
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   # multi-fidelity stages (cheap -> expensive)
+   fidelities: ["synth", "place", "route"]
+   # make targets mapping (ORFS commonly supports these targets)
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "surrogate_ucb"  # random | successive_halving | surrogate_ucb
+   budget:
+     # number of total actions (each action is one ORFS make run at some fidelity)
+     total_actions: 24
+     # max actions running full expensive fidelity
+     max_expensive: 6
+
+   # knobs to tune (make VAR=value)
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.35
+       max: 0.75
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 80
+     CELL_PAD_IN_SITES_GLOBAL_PLACEMENT:
+       type: int
+       min: 0
+       max: 4
+
+ reward:
+   # scalarization weights (minimize area/power, maximize timing slack -> implement as penalties)
+   weights:
+     wns: 1.0
+     area: 0.25
+     power: 0.10
+   # metric keys to try (repo will try multiple candidates)
+   keys:
+     wns_candidates:
+       - "timing__setup__wns"
+       - "timing__setup__WNS"
+       - "finish__timing__setup__wns"
+     area_candidates:
+       - "design__die__area"
+       - "finish__design__die__area"
+       - "final__design__die__area"
+     power_candidates:
+       - "power__total"
+       - "finish__power__total"
+       - "final__power__total"
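The `weights` and `keys` sections above scalarize PPA metrics into one reward: WNS is rewarded (less negative slack is better) while area and power are penalized, and each metric is resolved by trying the candidate keys in order. A hedged sketch of such a scalarization (the repo's actual formula and helper names may differ):

```python
from typing import Dict, List, Optional

def pick_first(metrics: Dict[str, float], candidates: List[str]) -> Optional[float]:
    """Return the value of the first candidate key present in the flattened metrics."""
    for key in candidates:
        if key in metrics:
            return metrics[key]
    return None

def reward(metrics: Dict[str, float], weights: Dict[str, float]) -> Optional[float]:
    # Higher WNS is better; area and power enter as penalties.
    wns = pick_first(metrics, ["timing__setup__wns", "finish__timing__setup__wns"])
    area = pick_first(metrics, ["design__die__area", "finish__design__die__area"])
    power = pick_first(metrics, ["power__total", "finish__power__total"])
    if None in (wns, area, power):
        return None  # metrics missing: no reward for this trial
    return weights["wns"] * wns - weights["area"] * area - weights["power"] * power
```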
configs/gcd_nangate45_sweep.yaml ADDED
@@ -0,0 +1,58 @@
+ experiment:
+   name: "gcd_nangate45_sweep24_clean"
+   seed: 42
+   db_path: "runs/sweep_clean.sqlite"
+   out_dir: "runs/sweep_clean"
+   orfs_flow_dir: "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow"
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth", "place", "route"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "surrogate_ucb"
+   budget:
+     total_actions: 24
+     max_expensive: 6
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.35
+       max: 0.75
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 80
+     CELL_PAD_IN_SITES_GLOBAL_PLACEMENT:
+       type: int
+       min: 0
+       max: 4
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.25
+     power: 0.10
+   keys:
+     wns_candidates:
+       - "timing__setup__wns"
+       - "timing__setup__WNS"
+       - "finish__timing__setup__wns"
+     area_candidates:
+       - "design__die__area"
+       - "finish__design__die__area"
+       - "final__design__die__area"
+     power_candidates:
+       - "power__total"
+       - "finish__power__total"
+       - "final__power__total"
configs/gcd_nangate45_sweep24_finish.yaml ADDED
@@ -0,0 +1,58 @@
+ experiment:
+   name: "gcd_nangate45_sweep24_finish"
+   seed: 42
+   db_path: "runs/sweep_finish.sqlite"
+   out_dir: "runs/sweep_finish"
+   orfs_flow_dir: "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow"
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth", "place", "route", "finish"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "surrogate_ucb"
+   budget:
+     total_actions: 24
+     max_expensive: 8
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.35
+       max: 0.75
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 80
+     CELL_PAD_IN_SITES_GLOBAL_PLACEMENT:
+       type: int
+       min: 0
+       max: 4
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.25
+     power: 0.10
+   keys:
+     wns_candidates:
+       - "timing__setup__wns"
+       - "timing__setup__WNS"
+       - "finish__timing__setup__wns"
+     area_candidates:
+       - "design__die__area"
+       - "finish__design__die__area"
+       - "final__design__die__area"
+     power_candidates:
+       - "power__total"
+       - "finish__power__total"
+       - "final__power__total"
configs/gcd_nangate45_sweep24_random.yaml ADDED
@@ -0,0 +1,58 @@
+ experiment:
+   name: "gcd_nangate45_sweep24_random"
+   seed: 42
+   db_path: "runs/sweep_random.sqlite"
+   out_dir: "runs/sweep_random"
+   orfs_flow_dir: "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow"
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth", "place", "route"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "random"
+   budget:
+     total_actions: 24
+     max_expensive: 6
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.35
+       max: 0.75
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 80
+     CELL_PAD_IN_SITES_GLOBAL_PLACEMENT:
+       type: int
+       min: 0
+       max: 4
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.25
+     power: 0.10
+   keys:
+     wns_candidates:
+       - "timing__setup__wns"
+       - "timing__setup__WNS"
+       - "finish__timing__setup__wns"
+     area_candidates:
+       - "design__die__area"
+       - "finish__design__die__area"
+       - "final__design__die__area"
+     power_candidates:
+       - "power__total"
+       - "finish__power__total"
+       - "final__power__total"
configs/gcd_nangate45_sweep24_sh.yaml ADDED
@@ -0,0 +1,58 @@
+ experiment:
+   name: "gcd_nangate45_sweep24_sh"
+   seed: 42
+   db_path: "runs/sweep_sh.sqlite"
+   out_dir: "runs/sweep_sh"
+   orfs_flow_dir: "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow"
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth", "place", "route"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "successive_halving"
+   budget:
+     total_actions: 24
+     max_expensive: 6
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.35
+       max: 0.75
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 80
+     CELL_PAD_IN_SITES_GLOBAL_PLACEMENT:
+       type: int
+       min: 0
+       max: 4
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.25
+     power: 0.10
+   keys:
+     wns_candidates:
+       - "timing__setup__wns"
+       - "timing__setup__WNS"
+       - "finish__timing__setup__wns"
+     area_candidates:
+       - "design__die__area"
+       - "finish__design__die__area"
+       - "final__design__die__area"
+     power_candidates:
+       - "power__total"
+       - "finish__power__total"
+       - "final__power__total"
configs/ibex_sky130hd.yaml ADDED
@@ -0,0 +1,45 @@
+ experiment:
+   name: "gcd_nangate45_edgebudget"
+   seed: 7
+   db_path: "runs/experiment.sqlite"
+   out_dir: "runs"
+   orfs_flow_dir: null
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth", "place", "route"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "successive_halving"
+   budget:
+     total_actions: 18
+     max_expensive: 4
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.45
+       max: 0.80
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 75
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.2
+     power: 0.1
+   keys:
+     wns_candidates: ["timing__setup__wns", "finish__timing__setup__wns"]
+     area_candidates: ["design__die__area", "finish__design__die__area"]
+     power_candidates: ["power__total", "finish__power__total"]
configs/quick_gcd.yaml ADDED
@@ -0,0 +1,54 @@
+ experiment:
+   name: "quick_gcd_test"
+   seed: 42
+   db_path: "runs/quick_gcd.sqlite"
+   out_dir: "runs/quick_gcd"
+   orfs_flow_dir: "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow"
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "random"
+   budget:
+     total_actions: 4
+     max_expensive: 1
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.35
+       max: 0.75
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 80
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.25
+     power: 0.10
+   keys:
+     wns_candidates:
+       - "timing__setup__wns"
+       - "timing__setup__WNS"
+       - "finish__timing__setup__wns"
+     area_candidates:
+       - "design__die__area"
+       - "finish__design__die__area"
+       - "final__design__die__area"
+     power_candidates:
+       - "power__total"
+       - "finish__power__total"
+       - "final__power__total"
configs/quick_ibex.yaml ADDED
@@ -0,0 +1,45 @@
+ experiment:
+   name: "quick_ibex_test"
+   seed: 7
+   db_path: "runs/quick_ibex.sqlite"
+   out_dir: "runs/quick_ibex"
+   orfs_flow_dir: "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow"
+
+ design:
+   platform: "nangate45"
+   design: "gcd"
+   design_config: "./designs/nangate45/gcd/config.mk"
+
+ flow:
+   fidelities: ["synth"]
+   targets:
+     synth: "synth"
+     place: "place"
+     route: "route"
+     finish: "finish"
+     metadata: "metadata"
+
+ tuning:
+   agent: "successive_halving"
+   budget:
+     total_actions: 4
+     max_expensive: 1
+   knobs:
+     PLACE_DENSITY:
+       type: float
+       min: 0.45
+       max: 0.80
+     CORE_UTILIZATION:
+       type: int
+       min: 35
+       max: 75
+
+ reward:
+   weights:
+     wns: 1.0
+     area: 0.2
+     power: 0.1
+   keys:
+     wns_candidates: ["timing__setup__wns", "finish__timing__setup__wns"]
+     area_candidates: ["design__die__area", "finish__design__die__area"]
+     power_candidates: ["power__total", "finish__power__total"]
docker/Dockerfile ADDED
@@ -0,0 +1,16 @@
+ # Uses ORFS official image, then installs this package.
+ FROM openroad/orfs:latest
+
+ # Ensure python tooling exists
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     python3-pip python3-venv git \
+     && rm -rf /var/lib/apt/lists/*
+
+ WORKDIR /workspace/edgeeda-agent
+ COPY . /workspace/edgeeda-agent
+
+ RUN pip3 install --upgrade pip \
+     && pip3 install -e .
+
+ # Default: start a shell so you can run edgeeda commands easily.
+ CMD ["/bin/bash"]
docker/run_orfs_edgeeda.sh ADDED
@@ -0,0 +1,8 @@
+ #!/usr/bin/env bash
+ set -euo pipefail
+
+ # Runs the container and mounts this repo so outputs persist.
+ docker run --rm -it \
+   -v "$(pwd)":/workspace/edgeeda-agent \
+   -w /workspace/edgeeda-agent \
+   edgeeda-agent bash
pyproject.toml ADDED
@@ -0,0 +1,26 @@
[build-system]
requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "edgeeda"
version = "0.1.0"
description = "Agentic multi-fidelity PPA tuning for OpenROAD-flow-scripts (ORFS)"
readme = "README.md"
requires-python = ">=3.9"
dependencies = [
    "pyyaml>=6.0",
    "numpy>=1.23",
    "pandas>=2.0",
    "tqdm>=4.66",
    "scikit-learn>=1.3",
]

[project.scripts]
edgeeda = "edgeeda.cli:main"

[tool.setuptools]
package-dir = {"" = "src"}

[tool.setuptools.packages.find]
where = ["src"]
requirements.txt ADDED
@@ -0,0 +1,7 @@
pyyaml>=6.0
numpy>=1.23
pandas>=2.0
tqdm>=4.66
scikit-learn>=1.3
pytest>=7.0
huggingface_hub>=0.20
scripts/generate_pareto_plot.py ADDED
@@ -0,0 +1,136 @@
#!/usr/bin/env python3
from __future__ import annotations

import json
from pathlib import Path

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import pandas as pd
import sqlite3


WNS_KEYS = [
    "timing__setup__wns",
    "timing__setup__WNS",
    "finish__timing__setup__wns",
    "timing__setup__ws",
    "finish__timing__setup__ws",
    "route__timing__setup__ws",
    "cts__timing__setup__ws",
    "detailedplace__timing__setup__ws",
    "floorplan__timing__setup__ws",
    "globalplace__timing__setup__ws",
    "globalroute__timing__setup__ws",
    "placeopt__timing__setup__ws",
]

AREA_KEYS = [
    "finish__design__die__area",
    "globalroute__design__die__area",
    "placeopt__design__die__area",
    "detailedplace__design__die__area",
    "floorplan__design__die__area",
    "design__die__area",
]


def pick_first(metrics: dict, keys: list[str]) -> float | None:
    for k in keys:
        if k in metrics:
            try:
                return float(metrics[k])
            except Exception:
                return None
    lower = {kk.lower(): kk for kk in metrics.keys()}
    for k in keys:
        kk = lower.get(k.lower())
        if kk:
            try:
                return float(metrics[kk])
            except Exception:
                return None
    return None


def load_metrics(mj: str | None) -> dict | None:
    if not mj:
        return None
    try:
        return json.loads(mj)
    except Exception:
        return None


def main() -> None:
    import argparse

    p = argparse.ArgumentParser(description="Generate Pareto scatter (area vs WNS)")
    p.add_argument("--db", default="runs/sweep_finish.sqlite", help="SQLite database path")
    p.add_argument("--out", default="runs/sweep_finish/plots/pareto_area_vs_wns", help="Output path without extension")
    args = p.parse_args()

    con = sqlite3.connect(args.db)
    df = pd.read_sql_query("SELECT * FROM trials", con)
    con.close()

    rows = []
    for _, row in df.iterrows():
        metrics = load_metrics(row.get("metrics_json"))
        if not metrics:
            continue
        wns = pick_first(metrics, WNS_KEYS)
        area = pick_first(metrics, AREA_KEYS)
        if wns is None or area is None:
            continue
        rows.append({"wns": wns, "area": area, "fidelity": row.get("fidelity", "unknown")})

    if not rows:
        print("No points with both WNS and area found.")
        return

    plot_df = pd.DataFrame(rows)
    colors = {
        "synth": "#999999",
        "place": "#4B8BBE",
        "route": "#306998",
        "finish": "#E07B39",
        "unknown": "#666666",
    }
    markers = {
        "synth": "o",
        "place": "s",
        "route": "^",
        "finish": "D",
        "unknown": "o",
    }

    fig, ax = plt.subplots(figsize=(6.4, 4.6))
    for fid, group in plot_df.groupby("fidelity"):
        ax.scatter(
            group["area"],
            group["wns"],
            label=fid,
            color=colors.get(fid, "#666666"),
            marker=markers.get(fid, "o"),
            alpha=0.75,
            edgecolors="none",
        )

    ax.axhline(0.0, color="#333333", linewidth=0.8, linestyle="--")
    ax.set_xlabel("Die area (um^2)")
    ax.set_ylabel("WNS (ns)")
    ax.set_title("Area vs WNS (Pareto scatter)")
    ax.legend(frameon=False, fontsize=8, loc="best")
    fig.tight_layout()

    out_base = Path(args.out)
    out_base.parent.mkdir(parents=True, exist_ok=True)
    fig.savefig(out_base.with_suffix(".png"), dpi=300)
    fig.savefig(out_base.with_suffix(".pdf"))
    plt.close(fig)


if __name__ == "__main__":
    main()
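The script above plots the raw (area, WNS) cloud but does not filter to the actual non-dominated set. If that filtering were wanted, a small standalone helper could compute it; this sketch assumes area is minimized and WNS (slack) is maximized, which matches the plot's axes but is not something the repo states explicitly:

```python
from __future__ import annotations


def pareto_front(points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Keep (area, wns) points not dominated by any other point.

    A point dominates another if it has area <= and wns >= with at least
    one strict inequality (lower area is better, higher WNS is better).
    """
    front = []
    for area, wns in points:
        dominated = any(
            (a2 <= area and w2 >= wns) and (a2 < area or w2 > wns)
            for a2, w2 in points
        )
        if not dominated:
            front.append((area, wns))
    return front
```

For example, `(3.0, -0.2)` is dominated by `(2.0, 0.0)` (smaller area, better slack) and would be dropped from the front.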
scripts/generate_quick_plots.py ADDED
@@ -0,0 +1,76 @@
from edgeeda.viz import export_trials
import pandas as pd, matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt, os, glob, json

out = 'runs/plots_quick'
os.makedirs(out, exist_ok=True)

# Load trials
df = export_trials('runs/experiment.sqlite')
print('rows:', len(df))
print('columns:', list(df.columns))

# Basic runtime histogram
runtimes = pd.to_numeric(df['runtime_sec'], errors='coerce').dropna()
if not runtimes.empty:
    plt.figure(); runtimes.hist(bins=10)
    plt.xlabel('runtime_sec'); plt.tight_layout(); plt.savefig(os.path.join(out, 'runtime_hist.png'), dpi=200); plt.close()
    print('wrote runtime_hist.png')
else:
    print('no runtime data to plot')

# return_code counts
plt.figure(); df['return_code'].value_counts().plot(kind='bar')
plt.xlabel('return_code'); plt.tight_layout(); plt.savefig(os.path.join(out, 'return_code_counts.png'), dpi=200); plt.close()
print('wrote return_code_counts.png')

# metadata availability
has_meta = df['metadata_path'].fillna('').apply(lambda x: bool(str(x).strip()))
plt.figure(); has_meta.value_counts().plot(kind='bar'); plt.xticks([0, 1], ['no metadata', 'has metadata']); plt.tight_layout(); plt.savefig(os.path.join(out, 'metadata_counts.png'), dpi=200); plt.close()
print('wrote metadata_counts.png')

# learning curve from reward, if present
if 'reward' in df.columns:
    r = pd.to_numeric(df['reward'], errors='coerce').dropna()
    if not r.empty:
        df2 = df.copy()
        df2['reward'] = pd.to_numeric(df2['reward'], errors='coerce')
        df2 = df2.dropna(subset=['reward']).sort_values('id')
        best = df2['reward'].cummax()
        plt.figure(); plt.plot(df2['id'].values, best.values)
        plt.xlabel('trial id'); plt.ylabel('best reward so far'); plt.tight_layout(); plt.savefig(os.path.join(out, 'learning_curve.png'), dpi=200); plt.close()
        print('wrote learning_curve.png')
    else:
        print('no rewards to plot')
else:
    print('reward column missing')

# area vs wns if metrics present
areas = []; wnss = []
for _, r in df.iterrows():
    mj = r.get('metrics') or r.get('metrics_json')
    if not mj:
        continue
    if isinstance(mj, str):
        try:
            m = json.loads(mj)
        except Exception:
            continue
    else:
        m = mj
    a = m.get('design__die__area') or m.get('finish__design__die__area')
    w = m.get('timing__setup__wns') or m.get('finish__timing__setup__wns')
    if a is None or w is None:
        continue
    try:
        areas.append(float(a)); wnss.append(float(w))
    except Exception:
        pass
if areas:
    plt.figure(); plt.scatter(areas, wnss); plt.xlabel('die area'); plt.ylabel('WNS'); plt.tight_layout(); plt.savefig(os.path.join(out, 'area_vs_wns.png'), dpi=200); plt.close()
    print('wrote area_vs_wns.png')
else:
    print('no area/wns metrics to plot')

print('files:', glob.glob(out + '/*'))
scripts/publish_hf.py ADDED
@@ -0,0 +1,90 @@
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import os
from pathlib import Path

from huggingface_hub import HfApi, upload_file, upload_folder


def main() -> None:
    p = argparse.ArgumentParser(description="Publish EdgeEDA agent to Hugging Face Hub")
    p.add_argument("--repo-id", required=True, help="Repo id, e.g. org/name or user/name")
    p.add_argument("--repo-type", default="model", choices=["model", "dataset", "space"])
    p.add_argument("--private", action="store_true", help="Create a private repo")
    p.add_argument("--token", default=None, help="HF token (or set HF_TOKEN)")
    p.add_argument("--commit-message", default="Add EdgeEDA agent", help="Commit message")
    args = p.parse_args()

    token = args.token or os.environ.get("HF_TOKEN")
    if not token:
        raise SystemExit("Missing HF token. Provide --token or set HF_TOKEN.")

    api = HfApi(token=token)
    api.create_repo(
        repo_id=args.repo_id,
        repo_type=args.repo_type,
        private=args.private,
        exist_ok=True,
    )

    root = Path(__file__).resolve().parents[1]
    readme_hf = root / "README_HF.md"
    if not readme_hf.exists():
        raise SystemExit(f"Missing {readme_hf}")

    upload_file(
        path_or_fileobj=str(readme_hf),
        path_in_repo="README.md",
        repo_id=args.repo_id,
        repo_type=args.repo_type,
        token=token,
        commit_message=args.commit_message,
    )

    allow_patterns = [
        "src/**",
        "configs/**",
        "scripts/**",
        "docker/**",
        "LICENSE",
        "pyproject.toml",
        "setup.py",
        "requirements.txt",
        "README_RUN.md",
        "QUICK_START.md",
        "EXECUTE_IN_DOCKER.md",
        "CHANGELOG_FIXES.md",
    ]
    ignore_patterns = [
        "OpenROAD-flow-scripts/**",
        "runs/**",
        "IEEE_EdgeEDA_Agent_ISVLSI/**",
        "build/**",
        "**/__pycache__/**",
        "**/*.pyc",
        "**/*.sqlite",
        "**/*.png",
        "**/*.pdf",
        "**/*.log",
        "README.md",
        "README_HF.md",
        "UNKNOWN.egg-info/**",
        "src/edgeeda.egg-info/**",
        "*.egg-info/**",
    ]

    upload_folder(
        folder_path=str(root),
        repo_id=args.repo_id,
        repo_type=args.repo_type,
        token=token,
        commit_message=args.commit_message,
        allow_patterns=allow_patterns,
        ignore_patterns=ignore_patterns,
    )


if __name__ == "__main__":
    main()
scripts/run_experiment.sh ADDED
@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail

CFG="${1:-configs/gcd_nangate45.yaml}"
BUDGET="${2:-24}"

edgeeda tune --config "$CFG" --budget "$BUDGET"
edgeeda analyze --db runs/experiment.sqlite --out runs/plots
scripts/setup_orfs.sh ADDED
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail

# Optional helper if you want a local ORFS clone.
# Usage:
#   bash scripts/setup_orfs.sh /path/to/install
DEST="${1:-$HOME/orfs}"
if [ -d "$DEST/OpenROAD-flow-scripts" ]; then
    echo "[setup] ORFS already exists at $DEST/OpenROAD-flow-scripts"
    exit 0
fi

mkdir -p "$DEST"
cd "$DEST"
git clone https://github.com/The-OpenROAD-Project/OpenROAD-flow-scripts.git
echo "[setup] ORFS cloned. Set:"
echo "  export ORFS_FLOW_DIR=$DEST/OpenROAD-flow-scripts/flow"
scripts/summarize_sh001_results.py ADDED
@@ -0,0 +1,324 @@
#!/usr/bin/env python3
"""Summarize sh001_29b345d42a results and generate paper-ready plots."""
from __future__ import annotations

import csv
import math
import re
from collections import Counter
from pathlib import Path

import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

RESULT_DIR = Path(
    "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow/results/nangate45/gcd/sh001_29b345d42a"
)
REPORT_DIR = Path(
    "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow/reports/nangate45/gcd/sh001_29b345d42a"
)
LOG_DIR = Path(
    "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/OpenROAD-flow-scripts/flow/logs/nangate45/gcd/sh001_29b345d42a"
)
DEF_PATH = RESULT_DIR / "6_final.def"
V_PATH = RESULT_DIR / "6_final.v"
MEM_PATH = RESULT_DIR / "mem.json"
FINISH_RPT_PATH = REPORT_DIR / "6_finish.rpt"
REPORT_LOG_PATH = LOG_DIR / "6_report.log"
OUT_FIG_DIR = Path(
    "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/IEEE_EdgeEDA_Agent_ISVLSI/figures"
)
OUT_CSV = Path(
    "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/runs/sh001_29b345d42a_summary.csv"
)
OUT_TEX = Path(
    "/Users/thalia/Desktop/EdgePPAgent/edgeeda-agent/IEEE_EdgeEDA_Agent_ISVLSI/gcd_sh001_results_table.tex"
)


def parse_def_metrics(def_text: str) -> dict:
    metrics = {}
    units_match = re.search(r"^UNITS\s+DISTANCE\s+MICRONS\s+(\d+)\s*;", def_text, re.M)
    die_match = re.search(r"^DIEAREA\s*\(\s*(\d+)\s+(\d+)\s*\)\s*\(\s*(\d+)\s+(\d+)\s*\)\s*;", def_text, re.M)
    comp_match = re.search(r"^COMPONENTS\s+(\d+)\s*;", def_text, re.M)
    pin_match = re.search(r"^PINS\s+(\d+)\s*;", def_text, re.M)
    net_match = re.search(r"^NETS\s+(\d+)\s*;", def_text, re.M)
    row_count = len(re.findall(r"^ROW\s+", def_text, re.M))

    if units_match:
        metrics["units_per_micron"] = int(units_match.group(1))
    if die_match:
        x0, y0, x1, y1 = map(int, die_match.groups())
        metrics["die_x0"] = x0
        metrics["die_y0"] = y0
        metrics["die_x1"] = x1
        metrics["die_y1"] = y1
    if comp_match:
        metrics["components"] = int(comp_match.group(1))
    if pin_match:
        metrics["pins"] = int(pin_match.group(1))
    if net_match:
        metrics["nets"] = int(net_match.group(1))
    metrics["rows"] = row_count

    if "units_per_micron" in metrics and "die_x1" in metrics:
        units = metrics["units_per_micron"]
        width = (metrics["die_x1"] - metrics["die_x0"]) / units
        height = (metrics["die_y1"] - metrics["die_y0"]) / units
        metrics["die_width_um"] = width
        metrics["die_height_um"] = height
        metrics["die_area_um2"] = width * height

    return metrics


def parse_def_cell_counts(def_text: str) -> Counter:
    comp_match = re.search(r"^COMPONENTS\s+\d+\s*;\n(.*?)\nEND COMPONENTS", def_text, re.S | re.M)
    if not comp_match:
        return Counter()
    section = comp_match.group(1)
    counts = Counter()
    for line in section.splitlines():
        line = line.strip()
        if not line.startswith("-"):
            continue
        parts = line.split()
        if len(parts) >= 3:
            counts[parts[2]] += 1
    return counts


def parse_netlist_cell_counts(v_text: str) -> Counter:
    pattern = re.compile(r"^\s*([A-Za-z_][\w$]*)\s+([A-Za-z_][\w$]*)\s*\(", re.M)
    counts = Counter()
    for cell, _inst in pattern.findall(v_text):
        if cell in {"module", "endmodule", "input", "output", "wire", "reg", "assign", "always"}:
            continue
        counts[cell] += 1
    return counts


def parse_finish_rpt(text: str) -> dict:
    metrics = {}
    if not text:
        return metrics
    tns_match = re.search(r"tns max\s+([+-]?\d+(?:\.\d+)?)", text)
    wns_match = re.search(r"wns max\s+([+-]?\d+(?:\.\d+)?)", text)
    worst_match = re.search(r"worst slack max\s+([+-]?\d+(?:\.\d+)?)", text)
    period_match = re.search(r"period_min\s*=\s*([0-9.]+)\s+fmax\s*=\s*([0-9.]+)", text)

    if tns_match:
        metrics["tns_ns"] = float(tns_match.group(1))
    if wns_match:
        metrics["wns_ns"] = float(wns_match.group(1))
    if worst_match:
        metrics["worst_slack_ns"] = float(worst_match.group(1))
    if period_match:
        metrics["clock_period_ns"] = float(period_match.group(1))
        metrics["clock_fmax_mhz"] = float(period_match.group(2))
    return metrics


def parse_report_log(text: str) -> dict:
    metrics = {}
    if not text:
        return metrics
    design_match = re.search(r"Design area\s+([0-9.]+)\s+um\^2\s+([0-9.]+)% utilization", text)
    power_match = re.search(r"Total power\s*:\s*([0-9.eE+-]+)\s*W", text)
    ir_avg_match = re.search(r"Average IR drop\s*:\s*([0-9.eE+-]+)\s*V", text)
    ir_worst_match = re.search(r"Worstcase IR drop:\s*([0-9.eE+-]+)\s*V", text)
    ir_pct_match = re.search(r"Percentage drop\s*:\s*([0-9.eE+-]+)\s*%", text)
    total_cells_match = re.search(r"^\s*Total\s+(\d+)\s+([0-9.]+)\s*$", text, re.M)

    if design_match:
        metrics["design_area_um2"] = float(design_match.group(1))
        metrics["design_utilization_pct"] = float(design_match.group(2))
    if power_match:
        metrics["total_power_w"] = float(power_match.group(1))
    if ir_avg_match:
        metrics["ir_drop_avg_v"] = float(ir_avg_match.group(1))
    if ir_worst_match:
        metrics["ir_drop_worst_v"] = float(ir_worst_match.group(1))
    if ir_pct_match:
        metrics["ir_drop_pct"] = float(ir_pct_match.group(1))
    if total_cells_match:
        metrics["cell_total_count"] = int(total_cells_match.group(1))
        metrics["cell_total_area_um2"] = float(total_cells_match.group(2))
    return metrics


def classify_cells(counts: Counter) -> dict:
    categories = {
        "filler": 0,
        "tap": 0,
        "sequential": 0,
        "combinational": 0,
        "other": 0,
    }
    for cell, count in counts.items():
        ucell = cell.upper()
        if "FILL" in ucell:
            categories["filler"] += count
        elif "TAP" in ucell:
            categories["tap"] += count
        elif "DFF" in ucell or "LATCH" in ucell:
            categories["sequential"] += count
        elif re.match(r"[A-Z]+\d+_X\d+", ucell) or any(k in ucell for k in ["NAND", "NOR", "AOI", "OAI", "INV", "BUF", "XOR", "XNOR"]):
            categories["combinational"] += count
        else:
            categories["other"] += count
    return categories


def write_summary_csv(metrics: dict, def_counts: Counter, v_counts: Counter, categories: dict) -> None:
    OUT_CSV.parent.mkdir(parents=True, exist_ok=True)
    with OUT_CSV.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["metric", "value"])
        for key in [
            "components",
            "pins",
            "nets",
            "rows",
            "units_per_micron",
            "die_width_um",
            "die_height_um",
            "die_area_um2",
            "tns_ns",
            "wns_ns",
            "worst_slack_ns",
            "clock_period_ns",
            "clock_fmax_mhz",
            "design_area_um2",
            "design_utilization_pct",
            "total_power_w",
            "ir_drop_avg_v",
            "ir_drop_worst_v",
            "ir_drop_pct",
            "cell_total_count",
            "cell_total_area_um2",
        ]:
            if key in metrics:
                writer.writerow([key, metrics[key]])
        writer.writerow(["def_instance_total", sum(def_counts.values())])
        writer.writerow(["netlist_instance_total", sum(v_counts.values())])
        for k, v in categories.items():
            writer.writerow([f"category_{k}", v])


def write_latex_table(metrics: dict, def_counts: Counter, categories: dict) -> None:
    OUT_TEX.parent.mkdir(parents=True, exist_ok=True)
    total = sum(def_counts.values())

    def pct(x):
        return 0.0 if total == 0 else (100.0 * x / total)

    def fmt_num(value, fmt: str) -> str:
        try:
            return format(float(value), fmt)
        except (TypeError, ValueError):
            return "n/a"

    die_width = fmt_num(metrics.get("die_width_um"), ".3f")
    die_height = fmt_num(metrics.get("die_height_um"), ".3f")
    die_area = fmt_num(metrics.get("die_area_um2"), ".2f")
    wns = fmt_num(metrics.get("wns_ns"), ".3f")
    tns = fmt_num(metrics.get("tns_ns"), ".3f")
    worst_slack = fmt_num(metrics.get("worst_slack_ns"), ".3f")
    period = fmt_num(metrics.get("clock_period_ns"), ".3f")
    fmax = fmt_num(metrics.get("clock_fmax_mhz"), ".2f")
    design_area = fmt_num(metrics.get("design_area_um2"), ".2f")
    util = fmt_num(metrics.get("design_utilization_pct"), ".1f")
    power_w = metrics.get("total_power_w")
    power_mw = fmt_num(power_w * 1e3 if power_w is not None else None, ".3f")
    ir_avg = fmt_num(metrics.get("ir_drop_avg_v"), ".4f")
    ir_worst = fmt_num(metrics.get("ir_drop_worst_v"), ".4f")
    ir_pct = fmt_num(metrics.get("ir_drop_pct"), ".2f")

    # Note: raw strings here must contain single backslashes so the emitted
    # .tex file has valid commands like \begin{table}, not \\begin{table}.
    lines = [
        r"\begin{table}[t]",
        r"\caption{Post-route summary for \texttt{nangate45/gcd/sh001\_29b345d42a}.}",
        r"\label{tab:postroute_sh001}",
        r"\centering",
        r"\small",
        r"\begin{tabular}{@{}ll@{}}",
        r"\toprule",
        r"Metric & Value \\",
        r"\midrule",
        f"Components & {metrics.get('components', 'n/a')} \\\\",
        f"Pins / nets & {metrics.get('pins', 'n/a')} / {metrics.get('nets', 'n/a')} \\\\",
        f"Rows & {metrics.get('rows', 'n/a')} \\\\",
        rf"Die size ($\mu m$) & {die_width} $\times$ {die_height} \\",
        rf"Die area ($\mu m^2$) & {die_area} \\",
        f"WNS / TNS / worst (ns) & {wns} / {tns} / {worst_slack} \\\\",
        f"Clock period / fmax & {period} ns / {fmax} MHz \\\\",
        rf"Design area / util & {design_area} $\mu m^2$ / {util}\% \\",
        f"Total power & {power_mw} mW \\\\",
        rf"IR drop avg / worst / \% & {ir_avg} / {ir_worst} / {ir_pct} \\",
        f"Filler / tap cells & {categories['filler']} ({pct(categories['filler']):.1f}\\%) / {categories['tap']} ({pct(categories['tap']):.1f}\\%) \\\\",
        f"Sequential / combinational & {categories['sequential']} ({pct(categories['sequential']):.1f}\\%) / {categories['combinational']} ({pct(categories['combinational']):.1f}\\%) \\\\",
        r"\bottomrule",
        r"\end{tabular}",
        r"\end{table}",
        "",
    ]
    OUT_TEX.write_text("\n".join(lines))


def plot_top_cell_types(def_counts: Counter) -> None:
    OUT_FIG_DIR.mkdir(parents=True, exist_ok=True)
    top = def_counts.most_common(10)
    labels = [k for k, _ in top]
    values = [v for _k, v in top]
    fig, ax = plt.subplots(figsize=(7.2, 4.2))
    bars = ax.bar(labels, values, color="#4B8BBE")
    ax.set_ylabel("Instance count")
    ax.set_title("Top cell types (post-route DEF)")
    ax.tick_params(axis="x", rotation=45, labelsize=8)
    for bar, value in zip(bars, values):
        ax.text(bar.get_x() + bar.get_width() / 2, bar.get_height(), str(value), ha="center", va="bottom", fontsize=7)
    fig.tight_layout()
    for ext in ("png", "pdf"):
        fig.savefig(OUT_FIG_DIR / f"gcd_sh001_celltype_top10.{ext}", dpi=300)
    plt.close(fig)


def plot_category_pie(categories: dict) -> None:
    OUT_FIG_DIR.mkdir(parents=True, exist_ok=True)
    labels = ["Combinational", "Sequential", "Filler", "Tap", "Other"]
    values = [
        categories["combinational"],
        categories["sequential"],
        categories["filler"],
        categories["tap"],
        categories["other"],
    ]
    fig, ax = plt.subplots(figsize=(5.0, 4.2))
    ax.pie(values, labels=labels, autopct=lambda p: f"{p:.1f}%" if p > 0 else "")
    ax.set_title("Cell category mix (post-route DEF)")
    fig.tight_layout()
    for ext in ("png", "pdf"):
        fig.savefig(OUT_FIG_DIR / f"gcd_sh001_celltype_categories.{ext}", dpi=300)
    plt.close(fig)


def main() -> None:
    def_text = DEF_PATH.read_text()
    v_text = V_PATH.read_text()

    metrics = parse_def_metrics(def_text)
    finish_metrics = parse_finish_rpt(FINISH_RPT_PATH.read_text() if FINISH_RPT_PATH.exists() else "")
    report_metrics = parse_report_log(REPORT_LOG_PATH.read_text() if REPORT_LOG_PATH.exists() else "")
    metrics.update(finish_metrics)
    metrics.update(report_metrics)
    def_counts = parse_def_cell_counts(def_text)
    v_counts = parse_netlist_cell_counts(v_text)
    categories = classify_cells(def_counts)

    write_summary_csv(metrics, def_counts, v_counts, categories)
    write_latex_table(metrics, def_counts, categories)
    plot_top_cell_types(def_counts)
    plot_category_pie(categories)


if __name__ == "__main__":
    main()
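The summarizer above is regex-driven over DEF text. A tiny self-contained check of the same UNITS/DIEAREA pattern, run on a synthetic DEF fragment (the fragment is made up for illustration, not taken from the repo's results):

```python
import re


def die_area_um2(def_text: str) -> float:
    # Same idea as parse_def_metrics: UNITS gives DBU per micron,
    # DIEAREA gives the bounding box in DBU; area comes out in um^2.
    units = re.search(r"^UNITS\s+DISTANCE\s+MICRONS\s+(\d+)\s*;", def_text, re.M)
    die = re.search(
        r"^DIEAREA\s*\(\s*(\d+)\s+(\d+)\s*\)\s*\(\s*(\d+)\s+(\d+)\s*\)\s*;",
        def_text,
        re.M,
    )
    if not (units and die):
        return None
    u = int(units.group(1))
    x0, y0, x1, y1 = map(int, die.groups())
    return ((x1 - x0) / u) * ((y1 - y0) / u)
```

With 2000 DBU per micron, a 40000 x 20000 DBU die is 20 um x 10 um, i.e. 200 um².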
setup.py ADDED
@@ -0,0 +1,3 @@
from setuptools import setup

setup()
src/edgeeda/__init__.py ADDED
@@ -0,0 +1,2 @@
__all__ = ["cli", "config", "reward", "store", "viz"]
__version__ = "0.1.0"
src/edgeeda/agents/__init__.py ADDED
@@ -0,0 +1,11 @@
from edgeeda.agents.base import Agent
from edgeeda.agents.random_search import RandomSearchAgent
from edgeeda.agents.successive_halving import SuccessiveHalvingAgent
from edgeeda.agents.surrogate_ucb import SurrogateUCBAgent

__all__ = [
    "Agent",
    "RandomSearchAgent",
    "SuccessiveHalvingAgent",
    "SurrogateUCBAgent",
]
src/edgeeda/agents/base.py ADDED
@@ -0,0 +1,25 @@
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple


@dataclass
class Action:
    variant: str
    fidelity: str
    knobs: Dict[str, Any]


class Agent:
    def propose(self) -> Action:
        raise NotImplementedError

    def observe(
        self,
        action: Action,
        ok: bool,
        reward: Optional[float],
        metrics_flat: Optional[Dict[str, Any]],
    ) -> None:
        raise NotImplementedError
src/edgeeda/agents/random_search.py ADDED
@@ -0,0 +1,37 @@
from __future__ import annotations

import itertools
import random
from typing import Any, Dict, Optional

from edgeeda.agents.base import Action, Agent
from edgeeda.config import Config
from edgeeda.utils import sanitize_variant_prefix, stable_hash


class RandomSearchAgent(Agent):
    def __init__(self, cfg: Config):
        self.cfg = cfg
        self.counter = 0
        self.variant_prefix = sanitize_variant_prefix(cfg.experiment.name)

    def _sample_knobs(self) -> Dict[str, Any]:
        out: Dict[str, Any] = {}
        for name, spec in self.cfg.tuning.knobs.items():
            if spec.type == "int":
                out[name] = random.randint(int(spec.min), int(spec.max))
            else:
                out[name] = float(spec.min) + random.random() * (float(spec.max) - float(spec.min))
                out[name] = round(out[name], 3)
        return out

    def propose(self) -> Action:
        self.counter += 1
        knobs = self._sample_knobs()
        variant = f"{self.variant_prefix}_t{self.counter:05d}_{stable_hash(str(knobs))}"
        fidelity = self.cfg.flow.fidelities[0]  # always start cheap
        return Action(variant=variant, fidelity=fidelity, knobs=knobs)

    def observe(self, action: Action, ok: bool, reward: Optional[float], metrics_flat: Optional[Dict[str, Any]]) -> None:
        # Random agent doesn't adapt.
        return
src/edgeeda/agents/successive_halving.py ADDED
@@ -0,0 +1,102 @@
from __future__ import annotations

import random
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple

from edgeeda.agents.base import Action, Agent
from edgeeda.config import Config
from edgeeda.utils import sanitize_variant_prefix, stable_hash


@dataclass
class Candidate:
    variant: str
    knobs: Dict[str, Any]
    stage_idx: int
    last_reward: Optional[float]


class SuccessiveHalvingAgent(Agent):
    """
    Simple multi-fidelity baseline:
      - sample a pool
      - evaluate at fidelity0
      - keep top fraction
      - promote to next fidelity
    """

    def __init__(self, cfg: Config, pool_size: int = 12, eta: float = 0.5):
        self.cfg = cfg
        self.pool_size = pool_size
        self.eta = eta
        self.stage_names = cfg.flow.fidelities
        self.variant_prefix = sanitize_variant_prefix(cfg.experiment.name)
        self.pool: List[Candidate] = []
        self._init_pool()
        self._queue: List[Action] = []
        self._rebuild_queue()

    def _sample_knobs(self) -> Dict[str, Any]:
        out: Dict[str, Any] = {}
        for name, spec in self.cfg.tuning.knobs.items():
            if spec.type == "int":
                out[name] = random.randint(int(spec.min), int(spec.max))
            else:
                out[name] = float(spec.min) + random.random() * (float(spec.max) - float(spec.min))
                out[name] = round(out[name], 3)
        return out

    def _init_pool(self):
        self.pool = []
        for i in range(self.pool_size):
            knobs = self._sample_knobs()
            variant = f"{self.variant_prefix}_sh{i:03d}_{stable_hash(str(knobs))}"
            self.pool.append(Candidate(variant=variant, knobs=knobs, stage_idx=0, last_reward=None))

    def _rebuild_queue(self):
        self._queue = []
        for c in self.pool:
            self._queue.append(Action(variant=c.variant, fidelity=self.stage_names[c.stage_idx], knobs=c.knobs))

    def propose(self) -> Action:
        if not self._queue:
            # promote
            self._promote()
            self._rebuild_queue()
        return self._queue.pop(0)

    def _promote(self):
        # group by stage idx
        max_stage = max(c.stage_idx for c in self.pool)
        if max_stage >= len(self.stage_names) - 1:
            # already at final stage; resample fresh pool to continue
            self._init_pool()
            return

        # keep top fraction among candidates at current max stage
        current = [c for c in self.pool if c.stage_idx == max_stage]
        # if rewards missing, treat as very bad
        current.sort(key=lambda c: float("-inf") if c.last_reward is None else c.last_reward, reverse=True)
        k = max(1, int(len(current) * self.eta))
        survivors = current[:k]

        # promote survivors to next stage; others replaced with new randoms at stage 0
        promoted = []
        for c in survivors:
            promoted.append(Candidate(c.variant, c.knobs, c.stage_idx + 1, None))

        fresh_needed = self.pool_size - len(promoted)
        fresh = []
        for i in range(fresh_needed):
            knobs = self._sample_knobs()
            variant = f"{self.variant_prefix}_shR{i:03d}_{stable_hash(str(knobs))}"
            fresh.append(Candidate(variant=variant, knobs=knobs, stage_idx=0, last_reward=None))

        self.pool = promoted + fresh

    def observe(self, action: Action, ok: bool, reward: Optional[float], metrics_flat: Optional[Dict[str, Any]]) -> None:
        for c in self.pool:
            if c.variant == action.variant and self.stage_names[c.stage_idx] == action.fidelity:
                c.last_reward = reward if ok else None
                return
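The `_promote` step above keeps the top `eta` fraction of candidates by reward, treating missing rewards as worst. The same selection rule in isolation, as a standalone sketch (indices stand in for candidates):

```python
from __future__ import annotations


def select_survivors(rewards: list, eta: float = 0.5) -> list[int]:
    """Return indices of the top-eta fraction, best reward first.

    None rewards sort as -inf, mirroring the agent's treatment of
    failed or unevaluated candidates as very bad.
    """
    order = sorted(
        range(len(rewards)),
        key=lambda i: float("-inf") if rewards[i] is None else rewards[i],
        reverse=True,
    )
    k = max(1, int(len(rewards) * eta))
    return order[:k]
```

With rewards `[0.2, None, 0.9, -0.3]` and `eta=0.5`, two survivors are kept: index 2 (0.9) then index 0 (0.2); the `max(1, ...)` floor guarantees at least one survivor even for tiny pools.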
src/edgeeda/agents/surrogate_ucb.py ADDED
@@ -0,0 +1,148 @@
+from __future__ import annotations
+
+import random
+from dataclasses import dataclass
+from typing import Any, Dict, List, Optional
+
+import numpy as np
+from sklearn.ensemble import ExtraTreesRegressor
+
+from edgeeda.agents.base import Action, Agent
+from edgeeda.config import Config
+from edgeeda.utils import sanitize_variant_prefix, stable_hash
+
+
+@dataclass
+class Obs:
+    x: np.ndarray
+    y: float
+    fidelity: str
+    variant: str
+
+
+class SurrogateUCBAgent(Agent):
+    """
+    Agentic tuner:
+    - Generates candidates (random)
+    - Fits a lightweight surrogate (ExtraTrees) on observed rewards (for a given fidelity)
+    - Chooses the next action via UCB: mean + kappa * std (std estimated across trees)
+
+    Multi-fidelity policy:
+    - Always start at the cheapest fidelity for new variants
+    - Promote a subset to the next fidelity when the budget allows
+    """
+
+    def __init__(self, cfg: Config, kappa: float = 1.0, init_random: int = 6):
+        self.cfg = cfg
+        self.kappa = kappa
+        self.init_random = init_random
+        self.stage_names = cfg.flow.fidelities
+        self.knob_names = list(cfg.tuning.knobs.keys())
+        self.variant_prefix = sanitize_variant_prefix(cfg.experiment.name)
+
+        self.obs: List[Obs] = []
+        self.variant_stage: Dict[str, int] = {}
+        self._variant_knobs: Dict[str, Dict[str, Any]] = {}  # initialize knob storage eagerly
+        self.counter = 0
+
+    def _encode(self, knobs: Dict[str, Any]) -> np.ndarray:
+        xs = []
+        for name in self.knob_names:
+            spec = self.cfg.tuning.knobs[name]
+            v = float(knobs[name])
+            # normalize to [0, 1]
+            xs.append((v - float(spec.min)) / max(1e-9, (float(spec.max) - float(spec.min))))
+        return np.array(xs, dtype=np.float32)
+
+    def _sample_knobs(self) -> Dict[str, Any]:
+        out: Dict[str, Any] = {}
+        for name, spec in self.cfg.tuning.knobs.items():
+            if spec.type == "int":
+                out[name] = random.randint(int(spec.min), int(spec.max))
+            else:
+                out[name] = float(spec.min) + random.random() * (float(spec.max) - float(spec.min))
+                out[name] = round(out[name], 3)
+        return out
+
+    def _fit_surrogate(self, fidelity: str) -> Optional[ExtraTreesRegressor]:
+        data = [o for o in self.obs if o.fidelity == fidelity]
+        if len(data) < max(5, self.init_random):
+            return None
+        X = np.stack([o.x for o in data], axis=0)
+        y = np.array([o.y for o in data], dtype=np.float32)
+        model = ExtraTreesRegressor(
+            n_estimators=128,
+            random_state=0,
+            min_samples_leaf=2,
+            n_jobs=-1,
+        )
+        model.fit(X, y)
+        return model
+
+    def _predict_ucb(self, model: ExtraTreesRegressor, Xcand: np.ndarray) -> np.ndarray:
+        # estimate mean/std across trees
+        preds = np.stack([t.predict(Xcand) for t in model.estimators_], axis=0)
+        mu = preds.mean(axis=0)
+        sd = preds.std(axis=0)
+        return mu + self.kappa * sd
+
+    def propose(self) -> Action:
+        self.counter += 1
+
+        # With some probability, promote an existing promising variant to the next fidelity
+        promotable = [v for v, st in self.variant_stage.items() if st < len(self.stage_names) - 1]
+        if promotable and random.random() < 0.35:
+            # promote the best observed (by best reward) among promotable variants at their current stage
+            best_v = None
+            best_y = float("-inf")
+            for v in promotable:
+                st = self.variant_stage[v]
+                fid = self.stage_names[st]
+                # best reward observed for this variant at its current fidelity
+                ys = [o.y for o in self.obs if o.fidelity == fid and o.variant == v]
+                if ys:
+                    y = max(ys)
+                    if y > best_y:
+                        best_y = y
+                        best_v = v
+            if best_v is not None:
+                st = self.variant_stage[best_v] + 1
+                self.variant_stage[best_v] = st
+                # Knobs are baked into the variant hash, so recovering them from the
+                # name is impractical; we keep a variant->knobs cache instead and
+                # fall back to a random sample if the cache entry is missing.
+                knobs = self._variant_knobs.get(best_v, self._sample_knobs())
+                return Action(variant=best_v, fidelity=self.stage_names[st], knobs=knobs)
+
+        # Otherwise: propose a new variant at the cheapest fidelity
+        knobs = self._sample_knobs()
+
+        fid0 = self.stage_names[0]
+        model = self._fit_surrogate(fid0)
+
+        if model is not None:
+            # do a small candidate search and pick the best UCB
+            cands = []
+            Xc = []
+            for _ in range(32):
+                kk = self._sample_knobs()
+                cands.append(kk)
+                Xc.append(self._encode(kk))
+            Xc = np.stack(Xc, axis=0)
+            ucb = self._predict_ucb(model, Xc)
+            best_i = int(np.argmax(ucb))
+            knobs = cands[best_i]
+
+        variant = f"{self.variant_prefix}_u{self.counter:05d}_{stable_hash(str(knobs))}"
+        self.variant_stage[variant] = 0
+        self._variant_knobs[variant] = knobs
+        return Action(variant=variant, fidelity=fid0, knobs=knobs)
+
+    def observe(self, action: Action, ok: bool, reward: Optional[float], metrics_flat: Optional[Dict[str, Any]]) -> None:
+        if ok and reward is not None:
+            x = self._encode(action.knobs)
+            self.obs.append(Obs(x=x, y=float(reward), fidelity=action.fidelity, variant=action.variant))
+            # keep the knobs cache up to date
+            self._variant_knobs[action.variant] = action.knobs
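The core of `_predict_ucb` is estimating predictive uncertainty from the spread of per-tree predictions in the ExtraTrees ensemble. A minimal sketch of that computation on synthetic data (the knob space and reward function here are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.random((40, 2))          # 40 observed knob vectors, normalized to [0, 1]
y = X[:, 0] - 0.5 * X[:, 1]      # toy reward surface
model = ExtraTreesRegressor(n_estimators=64, random_state=0).fit(X, y)

Xc = rng.random((8, 2))          # candidate knob vectors to score
# per-tree predictions: shape (n_trees, n_candidates)
preds = np.stack([t.predict(Xc) for t in model.estimators_], axis=0)
mu, sd = preds.mean(axis=0), preds.std(axis=0)
ucb = mu + 1.0 * sd              # kappa = 1.0: exploration bonus from tree disagreement
best_i = int(np.argmax(ucb))
```

Because `sd` is non-negative, UCB never scores a candidate below its mean prediction; candidates where the trees disagree get an exploration bonus.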
src/edgeeda/cli.py ADDED
@@ -0,0 +1,232 @@
+from __future__ import annotations
+
+import argparse
+import logging
+import os
+from typing import Any, Dict, Optional
+
+from tqdm import tqdm
+
+from edgeeda.config import load_config, Config
+from edgeeda.utils import seed_everything, ensure_dir
+from edgeeda.store import TrialStore, TrialRecord
+from edgeeda.orfs.runner import ORFSRunner
+from edgeeda.orfs.metrics import find_best_metadata_json, load_json
+from edgeeda.reward import compute_reward
+from edgeeda.viz import export_trials, make_plots
+
+from edgeeda.agents.random_search import RandomSearchAgent
+from edgeeda.agents.successive_halving import SuccessiveHalvingAgent
+from edgeeda.agents.surrogate_ucb import SurrogateUCBAgent
+
+
+AGENTS = {
+    "random": RandomSearchAgent,
+    "successive_halving": SuccessiveHalvingAgent,
+    "surrogate_ucb": SurrogateUCBAgent,
+}
+
+
+def _select_agent(cfg: Config):
+    name = cfg.tuning.agent
+    if name not in AGENTS:
+        raise ValueError(f"Unknown agent: {name}. Choose from {list(AGENTS.keys())}")
+    return AGENTS[name](cfg)
+
+
+def _setup_logging(cfg: Config) -> None:
+    """Set up logging to both file and console."""
+    log_dir = cfg.experiment.out_dir
+    ensure_dir(log_dir)
+    log_file = os.path.join(log_dir, "tuning.log")
+
+    logging.basicConfig(
+        level=logging.INFO,
+        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+        handlers=[
+            logging.FileHandler(log_file),
+            logging.StreamHandler()
+        ]
+    )
+    logging.info(f"Logging initialized. Log file: {log_file}")
+
+
+def cmd_tune(args: argparse.Namespace) -> None:
+    cfg = load_config(args.config)
+    if args.budget is not None:
+        cfg.tuning.budget.total_actions = int(args.budget)
+
+    seed_everything(cfg.experiment.seed)
+    ensure_dir(cfg.experiment.out_dir)
+    _setup_logging(cfg)
+
+    logging.info(f"Starting tuning experiment: {cfg.experiment.name}")
+    logging.info(f"Agent: {cfg.tuning.agent}, Budget: {cfg.tuning.budget.total_actions} actions")
+    logging.info(f"Platform: {cfg.design.platform}, Design: {cfg.design.design}")
+
+    orfs_flow_dir = cfg.experiment.orfs_flow_dir or os.environ.get("ORFS_FLOW_DIR")
+    if not orfs_flow_dir:
+        raise RuntimeError("ORFS flow dir missing. Set experiment.orfs_flow_dir or export ORFS_FLOW_DIR=/path/to/ORFS/flow")
+
+    logging.info(f"ORFS flow directory: {orfs_flow_dir}")
+
+    runner = ORFSRunner(orfs_flow_dir)
+    store = TrialStore(cfg.experiment.db_path)
+    agent = _select_agent(cfg)
+
+    expensive_set = set(cfg.flow.fidelities[-1:])  # last stage treated as expensive
+    expensive_used = 0
+
+    for i in tqdm(range(cfg.tuning.budget.total_actions), desc="actions"):
+        action = agent.propose()
+        fidelity = action.fidelity
+
+        # enforce max expensive budget
+        if fidelity in expensive_set and expensive_used >= cfg.tuning.budget.max_expensive:
+            # downgrade to the cheapest stage
+            fidelity = cfg.flow.fidelities[0]
+            action = type(action)(variant=action.variant, fidelity=fidelity, knobs=action.knobs)
+
+        make_target = cfg.flow.targets.get(fidelity, fidelity)
+        logging.info(f"Action {i+1}/{cfg.tuning.budget.total_actions}: variant={action.variant}, "
+                     f"fidelity={action.fidelity}, knobs={action.knobs}")
+
+        # run ORFS make
+        logging.debug(f"Running: {make_target} for variant {action.variant}")
+        rr = runner.run_make(
+            target=make_target,
+            design_config=cfg.design.design_config,
+            flow_variant=action.variant,
+            overrides={k: str(v) for k, v in action.knobs.items()},
+            timeout_sec=args.timeout,
+        )
+
+        ok = (rr.return_code == 0)
+        if not ok:
+            logging.warning(f"Trial {i+1} failed: variant={action.variant}, return_code={rr.return_code}")
+            logging.debug(f"Command: {rr.cmd}")
+            if rr.stderr:
+                logging.debug(f"Stderr (last 500 chars): {rr.stderr[-500:]}")
+        else:
+            logging.info(f"Trial {i+1} succeeded: variant={action.variant}, runtime={rr.runtime_sec:.2f}s")
+
+        if fidelity in expensive_set:
+            expensive_used += 1
+
+        # always try to generate the metadata JSON (avoid triggering the full flow when not needed)
+        meta_target = (
+            cfg.flow.targets.get("metadata_generate")
+            or cfg.flow.targets.get("metadata-generate")
+            or cfg.flow.targets.get("metadata", "metadata")
+        )
+        if meta_target == "metadata":
+            meta_target = "metadata-generate"
+        logging.debug(f"Generating metadata for variant {action.variant} using target={meta_target}")
+        meta_result = runner.run_make(
+            target=meta_target,
+            design_config=cfg.design.design_config,
+            flow_variant=action.variant,
+            overrides={},
+            timeout_sec=args.timeout,
+        )
+        if meta_result.return_code != 0:
+            logging.warning(f"Metadata generation failed for variant {action.variant}: return_code={meta_result.return_code}")
+
+        meta_path = find_best_metadata_json(
+            orfs_flow_dir=orfs_flow_dir,
+            platform=cfg.design.platform,
+            design=cfg.design.design,
+            variant=action.variant,
+        )
+
+        reward = None
+        flat = None
+
+        if meta_path:
+            logging.debug(f"Found metadata at: {meta_path}")
+            try:
+                mobj = load_json(meta_path)
+                reward, comps, flat = compute_reward(
+                    metrics_obj=mobj,
+                    wns_candidates=cfg.reward.wns_candidates,
+                    area_candidates=cfg.reward.area_candidates,
+                    power_candidates=cfg.reward.power_candidates,
+                    weights=cfg.reward.weights,
+                )
+                if reward is not None:
+                    logging.info(f"Computed reward for variant {action.variant}: {reward:.4f} "
+                                 f"(WNS={comps.wns}, area={comps.area}, power={comps.power})")
+                else:
+                    logging.warning(f"Reward computation returned None for variant {action.variant}")
+            except Exception as e:
+                logging.error(f"Failed to compute reward for variant {action.variant}: {e}", exc_info=True)
+                ok = False
+        else:
+            logging.warning(f"Metadata not found for variant {action.variant} at "
+                            f"reports/{cfg.design.platform}/{cfg.design.design}/{action.variant}/")
+
+        store.add(
+            TrialRecord(
+                exp_name=cfg.experiment.name,
+                platform=cfg.design.platform,
+                design=cfg.design.design,
+                variant=action.variant,
+                fidelity=action.fidelity,
+                knobs=action.knobs,
+                make_cmd=rr.cmd,
+                return_code=rr.return_code,
+                runtime_sec=rr.runtime_sec,
+                reward=reward,
+                metrics=flat,
+                metadata_path=meta_path,
+            )
+        )
+
+        agent.observe(action, ok=ok, reward=reward, metrics_flat=flat)
+
+    store.close()
+
+    # Export summary
+    logging.info("Exporting trial summary...")
+    df = export_trials(cfg.experiment.db_path)
+    out_csv = os.path.join(cfg.experiment.out_dir, "summary.csv")
+    df.to_csv(out_csv, index=False)
+
+    # Log summary statistics
+    total_trials = len(df)
+    successful = len(df[df['return_code'] == 0])
+    with_rewards = len(df[df['reward'].notna()])
+    logging.info(f"Experiment complete: {total_trials} trials, {successful} successful, {with_rewards} with rewards")
+
+    print(f"[done] wrote {out_csv}")
+
+
+def cmd_analyze(args: argparse.Namespace) -> None:
+    df = export_trials(args.db)
+    ensure_dir(args.out)
+    df.to_csv(os.path.join(args.out, "trials.csv"), index=False)
+    make_plots(df, args.out)
+    print(f"[done] wrote plots to {args.out}")
+
+
+def main() -> None:
+    p = argparse.ArgumentParser(prog="edgeeda")
+    sub = p.add_subparsers(dest="cmd", required=True)
+
+    p_tune = sub.add_parser("tune", help="Run agentic tuning loop on ORFS")
+    p_tune.add_argument("--config", required=True, help="YAML config")
+    p_tune.add_argument("--budget", type=int, default=None, help="Override total_actions")
+    p_tune.add_argument("--timeout", type=int, default=None, help="Timeout per make run (sec)")
+    p_tune.set_defaults(func=cmd_tune)
+
+    p_an = sub.add_parser("analyze", help="Export CSV + plots")
+    p_an.add_argument("--db", required=True, help="SQLite db path")
+    p_an.add_argument("--out", required=True, help="Output directory for plots")
+    p_an.set_defaults(func=cmd_analyze)
+
+    args = p.parse_args()
+    args.func(args)
+
+
+if __name__ == "__main__":
+    main()
src/edgeeda/config.py ADDED
@@ -0,0 +1,161 @@
+from __future__ import annotations
+
+import os
+from dataclasses import dataclass
+from typing import Any, Dict, List, Optional
+
+import yaml
+
+
+@dataclass
+class KnobSpec:
+    type: str  # "int" | "float"
+    min: float
+    max: float
+
+
+@dataclass
+class RewardSpec:
+    weights: Dict[str, float]
+    wns_candidates: List[str]
+    area_candidates: List[str]
+    power_candidates: List[str]
+
+
+@dataclass
+class BudgetSpec:
+    total_actions: int
+    max_expensive: int
+
+
+@dataclass
+class ExperimentSpec:
+    name: str
+    seed: int
+    db_path: str
+    out_dir: str
+    orfs_flow_dir: Optional[str]
+
+
+@dataclass
+class DesignSpec:
+    platform: str
+    design: str
+    design_config: str  # relative to ORFS flow dir
+
+
+@dataclass
+class FlowSpec:
+    fidelities: List[str]
+    targets: Dict[str, str]
+
+
+@dataclass
+class TuningSpec:
+    agent: str
+    budget: BudgetSpec
+    knobs: Dict[str, KnobSpec]
+
+
+@dataclass
+class Config:
+    experiment: ExperimentSpec
+    design: DesignSpec
+    flow: FlowSpec
+    tuning: TuningSpec
+    reward: RewardSpec
+
+
+def load_config(path: str) -> Config:
+    with open(path, "r", encoding="utf-8") as f:
+        d = yaml.safe_load(f)
+
+    exp = d["experiment"]
+    design = d["design"]
+    flow = d["flow"]
+    tuning = d["tuning"]
+    reward = d["reward"]
+
+    knobs: Dict[str, KnobSpec] = {}
+    for k, ks in tuning["knobs"].items():
+        knobs[k] = KnobSpec(
+            type=str(ks["type"]),
+            min=float(ks["min"]),
+            max=float(ks["max"]),
+        )
+
+    cfg = Config(
+        experiment=ExperimentSpec(
+            name=str(exp["name"]),
+            seed=int(exp.get("seed", 0)),
+            db_path=str(exp.get("db_path", "runs/experiment.sqlite")),
+            out_dir=str(exp.get("out_dir", "runs")),
+            orfs_flow_dir=exp.get("orfs_flow_dir", None),
+        ),
+        design=DesignSpec(
+            platform=str(design["platform"]),
+            design=str(design["design"]),
+            design_config=str(design["design_config"]),
+        ),
+        flow=FlowSpec(
+            fidelities=list(flow["fidelities"]),
+            targets=dict(flow["targets"]),
+        ),
+        tuning=TuningSpec(
+            agent=str(tuning["agent"]),
+            budget=BudgetSpec(
+                total_actions=int(tuning["budget"]["total_actions"]),
+                max_expensive=int(tuning["budget"]["max_expensive"]),
+            ),
+            knobs=knobs,
+        ),
+        reward=RewardSpec(
+            weights=dict(reward["weights"]),
+            wns_candidates=list(reward["keys"]["wns_candidates"]),
+            area_candidates=list(reward["keys"]["area_candidates"]),
+            power_candidates=list(reward["keys"]["power_candidates"]),
+        ),
+    )
+
+    # Validate configuration
+    _validate_config(cfg)
+
+    return cfg
+
+
+def _validate_config(cfg: Config) -> None:
+    """Validate configuration values."""
+    # Validate budget
+    if cfg.tuning.budget.total_actions <= 0:
+        raise ValueError(f"total_actions must be > 0, got {cfg.tuning.budget.total_actions}")
+    if cfg.tuning.budget.max_expensive < 0:
+        raise ValueError(f"max_expensive must be >= 0, got {cfg.tuning.budget.max_expensive}")
+    if cfg.tuning.budget.max_expensive > cfg.tuning.budget.total_actions:
+        raise ValueError(
+            f"max_expensive ({cfg.tuning.budget.max_expensive}) cannot exceed "
+            f"total_actions ({cfg.tuning.budget.total_actions})"
+        )
+
+    # Validate fidelities
+    if not cfg.flow.fidelities:
+        raise ValueError("flow.fidelities cannot be empty")
+
+    # Validate knobs
+    if not cfg.tuning.knobs:
+        raise ValueError("tuning.knobs cannot be empty")
+
+    for name, spec in cfg.tuning.knobs.items():
+        if spec.min >= spec.max:
+            raise ValueError(f"Knob {name}: min ({spec.min}) must be < max ({spec.max})")
+        if spec.type not in ("int", "float"):
+            raise ValueError(f"Knob {name}: type must be 'int' or 'float', got '{spec.type}'")
+
+    # Validate reward weights
+    if not cfg.reward.weights:
+        raise ValueError("reward.weights cannot be empty")
+
+    # Validate reward candidates
+    if not cfg.reward.wns_candidates and not cfg.reward.area_candidates and not cfg.reward.power_candidates:
+        raise ValueError("At least one reward candidate list must be non-empty")
+
+    # Note: design_config validation happens later when the ORFS dir is known
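A hypothetical YAML config that `load_config()` above would accept. The knob name `CORE_UTILIZATION`, the target names, and the area/power metric keys are placeholders chosen for illustration, not repo defaults; only the section layout is dictated by the dataclasses.

```python
import yaml

cfg_text = """
experiment: {name: demo, seed: 1, db_path: runs/demo.sqlite, out_dir: runs, orfs_flow_dir: null}
design: {platform: sky130hd, design: gcd, design_config: designs/sky130hd/gcd/config.mk}
flow:
  fidelities: [synth, route]
  targets: {synth: synth, route: route}
tuning:
  agent: random
  budget: {total_actions: 10, max_expensive: 2}
  knobs:
    CORE_UTILIZATION: {type: int, min: 20, max: 60}
reward:
  weights: {wns: 1.0, area: 0.1, power: 0.1}
  keys:
    wns_candidates: [finish__timing__setup__ws]
    area_candidates: [finish__design__area]
    power_candidates: [finish__power__total]
"""
d = yaml.safe_load(cfg_text)
# the five top-level sections load_config() indexes into
print(sorted(d))
```

Note that the candidate lists live under `reward.keys`, matching how `load_config()` reads `reward["keys"][...]`.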
src/edgeeda/orfs/__init__.py ADDED
@@ -0,0 +1 @@
+ __all__ = ["runner", "metrics"]
src/edgeeda/orfs/metrics.py ADDED
@@ -0,0 +1,183 @@
+from __future__ import annotations
+
+import glob
+import gzip
+import json
+import logging
+import os
+from typing import Any, Dict, List, Optional
+
+
+def _list_json_files(dir_path: str):
+    if not os.path.isdir(dir_path):
+        return []
+    out = []
+    for fn in os.listdir(dir_path):
+        if fn.endswith(".json"):
+            out.append(os.path.join(dir_path, fn))
+    return out
+
+
+def _list_json_and_gz_files(dir_path: str) -> List[str]:
+    """Return .json and .json.gz files in directory (non-recursive)."""
+    if not os.path.isdir(dir_path):
+        return []
+    out: List[str] = []
+    for fn in os.listdir(dir_path):
+        if fn.endswith(".json") or fn.endswith(".json.gz"):
+            out.append(os.path.join(dir_path, fn))
+    return out
+
+
+def find_best_metadata_json(
+    orfs_flow_dir: str,
+    platform: str,
+    design: str,
+    variant: str,
+) -> Optional[str]:
+    """
+    ORFS convention:
+        reports/<platform>/<design>/<FLOW_VARIANT>/
+    We search for likely metadata / metrics files and pick the most recently modified.
+
+    Tries multiple patterns in order of preference:
+    1. Exact matches: metadata.json, metrics.json
+    2. Pattern matches: *metadata*.json, *metrics*.json
+    3. Fallback: any .json file
+    """
+    base = os.path.join(orfs_flow_dir, "reports", platform, design, variant)
+
+    if not os.path.exists(base):
+        logging.debug(f"Reports directory does not exist: {base}")
+        return None
+
+    if not os.path.isdir(base):
+        logging.warning(f"Reports path exists but is not a directory: {base}")
+        return None
+
+    # Try multiple patterns, searching recursively so nested report dirs are found
+    patterns = [
+        "**/metadata.json",
+        "**/metrics.json",
+        "**/*metadata*.json",
+        "**/*metrics*.json",
+        "**/*final*.json",
+        "**/*report*.json",
+        "**/*results*.json",
+    ]
+
+    candidates: List[str] = []
+    for pattern in patterns:
+        matches = glob.glob(os.path.join(base, pattern), recursive=True)
+        if matches:
+            candidates.extend(matches)
+            logging.debug(f"Found {len(matches)} files matching pattern '{pattern}' under {base}")
+            break  # Prefer the first matching pattern set
+
+    # Fallback: any .json or .json.gz file in the dir (non-recursive)
+    if not candidates:
+        candidates = _list_json_and_gz_files(base)
+        if candidates:
+            logging.debug(f"Using fallback: found {len(candidates)} JSON(/gz) files in {base}")
+
+    # If still empty, try searching one level up, across sibling variants
+    if not candidates:
+        parent = os.path.dirname(base)
+        siblings = glob.glob(os.path.join(parent, "**/*.json"), recursive=True)
+        if siblings:
+            candidates = siblings
+            logging.debug(f"Fallback: found {len(siblings)} JSON files under {parent}")
+
+    if not candidates:
+        logging.warning(f"No JSON files found in {base} or nearby")
+        return None
+
+    # Sort by modification time (most recent first)
+    candidates.sort(key=lambda p: os.path.getmtime(p), reverse=True)
+    selected = candidates[0]
+    logging.debug(f"Selected metadata file: {selected} (from {len(candidates)} candidates)")
+    return selected
+
+
+def load_json(path: str) -> Dict[str, Any]:
+    """Load a JSON (or gzipped JSON) file with error handling."""
+    try:
+        if path.endswith('.gz'):
+            with gzip.open(path, 'rt', encoding='utf-8') as f:
+                return json.load(f)
+        with open(path, "r", encoding="utf-8") as f:
+            return json.load(f)
+    except json.JSONDecodeError as e:
+        logging.warning(f"Failed to parse JSON from {path}: {e}. Trying lenient read.")
+        # Try a lenient read: read the file and attempt to find a JSON-like substring
+        try:
+            if path.endswith('.gz'):
+                with gzip.open(path, 'rt', encoding='utf-8', errors='ignore') as f:
+                    txt = f.read()
+            else:
+                with open(path, 'r', encoding='utf-8', errors='ignore') as f:
+                    txt = f.read()
+            # attempt to locate the first JSON object within the text
+            start = txt.find('{')
+            end = txt.rfind('}')
+            if start != -1 and end != -1 and end > start:
+                snippet = txt[start:end + 1]
+                return json.loads(snippet)
+        except Exception as e2:
+            logging.error(f"Lenient parse failed for {path}: {e2}")
+        # lenient read failed: re-raise the original decode error
+        raise
+    except FileNotFoundError:
+        logging.error(f"JSON file not found: {path}")
+        raise
+    except Exception as e:
+        logging.error(f"Unexpected error loading JSON from {path}: {e}")
+        raise
+
+
+def flatten_metrics(obj: Any, prefix: str = "") -> Dict[str, Any]:
+    """
+    Flattens nested dicts into key paths joined by '__'.
+    Keeps non-dict leaf values.
+    """
+    out: Dict[str, Any] = {}
+    if isinstance(obj, dict):
+        for k, v in obj.items():
+            kk = f"{prefix}__{k}" if prefix else str(k)
+            out.update(flatten_metrics(v, kk))
+    else:
+        out[prefix] = obj
+    return out
+
+
+def coerce_float(x: Any) -> Optional[float]:
+    if x is None:
+        return None
+    if isinstance(x, (int, float)):
+        return float(x)
+    if isinstance(x, str):
+        try:
+            return float(x)
+        except ValueError:
+            return None
+    return None
+
+
+def pick_first(metrics_flat: Dict[str, Any], keys: list[str]) -> Optional[float]:
+    for k in keys:
+        if k in metrics_flat:
+            v = coerce_float(metrics_flat[k])
+            if v is not None:
+                return v
+    # also try a case-insensitive match
+    lower = {kk.lower(): kk for kk in metrics_flat.keys()}
+    for k in keys:
+        kk = lower.get(k.lower())
+        if kk:
+            v = coerce_float(metrics_flat[kk])
+            if v is not None:
+                return v
+    return None
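The `'__'`-joined key paths produced by `flatten_metrics` are what the reward candidate lists match against. A standalone sketch mirroring that flattening on a made-up metadata fragment:

```python
from typing import Any, Dict

def flatten(obj: Any, prefix: str = "") -> Dict[str, Any]:
    """Collapse nested dicts into '__'-joined key paths, keeping leaf values."""
    out: Dict[str, Any] = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            kk = f"{prefix}__{k}" if prefix else str(k)
            out.update(flatten(v, kk))
    else:
        out[prefix] = obj
    return out

meta = {"finish": {"timing": {"setup": {"ws": -0.12}}}}
print(flatten(meta))  # → {'finish__timing__setup__ws': -0.12}
```

This is why a candidate key like `finish__timing__setup__ws` can pull the worst slack directly out of the flattened dict.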
src/edgeeda/orfs/runner.py ADDED
@@ -0,0 +1,199 @@
1
+ from __future__ import annotations
2
+
3
+ import os
4
+ import shutil
5
+ import shlex
6
+ import subprocess
7
+ import time
8
+ from dataclasses import dataclass
9
+ from typing import Dict, List, Optional, Tuple
10
+
11
+ from edgeeda.utils import ensure_dir
12
+
13
+
14
+ @dataclass
15
+ class RunResult:
16
+ return_code: int
17
+ runtime_sec: float
18
+ cmd: str
19
+ stdout: str
20
+ stderr: str
21
+
22
+ def is_success(self) -> bool:
23
+ """Check if the run was successful."""
24
+ return self.return_code == 0
25
+
26
+ def error_summary(self, max_lines: int = 5) -> str:
27
+ """Extract key error information from stderr."""
28
+ if self.is_success():
29
+ return "Success"
30
+
31
+ lines = self.stderr.split('\n')
32
+ # Find lines with error keywords
33
+ error_lines = [
34
+ l for l in lines
35
+ if any(kw in l.lower() for kw in ['error', 'fatal', 'failed', 'exception'])
36
+ ]
37
+
38
+ if error_lines:
39
+ return '\n'.join(error_lines[-max_lines:])
40
+
41
+ # Fallback: last few lines of stderr
42
+ if lines:
43
+ return '\n'.join(lines[-max_lines:])
44
+
45
+ return f"Command failed with return code {self.return_code}"
46
+
47
+
48
+ class ORFSRunner:
49
+ """
50
+ Minimal ORFS interface:
51
+ - Runs `make <target> DESIGN_CONFIG=... FLOW_VARIANT=... VAR=...`
52
+ - Uses ORFS_FLOW_DIR (OpenROAD-flow-scripts/flow) as working directory.
53
+ """
54
+
55
+ def __init__(self, orfs_flow_dir: str):
56
+ self.flow_dir = os.path.abspath(orfs_flow_dir)
57
+ if not os.path.isdir(self.flow_dir):
58
+ raise FileNotFoundError(f"ORFS flow dir not found: {self.flow_dir}")
59
+ self._openroad_fallback = os.path.abspath(
60
+ os.path.join(self.flow_dir, "..", "tools", "install", "OpenROAD", "bin", "openroad")
61
+ )
62
+ self._opensta_fallback = os.path.abspath(
63
+ os.path.join(self.flow_dir, "..", "tools", "install", "OpenROAD", "bin", "sta")
64
+ )
65
+ self._yosys_fallback = os.path.abspath(
66
+ os.path.join(self.flow_dir, "..", "tools", "install", "yosys", "bin", "yosys")
67
+ )
68
+
69
+ def _build_env(self) -> Dict[str, str]:
70
+ env = os.environ.copy()
71
+ openroad_exe = env.get("OPENROAD_EXE")
72
+ if not openroad_exe or not os.path.isfile(openroad_exe) or not os.access(openroad_exe, os.X_OK):
73
+ if os.path.isfile(self._openroad_fallback) and os.access(self._openroad_fallback, os.X_OK):
74
+ env["OPENROAD_EXE"] = self._openroad_fallback
75
+ else:
76
+ found = shutil.which("openroad")
77
+ if found:
78
+ env["OPENROAD_EXE"] = found
79
+ opensta_exe = env.get("OPENSTA_EXE")
80
+ if not opensta_exe or not os.path.isfile(opensta_exe) or not os.access(opensta_exe, os.X_OK):
81
+ if os.path.isfile(self._opensta_fallback) and os.access(self._opensta_fallback, os.X_OK):
82
+ env["OPENSTA_EXE"] = self._opensta_fallback
83
+ else:
84
+ found = shutil.which("sta")
85
+ if found:
86
+ env["OPENSTA_EXE"] = found
87
+ yosys_exe = env.get("YOSYS_EXE")
88
+ if not yosys_exe or not os.path.isfile(yosys_exe) or not os.access(yosys_exe, os.X_OK):
89
+ if os.path.isfile(self._yosys_fallback) and os.access(self._yosys_fallback, os.X_OK):
90
+ env["YOSYS_EXE"] = self._yosys_fallback
91
+ else:
92
+ found = shutil.which("yosys")
93
+ if found:
94
+ env["YOSYS_EXE"] = found
95
+ return env
96
+
97
+ def run_make(
98
+ self,
99
+ target: str,
100
+ design_config: str,
101
+ flow_variant: str,
102
+ overrides: Dict[str, str],
103
+ timeout_sec: Optional[int] = None,
104
+ extra_make_args: Optional[List[str]] = None,
105
+ max_retries: int = 0,
106
+ ) -> RunResult:
107
+ """
108
+ Run make command with optional retry logic.
109
+
+        Args:
+            target: Make target (e.g., 'synth', 'place', 'route')
+            design_config: Design configuration path
+            flow_variant: Flow variant identifier
+            overrides: Dictionary of make variable overrides
+            timeout_sec: Timeout in seconds
+            extra_make_args: Additional make arguments
+            max_retries: Maximum number of retries for transient failures
+
+        Returns:
+            RunResult with command execution details
+        """
+        extra_make_args = extra_make_args or []
+        # Build make command
+        cmd_list = [
+            "make",
+            target,
+            f"DESIGN_CONFIG={design_config}",
+            f"FLOW_VARIANT={flow_variant}",
+        ]
+        for k, v in overrides.items():
+            cmd_list.append(f"{k}={v}")
+        cmd_list += extra_make_args
+
+        cmd_str = " ".join(shlex.quote(x) for x in cmd_list)
+
+        # Retry loop with exponential backoff for transient failures
+        last_result = None
+        for attempt in range(max_retries + 1):
+            t0 = time.time()
+            try:
+                env = self._build_env()
+                p = subprocess.run(
+                    cmd_list,
+                    cwd=self.flow_dir,
+                    capture_output=True,
+                    text=True,
+                    timeout=timeout_sec,
+                    env=env,
+                )
+                dt = time.time() - t0
+                result = RunResult(
+                    return_code=p.returncode,
+                    runtime_sec=dt,
+                    cmd=cmd_str,
+                    stdout=p.stdout[-20000:],  # keep tail only
+                    stderr=p.stderr[-20000:],
+                )
+
+                # Return on success or once retries are exhausted
+                if result.is_success() or attempt >= max_retries:
+                    return result
+
+                last_result = result
+
+                # Exponential backoff before the next retry
+                wait_time = 2 ** attempt
+                time.sleep(wait_time)
+
+            except subprocess.TimeoutExpired:
+                dt = time.time() - t0
+                result = RunResult(
+                    return_code=124,  # conventional timeout exit code
+                    runtime_sec=dt,
+                    cmd=cmd_str,
+                    stdout="",
+                    stderr=f"Command timed out after {timeout_sec} seconds",
+                )
+                if attempt >= max_retries:
+                    return result
+                last_result = result
+                time.sleep(2 ** attempt)
+            except Exception as e:
+                dt = time.time() - t0
+                result = RunResult(
+                    return_code=1,
+                    runtime_sec=dt,
+                    cmd=cmd_str,
+                    stdout="",
+                    stderr=f"Exception during execution: {str(e)}",
+                )
+                if attempt >= max_retries:
+                    return result
+                last_result = result
+                time.sleep(2 ** attempt)
+
+        return last_result
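The retry loop above sleeps `2 ** attempt` seconds between attempts. As a minimal standalone sketch, the wait schedule it produces can be computed directly (the `backoff_schedule` helper and its `base` parameter are illustrative, not part of the runner):

```python
def backoff_schedule(max_retries: int, base: float = 2.0) -> list:
    """Seconds waited before each retry, mirroring the 2**attempt rule above.

    With max_retries retries there are max_retries waits: after attempts
    0 .. max_retries-1 (the final attempt returns without sleeping).
    """
    return [base ** attempt for attempt in range(max_retries)]

# With max_retries=3 the runner waits 1s, 2s, then 4s between attempts.
schedule = backoff_schedule(3)
```

So the total extra latency of a fully failing run is `sum(backoff_schedule(max_retries))` on top of the per-attempt timeouts.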
src/edgeeda/reward.py ADDED
@@ -0,0 +1,80 @@
+ from __future__ import annotations
+
+ import math
+ from dataclasses import dataclass
+ from typing import Any, Dict, Optional, Tuple
+
+ from edgeeda.orfs.metrics import flatten_metrics, pick_first
+
+
+ @dataclass
+ class RewardComponents:
+     wns: Optional[float]
+     area: Optional[float]
+     power: Optional[float]
+
+
+ def compute_reward(
+     metrics_obj: Dict[str, Any],
+     wns_candidates: list[str],
+     area_candidates: list[str],
+     power_candidates: list[str],
+     weights: Dict[str, float],
+ ) -> Tuple[Optional[float], RewardComponents, Dict[str, Any]]:
+     """
+     Reward convention:
+       - larger WNS is better (less negative / more positive)
+       - smaller area and smaller power are better
+     Scalar reward = w_wns * WNS - w_area * log(area) - w_power * log(power)
+     (the log terms reduce sensitivity to scale differences across designs)
+     """
+     flat = flatten_metrics(metrics_obj)
+
+     wns = pick_first(flat, wns_candidates)
+     if wns is None:
+         fallback_wns = [
+             "timing__setup__ws",
+             "finish__timing__setup__ws",
+             "route__timing__setup__ws",
+             "cts__timing__setup__ws",
+             "detailedplace__timing__setup__ws",
+             "floorplan__timing__setup__ws",
+             "globalplace__timing__setup__ws",
+             "globalroute__timing__setup__ws",
+             "placeopt__timing__setup__ws",
+         ]
+         wns = pick_first(flat, fallback_wns)
+     area = pick_first(flat, area_candidates)
+     if area is None:
+         fallback_area = [
+             "synth__design__instance__area__stdcell",
+             "floorplan__design__instance__area__stdcell",
+             "globalplace__design__instance__area__stdcell",
+             "detailedplace__design__instance__area__stdcell",
+             "cts__design__instance__area__stdcell",
+             "finish__design__instance__area__stdcell",
+             "floorplan__design__die__area",
+             "placeopt__design__die__area",
+             "detailedplace__design__die__area",
+             "cts__design__die__area",
+             "globalroute__design__die__area",
+             "finish__design__die__area",
+         ]
+         area = pick_first(flat, fallback_area)
+     power = pick_first(flat, power_candidates)
+
+     if wns is None and area is None and power is None:
+         return None, RewardComponents(None, None, None), flat
+
+     w_wns = float(weights.get("wns", 1.0))
+     w_area = float(weights.get("area", 0.0))
+     w_power = float(weights.get("power", 0.0))
+
+     # Guard the log terms against missing or non-positive values
+     area_term = 0.0 if area is None else math.log(max(area, 1e-9))
+     power_term = 0.0 if power is None else math.log(max(power, 1e-9))
+     wns_term = 0.0 if wns is None else wns
+
+     reward = (w_wns * wns_term) - (w_area * area_term) - (w_power * power_term)
+     return reward, RewardComponents(wns, area, power), flat
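For intuition, the scalarization can be exercised in isolation. This sketch reimplements only the final formula from `compute_reward` (it omits the metric lookup and fallback logic, and the metric values and default weights are invented for illustration):

```python
import math

def scalarize(wns, area, power, w_wns=1.0, w_area=0.1, w_power=0.1):
    # Same convention as compute_reward: reward rises with WNS and falls
    # with log(area) and log(power); a missing component contributes 0.
    area_term = 0.0 if area is None else math.log(max(area, 1e-9))
    power_term = 0.0 if power is None else math.log(max(power, 1e-9))
    wns_term = 0.0 if wns is None else wns
    return w_wns * wns_term - w_area * area_term - w_power * power_term

# A design with better (less negative) WNS scores higher, all else equal.
better = scalarize(wns=-0.1, area=1000.0, power=0.5)
worse = scalarize(wns=-0.5, area=1000.0, power=0.5)
```

Because area and power enter through logarithms, doubling the area costs the same reward penalty (`w_area * log 2`) whether the design is small or large, which is what makes one set of weights usable across designs of different scale.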
src/edgeeda/store.py ADDED
@@ -0,0 +1,96 @@
+ from __future__ import annotations
+
+ import json
+ import os
+ import sqlite3
+ from dataclasses import dataclass
+ from typing import Any, Dict, Optional
+
+ from edgeeda.utils import ensure_dir, now_ts
+
+
+ SCHEMA = """
+ CREATE TABLE IF NOT EXISTS trials (
+     id INTEGER PRIMARY KEY AUTOINCREMENT,
+     exp_name TEXT NOT NULL,
+     platform TEXT NOT NULL,
+     design TEXT NOT NULL,
+     variant TEXT NOT NULL,
+     fidelity TEXT NOT NULL,
+
+     knobs_json TEXT NOT NULL,
+     make_cmd TEXT NOT NULL,
+     return_code INTEGER NOT NULL,
+     runtime_sec REAL NOT NULL,
+
+     reward REAL,
+     metrics_json TEXT,
+     metadata_path TEXT,
+
+     created_ts REAL NOT NULL
+ );
+
+ CREATE INDEX IF NOT EXISTS idx_trials_exp ON trials(exp_name);
+ CREATE INDEX IF NOT EXISTS idx_trials_variant ON trials(platform, design, variant);
+ """
+
+
+ @dataclass
+ class TrialRecord:
+     exp_name: str
+     platform: str
+     design: str
+     variant: str
+     fidelity: str
+     knobs: Dict[str, Any]
+     make_cmd: str
+     return_code: int
+     runtime_sec: float
+     reward: Optional[float]
+     metrics: Optional[Dict[str, Any]]
+     metadata_path: Optional[str]
+
+
+ class TrialStore:
+     def __init__(self, db_path: str):
+         # os.path.dirname handles path separators portably; the previous
+         # rsplit("/") approach broke on Windows-style paths
+         ensure_dir(os.path.dirname(db_path) or ".")
+         self.conn = sqlite3.connect(db_path)
+         self.conn.execute("PRAGMA journal_mode=WAL;")
+         self.conn.executescript(SCHEMA)
+         self.conn.commit()
+
+     def add(self, r: TrialRecord) -> None:
+         self.conn.execute(
+             """
+             INSERT INTO trials(
+                 exp_name, platform, design, variant, fidelity,
+                 knobs_json, make_cmd, return_code, runtime_sec,
+                 reward, metrics_json, metadata_path, created_ts
+             ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+             """,
+             (
+                 r.exp_name,
+                 r.platform,
+                 r.design,
+                 r.variant,
+                 r.fidelity,
+                 json.dumps(r.knobs, sort_keys=True),
+                 r.make_cmd,
+                 int(r.return_code),
+                 float(r.runtime_sec),
+                 None if r.reward is None else float(r.reward),
+                 None if r.metrics is None else json.dumps(r.metrics),
+                 r.metadata_path,
+                 now_ts(),
+             ),
+         )
+         self.conn.commit()
+
+     def fetch_all(self, exp_name: str):
+         cur = self.conn.execute(
+             "SELECT exp_name, platform, design, variant, fidelity, knobs_json, "
+             "make_cmd, return_code, runtime_sec, reward, metrics_json, "
+             "metadata_path, created_ts FROM trials WHERE exp_name=? ORDER BY id ASC",
+             (exp_name,),
+         )
+         return cur.fetchall()
+
+     def close(self) -> None:
+         self.conn.close()
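The store's round trip can be exercised against an in-memory database. This sketch mirrors the insert/fetch pattern above with a trimmed-down table (the reduced schema and the sample knob values are illustrative, not what `TrialStore` actually creates):

```python
import json
import sqlite3

# Trimmed version of the trials table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trials ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "exp_name TEXT NOT NULL, knobs_json TEXT NOT NULL, reward REAL)"
)
# Knobs are serialized with sort_keys=True, as in TrialStore.add, so that
# identical knob dicts always produce identical strings.
conn.execute(
    "INSERT INTO trials(exp_name, knobs_json, reward) VALUES (?, ?, ?)",
    ("demo", json.dumps({"CORE_UTILIZATION": 45}, sort_keys=True), 0.73),
)
conn.commit()

rows = conn.execute(
    "SELECT exp_name, knobs_json, reward FROM trials WHERE exp_name=?", ("demo",)
).fetchall()
conn.close()
```

The sorted-key serialization is what makes `knobs_json` usable as a deduplication key: two trials with the same knobs compare equal as strings regardless of dict insertion order.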
src/edgeeda/utils.py ADDED
@@ -0,0 +1,52 @@
+ from __future__ import annotations
+
+ import hashlib
+ import os
+ import random
+ import re
+ import time
+ from dataclasses import dataclass
+
+ import numpy as np
+
+
+ def now_ts() -> float:
+     return time.time()
+
+
+ def seed_everything(seed: int) -> None:
+     random.seed(seed)
+     np.random.seed(seed)
+     os.environ["PYTHONHASHSEED"] = str(seed)
+
+
+ def ensure_dir(path: str) -> None:
+     os.makedirs(path, exist_ok=True)
+
+
+ def clamp(x: float, lo: float, hi: float) -> float:
+     return max(lo, min(hi, x))
+
+
+ def stable_hash(s: str) -> str:
+     # Short, stable tag for filenames/variants
+     return hashlib.sha1(s.encode("utf-8")).hexdigest()[:10]
+
+
+ def sanitize_variant_prefix(name: str, max_len: int = 24) -> str:
+     safe = re.sub(r"[^A-Za-z0-9_]+", "_", name).strip("_")
+     if not safe:
+         safe = "run"
+     if max_len > 0 and len(safe) > max_len:
+         safe = safe[:max_len]
+     return safe
+
+
+ @dataclass(frozen=True)
+ class TrialKey:
+     exp_name: str
+     platform: str
+     design: str
+     variant: str
+     fidelity: str
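The two naming helpers combine into readable yet collision-resistant variant names. A standalone restatement of their behavior (the bodies are copied from the definitions above so this runs without the package; the combined `tag` format is an assumed usage, not something the module prescribes):

```python
import hashlib
import re

def sanitize_variant_prefix(name: str, max_len: int = 24) -> str:
    # Collapse any run of non [A-Za-z0-9_] characters into one underscore.
    safe = re.sub(r"[^A-Za-z0-9_]+", "_", name).strip("_")
    if not safe:
        safe = "run"
    if max_len > 0 and len(safe) > max_len:
        safe = safe[:max_len]
    return safe

def stable_hash(s: str) -> str:
    # 10-hex-char SHA-1 prefix: deterministic across processes and runs.
    return hashlib.sha1(s.encode("utf-8")).hexdigest()[:10]

# Human-readable prefix plus a short stable tag of the full original name:
name = "my design!v2"
tag = f"{sanitize_variant_prefix(name)}_{stable_hash(name)}"
```

Hashing the untruncated name means two long names that sanitize to the same 24-character prefix still get distinct tags.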
src/edgeeda/viz.py ADDED
@@ -0,0 +1,65 @@
+ from __future__ import annotations
+
+ import json
+ import os
+ import sqlite3
+
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ from edgeeda.utils import ensure_dir
+
+
+ def export_trials(db_path: str) -> pd.DataFrame:
+     con = sqlite3.connect(db_path)
+     df = pd.read_sql_query("SELECT * FROM trials", con)
+     con.close()
+     return df
+
+
+ def make_plots(df: pd.DataFrame, out_dir: str) -> None:
+     ensure_dir(out_dir)
+
+     # Learning curve: best reward so far, over trial ids
+     df2 = df.copy()
+     df2["reward"] = pd.to_numeric(df2["reward"], errors="coerce")
+     df2 = df2.dropna(subset=["reward"]).sort_values("id")
+     if not df2.empty:
+         best = df2["reward"].cummax()
+         plt.figure()
+         plt.plot(df2["id"].values, best.values)
+         plt.xlabel("trial id")
+         plt.ylabel("best reward so far")
+         plt.tight_layout()
+         plt.savefig(os.path.join(out_dir, "learning_curve.png"), dpi=200)
+         plt.close()
+
+     # Pareto-style scatter: area vs WNS from metrics_json, if available
+     areas, wnss = [], []
+     for _, r in df.iterrows():
+         mj = r.get("metrics_json")
+         if not isinstance(mj, str) or not mj:
+             continue
+         try:
+             m = json.loads(mj)
+         except Exception:
+             continue
+         # Try common keys (the runner stores metrics already flattened)
+         a = m.get("design__die__area") or m.get("finish__design__die__area")
+         w = m.get("timing__setup__wns") or m.get("finish__timing__setup__wns")
+         if a is None or w is None:
+             continue
+         try:
+             areas.append(float(a))
+             wnss.append(float(w))
+         except Exception:
+             pass
+
+     if areas:
+         plt.figure()
+         plt.scatter(areas, wnss)
+         plt.xlabel("die area")
+         plt.ylabel("WNS")
+         plt.tight_layout()
+         plt.savefig(os.path.join(out_dir, "area_vs_wns.png"), dpi=200)
+         plt.close()
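The learning-curve plot reduces to a running maximum over the valid rewards in trial order. A dependency-free sketch of that reduction, mirroring what `Series.cummax()` computes after the `dropna` above (the reward values are invented):

```python
from itertools import accumulate

# One reward per trial, in id order; None stands in for a failed trial
# whose reward could not be computed (what to_numeric/dropna filter out).
rewards = [0.2, None, 0.5, 0.3]
valid = [r for r in rewards if r is not None]

# Cumulative maximum: "best reward so far" at each surviving trial.
best_so_far = list(accumulate(valid, max))
```

Plotting `best_so_far` against the surviving trial ids yields exactly the monotone curve saved to `learning_curve.png`.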