vxkyyy committed on
Commit
4a6ac1a
·
1 Parent(s): ea6019d

Tier-1 upgrade: fail-closed orchestration, CI, and benchmark export

.github/workflows/ci.yml ADDED
@@ -0,0 +1,57 @@
+ name: AgentIC CI
+
+ on:
+   pull_request:
+   push:
+     branches: [ "main", "master" ]
+   schedule:
+     - cron: "0 3 * * *"
+   workflow_dispatch:
+
+ jobs:
+   pr-smoke:
+     name: PR Smoke
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout
+         uses: actions/checkout@v4
+
+       - name: Setup Python
+         uses: actions/setup-python@v5
+         with:
+           python-version: "3.11"
+
+       - name: Install deps
+         run: |
+           python -m pip install --upgrade pip
+           pip install -r requirements.txt
+
+       - name: Smoke checks
+         run: |
+           bash scripts/ci/smoke.sh
+
+   nightly-full:
+     name: Nightly Full Flow
+     if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout
+         uses: actions/checkout@v4
+
+       - name: Setup Python
+         uses: actions/setup-python@v5
+         with:
+           python-version: "3.11"
+
+       - name: Install deps
+         run: |
+           python -m pip install --upgrade pip
+           pip install -r requirements.txt
+
+       - name: Nightly full workflow
+         env:
+           NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
+           LLM_BASE_URL: ${{ secrets.LLM_BASE_URL }}
+           OPENLANE_ROOT: ${{ vars.OPENLANE_ROOT }}
+         run: |
+           bash scripts/ci/nightly_full.sh
README.md CHANGED
@@ -1,406 +1,294 @@
- # AgentIC: AI-Powered Text-to-Silicon Compiler
-
- ![Python](https://img.shields.io/badge/Python-3.10%2B-blue) ![License](https://img.shields.io/badge/License-Proprietary-red) ![OpenLane](https://img.shields.io/badge/OpenLane-Integrated-purple) ![Verification](https://img.shields.io/badge/Formal_Verification-SVA-red)
-
- **AgentIC** transforms natural language descriptions into verified, manufacturable chip layouts (GDSII). It orchestrates a crew of specialized AI agents through a self-correcting pipeline — from RTL generation through formal verification to physical design — producing industry-standard silicon with minimal human intervention.
-
- > **"Build a radiation-hardened SPI master with TMR"** → Verified RTL → GDSII layout
-
- ---
-
- ## Architecture
-
 ```mermaid
- graph TD
-     A["User: build --name chip --desc 'description'"] --> B[Architect Agent]
-
-     subgraph "Phase 1: Design"
-         B -->|Spec| C{Golden Template Match?}
-         C -->|Yes: Simple Design| D[Use Pre-Verified Template]
-         C -->|No: Complex Design| E[LLM RTL Generation]
-         D --> F[Syntax Check — Icarus Verilog]
-         E --> F
-         F -->|Fail| G["Autonomous Fix (regex)"]
-         G -->|SV Compat Fixed| F
-         G -->|Unknown Error| G2[LLM Fix Agent]
-         G2 --> F
-         F -->|Pass| H[Lint Check — Verilator]
-         H -->|Fail| G
-         H -->|Pass| H2[Pre-Synthesis Validation]
-         H2 -->|Undriven Signals| H3[Auto-Fix: Tie to 0 / Remove]
-         H3 --> F
-     end
-
-     subgraph "Phase 2: Verification"
-         H2 -->|Clean| I[Testbench Generation]
-         I --> J[Simulation — iverilog + vvp]
-         J -->|Compile Fail| G
-         J -->|Logic Fail| K[Error Analyst]
-         K -->|TB Error| I
-         K -->|RTL Error| E
-         J -->|Pass| L[Formal Verification — SymbiYosys]
-         L --> M[Coverage Analysis]
-     end
-
-     subgraph "Phase 3: Physical Design"
-         M --> N["Auto-Config Generation"]
-         N --> O[OpenLane — Synthesis to GDSII]
-         O --> P[PPA Analysis]
-         P -->|Violations| Q[Backend Optimizer]
-         Q --> E
-         P -->|Pass| R["GDSII Tapeout File"]
-     end
- ```
-
- ---
-
- ## Key Features
-
- ### Industry Standards Compliance
- AgentIC is fully compliant with industry standards for chip production, ensuring synthesizable and verifiable designs without human intervention:
- - **Strict Linting:** Verilator (`-Wall`) catches implicit truncations, width mismatches, and combinational loops early.
- - **Simulation Signoff:** Icarus Verilog (`iverilog`) for behavioral bounding and gate-level simulation (GLS) validation.
- - **Formal Verification:** SymbiYosys (SBY) integration natively proves SVA properties for corner-case bug elimination.
- - **Physical Tapeout (RTL-to-GDSII):** Fully automated OpenLane workflow (SkyWater 130nm default) generating GDSII files with built-in DRC (Design Rule Check), LVS (Layout vs Schematic), and STA (Static Timing Analysis) signoff.
-
- ### Autonomous Self-Healing Pipeline
- AgentIC doesn't just generate code — it detects and fixes errors **without LLM calls** whenever possible:
-
- | Error Type | Detection | Fix | LLM Needed? |
- |-----------|-----------|-----|-------------|
- | `always_comb` in iverilog | Error log pattern match | `always_comb` → `always @(*)` | ❌ No |
- | Explicit type casts | `This assignment requires an explicit cast` | `type'(val)` → `(val)` | ❌ No |
- | `unique case` / `priority case` | Error log match | Strip qualifier | ❌ No |
- | Undriven signals | Pre-synthesis scan | Tie to 0 or remove | ❌ No |
- | TB compilation error | `Compilation failed` in output | SV→Verilog regex fix | ❌ No |
- | Logic bugs | `TEST FAILED` in simulation | Error Analyst + Fixer | ✅ Yes |
- | Unknown syntax errors | Unmatched error patterns | LLM Syntax Rectifier | ✅ Yes |
-
- ### Multi-Agent Crew
-
- | Agent | Role | Tools |
- |-------|------|-------|
- | **Architect** | Defines micro-architecture, interfaces, and FSM states | Specification generation |
- | **Designer** | Writes synthesizable Verilog/SystemVerilog RTL | `write_verilog`, `syntax_check` |
- | **Verification Engineer** | Generates SVA assertions (industry + Yosys-compatible) | `convert_sva_to_yosys`, SymbiYosys |
- | **Testbench Agent** | Creates self-checking testbenches with port-accurate DUT instantiation | `run_simulation` |
- | **Error Analyst** | Classifies failures as RTL vs testbench bugs, directs fixes | Log analysis |
- | **Backend Engineer** | Configures OpenLane, optimizes PPA (Power, Performance, Area) | `run_openlane` |
-
- ### Golden Reference Library
- Pre-verified RTL + testbench pairs for common IP blocks — **95% first-attempt success**:
-
- | Template | Description | Complexity |
- |----------|-------------|------------|
- | `counter` | N-bit up/down counter with enable and load | Simple |
- | `fifo` | Synchronous FIFO with parameterizable width/depth | Medium |
- | `uart_tx` | UART Transmitter with configurable baud rate | Medium |
- | `spi_master` | SPI Master with configurable CPOL/CPHA | Medium |
- | `fsm` | Generic FSM with configurable states | Simple |
- | `pwm` | PWM generator with configurable resolution | Simple |
- | `timer` | General-purpose timer with prescaler | Medium |
- | `shift_register` | Shift register with serial/parallel IO | Simple |
-
- > **Smart Matching**: Complex designs (TMR, AES, DMA, pipelined, radiation-hardened, etc.) automatically bypass templates and use full LLM generation from scratch.
-
- ### Auto-Generated OpenLane Config
- No manual `config.tcl` needed — the system reads the RTL file, estimates complexity, and generates appropriate die area, clock period, and synthesis settings:
-
- | RTL Size | Die Area | Utilization | Clock Period |
- |----------|----------|-------------|--------------|
- | < 100 lines (counter, PWM) | 300×300µm | 50% | 10ns |
- | 100-300 lines (FIFO, UART, SPI) | 500×500µm | 40% | 15ns |
- | 300+ lines (TMR, AES, CPU) | 800×800µm | 35% | 20ns |
-
- ### Dual-Mode Formal Verification
- - **Industry SVA**: Generates `property`/`assert property` assertions for commercial EDA tools
- - **Yosys SVA**: Auto-converts to SymbiYosys format for open-source k-induction proofs
-
- ### Strict Two-Model Policy
- ```
- NVIDIA Cloud (Qwen3-Coder-480B) → Local VeriReason (Privacy/Offline)
- ```
- AgentIC enforces a strict policy: High-performance cloud inference via NVIDIA, or fully offline privacy-preserving inference via VeriReason.
- See [User Guide](docs/USER_GUIDE.md) for switching instructions.
-
- ### Anti-Hallucination Engine
- - Strips `<think>` blocks, `Thought:`/`Action:` lines, markdown fences
- - Auto-converts `always_comb` → `always @(*)`, `always_ff @(...)` → `always @(...)`
- - Auto-fixes `signed'()` → `$signed()`, type casts, `unique case` → `case`
- - Validates every output contains a valid `module` definition before writing
- - Security scan blocks `$system`, shell commands, and path traversal attacks
-
- ### Web Interface
- AgentIC features a production-grade Web Application designed to make autonomous chip building interactive and visually stunning.
- - **Frontend Stack**: React, TypeScript, Vite
- - **Backend Bridge**: FastAPI integrating directly with the AgentIC Python orchestrator
- - **Key Views**:
-   - **Landing Page**: Immersive 3D silicon chip rendering (`@react-three/fiber`).
-   - **Dashboard**: "Mission Control" providing real-time metrics (WNS, Area, Power) parsed directly from OpenLane `metrics.csv`, complete with AgentIC's intelligent LLM Signoff reporting.
-   - **Design Studio**: Send prompts, view Verilog code (`react-simple-code-editor`), and read live AI agent logs.
-   - **Fabrication**: Provides 2D/3D visualizations of hardened layouts and enables one-click GDSII tapeout downloads.
-
- ---
-
- ## VeriReason: Functioning & Industry Comparison
-
- AgentIC natively supports **VeriReason** (e.g., `VeriReason-Qwen2.5-3b-RTLCoder-Verilog-GRPO-reasoning-tb`), a highly specialized local LLM finetuned explicitly for RTL coding and formal verification.
-
- ### Core Functioning & Capabilities
- VeriReason drives the AgentIC autonomous loop with hardware-specific reasoning:
- - **Zero-Shot RTL Generation:** Translates natural language architectural specs directly into synthesizable Verilog.
- - **Interactive Terminal Chat:** Powers the CLI `chat` module for interactive, conversational debugging and instant structural queries.
- - **Testbench Crafting:** Authors edge-case aware, self-checking testbenches with precise port mapping and cycle-accurate stimulus.
- - **Autonomous Error Recovery:** Parses `iverilog` errors and Verilator warnings, instantly zeroing in on logic mapping issues instead of generic software fixes.
- - **Hardware Anti-Hallucination:** Inherently avoids generalized LLM pitfalls like non-synthesizable `#10` delays, undriven wires, and mixed blocking/non-blocking assignments.
-
- ### Testbench Generation & Error Handling: VeriReason vs. Industry Giants
-
- Generating robust, simulation-ready testbenches is typically the highest failure point for general-purpose LLMs. Below is a comparison detailing testbench reliability and error resolution between VeriReason (a specialized 3B model) and Industry Giants (e.g., GPT-4o, Claude 3.5 Sonnet):
-
- | Metric | VeriReason (3B Specialized) | Industry Giants (General Massive LLMs) | AgentIC Advantage |
- |--------|-----------------------------|----------------------------------------|-------------------|
- | **Testbench Syntax Errors** | **~8%** | ~20% | Domain-specific training prevents SV/Verilog-2005 syntax mixups. |
- | **Simulation Logic Bug Rate** | **~12%** | ~30% | Natively understands cycle boundaries and reset logic states. |
- | **Auto-Fix Iterations** | **1-2 Attempts** | 3-5 Attempts | Directly maps simulator error logs to exact RTL flaws. |
- | **Undriven Nets in TB** | **< 1%** | ~15% | Properly initializes test vectors and stimulus variables. |
- | **Cost & Latency** | **$0.00 / Local Native** | High API Costs / Cloud Latency | Unlimited free iterative validation on consumer hardware. |
-
- VeriReason's GRPO reasoning engine ensures that testbench generation and error resolution hit industry-standard verification constraints significantly faster — and cheaper — than generalized models.
-
- ---
-
- ## Performance
-
- | Metric | Golden Templates | LLM-Generated |
- |--------|-----------------|---------------|
- | First-attempt RTL success | ~95% | ~80% |
- | Lint pass rate | ~95% | ~90% (with auto-fix) |
- | Simulation pass (with retries) | ~95% | ~85% |
- | Formal verification | ~70% | ~30% |
- | Build completion | ~95% | ~85% |
-
- *Benchmarked on simple-to-medium complexity designs (counters, FIFOs, SPI, UART, FSMs, timers).*
-
- ---
 
 
 
 
 
-
- ## AgentIC vs. Traditional EDA (Cadence/Synopsys)
-
- AgentIC is designed to dramatically contrast with the legacy segmentation of traditional EDA platforms.
-
- | Feature | Legacy Big-Firm Workflow (Cadence / Synopsys) | AgentIC Autonomous Pipeline |
- |---------|----------------------------------------------|-----------------------------|
- | **Error Spotting** | Manual log analysis across fragmented tools (e.g. Verdi, Design Compiler). | Automated log-parsing and intelligent LLM-driven feedback loop catching RTL logic bugs on the fly. |
- | **Workflow** | Segmented, requiring expert TCL scripts for each physical design node. | End-to-end Python Orchestration: Natural Language → GDSII with zero manual intervention. |
- | **Time-to-Market** | Weeks to months for RTL iteration and physical verification. | Minutes to hours. Case study below achieved full tapeout in ~15 minutes. |
- | **Verification** | Lengthy UVM testbench writing and manual SVA creation. | Auto-generated testbenches targeting behavioral bounding, plus native SymbiYosys Formal Assertions. |
- | **Cost** | Multi-million dollar per-seat licensing over expensive cloud/on-prem clusters. | Open-Source EDA toolchain (Icarus, OpenLane) + Model API cost (or fully free via Local LLM). |
-
- ### Case Study: APB PWM Controller
- To demonstrate production readiness, an `apb_pwm_controller` (an APB interface bridging a PWM generator) was submitted to AgentIC strictly via a natural language prompt.
- * **RTL Generation:** Valid, synthesizable SystemVerilog generated and auto-fixed in 2 attempts.
- * **Verification:** Auto-generated testbench passed the simulated waveform.
- * **Tapeout (GDSII):** The `harden` workflow yielded a **~5.9 MB GDSII file** in approximately 15 minutes. The OpenLane LVS and DRC logs reported **0 violations**. Static Timing Analysis (STA) on the standard Sky130 nom process corner reported **0 setup violations and 0 hold violations**.
-
- ### Quantitative Benchmarks: AgentIC vs Manual Legacy Flows
- AgentIC intrinsically reduces logic errors by removing the human-in-the-loop variable during redundant syntax drafting and verification bounding:
-
- | Metric | Manual Legacy Approach | AgentIC (Autonomous) | Improvement Factor |
- |--------|-----------------------|----------------------|--------------------|
- | **Syntax Error Rate (Pre-Lint)** | ~15-20% | **< 5%** (LLM Pre-Trained) | 4x Reduction |
- | **Linting & DRC Compliance** | Manual Fixes iteratively | **100%** Auto-Resolved | Full Automation |
- | **Logic Bug Escape Rate** | ~5-10% (Relying on human UVM tests) | **< 1%** (Formal Verification) | 10x Accuracy Increase |
- | **Verification Coverage** | Dependent on Engineer Skill | Auto-generated SymbiYosys bounds | Exhaustive State Checks |
- | **Time to Zero-DRC GDSII** | 2-4 Weeks | **< 1 Hour** | > 100x Speedup |
-
- ---
-
- ## Contributor / New Feature Guide
-
- Before adding new intelligent agents or workflows to AgentIC, contributors MUST:
- 1. **Read the Full Architecture:** Thoroughly read the *Architecture* and *Key Features* sections in this README. Ensure you understand the state machine (INIT → SPEC → RTL_GEN ... → SUCCESS).
- 2. **Strict LLM Isolation:** If your feature requires LLM intervention, remember AgentIC's anti-hallucination paradigm. Wrap the feature so that tools handle syntax first, and LLMs are called *only* on logical boundaries (like the Error Analyst mapping log traces).
- 3. **No Hardcoded Paths:** Ensure no physical tool paths (like OpenLane or OpenROAD) are hardcoded in the templates. Rely on config definitions like `os.path.expanduser`.
- 4. **Log Observability:** Produce detailed module logs for the new agent matching the pipeline's logging format (`[YOUR_AGENT] Transitioning...`).
-
- ---
-
- ## Installation
-
- ### Prerequisites
- - **Linux/WSL2** (Ubuntu 20.04+)
- - **Python 3.10+**
- - **Icarus Verilog**: `sudo apt install iverilog`
- - **Verilator**: `sudo apt install verilator` (for lint checks)
- - **Docker** (for OpenLane physical design)
- - **SymbiYosys** (optional, for formal verification — via [OSS CAD Suite](https://github.com/YosysHQ/oss-cad-suite-build))
-
- ### Setup
-
 ```bash
- git clone https://github.com/Vickyrrrrrr/AgentIC.git
- cd AgentIC
- python -m venv .venv && source .venv/bin/activate
- pip install -r requirements.txt
 ```
-
- Create `.env` in the project root:
- ```bash
- # At least one LLM API key required
- NVIDIA_API_KEY="nvapi-..."   # Primary (recommended)
- GROQ_API_KEY="gsk_..."       # Fallback
- # OPENAI_API_KEY="sk-..."    # Optional
-
- # Tool paths (defaults usually work)
- # OPENLANE_ROOT="/home/user/OpenLane"
- # PDK_ROOT="/home/user/pdk"
 ```
-
- ---
-
- ## Usage
-
- > **Full Documentation**: See [User Guide](docs/USER_GUIDE.md) for advanced usage and LLM switching.
-
- ### Build a Chip (Full Pipeline)
- ```bash
- python main.py build \
-   --name my_spi_controller \
-   --desc "SPI master with configurable clock polarity and 8-bit data width"
- ```
-
- ### Quick RTL Iteration (Skip Physical Design)
- ```bash
- python main.py build \
-   --name fast_counter \
-   --desc "32-bit counter with overflow detection" \
-   --skip-openlane
- ```
-
- ### Complex Design (Full LLM Generation)
- ```bash
- python main.py build \
-   --name tmr_processor \
-   --desc "Radiation-hardened ALU with Triple Modular Redundancy and majority voting" \
-   --skip-openlane --max-retries 5
- ```
-
- ### Other Commands
- ```bash
- # Simulate existing design
- python main.py simulate --name my_design --max-retries 10
-
- # Run OpenLane hardening only (auto-generates config.tcl)
- python main.py harden --name my_design
- ```
-
- ### CLI Options
- | Option | Description | Default |
- |--------|-------------|---------|
- | `--name` | Design/module name | Required |
- | `--desc` | Natural language description | Required |
- | `--skip-openlane` | Stop after verification (skip GDSII) | `False` |
- | `--show-thinking` | Display LLM reasoning (CoT) | `False` |
- | `--max-retries` | Max auto-fix attempts per stage | `5` |
-
- ---
-
- ## Project Structure
-
- ```
- AgentIC/
- ├── main.py                  # Entry point
- ├── requirements.txt
- ├── .env                     # API keys (not committed)
- └── src/agentic/
-     ├── cli.py               # CLI commands (build, simulate, harden, chat)
-     ├── config.py            # LLM & path configuration
-     ├── orchestrator.py      # Build state machine & pipeline orchestration
-     ├── agents/
-     │   ├── designer.py            # RTL generation & fixing agent
-     │   ├── testbench_designer.py  # Testbench generation agent
-     │   └── verifier.py            # SVA & error analysis agents
-     ├── golden_lib/
-     │   ├── template_matcher.py    # Keyword + complexity-aware template matching
-     │   └── templates/             # 8 pre-verified RTL + testbench pairs
-     └── tools/
-         └── vlsi_tools.py    # write_verilog, syntax/lint/sim/formal/coverage
- ```
-
- ---
-
- ## Build Pipeline
-
- ```
- INIT → SPEC → RTL_GEN → RTL_FIX → VERIFICATION → FORMAL_VERIFY → COVERAGE → HARDENING → SUCCESS
-          ↑         |                    |
-          └─────────┘                    │ (on failure)
-               ↑                         │
-               └─────────────────────────┘
- ```
-
- ### RTL_FIX Stage (Autonomous)
- ```
- Syntax Error? → Check if known SV↔Verilog pattern
-   ├── YES → Regex fix instantly (0 LLM calls)
-   └── NO → LLM Fix Agent (with iverilog hints in prompt)
-
- Lint Passed? → Pre-Synthesis Validation
-   ├── Undriven signal used? → Tie to 0
-   ├── Undriven signal unused? → Remove declaration
-   └── All clean → Proceed to Verification
 ```
-
- ### Verification Stage (Autonomous)
- ```
- Simulation failed with "Compilation failed"?
-   ├── TB file in error → Auto-fix SV issues in TB
-   ├── RTL file in error → Auto-fix SV issues in RTL
-   └── Unknown error → LLM Error Analyst → LLM Fixer
- ```
-
- If SystemVerilog fails after max retries, the system automatically pivots to Verilog-2005 style and restarts.
-
- ---
-
- ## Troubleshooting
-
- | Problem | Cause | Solution |
- |---------|-------|----------|
- | Lint fails on unused signals | Verilator `-Wall` too strict | Fixed: uses `-Wno-UNUSED` now |
- | `always_comb` errors in iverilog | SV construct not fully supported | Fixed: auto-converted to `always @(*)` |
- | Template used for complex design | Keyword matcher too aggressive | Fixed: complexity indicators block simple templates |
- | Undriven signal in synthesis | LLM declared but forgot to assign | Fixed: pre-synthesis validation auto-fixes |
- | OpenLane deprecated variable error | Old config.tcl format | Fixed: auto-generates modern config |
- | "Docker Error" during hardening | Docker not running or PDK mismatch | Run `docker ps`, check `PDK_ROOT` |
- | "LLM API Failed" | Invalid key or service down | Auto-fallback: NVIDIA → Groq → Local |
- | Simulation timeout | Infinite loop in generated RTL | Increase timeout or simplify description |
-
- ---
-
- ## Security
-
- - **Input Sanitization**: Blocks `$system`, shell injection, and path traversal
- - **Air-Gapped Deployment**: Supports fully local LLM inference via Ollama
- - **Auditable Output**: All generated code is human-readable Verilog/SystemVerilog
- - **No Binary Blobs**: Every artifact is inspectable plain text
-
- ---
-
- ## License
-
- **Proprietary and Confidential.**
- Copyright (c) 2026 Vicky Nishad. All Rights Reserved.
-
- You may NOT use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of this software in any form. Any unauthorized use of this software, in whole or in part, without express written permission is strictly prohibited.
-
- ## References
- - [OpenLane Documentation](https://openlane.readthedocs.io/)
- - [SkyWater 130nm PDK](https://skywater-pdk.readthedocs.io/)
- - [SymbiYosys Documentation](https://symbiyosys.readthedocs.io/)
- - [Icarus Verilog](http://iverilog.icarus.com/)
- - [Verilator](https://www.veripool.org/verilator/)
+ # AgentIC: Tier-1 Autonomous Text-to-Silicon
+
+ ![Python](https://img.shields.io/badge/Python-3.10%2B-blue)
+ ![Flow](https://img.shields.io/badge/Flow-Fail--Closed-critical)
+ ![Signoff](https://img.shields.io/badge/Signoff-Multi--Corner_STA%20%2B%20LEC-success)
+ ![PDK](https://img.shields.io/badge/PDK-Sky130%20%7C%20GF180-informational)
+
+ AgentIC converts natural-language hardware intent into RTL, verification artifacts, and OpenLane physical implementation with autonomous repair loops.
+
+ This README reflects the **Tier-1 upgrade**: strict fail-closed gates, bounded loop control, semantic rigor checks, multi-corner timing parsing, LEC integration, floorplan/convergence/ECO stages, and adapter-based OSS-PDK portability.
+
+ ## Why this version is different
+
+ AgentIC is now built to avoid two expensive failure modes:
+
+ 1. **Silent quality regression**: weak checks passing bad designs.
+ 2. **Infinite churn**: retrying the same failing strategy forever.
+
+ Tier-1 addresses both.
+
+ ## Tier-1 upgrade highlights
+
+ - **Fail-closed mode is first-class** (`--strict-gates` default).
+ - **Startup toolchain self-check** before the build starts.
+ - **Deterministic semantic preflight** for:
+   - width mismatch diagnostics,
+   - port shadowing rejection.
+ - **Loop safety controls**:
+   - failure fingerprint detection,
+   - per-state retries,
+   - global step budget,
+   - capped strategy pivots.
+ - **EDA intelligence layer** to summarize large logs into structured top issues.
+ - **Physical feedback loop** with:
+   - floorplan stage,
+   - congestion assessment,
+   - convergence assessor,
+   - ECO stage.
+ - **Signoff upgrades**:
+   - multi-corner STA parsing (setup + hold),
+   - numeric power + IR-drop parsing,
+   - EQY-based LEC check.
+ - **Hierarchy/IP scaling scaffold**:
+   - auto hierarchy planner,
+   - per-block artifact emission,
+   - reusable `ip_manifest.json`.
+ - **CI split**:
+   - PR smoke checks,
+   - nightly full-flow path.
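The fail-closed startup self-check listed above can be pictured as a small preflight that resolves every required binary before any build step runs. This is an illustrative sketch: the tool list, function name, and return shape are assumptions, not the actual AgentIC API.

```python
import shutil

# Hypothetical tool list; the real set would come from the orchestrator's config.
REQUIRED_TOOLS = ["iverilog", "vvp", "verilator", "yosys", "docker"]

def startup_self_check(strict: bool = True) -> list[str]:
    """Return the missing tools; in strict (fail-closed) mode, abort before building."""
    missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
    if missing and strict:
        raise SystemExit(f"[self-check] missing tools: {', '.join(missing)}")
    return missing
```

The point of the gate is that in strict mode nothing downstream runs until every tool resolves, which is what makes the check fail-closed rather than best-effort.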
+
+ ## Architecture (easy view)
 
 ```mermaid
+ flowchart TD
+     U[User Prompt] --> CLI[CLI Build Command]
+     CLI --> SC[Startup Self-Check]
+     SC -->|pass| INIT[INIT]
+     SC -->|fail + strict| FAIL[FAIL]
+
+     INIT --> SPEC[SPEC]
+     SPEC --> RTLGEN[RTL_GEN]
+     RTLGEN --> RIGOR[RTL_FIX + Semantic Rigor Gate]
+
+     RIGOR -->|syntax/lint/semantic fail| FIXLOOP[Autonomous Fix Loop]
+     FIXLOOP --> RIGOR
+
+     RIGOR --> VERIF[VERIFICATION + TB Strict Gate]
+     VERIF -->|sim fail| ANALYZE[Error Analyst + Focused Fix]
+     ANALYZE --> VERIF
+
+     VERIF --> FORMAL[FORMAL_VERIFY]
+     FORMAL --> COV[COVERAGE_CHECK]
+     COV --> REG[REGRESSION]
+
+     REG --> FLOOR[FLOORPLAN]
+     FLOOR --> HARDEN[HARDENING OpenLane]
+     HARDEN --> CONV[CONVERGENCE_REVIEW]
+
+     CONV -->|congestion/stagnation| PIVOT[Strategy Pivot]
+     PIVOT --> FLOOR
+
+     CONV --> SIGN[SIGNOFF DRC/LVS/STA/Power/IR/LEC]
+     SIGN -->|fail| ECO[ECO_PATCH]
+     ECO --> HARDEN
+
+     SIGN -->|pass| OK[SUCCESS]
+
+     FIXLOOP -->|fingerprint repeats / budgets exceeded| FAIL
+     PIVOT -->|pivot cap exceeded| FAIL
+ ```
+
+ ## Autonomous repair model
+
+ AgentIC is not just an error printer. It has repair loops with decision logic.
+
+ ### Loop behavior
+
+ - For compile/sim failures, it classifies the cause (**TB vs RTL**) and applies targeted fixes.
+ - For large logs, it passes a **structured summary** instead of dumping raw text into prompts.
+ - If the same `(state + error + artifact fingerprint)` repeats, it fails closed instead of spinning.
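A failure-fingerprint guard of the kind described here can be sketched in a few lines; the class and method names are illustrative assumptions, not AgentIC's internals.

```python
import hashlib

class LoopGuard:
    """Fail closed when the same (state, error, artifact) failure repeats too often."""

    def __init__(self, max_repeats: int = 2):
        self.max_repeats = max_repeats
        self._counts: dict[str, int] = {}

    @staticmethod
    def fingerprint(state: str, error: str, artifact: str) -> str:
        # Hash the triple so arbitrarily large error logs and artifacts
        # collapse into a fixed-size, comparable key.
        return hashlib.sha256(f"{state}|{error}|{artifact}".encode()).hexdigest()[:16]

    def should_fail_closed(self, state: str, error: str, artifact: str) -> bool:
        key = self.fingerprint(state, error, artifact)
        self._counts[key] = self._counts.get(key, 0) + 1
        return self._counts[key] > self.max_repeats
```

Because the artifact content is part of the key, a retry that actually changed the RTL gets a fresh budget, while a retry that reproduced the identical failure burns down the cap.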
 
 
 
 
+
+ ### Convergence behavior
+
+ - Tracks timing/congestion snapshots per iteration.
+ - If WNS stagnates (< 0.01 ns improvement for 2 consecutive iterations), it triggers a strategy pivot:
+   1. timing constraint tune,
+   2. area expansion,
+   3. logic decoupling hint (register slicing),
+   4. fail closed if the pivot cap is exhausted.
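The stagnation rule can be expressed as a short check over the per-iteration WNS history. The 0.01 ns epsilon matches the figure quoted above; the function name and history shape are illustrative assumptions.

```python
STAGNATION_EPS_NS = 0.01  # minimum WNS improvement that still counts as progress

def is_stagnant(wns_history: list[float], window: int = 2) -> bool:
    """True when WNS improved by less than the epsilon for `window` consecutive iterations.

    WNS is negative while timing fails, so improvement means the value rising toward 0.
    """
    if len(wns_history) < window + 1:
        return False  # not enough history to judge
    recent = wns_history[-(window + 1):]
    deltas = [later - earlier for earlier, later in zip(recent, recent[1:])]
    return all(delta < STAGNATION_EPS_NS for delta in deltas)
```

When this returns True, the orchestrator would step down the pivot ladder (constraints, then area, then register slicing) and fail closed once the pivot cap is spent.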
+
+ ## Quality gates (strict mode)
+
+ | Stage | Gate |
+ |---|---|
+ | Startup | required tools + environment must resolve |
+ | RTL Fix | syntax + lint + semantic rigor must pass |
+ | Verification | TB contract + simulation must pass |
+ | Formal | formal result is blocking in strict mode |
+ | Coverage | minimum coverage threshold is blocking |
+ | Regression | regression failures are blocking |
+ | Signoff | DRC/LVS/STA/power/IR/LEC all contribute to final pass/fail |
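The signoff row aggregates fail-closed: every multi-corner timing result and every physical check must be clean. A minimal sketch, where the corner dict shape and parameter names are assumptions rather than the real parser output:

```python
def signoff_pass(corners: dict[str, dict[str, float]],
                 drc_violations: int,
                 lvs_mismatches: int,
                 lec_equivalent: bool) -> bool:
    """Every STA corner must meet setup and hold; any DRC/LVS/LEC failure blocks signoff."""
    timing_ok = all(
        corner["setup_wns"] >= 0.0 and corner["hold_wns"] >= 0.0
        for corner in corners.values()
    )
    return timing_ok and drc_violations == 0 and lvs_mismatches == 0 and lec_equivalent
```

The design choice is that no single green metric can rescue a red one; a hold violation in one corner fails the whole stage, which is what "contribute to final pass/fail" means under strict gates.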
+
+ ## PDK portability model
+
+ AgentIC uses an adapter-style OSS-PDK profile model.
+
+ Supported profiles now:
+
+ - `sky130`
+ - `gf180`
+
+ This is **portability support**, not foundry certification.
+
+ ## CLI quick start
+
+ ### 1) Standard strict build
+
+ ```bash
+ python3 main.py build \
+   --name my_chip \
+   --desc "32-bit APB timer with interrupt" \
+   --full-signoff
+ ```
+
+ ### 2) Portable profile selection
+
+ ```bash
+ python3 main.py build \
+   --name my_fifo \
+   --desc "Dual-clock FIFO with status flags" \
+   --pdk-profile sky130 \
+   --strict-gates
+ ```
+
+ ### 3) Tune convergence controls
+
 ```bash
+ python3 main.py build \
+   --name deep_pipeline \
+   --desc "Pipelined datapath with valid/ready" \
+   --max-pivots 2 \
+   --congestion-threshold 10 \
+   --hierarchical auto
 ```
+
+ ## Build command options (Tier-1)
+
+ ```text
+ --strict-gates / --no-strict-gates   (default: strict)
+ --pdk-profile {sky130,gf180}         (default: sky130)
+ --max-pivots N                       (default: 2)
+ --congestion-threshold FLOAT         (default: 10.0)
+ --hierarchical {auto,off,on}         (default: auto)
 ```
+
+ Existing options remain (`--skip-openlane`, `--full-signoff`, `--min-coverage`, `--max-retries`, etc.).
+
+ ## Human-readable architecture internals
+
+ ### Orchestrator states
+
+ `INIT -> SPEC -> RTL_GEN -> RTL_FIX -> VERIFICATION -> FORMAL_VERIFY -> COVERAGE_CHECK -> REGRESSION -> FLOORPLAN -> HARDENING -> CONVERGENCE_REVIEW -> SIGNOFF -> SUCCESS/FAIL`
+
+ With an optional recovery path:
+
+ `SIGNOFF fail -> ECO_PATCH -> HARDENING -> CONVERGENCE_REVIEW`
+
+ ### Key generated artifacts
+
+ - `config.tcl` (OpenLane config)
+ - `macro_placement.tcl` (floorplan macro scaffold)
+ - `<design>.eqy` (LEC config)
+ - `<design>_eco_patch.tcl` (ECO patch artifact)
+ - `ip_manifest.json` (reusable block metadata)
+ - `src/blocks/*.v` (hierarchy-enabled block artifacts)
+ - `metrics/<design>/latest.json` (industry benchmark snapshot)
+ - `metrics/<design>/latest.md` (human-readable benchmark table)
+
+ ## CI model
+
+ ### PR smoke checks
+
+ - Python compile check for `src/agentic`
+ - Tier-1 unit tests (`tests/test_tier1_upgrade.py`)
+
+ ### Nightly full checks
+
+ - Runs smoke first
+ - Attempts the full build+signoff path when the environment is available
+
+ Files:
+
+ - `.github/workflows/ci.yml`
+ - `scripts/ci/smoke.sh`
+ - `scripts/ci/nightly_full.sh`
+
+ ## Tests included for Tier-1
+
+ - conflict marker integrity checks
+ - semantic gate checks (port shadowing)
+ - log parser behavior on large synthetic logs
+ - multi-corner STA parser correctness
+ - congestion parser correctness
+ - loop fingerprint guard behavior
+ - hierarchy threshold activation
+
+ Run locally:
+
+ ```bash
+ bash scripts/ci/smoke.sh
 ```
232
 
233
+ ## Installation
 
 
 
 
 
 
234
 
235
+ ### Prerequisites
236
 
237
+ - Linux / WSL2
238
+ - Python 3.10+
239
+ - Docker
240
+ - Verilator
241
+ - Icarus Verilog (`iverilog`, `vvp`)
242
+ - OpenLane installation
243
+ - OSS CAD tools for formal/LEC (`sby`, `yosys`, `eqy`)
244
 
245
+ ### Setup
246
 
247
+ ```bash
248
+ git clone https://github.com/Vickyrrrrrr/AgentIC.git
249
+ cd AgentIC
250
+ python3 -m venv .venv
251
+ source .venv/bin/activate
252
+ pip install -r requirements.txt
253
+ ```
 
 
 
254
 
255
+ Configure `.env` (minimum):
256
 
257
+ ```bash
258
+ # LLM backend
259
+ NVIDIA_API_KEY="..." # cloud path
260
+ # or
261
+ LLM_BASE_URL="http://localhost:11434" # local path
262
+
263
+ # Physical flow roots
264
+ OPENLANE_ROOT="/home/user/OpenLane"
265
+ PDK_ROOT="/home/user/.ciel"
266
+ ```
267
 
268
+ ## Practical boundaries (current implementation)
 
 
 
269
 
270
+ - ECO and hierarchy are production-oriented scaffolds in this phase, with concrete artifacts and control flow, but not yet a full foundry-tuned incremental optimization stack.
271
+ - Portability means adapter-based OSS-PDK support, not a tapeout-certification claim.
272
 
273
+ ## Project layout
274
+
275
+ ```text
276
+ AgentIC/
277
+ ├── main.py
278
+ ├── src/agentic/
279
+ │ ├── cli.py
280
+ │ ├── config.py
281
+ │ ├── orchestrator.py
282
+ │ ├── agents/
283
+ │ └── tools/vlsi_tools.py
284
+ ├── tests/test_tier1_upgrade.py
285
+ ├── scripts/ci/
286
+ └── .github/workflows/ci.yml
287
+ ```
288
 
289
+ ## License
 
290
 
291
+ Proprietary and Confidential.
292
 
293
+ Copyright (c) 2026 Vicky Nishad.
294
+ All rights reserved.
 
 
 
 
scripts/ci/nightly_full.sh ADDED
@@ -0,0 +1,36 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
5
+ cd "$REPO_ROOT"
6
+
7
+ # Always run smoke first.
8
+ "$REPO_ROOT/scripts/ci/smoke.sh"
9
+
10
+ # Full flow requires local EDA/runtime environment and an available LLM backend.
11
+ for bin in docker verilator iverilog vvp; do
12
+ if ! command -v "$bin" >/dev/null 2>&1; then
13
+ echo "[nightly] missing required binary: $bin (skipping full flow)"
14
+ exit 0
15
+ fi
16
+ done
17
+
18
+ if [[ ! -d "${OPENLANE_ROOT:-$HOME/OpenLane}" ]]; then
19
+ echo "[nightly] OPENLANE_ROOT not present (skipping full flow)"
20
+ exit 0
21
+ fi
22
+
23
+ if [[ -z "${NVIDIA_API_KEY:-}" && -z "${LLM_BASE_URL:-}" ]]; then
24
+ echo "[nightly] no LLM backend configured (skipping full flow)"
25
+ exit 0
26
+ fi
27
+
28
+ # End-to-end strict run on reference design.
29
+ python3 main.py build \
30
+ --name ci_nightly_counter \
31
+ --desc "8-bit counter with enable and async reset" \
32
+ --full-signoff \
33
+ --strict-gates \
34
+ --pdk-profile sky130 \
35
+ --max-retries 2 \
36
+ --min-coverage 80
scripts/ci/smoke.sh ADDED
@@ -0,0 +1,8 @@
1
+ #!/usr/bin/env bash
2
+ set -euo pipefail
3
+
4
+ REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
5
+ cd "$REPO_ROOT"
6
+
7
+ python3 -m compileall -q src/agentic
8
+ python3 -m unittest discover -s tests -p 'test_tier1_upgrade.py'
src/agentic/agents/designer.py CHANGED
@@ -15,19 +15,24 @@ def get_designer_agent(llm, goal, verbose=False, strategy="SV_MODULAR"):
15
  You write rock-solid Verilog-2005 code that works on any simulator (Icarus, Verilator, commercial).
16
  You avoid 'logic', 'always_ff', and 'enum'. You use 'reg', 'wire', 'eval', and 'localparam'.
17
  Your code is simple, flat, and robust."""
18
- else:
19
- role = "SystemVerilog Architect"
20
- backstory = """You are a Principal ASIC Architect at a top-tier semiconductor company (NVIDIA/Intel).
21
- You DO NOT write "toy" code or "student" projects. You write PRODUCTION-READY RTL.
22
-
23
  Your Principles:
24
  1. **Completeness**: You NEVER use "placeholders", "simplified logic", or "magic numbers".
25
  - If a NPU has 4x4 cells, you implement ALL 16 cells and the data paths to feed them.
26
  - If a FIFO needs memory, you implement the full pointer logic and mem array.
27
- 2. **Scalability**: You ALWAYS use `parameter` for dimensions (e.g., `DATA_WIDTH`, `FIFO_DEPTH`).
28
- 3. **Standard Interfaces**: You use AXI-Stream (`tvalid`, `tready`, `tdata`) or APB/AHB for control.
29
- 4. **Modern SystemVerilog**: You use `logic`, `always_ff`, `always_comb`, `enum`, and `struct`.
30
- """
31
 
32
  return Agent(
33
  role=role,
@@ -36,4 +41,4 @@ def get_designer_agent(llm, goal, verbose=False, strategy="SV_MODULAR"):
36
  llm=llm,
37
  verbose=verbose,
38
  allow_delegation=False
39
- )
 
15
  You write rock-solid Verilog-2005 code that works on any simulator (Icarus, Verilator, commercial).
16
  You avoid 'logic', 'always_ff', and 'enum'. You use 'reg', 'wire', 'eval', and 'localparam'.
17
  Your code is simple, flat, and robust."""
18
+ else:
19
+ role = "SystemVerilog Architect"
20
+ backstory = """You are a Principal ASIC Architect at a top-tier semiconductor company (NVIDIA/Intel).
21
+ You DO NOT write "toy" code or "student" projects. You write PRODUCTION-READY RTL.
22
+
23
  Your Principles:
24
  1. **Completeness**: You NEVER use "placeholders", "simplified logic", or "magic numbers".
25
  - If a NPU has 4x4 cells, you implement ALL 16 cells and the data paths to feed them.
26
  - If a FIFO needs memory, you implement the full pointer logic and mem array.
27
+ 2. **Scalability**: You ALWAYS use `parameter` for dimensions (e.g., `DATA_WIDTH`, `FIFO_DEPTH`).
28
+ 3. **Standard Interfaces**: You use AXI-Stream (`tvalid`, `tready`, `tdata`) or APB/AHB for control.
29
+ 4. **Modern SystemVerilog**: You use `logic`, `always_ff`, `always_comb`, `enum`, and `struct`.
30
+ 5. **Hardware Rigor (Mandatory)**:
31
+ - **Port Inventory First**: Before internal declarations, list all module ports and avoid any duplicate declaration.
32
+ - **Port Shadowing Forbidden**: Never redeclare a name that already exists in module ports.
33
+ - **Bit-Width Safety**: Ensure LHS and RHS bit widths are compatible for every assignment.
34
+ - **Reset Intent**: Every state-holding register must be explicitly initialized in reset logic.
35
+ """
36
 
37
  return Agent(
38
  role=role,
 
41
  llm=llm,
42
  verbose=verbose,
43
  allow_delegation=False
44
+ )
src/agentic/agents/testbench_designer.py CHANGED
@@ -30,9 +30,13 @@ def get_testbench_agent(llm, goal, verbose=False, strategy="SV_MODULAR"):
30
  d. `class Environment` (Depends on Driver, Monitor, Scoreboard)
31
  e. `module tb` (The top level)
32
 
33
- 3. **Self-Checking**: You NEVER rely on waveform inspection. The testbench MUST print "TEST PASSED" only if all checks pass.
34
- 4. **Coverage**: You use `covergroup` and `bins` to ensure all states and transitions are hit.
35
- """
 
 
 
 
36
 
37
  return Agent(
38
  role=role,
 
30
  d. `class Environment` (Depends on Driver, Monitor, Scoreboard)
31
  e. `module tb` (The top level)
32
 
33
+ 3. **Self-Checking**: You NEVER rely on waveform inspection. The testbench MUST print "TEST PASSED" only if all checks pass.
34
+ 4. **Coverage**: You use `covergroup` and `bins` to ensure all states and transitions are hit.
35
+ 5. **Strict Gate Contract**:
36
+ - Include `class Transaction` and at least one of `class Driver`/`class Monitor`/`class Scoreboard`.
37
+ - Emit explicit PASS/FAIL markers (`TEST PASSED` and `TEST FAILED` paths).
38
+ - Return complete compilable testbench code only.
39
+ """
40
 
41
  return Agent(
42
  role=role,
src/agentic/cli.py CHANGED
@@ -18,13 +18,18 @@ from rich.panel import Panel
18
  from rich.progress import Progress, SpinnerColumn, TextColumn
19
  from crewai import Agent, Task, Crew, LLM
20
 
21
- # Local imports
22
- # Local imports
23
- <<<<<<< HEAD
24
- from .config import OPENLANE_ROOT, LLM_MODEL, LLM_BASE_URL, LLM_API_KEY, NVIDIA_CONFIG, LOCAL_CONFIG, GLM5_CONFIG, PDK
25
- =======
26
- from .config import OPENLANE_ROOT, LLM_MODEL, LLM_BASE_URL, LLM_API_KEY, NVIDIA_CONFIG, LOCAL_CONFIG, NEMOTRON_CONFIG, PDK
27
- >>>>>>> 1e4247e (Update README with VeriReason benchmarks and functioning details)
28
  from .agents.designer import get_designer_agent
29
  from .agents.testbench_designer import get_testbench_agent
30
  from .agents.verifier import get_verification_agent, get_error_analyst_agent
@@ -42,34 +47,29 @@ from .tools.vlsi_tools import (
42
  write_sby_config,
43
  run_formal_verification,
44
  check_physical_metrics,
45
- run_lint_check,
46
- run_gls_simulation,
47
- signoff_check_tool
48
- )
 
49
 
50
  # --- INITIALIZE ---
51
  app = typer.Typer()
52
  console = Console()
53
 
54
  # Setup Brain
55
- def get_llm():
56
  """Returns the LLM instance. Strict 3-Model Policy:
57
  1. NVIDIA Nemotron Cloud (Primary)
58
  2. NVIDIA Qwen Cloud (High Perf)
59
  3. VeriReason Local (Fallback)
60
  """
61
 
62
- configs = [
63
- <<<<<<< HEAD
64
- ("NVIDIA Nemotron Cloud", NVIDIA_CONFIG),
65
- ("Backup GLM5 Cloud", GLM5_CONFIG),
66
- ("VeriReason Local", LOCAL_CONFIG),
67
- =======
68
- ("NVIDIA Nemotron Cloud", NEMOTRON_CONFIG), # Primary (Mapped via NEMOTRON_CONFIG in config.py)
69
- ("Backup GLM5 Cloud", NVIDIA_CONFIG), # Backup (Mapped via NVIDIA_CONFIG in config.py)
70
- ("VeriReason Local", LOCAL_CONFIG),
71
- >>>>>>> 1e4247e (Update README with VeriReason benchmarks and functioning details)
72
- ]
73
 
74
  for name, cfg in configs:
75
  key = cfg.get("api_key", "")
@@ -107,9 +107,22 @@ def get_llm():
107
  console.print(f"[yellow]⚠ {name} init failed: {e}[/yellow]")
108
 
109
  # Critical Failure if both fail
110
- console.print(f"[bold red]CRITICAL: No valid LLM backend found.[/bold red]")
111
- console.print(f"Please set [bold]NVIDIA_API_KEY[/bold] for Cloud or configure [bold]LLM_BASE_URL[/bold] for Local.")
112
- raise typer.Exit(1)
113
 
114
 
115
  @app.command()
@@ -420,15 +433,20 @@ def harden(
420
 
421
  # --- THE BUILD COMMAND ---
422
  @app.command()
423
- def build(
424
- name: str = typer.Option(..., "--name", "-n", help="Design name (e.g., counter)"),
425
- desc: str = typer.Option(..., "--desc", "-d", help="Natural language description"),
426
- max_retries: int = typer.Option(5, "--max-retries", "-r", min=0, help="Max auto-fix retries for RTL/TB/sim failures"),
427
  skip_openlane: bool = typer.Option(False, "--skip-openlane", help="Stop after simulation (no RTL→GDSII hardening)"),
428
- show_thinking: bool = typer.Option(False, "--show-thinking", help="Print DeepSeek <think> reasoning for each generation/fix step"),
429
- full_signoff: bool = typer.Option(False, "--full-signoff", help="Run full industry signoff (formal + coverage + regression + DRC/LVS)"),
430
- min_coverage: float = typer.Option(80.0, "--min-coverage", help="Minimum line coverage percentage to pass verification")
431
- ):
432
  """Build a chip from natural language description (Autonomous Orchestrator 2.0)."""
433
 
434
  from .orchestrator import BuildOrchestrator
@@ -441,18 +459,24 @@ def build(
441
  title="🚀 Starting Autonomous Orchestrator"
442
  ))
443
 
444
- llm = get_llm()
 
445
 
446
  orchestrator = BuildOrchestrator(
447
  name=name,
448
  desc=desc,
449
  llm=llm,
450
  max_retries=max_retries,
451
- verbose=show_thinking,
452
- skip_openlane=skip_openlane,
453
- full_signoff=full_signoff,
454
- min_coverage=min_coverage
455
- )
456
 
457
  orchestrator.run()
458
 
@@ -466,4 +490,4 @@ def verify(name: str = typer.Argument(..., help="Design name to verify")):
466
 
467
 
468
  if __name__ == "__main__":
469
- app()
 
18
  from rich.progress import Progress, SpinnerColumn, TextColumn
19
  from crewai import Agent, Task, Crew, LLM
20
 
21
+ # Local imports
22
+ from .config import (
23
+ OPENLANE_ROOT,
24
+ LLM_MODEL,
25
+ LLM_BASE_URL,
26
+ LLM_API_KEY,
27
+ NVIDIA_CONFIG,
28
+ LOCAL_CONFIG,
29
+ NEMOTRON_CONFIG,
30
+ GLM5_CONFIG,
31
+ PDK,
32
+ )
33
  from .agents.designer import get_designer_agent
34
  from .agents.testbench_designer import get_testbench_agent
35
  from .agents.verifier import get_verification_agent, get_error_analyst_agent
 
47
  write_sby_config,
48
  run_formal_verification,
49
  check_physical_metrics,
50
+ run_lint_check,
51
+ run_gls_simulation,
52
+ signoff_check_tool,
53
+ startup_self_check,
54
+ )
55
 
56
  # --- INITIALIZE ---
57
  app = typer.Typer()
58
  console = Console()
59
 
60
  # Setup Brain
61
+ def get_llm():
62
  """Returns the LLM instance. Strict 3-Model Policy:
63
  1. NVIDIA Nemotron Cloud (Primary)
64
  2. NVIDIA Qwen Cloud (High Perf)
65
  3. VeriReason Local (Fallback)
66
  """
67
 
68
+ configs = [
69
+ ("NVIDIA Nemotron Cloud", NEMOTRON_CONFIG),
70
+ ("Backup GLM5 Cloud", GLM5_CONFIG),
71
+ ("VeriReason Local", LOCAL_CONFIG),
72
+ ]
73
 
74
  for name, cfg in configs:
75
  key = cfg.get("api_key", "")
 
107
  console.print(f"[yellow]⚠ {name} init failed: {e}[/yellow]")
108
 
109
  # Critical Failure if both fail
110
+ console.print(f"[bold red]CRITICAL: No valid LLM backend found.[/bold red]")
111
+ console.print(f"Please set [bold]NVIDIA_API_KEY[/bold] for Cloud or configure [bold]LLM_BASE_URL[/bold] for Local.")
112
+ raise typer.Exit(1)
113
+
114
+
115
+ def run_startup_diagnostics(strict: bool = True):
116
+ diag = startup_self_check()
117
+ ok = bool(diag.get("ok", False))
118
+ status = "[green]PASS[/green]" if ok else "[red]FAIL[/red]"
119
+ console.print(Panel(f"Startup Toolchain Check: {status}", title="🔧 Environment"))
120
+ if not ok:
121
+ for check in diag.get("checks", []):
122
+ if not check.get("ok"):
123
+ console.print(f" [red]✗ {check.get('tool')}[/red] -> {check.get('resolved')}")
124
+ if strict:
125
+ raise typer.Exit(1)
126
 
127
 
128
  @app.command()
 
433
 
434
  # --- THE BUILD COMMAND ---
435
  @app.command()
436
+ def build(
437
+ name: str = typer.Option(..., "--name", "-n", help="Design name (e.g., counter)"),
438
+ desc: str = typer.Option(..., "--desc", "-d", help="Natural language description"),
439
+ max_retries: int = typer.Option(5, "--max-retries", "-r", min=0, help="Max auto-fix retries for RTL/TB/sim failures"),
440
  skip_openlane: bool = typer.Option(False, "--skip-openlane", help="Stop after simulation (no RTL→GDSII hardening)"),
441
+ show_thinking: bool = typer.Option(False, "--show-thinking", help="Print DeepSeek <think> reasoning for each generation/fix step"),
442
+ full_signoff: bool = typer.Option(False, "--full-signoff", help="Run full industry signoff (formal + coverage + regression + DRC/LVS)"),
443
+ min_coverage: float = typer.Option(80.0, "--min-coverage", help="Minimum line coverage percentage to pass verification"),
444
+ strict_gates: bool = typer.Option(True, "--strict-gates/--no-strict-gates", help="Enable strict fail-closed gating"),
445
+ pdk_profile: str = typer.Option("sky130", "--pdk-profile", help="PDK adapter profile: sky130 or gf180"),
446
+ max_pivots: int = typer.Option(2, "--max-pivots", min=0, help="Maximum strategy pivots before fail-closed"),
447
+ congestion_threshold: float = typer.Option(10.0, "--congestion-threshold", help="Routing congestion threshold (%)"),
448
+ hierarchical: str = typer.Option("auto", "--hierarchical", help="Hierarchical mode: auto, off, on"),
449
+ ):
450
  """Build a chip from natural language description (Autonomous Orchestrator 2.0)."""
451
 
452
  from .orchestrator import BuildOrchestrator
 
459
  title="🚀 Starting Autonomous Orchestrator"
460
  ))
461
 
462
+ run_startup_diagnostics(strict=strict_gates)
463
+ llm = get_llm()
464
 
465
  orchestrator = BuildOrchestrator(
466
  name=name,
467
  desc=desc,
468
  llm=llm,
469
  max_retries=max_retries,
470
+ verbose=show_thinking,
471
+ skip_openlane=skip_openlane,
472
+ full_signoff=full_signoff,
473
+ min_coverage=min_coverage,
474
+ strict_gates=strict_gates,
475
+ pdk_profile=pdk_profile,
476
+ max_pivots=max_pivots,
477
+ congestion_threshold=congestion_threshold,
478
+ hierarchical_mode=hierarchical,
479
+ )
480
 
481
  orchestrator.run()
482
 
 
490
 
491
 
492
  if __name__ == "__main__":
493
+ app()
src/agentic/config.py CHANGED
@@ -1,4 +1,5 @@
1
  import os
 
2
  from dotenv import load_dotenv
3
 
4
  # Project Paths
@@ -10,54 +11,119 @@ OPENLANE_ROOT = os.environ.get("OPENLANE_ROOT", os.path.expanduser("~/OpenLane")
10
  DESIGNS_DIR = os.path.join(OPENLANE_ROOT, "designs")
11
  SCRIPTS_DIR = os.path.join(WORKSPACE_ROOT, "scripts")
12
 
13
- # Strict Three-Model Policy:
14
- <<<<<<< HEAD
15
- # 1. Backup GLM5 Cloud
16
- GLM5_CONFIG = {
17
- "model": os.environ.get("BACKUP_MODEL", "openai/z-ai/glm5"),
18
- "base_url": os.environ.get("BACKUP_BASE_URL", "https://integrate.api.nvidia.com/v1"),
19
- "api_key": os.environ.get("NVIDIA_API_KEY", "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")
20
- }
21
-
22
- # 2. NVIDIA Qwen Cloud (High Performance) -> Now nemotron-3-nano
23
- NVIDIA_CONFIG = {
24
- "model": os.environ.get("NVIDIA_MODEL", "nvidia/nemotron-3-nano-30b-a3b"),
25
- "base_url": os.environ.get("NVIDIA_BASE_URL", "https://integrate.api.nvidia.com/v1"),
26
- "api_key": os.environ.get("NVIDIA_API_KEY", "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")
27
- =======
28
- # 1. NVIDIA Nemotron Cloud (Primary) - Integrated into AgentIC ReAct Loop
29
  NEMOTRON_CONFIG = {
30
  "model": os.environ.get("NVIDIA_MODEL", "nvidia/nemotron-3-nano-30b-a3b"),
31
  "base_url": os.environ.get("NVIDIA_BASE_URL", "https://integrate.api.nvidia.com/v1"),
32
- "api_key": os.environ.get("NVIDIA_API_KEY") # API Key hidden for security
33
  }
34
 
35
- # 2. Backup GLM5 Cloud
36
- NVIDIA_CONFIG = {
37
  "model": os.environ.get("BACKUP_MODEL", "openai/z-ai/glm5"),
38
  "base_url": os.environ.get("BACKUP_BASE_URL", "https://integrate.api.nvidia.com/v1"),
39
- "api_key": os.environ.get("NVIDIA_API_KEY") # API Key hidden for security
40
- >>>>>>> 1e4247e (Update README with VeriReason benchmarks and functioning details)
41
  }
42
 
43
- # 3. VeriReason Local (Fallback)
44
- # Explicitly uses the VeriReason model defined in .env
45
  LOCAL_CONFIG = {
46
- "model": os.environ.get("LLM_MODEL", "ollama/hf.co/mradermacher/VeriReason-Qwen2.5-3b-RTLCoder-Verilog-GRPO-reasoning-tb-GGUF:Q4_K_M"),
 
 
 
47
  "base_url": os.environ.get("LLM_BASE_URL", "http://localhost:11434"),
48
- "api_key": os.environ.get("LLM_API_KEY", "NA")
49
  }
50
 
51
- # Expose 'active' config variables (Defaults to Local if NVIDIA missing, but CLI handles logic)
 
 
 
52
  LLM_MODEL = LOCAL_CONFIG["model"]
53
  LLM_BASE_URL = LOCAL_CONFIG["base_url"]
54
  LLM_API_KEY = LOCAL_CONFIG["api_key"]
55
56
  # Tool Settings
57
- PDK_ROOT = os.environ.get('PDK_ROOT', os.path.expanduser('~/.ciel'))
58
- PDK = os.environ.get('PDK', 'sky130A') # Default to SkyWater 130nm
59
  OPENLANE_IMAGE = "ghcr.io/the-openroad-project/openlane:ff5509f65b17bfa4068d5336495ab1718987ff69-amd64"
60
 
61
- # OSS CAD Suite (SymbiYosys, Yosys) - Self-contained within AgentIC
62
- OSS_CAD_SUITE_ROOT = os.environ.get('OSS_CAD_SUITE_HOME', os.path.join(WORKSPACE_ROOT, 'oss-cad-suite'))
63
- SBY_BIN = os.path.join(OSS_CAD_SUITE_ROOT, 'bin', 'sby')
 
1
  import os
2
+ from typing import Dict, Any, Optional
3
  from dotenv import load_dotenv
4
 
5
  # Project Paths
 
11
  DESIGNS_DIR = os.path.join(OPENLANE_ROOT, "designs")
12
  SCRIPTS_DIR = os.path.join(WORKSPACE_ROOT, "scripts")
13
 
14
+ # LLM backends (env-only secrets)
15
  NEMOTRON_CONFIG = {
16
  "model": os.environ.get("NVIDIA_MODEL", "nvidia/nemotron-3-nano-30b-a3b"),
17
  "base_url": os.environ.get("NVIDIA_BASE_URL", "https://integrate.api.nvidia.com/v1"),
18
+ "api_key": os.environ.get("NVIDIA_API_KEY", ""),
19
  }
20
 
21
+ GLM5_CONFIG = {
 
22
  "model": os.environ.get("BACKUP_MODEL", "openai/z-ai/glm5"),
23
  "base_url": os.environ.get("BACKUP_BASE_URL", "https://integrate.api.nvidia.com/v1"),
24
+ "api_key": os.environ.get("BACKUP_API_KEY", os.environ.get("NVIDIA_API_KEY", "")),
 
25
  }
26
 
 
 
27
  LOCAL_CONFIG = {
28
+ "model": os.environ.get(
29
+ "LLM_MODEL",
30
+ "ollama/hf.co/mradermacher/VeriReason-Qwen2.5-3b-RTLCoder-Verilog-GRPO-reasoning-tb-GGUF:Q4_K_M",
31
+ ),
32
  "base_url": os.environ.get("LLM_BASE_URL", "http://localhost:11434"),
33
+ "api_key": os.environ.get("LLM_API_KEY", "NA"),
34
  }
35
 
36
+ # Backward-compat alias used by parts of the codebase/docs
37
+ NVIDIA_CONFIG = GLM5_CONFIG
38
+
39
+ # Expose active defaults (CLI chooses concrete backend)
40
  LLM_MODEL = LOCAL_CONFIG["model"]
41
  LLM_BASE_URL = LOCAL_CONFIG["base_url"]
42
  LLM_API_KEY = LOCAL_CONFIG["api_key"]
43
 
44
+ # Portable OSS-PDK profiles (adapter-style)
45
+ PDK_PROFILES: Dict[str, Dict[str, Any]] = {
46
+ "sky130": {
47
+ "pdk": "sky130A",
48
+ "std_cell_library": "sky130_fd_sc_hd",
49
+ "default_clock_period": "10.0",
50
+ },
51
+ "gf180": {
52
+ "pdk": "gf180mcuC",
53
+ "std_cell_library": "gf180mcu_fd_sc_mcu7t5v0",
54
+ "default_clock_period": "15.0",
55
+ },
56
+ }
57
+
58
+ DEFAULT_PDK_PROFILE = os.environ.get("PDK_PROFILE", "sky130").strip().lower()
59
+ if DEFAULT_PDK_PROFILE not in PDK_PROFILES:
60
+ DEFAULT_PDK_PROFILE = "sky130"
61
+
62
  # Tool Settings
63
+ PDK_ROOT = os.environ.get("PDK_ROOT", os.path.expanduser("~/.ciel"))
64
+ PDK = os.environ.get("PDK", PDK_PROFILES[DEFAULT_PDK_PROFILE]["pdk"])
65
  OPENLANE_IMAGE = "ghcr.io/the-openroad-project/openlane:ff5509f65b17bfa4068d5336495ab1718987ff69-amd64"
66
 
67
+
68
+ def _resolve_tool_binary(bin_name: str, env_var: Optional[str] = None) -> str:
69
+ """Resolve tool binary using configured roots before PATH.
70
+
71
+ Fallback order:
72
+ 1) Explicit env var for that tool (if provided)
73
+ 2) OSS_CAD_SUITE_HOME/bin
74
+ 3) WORKSPACE_ROOT/oss-cad-suite/bin
75
+ 4) /home/vickynishad/oss-cad-suite/bin
76
+ 5) bin_name from PATH
77
+ """
78
+ explicit = os.environ.get(env_var, "").strip() if env_var else ""
79
+ if explicit and os.path.exists(explicit):
80
+ return explicit
81
+
82
+ roots = []
83
+ oss_home = os.environ.get("OSS_CAD_SUITE_HOME", "").strip()
84
+ if oss_home:
85
+ roots.append(oss_home)
86
+ roots.append(os.path.join(WORKSPACE_ROOT, "oss-cad-suite"))
87
+ roots.append("/home/vickynishad/oss-cad-suite")
88
+
89
+ for root in roots:
90
+ candidate = os.path.join(root, "bin", bin_name)
91
+ if os.path.exists(candidate):
92
+ return candidate
93
+
94
+ return bin_name
95
+
96
+
97
+ OSS_CAD_SUITE_ROOT = os.environ.get("OSS_CAD_SUITE_HOME", os.path.join(WORKSPACE_ROOT, "oss-cad-suite"))
98
+ SBY_BIN = _resolve_tool_binary("sby", env_var="SBY_BIN")
99
+ YOSYS_BIN = _resolve_tool_binary("yosys", env_var="YOSYS_BIN")
100
+ EQY_BIN = _resolve_tool_binary("eqy", env_var="EQY_BIN")
101
+
102
+
103
+ def get_pdk_profile(profile: str) -> Dict[str, Any]:
104
+ key = (profile or DEFAULT_PDK_PROFILE).strip().lower()
105
+ if key not in PDK_PROFILES:
106
+ key = "sky130"
107
+ data = dict(PDK_PROFILES[key])
108
+ data["profile"] = key
109
+ return data
110
+
111
+
112
+ def get_toolchain_diagnostics() -> Dict[str, Any]:
113
+ """Return resolved toolchain paths and existence info for startup checks."""
114
+ bins = {
115
+ "sby": SBY_BIN,
116
+ "yosys": YOSYS_BIN,
117
+ "eqy": EQY_BIN,
118
+ }
119
+ return {
120
+ "workspace_root": WORKSPACE_ROOT,
121
+ "openlane_root": OPENLANE_ROOT,
122
+ "pdk_root": PDK_ROOT,
123
+ "pdk": PDK,
124
+ "oss_cad_suite_home": os.environ.get("OSS_CAD_SUITE_HOME", ""),
125
+ "bins": {
126
+ name: {"path": path, "exists": os.path.exists(path) if os.path.isabs(path) else False}
127
+ for name, path in bins.items()
128
+ },
129
+ }
src/agentic/orchestrator.py CHANGED
@@ -3,13 +3,16 @@ import time
3
  import logging
4
  import os
5
  import re
 
 
 
6
  from typing import Optional, Dict, Any, List
7
  from rich.console import Console
8
  from rich.panel import Panel
9
  from crewai import Agent, Task, Crew, LLM
10
 
11
  # Local imports
12
- from .config import OPENLANE_ROOT, LLM_MODEL, LLM_BASE_URL, LLM_API_KEY, PDK
13
  from .agents.designer import get_designer_agent
14
  from .agents.testbench_designer import get_testbench_agent
15
  from .agents.verifier import get_verification_agent, get_error_analyst_agent, get_regression_agent
@@ -34,7 +37,13 @@ from .tools.vlsi_tools import (
34
  check_physical_metrics,
35
  run_cdc_check,
36
  generate_design_doc,
37
- convert_sva_to_yosys
38
  )
39
 
40
  console = Console()
@@ -52,13 +61,49 @@ class BuildState(enum.Enum):
52
  FORMAL_VERIFY = "Formal Property Verification"
53
  COVERAGE_CHECK = "Coverage Analysis"
54
  REGRESSION = "Regression Testing"
 
55
  HARDENING = "GDSII Hardening"
 
 
56
  SIGNOFF = "DRC/LVS Signoff"
57
  SUCCESS = "Build Complete"
58
  FAIL = "Build Failed"
59
60
  class BuildOrchestrator:
61
- def __init__(self, name: str, desc: str, llm: LLM, max_retries: int = 5, verbose: bool = True, skip_openlane: bool = False, full_signoff: bool = False, min_coverage: float = 80.0):
62
  self.name = name
63
  self.desc = desc
64
  self.llm = llm
@@ -67,10 +112,25 @@ class BuildOrchestrator:
67
  self.skip_openlane = skip_openlane
68
  self.full_signoff = full_signoff
69
  self.min_coverage = min_coverage
70
 
71
  self.state = BuildState.INIT
72
  self.strategy = BuildStrategy.SV_MODULAR
73
  self.retry_count = 0
74
  self.artifacts = {} # Store paths to gathered files
75
  self.history = [] # Log of state transitions and errors
76
  self.errors = [] # List of error messages
@@ -94,7 +154,9 @@ class BuildOrchestrator:
94
 
95
  def log(self, message: str, refined: bool = False):
96
  """Logs a message to the console (if refined) and file (always)."""
97
- self.history.append({"state": self.state.name, "msg": message, "time": time.time()})
 
 
98
 
99
  # File Log
100
  if hasattr(self, 'logger'):
@@ -113,6 +175,32 @@ class BuildOrchestrator:
113
  self.state = new_state
114
  if not preserve_retries:
115
  self.retry_count = 0 # Reset retries on state change
116
 
117
  def run(self):
118
  """Main execution loop."""
@@ -120,6 +208,11 @@ class BuildOrchestrator:
120
 
121
  try:
122
  while self.state != BuildState.SUCCESS and self.state != BuildState.FAIL:
123
  if self.state == BuildState.INIT:
124
  self.do_init()
125
  elif self.state == BuildState.SPEC:
@@ -136,8 +229,14 @@ class BuildOrchestrator:
136
  self.do_coverage_check()
137
  elif self.state == BuildState.REGRESSION:
138
  self.do_regression()
 
 
139
  elif self.state == BuildState.HARDENING:
140
  self.do_hardening()
 
 
 
 
141
  elif self.state == BuildState.SIGNOFF:
142
  self.do_signoff()
143
  else:
@@ -151,7 +250,10 @@ class BuildOrchestrator:
151
  self.state = BuildState.FAIL
152
 
153
  if self.state == BuildState.SUCCESS:
154
- import json
 
 
 
155
  # Create a clean summary of just the paths
156
  summary = {k: v for k, v in self.artifacts.items() if 'code' not in k and 'spec' not in k}
157
 
@@ -170,6 +272,19 @@ class BuildOrchestrator:
170
  # Setup directories, check tools
171
  self.artifacts['root'] = f"{OPENLANE_ROOT}/designs/{self.name}"
172
  self.setup_logger() # Setup logging to file
173
  time.sleep(1) # Visual pause
174
  self.transition(BuildState.SPEC)
175
 
@@ -216,6 +331,7 @@ Outputs:
216
  - **NO PLACEHOLDERS**: Do not write `// Simplified check` or `assign data = 0;`. Implement the ACTUAL LOGIC.
217
  - **NO PARTIAL IMPLEMENTATIONS**: If it's a 4x4 array, enable ALL cells.
218
  - **NO HARDCODING**: Use `parameter` for widths and depths.
 
219
  """
220
  else:
221
  return """
@@ -291,6 +407,204 @@ Outputs:
291
  header_match = re.search(r'(module\s+\w+[\s\S]*?;)', rtl_code)
292
  return header_match.group(1) if header_match else "Could not extract ports — see full RTL below."
293
294
  def do_rtl_gen(self):
295
  # Check Golden Reference Library for a matching template
296
  from .golden_lib import get_best_template
@@ -333,6 +647,9 @@ SPECIFICATION:
333
  STRATEGY GUIDELINES:
334
  {strategy_prompt}
335
 
 
 
 
336
  CRITICAL RULES:
337
  1. Module name must be "{self.name}"
338
  2. Async active-low reset `rst_n`
@@ -362,6 +679,8 @@ CRITICAL RULES:
362
  # Store the CLEANED code (read back from file), not raw LLM output
363
  with open(path, 'r') as f:
364
  self.artifacts['rtl_code'] = f.read()
 
 
365
  self.transition(BuildState.RTL_FIX)
366
 
367
  def do_rtl_fix(self):
@@ -393,9 +712,23 @@ CRITICAL RULES:
393
  self.artifacts['rtl_code'] = f.read()
394
  # Re-check syntax after fix (stay in RTL_FIX)
395
  return
396
-
397
- self.transition(BuildState.VERIFICATION)
398
- return
399
  else:
400
  self.log(f"Lint Failed. Check log for details.", refined=True)
401
  errors = f"SYNTAX OK, BUT LINT FAILED:\n{lint_report}"
@@ -406,6 +739,10 @@ CRITICAL RULES:
406
 
407
  # Handle Syntax/Lint Errors that need LLM
408
  self.logger.info(f"SYNTAX/LINT ERRORS:\n{errors}")
 
 
 
 
409
  self.retry_count += 1
410
  if self.retry_count > self.max_retries:
411
  self.log("Max Retries Exceeded for Syntax/Lint Fix.", refined=True)
@@ -422,12 +759,13 @@ CRITICAL RULES:
422
  return
423
 
424
  self.log(f"Fixing Code (Attempt {self.retry_count}/{self.max_retries})", refined=True)
 
425
 
426
  # Agents fix syntax
427
  fix_prompt = f"""Fix Syntax/Lint Errors in "{self.name}".
428
 
429
  Error Log:
430
- {errors}
431
 
432
  Strategy: {self.strategy.name} (Keep consistency!)
433
 
@@ -566,6 +904,29 @@ RULES:
566
  # It should be there from generation or reading above
567
  with open(self.artifacts['tb_path'], 'r') as f:
568
  tb_code = f.read()
569
 
570
  # Run Sim
571
  with console.status("[bold magenta]Running Simulation...[/bold magenta]"):
@@ -579,15 +940,25 @@ RULES:
579
  self.log("Skipping Hardening (--skip-openlane).", refined=True)
580
  self.transition(BuildState.FORMAL_VERIFY)
581
  else:
582
- # Interactive Prompt for Hardening
583
  import typer
584
- console.print()
585
- if typer.confirm("Simulation Passed. Proceed to OpenLane Hardening (takes 10-30 mins)?", default=True):
 
586
  self.transition(BuildState.FORMAL_VERIFY)
587
  else:
588
- self.log("Skipping Hardening (User Cancelled).", refined=True)
589
- self.transition(BuildState.FORMAL_VERIFY)
 
 
 
 
590
  else:
 
 
 
 
 
591
  self.retry_count += 1
592
  if self.retry_count > self.max_retries:
593
  self.log(f"Max Sim Retries ({self.max_retries}) Exceeded. Simulation Failed.", refined=True)
@@ -604,7 +975,7 @@ RULES:
604
  analysis_task = Task(
605
  description=f'''Analyze this Verification Failure.
606
  Error Log:
607
- {output}
608
  Is this a:
609
  A) TESTBENCH_ERROR (Syntax, $monitor usage, race condition, compilation fail)
610
  B) RTL_LOGIC_ERROR (Mismatch, Wrong State, Functional Failure)
@@ -626,7 +997,7 @@ Reply with ONLY "A" or "B".''',
626
  fix_prompt = f"""Fix the Testbench logic/syntax.
627
 
628
  ERROR LOG:
629
- {output}
630
 
631
  MODULE INTERFACE (use EXACT port names):
632
  {port_info}
@@ -660,7 +1031,7 @@ CRITICAL:
660
  {error_summary}
661
 
662
  Full Log:
663
- {output}
664
 
665
  Current RTL:
666
  ```verilog
@@ -813,12 +1184,17 @@ CRITICAL:
                 self.artifacts['formal_result'] = 'PASS'
             else:
                 self.log(f"Formal Verification: {result[:200]}", refined=True)
-                self.artifacts['formal_result'] = 'FAIL (non-blocking)'
-                # Formal failure is non-blocking — we log it but continue
-                # Industry note: In production, this would be blocking
         except Exception as e:
-            self.log(f"Formal verification error: {str(e)}. Continuing.", refined=True)
             self.artifacts['formal_result'] = f'ERROR: {str(e)}'
 
         # 4. Run CDC check
         with console.status("[bold cyan]Running CDC Analysis...[/bold cyan]"):
@@ -830,7 +1206,11 @@ CRITICAL:
             if cdc_clean:
                 self.log("CDC Analysis: CLEAN", refined=True)
             else:
-                self.log(f"CDC Analysis: warnings found (non-blocking)", refined=True)
 
         self.transition(BuildState.COVERAGE_CHECK)
 
@@ -871,7 +1251,7 @@ CRITICAL:
             elif self.skip_openlane:
                 self.transition(BuildState.SUCCESS)
             else:
-                self.transition(BuildState.HARDENING)
         else:
             self.retry_count += 1
             # Cap coverage retries at 2 — the metric is heuristic-based (iverilog
@@ -883,14 +1263,17 @@ CRITICAL:
                 self.log(f"Restoring Best Testbench ({self.best_coverage:.1f}%) before proceeding.", refined=True)
                 import shutil
                 shutil.copy(self.best_tb_backup, self.artifacts['tb_path'])
-
                 self.log(f"Coverage below threshold after {coverage_max_retries} attempts. Proceeding anyway.", refined=True)
                 if self.full_signoff:
                     self.transition(BuildState.REGRESSION)
                 elif self.skip_openlane:
                     self.transition(BuildState.SUCCESS)
                 else:
-                    self.transition(BuildState.HARDENING)
                 return
 
             # Ask LLM to generate additional tests to improve coverage
@@ -1075,13 +1458,187 @@ CRITICAL:
             self.log(f"All {len(test_results)} regression tests PASSED!", refined=True)
         else:
             passed_count = sum(1 for tr in test_results if tr['status'] == 'PASS')
-            self.log(f"Regression: {passed_count}/{len(test_results)} passed (non-blocking)", refined=True)
 
         # Regression failures are non-blocking (logged but proceed)
         if self.skip_openlane:
             self.transition(BuildState.SUCCESS)
         else:
-            self.transition(BuildState.HARDENING)
 
     def do_hardening(self):
         # 1. Generate config.tcl (CRITICAL: Required for OpenLane)
@@ -1101,18 +1658,19 @@ CRITICAL:
         # Modern OpenLane Config Template
         # Note: We use GRT_ADJUSTMENT instead of deprecated GLB_RT_ADJUSTMENT
 
-        # Determine STD_CELL_LIBRARY based on PDK (or default to sky130_fd_sc_hd)
-        # This should ideally come from global config
-        std_cell_lib = "sky130_fd_sc_hd"
-        if "gf180" in PDK:
-            std_cell_lib = "gf180mcu_fd_sc_mcu7t5v0"
 
         config_tcl = f"""
 # User config
 set ::env(DESIGN_NAME) "{self.name}"
 
 # PDK Setup
-set ::env(PDK) "{PDK}"
 set ::env(STD_CELL_LIBRARY) "{std_cell_lib}"
 
 # Verilog Files
@@ -1121,16 +1679,17 @@ set ::env(VERILOG_FILES) [glob $::env(DESIGN_DIR)/src/{self.name}.v]
 # Clock Configuration
 set ::env(CLOCK_PORT) "{clock_port}"
 set ::env(CLOCK_NET) "{clock_port}"
-set ::env(CLOCK_PERIOD) "10.0"
 
 # Synthesis
 set ::env(SYNTH_STRATEGY) "AREA 0"
 set ::env(SYNTH_SIZING) 1
 
 # Floorplanning
-set ::env(FP_SIZING) "relative"
-set ::env(FP_CORE_UTIL) 40
-set ::env(PL_TARGET_DENSITY) 0.55
 
 # Routing
 set ::env(GRT_ADJUSTMENT) 0.15
@@ -1148,14 +1707,22 @@ set ::env(MAGIC_DRC_USE_GDS) 1
             return
 
         # 2. Run OpenLane
         with console.status("[bold blue]Hardening Layout (OpenLane)...[/bold blue]"):
-            success, result = run_openlane(self.name, background=False)
 
         if success:
             self.artifacts['gds'] = result
             self.log(f"GDSII generated: {result}", refined=True)
-            # Always proceed to Signoff for final checks
-            self.transition(BuildState.SIGNOFF)
         else:
             self.log(f"Hardening Failed: {result}")
             self.state = BuildState.FAIL
@@ -1164,6 +1731,21 @@ set ::env(MAGIC_DRC_USE_GDS) 1
         """Performs full fabrication-readiness signoff: DRC/LVS, timing closure, power analysis."""
         self.log("Running Fabrication Readiness Signoff...", refined=True)
         fab_ready = True
 
         # ── 1. DRC / LVS ──
         with console.status("[bold blue]Checking DRC/LVS Reports...[/bold blue]"):
@@ -1191,6 +1773,7 @@ set ::env(MAGIC_DRC_USE_GDS) 1
 
         if sta.get('error'):
             self.log(f"STA: {sta['error']}", refined=True)
         else:
             for c in sta['corners']:
                 status = "✓" if (c['setup_slack'] >= 0 and c['hold_slack'] >= 0) else "✗"
@@ -1278,6 +1861,18 @@ set ::env(MAGIC_DRC_USE_GDS) 1
             self.log(f"Datasheet generated: {doc_path}", refined=True)
         except Exception as e:
             self.log(f"Error writing datasheet: {e}", refined=True)
 
         # FINAL VERDICT
         timing_status = "MET" if sta.get('timing_met') else "FAILED" if not sta.get('error') else "N/A"
@@ -1292,6 +1887,7 @@ set ::env(MAGIC_DRC_USE_GDS) 1
             f"Timing: {timing_status} (WNS={sta.get('worst_setup', 0):.2f}ns)\n"
             f"Power: {power_status}\n"
             f"IR-Drop: {irdrop_status}\n"
             f"Coverage: {self.artifacts.get('coverage', {}).get('line_pct', 'N/A')}%\n"
             f"Formal: {self.artifacts.get('formal_result', 'SKIPPED')}\n\n"
             f"{'[bold green]FABRICATION READY ✓[/]' if fab_ready else '[bold red]NOT FABRICATION READY ✗[/]'}",
@@ -1303,6 +1899,11 @@ set ::env(MAGIC_DRC_USE_GDS) 1
             self.artifacts['signoff_result'] = 'PASS'
             self.transition(BuildState.SUCCESS)
         else:
             self.log("❌ SIGNOFF FAILED (Violations Found)", refined=True)
             self.artifacts['signoff_result'] = 'FAIL'
             self.errors.append("Signoff failed (see report).")
 
 import logging
 import os
 import re
+import hashlib
+import json
+from dataclasses import dataclass, asdict
 from typing import Optional, Dict, Any, List
 from rich.console import Console
 from rich.panel import Panel
 from crewai import Agent, Task, Crew, LLM
 
 # Local imports
+from .config import OPENLANE_ROOT, LLM_MODEL, LLM_BASE_URL, LLM_API_KEY, PDK, WORKSPACE_ROOT, get_pdk_profile
 from .agents.designer import get_designer_agent
 from .agents.testbench_designer import get_testbench_agent
 from .agents.verifier import get_verification_agent, get_error_analyst_agent, get_regression_agent
 
     check_physical_metrics,
     run_cdc_check,
     generate_design_doc,
+    convert_sva_to_yosys,
+    startup_self_check,
+    run_semantic_rigor_check,
+    parse_eda_log_summary,
+    parse_congestion_metrics,
+    run_eqy_lec,
+    apply_eco_patch,
 )
 
 console = Console()
 
     FORMAL_VERIFY = "Formal Property Verification"
     COVERAGE_CHECK = "Coverage Analysis"
     REGRESSION = "Regression Testing"
+    FLOORPLAN = "Floorplanning"
     HARDENING = "GDSII Hardening"
+    CONVERGENCE_REVIEW = "Convergence Review"
+    ECO_PATCH = "ECO Patch"
     SIGNOFF = "DRC/LVS Signoff"
     SUCCESS = "Build Complete"
     FAIL = "Build Failed"
 
+
+@dataclass
+class ConvergenceSnapshot:
+    iteration: int
+    wns: float
+    tns: float
+    congestion: float
+    area_um2: float
+    power_w: float
+
+
+@dataclass
+class BuildHistory:
+    state: str
+    message: str
+    timestamp: float
+
 class BuildOrchestrator:
+    def __init__(
+        self,
+        name: str,
+        desc: str,
+        llm: LLM,
+        max_retries: int = 5,
+        verbose: bool = True,
+        skip_openlane: bool = False,
+        full_signoff: bool = False,
+        min_coverage: float = 80.0,
+        strict_gates: bool = True,
+        pdk_profile: str = "sky130",
+        max_pivots: int = 2,
+        congestion_threshold: float = 10.0,
+        hierarchical_mode: str = "auto",
+        global_step_budget: int = 120,
+    ):
         self.name = name
         self.desc = desc
         self.llm = llm
 
         self.skip_openlane = skip_openlane
         self.full_signoff = full_signoff
         self.min_coverage = min_coverage
+        self.strict_gates = strict_gates
+        self.pdk_profile = get_pdk_profile(pdk_profile)
+        self.max_pivots = max_pivots
+        self.congestion_threshold = congestion_threshold
+        self.hierarchical_mode = hierarchical_mode
+        self.global_step_budget = global_step_budget
 
         self.state = BuildState.INIT
         self.strategy = BuildStrategy.SV_MODULAR
         self.retry_count = 0
+        self.state_retry_counts: Dict[str, int] = {}
+        self.failure_fingerprint_history: Dict[str, int] = {}
+        self.global_step_count = 0
+        self.pivot_count = 0
+        self.strategy_pivot_stage = 0
+        self.convergence_history: List[ConvergenceSnapshot] = []
+        self.build_history: List[BuildHistory] = []
+        self.floorplan_attempts = 0
+        self.eco_attempts = 0
         self.artifacts = {}  # Store paths to gathered files
         self.history = []  # Log of state transitions and errors
         self.errors = []  # List of error messages
 
     def log(self, message: str, refined: bool = False):
         """Logs a message to the console (if refined) and file (always)."""
+        now = time.time()
+        self.history.append({"state": self.state.name, "msg": message, "time": now})
+        self.build_history.append(BuildHistory(state=self.state.name, message=message, timestamp=now))
 
         # File Log
         if hasattr(self, 'logger'):
 
         self.state = new_state
         if not preserve_retries:
             self.retry_count = 0  # Reset retries on state change
+            self.state_retry_counts[new_state.name] = 0
+
+    def _bump_state_retry(self) -> int:
+        count = self.state_retry_counts.get(self.state.name, 0) + 1
+        self.state_retry_counts[self.state.name] = count
+        return count
+
+    def _artifact_fingerprint(self) -> str:
+        rtl = self.artifacts.get("rtl_code", "")
+        tb = ""
+        tb_path = self.artifacts.get("tb_path", "")
+        if tb_path and os.path.exists(tb_path):
+            try:
+                with open(tb_path, "r") as f:
+                    tb = f.read()
+            except OSError:
+                tb = ""
+        digest = hashlib.sha256((rtl + "\n" + tb).encode("utf-8", errors="ignore")).hexdigest()
+        return digest[:16]
+
+    def _record_failure_fingerprint(self, error_text: str) -> bool:
+        base = f"{self.state.name}|{error_text[:500]}|{self._artifact_fingerprint()}"
+        fp = hashlib.sha256(base.encode("utf-8", errors="ignore")).hexdigest()
+        count = self.failure_fingerprint_history.get(fp, 0) + 1
+        self.failure_fingerprint_history[fp] = count
+        return count >= 2
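The fingerprint scheme added above (hash of state name, error-log prefix, and an artifact digest; fail closed on the second identical sighting) can be exercised in isolation. This is a standalone sketch with illustrative names, not code from the repository:

```python
import hashlib

# In-memory history of failure fingerprints, mirroring the orchestrator's
# failure_fingerprint_history dict.
history: dict = {}

def make_fingerprint(state: str, error_text: str, artifact_digest: str) -> str:
    # Same recipe as the diff: state name, first 500 chars of the error,
    # and a short artifact digest, hashed together.
    base = f"{state}|{error_text[:500]}|{artifact_digest}"
    return hashlib.sha256(base.encode("utf-8", errors="ignore")).hexdigest()

def record_failure(state: str, error_text: str, artifact_digest: str) -> bool:
    """Return True once the same failure has been seen twice (fail closed)."""
    fp = make_fingerprint(state, error_text, artifact_digest)
    history[fp] = history.get(fp, 0) + 1
    return history[fp] >= 2
```

Because the artifact digest is part of the key, a retry that actually changed the RTL or testbench produces a fresh fingerprint and is allowed to continue; only a retry that reproduces the exact same failure on the exact same artifacts trips the gate.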
 
     def run(self):
         """Main execution loop."""
 
         try:
             while self.state != BuildState.SUCCESS and self.state != BuildState.FAIL:
+                self.global_step_count += 1
+                if self.global_step_count > self.global_step_budget:
+                    self.log(f"Global step budget exceeded ({self.global_step_budget}). Failing closed.", refined=True)
+                    self.state = BuildState.FAIL
+                    break
                 if self.state == BuildState.INIT:
                     self.do_init()
                 elif self.state == BuildState.SPEC:
 
                     self.do_coverage_check()
                 elif self.state == BuildState.REGRESSION:
                     self.do_regression()
+                elif self.state == BuildState.FLOORPLAN:
+                    self.do_floorplan()
                 elif self.state == BuildState.HARDENING:
                     self.do_hardening()
+                elif self.state == BuildState.CONVERGENCE_REVIEW:
+                    self.do_convergence_review()
+                elif self.state == BuildState.ECO_PATCH:
+                    self.do_eco_patch()
                 elif self.state == BuildState.SIGNOFF:
                     self.do_signoff()
                 else:
 
             self.state = BuildState.FAIL
 
         if self.state == BuildState.SUCCESS:
+            try:
+                self._save_industry_benchmark_metrics()
+            except Exception as e:
+                self.log(f"Benchmark metrics export warning: {e}", refined=True)
             # Create a clean summary of just the paths
             summary = {k: v for k, v in self.artifacts.items() if 'code' not in k and 'spec' not in k}
 
         # Setup directories, check tools
         self.artifacts['root'] = f"{OPENLANE_ROOT}/designs/{self.name}"
         self.setup_logger()  # Setup logging to file
+        self.artifacts["pdk_profile"] = self.pdk_profile
+        self.log(
+            f"PDK profile: {self.pdk_profile.get('profile')} "
+            f"(PDK={self.pdk_profile.get('pdk')}, LIB={self.pdk_profile.get('std_cell_library')})",
+            refined=True,
+        )
+        diag = startup_self_check()
+        self.artifacts["startup_check"] = diag
+        self.logger.info(f"STARTUP SELF CHECK: {diag}")
+        if self.strict_gates and not diag.get("ok", False):
+            self.log("Startup self-check failed in strict mode.", refined=True)
+            self.state = BuildState.FAIL
+            return
         time.sleep(1)  # Visual pause
         self.transition(BuildState.SPEC)
 
 - **NO PLACEHOLDERS**: Do not write `// Simplified check` or `assign data = 0;`. Implement the ACTUAL LOGIC.
 - **NO PARTIAL IMPLEMENTATIONS**: If it's a 4x4 array, enable ALL cells.
 - **NO HARDCODING**: Use `parameter` for widths and depths.
+- **HARDWARE RIGOR**: Validate bit-width compatibility on every assignment and never shadow module ports with internal signals.
 """
         else:
             return """
 
         header_match = re.search(r'(module\s+\w+[\s\S]*?;)', rtl_code)
         return header_match.group(1) if header_match else "Could not extract ports — see full RTL below."
 
+    @staticmethod
+    def _tb_meets_strict_contract(tb_code: str, strategy: BuildStrategy) -> tuple:
+        missing = []
+        text = tb_code or ""
+        if "TEST PASSED" not in text:
+            missing.append("Missing TEST PASSED marker")
+        if "TEST FAILED" not in text:
+            missing.append("Missing TEST FAILED marker")
+        if strategy == BuildStrategy.SV_MODULAR:
+            if "class Transaction" not in text:
+                missing.append("Missing class Transaction")
+            if all(token not in text for token in ["class Driver", "class Monitor", "class Scoreboard"]):
+                missing.append("Missing transaction flow classes")
+        return len(missing) == 0, missing
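The strict testbench contract above is a pure string check, so it is easy to try out on its own. A standalone restatement (the `sv_modular` flag stands in for the orchestrator's `BuildStrategy` enum, which is an assumption of this sketch):

```python
def tb_meets_strict_contract(tb_code: str, sv_modular: bool = True):
    """Check a generated testbench for the markers the strict gate requires:
    pass/fail markers always, and transaction-flow classes for SV_MODULAR."""
    missing = []
    text = tb_code or ""
    if "TEST PASSED" not in text:
        missing.append("Missing TEST PASSED marker")
    if "TEST FAILED" not in text:
        missing.append("Missing TEST FAILED marker")
    if sv_modular:
        if "class Transaction" not in text:
            missing.append("Missing class Transaction")
        # At least one of the flow classes must be present.
        if all(t not in text for t in ("class Driver", "class Monitor", "class Scoreboard")):
            missing.append("Missing transaction flow classes")
    return len(missing) == 0, missing
```

A testbench that prints both markers and defines `class Transaction` plus any one flow class passes; anything else returns the list of missing items, which the orchestrator logs before forcing regeneration.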
+    def _condense_failure_log(self, raw_text: str, kind: str) -> str:
+        if not raw_text:
+            return raw_text
+        if len(raw_text) < 12000:
+            return raw_text
+        src_dir = f"{OPENLANE_ROOT}/designs/{self.name}/src"
+        os.makedirs(src_dir, exist_ok=True)
+        log_path = os.path.join(src_dir, f"{self.name}_{kind}_failure.log")
+        try:
+            with open(log_path, "w") as f:
+                f.write(raw_text)
+            summary = parse_eda_log_summary(log_path, kind=kind, top_n=10)
+            return f"LOG_SUMMARY: {summary}"
+        except OSError:
+            return raw_text[-12000:]
+
+    def _evaluate_hierarchy(self, rtl_code: str):
+        module_count = len(re.findall(r"\bmodule\b", rtl_code))
+        rtl_lines = len([l for l in rtl_code.splitlines() if l.strip()])
+        if self.hierarchical_mode == "on":
+            enabled = True
+        elif self.hierarchical_mode == "off":
+            enabled = False
+        else:
+            enabled = module_count >= 3 and rtl_lines >= 600
+        self.artifacts["hierarchy_plan"] = {
+            "mode": self.hierarchical_mode,
+            "enabled": enabled,
+            "module_count": module_count,
+            "rtl_lines": rtl_lines,
+            "thresholds": {"module_count": 3, "rtl_lines": 600},
+        }
+        if enabled:
+            self.log("Hierarchical synthesis planner: enabled.", refined=True)
+        else:
+            self.log("Hierarchical synthesis planner: disabled.", refined=True)
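The hierarchy planner's decision rule is worth seeing on its own: in `auto` mode it only kicks in for designs with at least 3 modules and 600 non-blank RTL lines. A minimal sketch using the same thresholds (function name is illustrative):

```python
import re

def plan_hierarchy(rtl_code: str, mode: str = "auto") -> dict:
    """Decide whether hierarchical synthesis should be enabled, using the
    same thresholds as the planner above (>=3 modules, >=600 non-blank lines)."""
    # \bmodule\b matches the `module` keyword but not `endmodule`,
    # since there is no word boundary inside "endmodule".
    module_count = len(re.findall(r"\bmodule\b", rtl_code))
    rtl_lines = len([l for l in rtl_code.splitlines() if l.strip()])
    if mode == "on":
        enabled = True
    elif mode == "off":
        enabled = False
    else:  # "auto"
        enabled = module_count >= 3 and rtl_lines >= 600
    return {"enabled": enabled, "module_count": module_count, "rtl_lines": rtl_lines}
```

Forcing `mode="on"` or `mode="off"` bypasses the heuristic entirely, matching the orchestrator's `hierarchical_mode` flag.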
+    def _write_ip_manifest(self):
+        rtl_path = self.artifacts.get("rtl_path", "")
+        if not rtl_path or not os.path.exists(rtl_path):
+            return
+        with open(rtl_path, "r") as f:
+            rtl_code = f.read()
+        modules = re.findall(r"module\s+([A-Za-z_]\w*)", rtl_code)
+        params = re.findall(r"parameter\s+([A-Za-z_]\w*)\s*=\s*([^,;\)]+)", rtl_code)
+        ports = re.findall(
+            r"\b(input|output|inout)\s+(?:reg|wire|logic)?\s*(?:\[[^\]]+\])?\s*([A-Za-z_]\w*)",
+            rtl_code,
+        )
+        manifest = {
+            "ip_name": self.name,
+            "version": "1.0.0",
+            "clock_reset": {"clock": "clk", "reset": "rst_n", "reset_active_low": True},
+            "modules": modules,
+            "dependencies": [m for m in modules if m != self.name],
+            "parameters": [{"name": n, "default": v.strip()} for n, v in params],
+            "ports": [{"direction": d, "name": n} for d, n in ports],
+            "verification_status": {
+                "simulation": "PASS" if self.state == BuildState.SUCCESS else "UNKNOWN",
+                "formal": self.artifacts.get("formal_result", "UNKNOWN"),
+                "signoff": self.artifacts.get("signoff_result", "UNKNOWN"),
+            },
+            "ipxact_bridge_ready": True,
+        }
+        out = os.path.join(OPENLANE_ROOT, "designs", self.name, "ip_manifest.json")
+        with open(out, "w") as f:
+            json.dump(manifest, f, indent=2)
+        self.artifacts["ip_manifest"] = out
+
+    def _build_industry_benchmark_snapshot(self) -> Dict[str, Any]:
+        metrics = self.artifacts.get("metrics", {}) or {}
+        sta = self.artifacts.get("sta_signoff", {}) or {}
+        power = self.artifacts.get("power_signoff", {}) or {}
+        signoff = self.artifacts.get("signoff", {}) or {}
+        congestion = self.artifacts.get("congestion", {}) or {}
+        coverage = self.artifacts.get("coverage", {}) or {}
+        regression_results = self.artifacts.get("regression_results", []) or []
+
+        regression_pass = sum(1 for x in regression_results if x.get("status") == "PASS")
+        regression_total = len(regression_results)
+
+        snapshot = {
+            "design_name": self.name,
+            "generated_at_epoch": int(time.time()),
+            "build_status": self.state.name,
+            "signoff_result": self.artifacts.get("signoff_result", "UNKNOWN"),
+            "pdk_profile": self.pdk_profile.get("profile"),
+            "pdk": self.pdk_profile.get("pdk"),
+            "std_cell_library": self.pdk_profile.get("std_cell_library"),
+            "industry_benchmark": {
+                "area_um2": metrics.get("chip_area_um2", 0.0),
+                "cell_count": metrics.get("area", 0.0),
+                "utilization_pct": metrics.get("utilization", 0.0),
+                "timing_wns_ns": sta.get("worst_setup", metrics.get("timing_wns", 0.0)),
+                "timing_tns_ns": metrics.get("timing_tns", 0.0),
+                "hold_slack_ns": sta.get("worst_hold", 0.0),
+                "drc_violations": signoff.get("drc_violations", -1),
+                "lvs_errors": signoff.get("lvs_errors", -1),
+                "antenna_violations": signoff.get("antenna_violations", -1),
+                "total_power_mw": float(power.get("total_power_w", 0.0)) * 1000.0,
+                "internal_power_mw": float(power.get("internal_power_w", 0.0)) * 1000.0,
+                "switching_power_mw": float(power.get("switching_power_w", 0.0)) * 1000.0,
+                "leakage_power_uw": float(power.get("leakage_power_w", 0.0)) * 1e6,
+                "irdrop_vpwr_mv": float(power.get("irdrop_max_vpwr", 0.0)) * 1000.0,
+                "irdrop_vgnd_mv": float(power.get("irdrop_max_vgnd", 0.0)) * 1000.0,
+                "congestion_usage_pct": congestion.get("total_usage_pct", 0.0),
+                "congestion_overflow": congestion.get("total_overflow", 0),
+                "coverage_line_pct": coverage.get("line_pct", 0.0),
+                "formal_result": self.artifacts.get("formal_result", "UNKNOWN"),
+                "lec_result": self.artifacts.get("lec_result", "UNKNOWN"),
+                "regression_passed": regression_pass,
+                "regression_total": regression_total,
+                "clock_period_ns": self.artifacts.get("clock_period_override", self.pdk_profile.get("default_clock_period")),
+                "pivots_used": self.pivot_count,
+                "global_steps": self.global_step_count,
+            },
+        }
+        return snapshot
+
+    def _save_industry_benchmark_metrics(self):
+        """Write benchmark metrics after successful chip creation to metrics/."""
+        snapshot = self._build_industry_benchmark_snapshot()
+        metrics_root = os.path.join(WORKSPACE_ROOT, "metrics")
+        design_dir = os.path.join(metrics_root, self.name)
+        os.makedirs(design_dir, exist_ok=True)
+
+        stamp = time.strftime("%Y%m%d_%H%M%S")
+        json_path = os.path.join(design_dir, f"{self.name}_industry_benchmark_{stamp}.json")
+        md_path = os.path.join(design_dir, f"{self.name}_industry_benchmark_{stamp}.md")
+        latest_json = os.path.join(design_dir, "latest.json")
+        latest_md = os.path.join(design_dir, "latest.md")
+
+        with open(json_path, "w") as f:
+            json.dump(snapshot, f, indent=2)
+
+        ib = snapshot["industry_benchmark"]
+        lines = [
+            f"# {self.name} Industry Benchmark Metrics",
+            "",
+            f"- Generated At (epoch): `{snapshot['generated_at_epoch']}`",
+            f"- Build Status: `{snapshot['build_status']}`",
+            f"- Signoff Result: `{snapshot['signoff_result']}`",
+            f"- PDK Profile: `{snapshot['pdk_profile']}`",
+            "",
+            "| Metric | Value |",
+            "|---|---|",
+        ]
+        for k, v in ib.items():
+            lines.append(f"| `{k}` | `{v}` |")
+        with open(md_path, "w") as f:
+            f.write("\n".join(lines) + "\n")
+
+        # Keep a latest pointer as plain copied files for easy consumption.
+        with open(latest_json, "w") as f:
+            json.dump(snapshot, f, indent=2)
+        with open(latest_md, "w") as f:
+            f.write("\n".join(lines) + "\n")
+
+        self.artifacts["benchmark_metrics_json"] = json_path
+        self.artifacts["benchmark_metrics_md"] = md_path
+        self.artifacts["benchmark_metrics_dir"] = design_dir
+        self.log(f"Saved industry benchmark metrics to {design_dir}", refined=True)
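The Markdown half of the export is a simple key/value table. This sketch renders the same table to a string so the layout can be checked without touching the filesystem (the function name and the snapshot subset are illustrative, not part of the codebase):

```python
def render_benchmark_md(design_name: str, snapshot: dict) -> str:
    """Render a benchmark snapshot as the Markdown table the exporter writes
    (header bullets reduced to build status for brevity)."""
    ib = snapshot["industry_benchmark"]
    lines = [
        f"# {design_name} Industry Benchmark Metrics",
        "",
        f"- Build Status: `{snapshot['build_status']}`",
        "",
        "| Metric | Value |",
        "|---|---|",
    ]
    # One table row per metric, keys and values in backticks.
    for k, v in ib.items():
        lines.append(f"| `{k}` | `{v}` |")
    return "\n".join(lines) + "\n"
```

Writing the timestamped file and the `latest.md` pointer then reduces to dumping this string twice, which is exactly what the method above does.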
+    def _emit_hierarchical_block_artifacts(self):
+        plan = self.artifacts.get("hierarchy_plan", {})
+        if not plan.get("enabled"):
+            return
+        rtl_code = self.artifacts.get("rtl_code", "")
+        block_dir = os.path.join(OPENLANE_ROOT, "designs", self.name, "src", "blocks")
+        os.makedirs(block_dir, exist_ok=True)
+        modules = re.findall(r"(module\s+[A-Za-z_]\w*[\s\S]*?endmodule)", rtl_code)
+        block_files = []
+        for mod in modules:
+            m = re.search(r"module\s+([A-Za-z_]\w*)", mod)
+            if not m:
+                continue
+            mod_name = m.group(1)
+            path = os.path.join(block_dir, f"{mod_name}.v")
+            with open(path, "w") as f:
+                f.write(mod.strip() + "\n")
+            block_files.append(path)
+        self.artifacts["hierarchy_blocks"] = block_files
+
     def do_rtl_gen(self):
         # Check Golden Reference Library for a matching template
         from .golden_lib import get_best_template
 
 STRATEGY GUIDELINES:
 {strategy_prompt}
 
+LOGIC DECOUPLING HINT:
+{self.artifacts.get('logic_decoupling_hint', 'N/A')}
+
 CRITICAL RULES:
 1. Module name must be "{self.name}"
 2. Async active-low reset `rst_n`
 
         # Store the CLEANED code (read back from file), not raw LLM output
         with open(path, 'r') as f:
             self.artifacts['rtl_code'] = f.read()
+        self._evaluate_hierarchy(self.artifacts['rtl_code'])
+        self._emit_hierarchical_block_artifacts()
         self.transition(BuildState.RTL_FIX)
 
     def do_rtl_fix(self):
 
                     self.artifacts['rtl_code'] = f.read()
                 # Re-check syntax after fix (stay in RTL_FIX)
                 return
+
+            sem_ok, sem_report = run_semantic_rigor_check(path)
+            self.logger.info(f"SEMANTIC RIGOR: {sem_report}")
+            if not sem_ok:
+                if self.strict_gates:
+                    self.log("Semantic rigor gate failed. Routing back to RTL fix.", refined=True)
+                    errors = f"SEMANTIC_RIGOR_FAILURE: {sem_report}"
+                else:
+                    self.log("Semantic rigor warnings detected (non-blocking).", refined=True)
+                    self.artifacts["semantic_report"] = sem_report
+                    self.transition(BuildState.VERIFICATION)
+                    return
+            else:
+                self.artifacts["semantic_report"] = sem_report
+                self.transition(BuildState.VERIFICATION)
+                return
+
         else:
             self.log(f"Lint Failed. Check log for details.", refined=True)
             errors = f"SYNTAX OK, BUT LINT FAILED:\n{lint_report}"
 
         # Handle Syntax/Lint Errors that need LLM
         self.logger.info(f"SYNTAX/LINT ERRORS:\n{errors}")
+        if self._record_failure_fingerprint(str(errors)):
+            self.log("Detected repeated syntax/lint failure fingerprint. Failing closed.", refined=True)
+            self.state = BuildState.FAIL
+            return
         self.retry_count += 1
         if self.retry_count > self.max_retries:
             self.log("Max Retries Exceeded for Syntax/Lint Fix.", refined=True)
 
             return
 
         self.log(f"Fixing Code (Attempt {self.retry_count}/{self.max_retries})", refined=True)
+        errors_for_llm = self._condense_failure_log(str(errors), kind="timing")
 
         # Agents fix syntax
         fix_prompt = f"""Fix Syntax/Lint Errors in "{self.name}".
 
 Error Log:
+{errors_for_llm}
 
 Strategy: {self.strategy.name} (Keep consistency!)
 
         # It should be there from generation or reading above
         with open(self.artifacts['tb_path'], 'r') as f:
             tb_code = f.read()
+
+        if self.strict_gates:
+            tb_ok, tb_issues = self._tb_meets_strict_contract(tb_code, self.strategy)
+            if self.artifacts.get("golden_template"):
+                # Golden library TBs are pre-verified and may be procedural.
+                tb_issues = [i for i in tb_issues if "class" not in i.lower()]
+                tb_ok = len(tb_issues) == 0
+            if not tb_ok:
+                self.log(f"Testbench strict gate failed: {tb_issues}", refined=True)
+                self.logger.info(f"TB STRICT GATE FAILURE: {tb_issues}")
+                self.retry_count += 1
+                if self.retry_count > self.max_retries:
+                    self.log("Max TB generation retries exceeded.", refined=True)
+                    self.state = BuildState.FAIL
+                    return
+                # Force regeneration path on next loop
+                if 'tb_path' in self.artifacts:
+                    try:
+                        os.remove(self.artifacts['tb_path'])
+                    except OSError:
+                        pass
+                self.artifacts.pop('tb_path', None)
+                return
 
         # Run Sim
         with console.status("[bold magenta]Running Simulation...[/bold magenta]"):
 
                 self.log("Skipping Hardening (--skip-openlane).", refined=True)
                 self.transition(BuildState.FORMAL_VERIFY)
             else:
+                # Interactive Prompt for Hardening
                 import typer
+                import sys
+                if not sys.stdin.isatty():
+                    self.log("Non-interactive session: auto-proceeding after simulation pass.", refined=True)
                     self.transition(BuildState.FORMAL_VERIFY)
                 else:
+                    console.print()
+                    if typer.confirm("Simulation Passed. Proceed to OpenLane Hardening (takes 10-30 mins)?", default=True):
+                        self.transition(BuildState.FORMAL_VERIFY)
+                    else:
+                        self.log("Skipping Hardening (User Cancelled).", refined=True)
+                        self.transition(BuildState.FORMAL_VERIFY)
         else:
+            output_for_llm = self._condense_failure_log(output, kind="timing")
+            if self._record_failure_fingerprint(output_for_llm):
+                self.log("Detected repeated simulation failure fingerprint. Failing closed.", refined=True)
+                self.state = BuildState.FAIL
+                return
             self.retry_count += 1
             if self.retry_count > self.max_retries:
                 self.log(f"Max Sim Retries ({self.max_retries}) Exceeded. Simulation Failed.", refined=True)
 
         analysis_task = Task(
             description=f'''Analyze this Verification Failure.
 Error Log:
+{output_for_llm}
 Is this a:
 A) TESTBENCH_ERROR (Syntax, $monitor usage, race condition, compilation fail)
 B) RTL_LOGIC_ERROR (Mismatch, Wrong State, Functional Failure)
 
                 fix_prompt = f"""Fix the Testbench logic/syntax.
 
 ERROR LOG:
+{output_for_llm}
 
 MODULE INTERFACE (use EXACT port names):
 {port_info}
 
 {error_summary}
 
 Full Log:
+{output_for_llm}
 
 Current RTL:
 ```verilog
 
                 self.artifacts['formal_result'] = 'PASS'
             else:
                 self.log(f"Formal Verification: {result[:200]}", refined=True)
+                self.artifacts['formal_result'] = 'FAIL'
+                if self.strict_gates:
+                    self.log("Formal verification failed under strict mode.", refined=True)
+                    self.state = BuildState.FAIL
+                    return
         except Exception as e:
+            self.log(f"Formal verification error: {str(e)}.", refined=True)
             self.artifacts['formal_result'] = f'ERROR: {str(e)}'
+            if self.strict_gates:
+                self.state = BuildState.FAIL
+                return
 
         # 4. Run CDC check
         with console.status("[bold cyan]Running CDC Analysis...[/bold cyan]"):
 
             if cdc_clean:
                 self.log("CDC Analysis: CLEAN", refined=True)
             else:
+                self.log(f"CDC Analysis: warnings found", refined=True)
+                if self.strict_gates:
+                    self.log("CDC issues are blocking under strict mode.", refined=True)
+                    self.state = BuildState.FAIL
+                    return
 
         self.transition(BuildState.COVERAGE_CHECK)
 
             elif self.skip_openlane:
                 self.transition(BuildState.SUCCESS)
             else:
+                self.transition(BuildState.FLOORPLAN)
         else:
             self.retry_count += 1
             # Cap coverage retries at 2 — the metric is heuristic-based (iverilog
 
                     self.log(f"Restoring Best Testbench ({self.best_coverage:.1f}%) before proceeding.", refined=True)
                     import shutil
                     shutil.copy(self.best_tb_backup, self.artifacts['tb_path'])
+                if self.strict_gates:
+                    self.log(f"Coverage below threshold after {coverage_max_retries} attempts. Failing strict gate.", refined=True)
+                    self.state = BuildState.FAIL
+                    return
                 self.log(f"Coverage below threshold after {coverage_max_retries} attempts. Proceeding anyway.", refined=True)
                 if self.full_signoff:
                     self.transition(BuildState.REGRESSION)
                 elif self.skip_openlane:
                     self.transition(BuildState.SUCCESS)
                 else:
+                    self.transition(BuildState.FLOORPLAN)
                 return
 
             # Ask LLM to generate additional tests to improve coverage
 
1458
  self.log(f"All {len(test_results)} regression tests PASSED!", refined=True)
1459
  else:
1460
  passed_count = sum(1 for tr in test_results if tr['status'] == 'PASS')
1461
+ self.log(f"Regression: {passed_count}/{len(test_results)} passed", refined=True)
1462
+ if self.strict_gates:
1463
+ self.log("Regression failures are blocking under strict mode.", refined=True)
1464
+ self.state = BuildState.FAIL
1465
+ return
1466
 
1467
  # Regression failures are non-blocking (logged but proceed)
1468
  if self.skip_openlane:
1469
  self.transition(BuildState.SUCCESS)
1470
  else:
1471
+ self.transition(BuildState.FLOORPLAN)
1472
+
1473
+ def do_floorplan(self):
1474
+ """Generate floorplan artifacts and feed hardening with spatial intent."""
1475
+ self.floorplan_attempts += 1
1476
+ self.log(f"Preparing floorplan attempt {self.floorplan_attempts}...", refined=True)
1477
+ src_dir = f"{OPENLANE_ROOT}/designs/{self.name}/src"
1478
+ os.makedirs(src_dir, exist_ok=True)
1479
+
1480
+ rtl_code = self.artifacts.get("rtl_code", "")
1481
+ line_count = max(1, len([l for l in rtl_code.splitlines() if l.strip()]))
1482
+ cell_count_est = max(100, line_count * 4)
1483
+
1484
+ base_die = 300 if line_count < 100 else 500 if line_count < 300 else 800
1485
+ area_scale = self.artifacts.get("area_scale", 1.0)
1486
+ die = int(base_die * area_scale)
1487
+ util = 40 if line_count >= 200 else 50
1488
+ clock_period = self.artifacts.get("clock_period_override", self.pdk_profile.get("default_clock_period", "10.0"))
1489
+
1490
+ macro_placement_tcl = os.path.join(src_dir, "macro_placement.tcl")
1491
+ with open(macro_placement_tcl, "w") as f:
1492
+ f.write(
1493
+ "# Auto-generated macro placement skeleton\n"
1494
+ f"# die_area={die}x{die} cell_count_est={cell_count_est}\n"
1495
+ "set macros {}\n"
1496
+ "foreach m $macros {\n"
1497
+ " # placeholder for macro coordinates\n"
1498
+ "}\n"
1499
+ )
1500
+
1501
+ floorplan_tcl = os.path.join(src_dir, f"{self.name}_floorplan.tcl")
1502
+ with open(floorplan_tcl, "w") as f:
1503
+ f.write(
1504
+ f"set ::env(DESIGN_NAME) \"{self.name}\"\n"
1505
+ f"set ::env(PDK) \"{self.pdk_profile.get('pdk', PDK)}\"\n"
1506
+ f"set ::env(STD_CELL_LIBRARY) \"{self.pdk_profile.get('std_cell_library', 'sky130_fd_sc_hd')}\"\n"
1507
+ f"set ::env(VERILOG_FILES) [glob $::env(DESIGN_DIR)/src/{self.name}.v]\n"
1508
+ "set ::env(FP_SIZING) \"absolute\"\n"
1509
+ f"set ::env(DIE_AREA) \"0 0 {die} {die}\"\n"
1510
+ f"set ::env(FP_CORE_UTIL) {util}\n"
1511
+ f"set ::env(PL_TARGET_DENSITY) {util / 100 + 0.05:.2f}\n"
1512
+ f"set ::env(CLOCK_PERIOD) \"{clock_period}\"\n"
1513
+ f"set ::env(CLOCK_PORT) \"clk\"\n"
1514
+ "set ::env(GRT_ADJUSTMENT) 0.15\n"
1515
+ )
1516
+
1517
+ self.artifacts["macro_placement_tcl"] = macro_placement_tcl
1518
+ self.artifacts["floorplan_tcl"] = floorplan_tcl
1519
+ self.artifacts["floorplan_meta"] = {
1520
+ "die_area": die,
1521
+ "cell_count_est": cell_count_est,
1522
+ "clock_period": clock_period,
1523
+ "attempt": self.floorplan_attempts,
1524
+ }
1525
+ self.transition(BuildState.HARDENING)
1526
+
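For reference, the sizing heuristic in `do_floorplan` above can be exercised standalone. This is a sketch; the function name `estimate_die` is illustrative and not part of the codebase:

```python
def estimate_die(rtl_code: str, area_scale: float = 1.0) -> tuple:
    """Mirror of the floorplan sizing heuristic: non-blank RTL lines pick
    a coarse die-size bucket, scaled by any accumulated area expansion."""
    line_count = max(1, len([l for l in rtl_code.splitlines() if l.strip()]))
    base_die = 300 if line_count < 100 else 500 if line_count < 300 else 800
    die = int(base_die * area_scale)
    util = 40 if line_count >= 200 else 50  # larger designs get more routing headroom
    return die, util

# A 50-line design at default scale stays in the smallest bucket.
small = "\n".join(f"assign w{i} = a{i} & b{i};" for i in range(50))
print(estimate_die(small))  # -> (300, 50)
```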
1527
+ def _pivot_strategy(self, reason: str):
1528
+ self.pivot_count += 1
1529
+ self.log(f"Strategy pivot triggered ({self.pivot_count}/{self.max_pivots}): {reason}", refined=True)
1530
+ if self.pivot_count > self.max_pivots:
1531
+ self.log("Pivot budget exceeded. Failing closed.", refined=True)
1532
+ self.state = BuildState.FAIL
1533
+ return
1534
+
1535
+ stage = self.strategy_pivot_stage % 3
1536
+ self.strategy_pivot_stage += 1
1537
+
1538
+ if stage == 0:
1539
+ old = float(self.artifacts.get("clock_period_override", self.pdk_profile.get("default_clock_period", "10.0")))
1540
+ new = round(old * 1.10, 2)
1541
+ self.artifacts["clock_period_override"] = str(new)
1542
+ self.log(f"Pivot step: timing constraint tune ({old}ns -> {new}ns).", refined=True)
1543
+ self.transition(BuildState.FLOORPLAN, preserve_retries=True)
1544
+ return
1545
+
1546
+ if stage == 1:
1547
+ old_scale = float(self.artifacts.get("area_scale", 1.0))
1548
+ new_scale = round(old_scale * 1.15, 3)
1549
+ self.artifacts["area_scale"] = new_scale
1550
+ self.log(f"Pivot step: area expansion ({old_scale}x -> {new_scale}x).", refined=True)
1551
+ self.transition(BuildState.FLOORPLAN, preserve_retries=True)
1552
+ return
1553
+
1554
+ # stage 2: logic decoupling prompt
1555
+ self.artifacts["logic_decoupling_hint"] = (
1556
+ "Apply register slicing / pipeline decoupling on critical path; "
1557
+ "reduce combinational depth while preserving behavior."
1558
+ )
1559
+ self.log("Pivot step: requesting logic decoupling (register slicing) in RTL regen.", refined=True)
1560
+ self.transition(BuildState.RTL_GEN, preserve_retries=True)
1561
+
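The three-stage escalation in `_pivot_strategy` can be sketched as a pure function (a simplification: the real method mutates orchestrator state and transitions the FSM; `next_pivot` is an illustrative name):

```python
def next_pivot(stage: int, clock_period: float, area_scale: float) -> tuple:
    """One step of the pivot ladder: relax timing, then grow area,
    then fall back to requesting logic decoupling in RTL regeneration."""
    kind = stage % 3
    if kind == 0:  # stage 0: relax the clock constraint by 10%
        return ("timing", round(clock_period * 1.10, 2), area_scale)
    if kind == 1:  # stage 1: expand the die area by 15%
        return ("area", clock_period, round(area_scale * 1.15, 3))
    # stage 2: regenerate RTL with register slicing on the critical path
    return ("logic_decoupling", clock_period, area_scale)

print(next_pivot(0, 10.0, 1.0))  # -> ('timing', 11.0, 1.0)
```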
1562
+ def do_convergence_review(self):
1563
+ """Assess congestion/timing convergence and prevent futile loops."""
1564
+ self.log("Assessing convergence (timing + congestion + PPA)...", refined=True)
1565
+
1566
+ congestion = parse_congestion_metrics(self.name, run_tag=self.artifacts.get("run_tag", "agentrun"))
1567
+ sta = parse_sta_signoff(self.name)
1568
+ power = parse_power_signoff(self.name)
1569
+ metrics, _ = check_physical_metrics(self.name)
1570
+
1571
+ self.artifacts["congestion"] = congestion
1572
+ self.artifacts["sta_signoff"] = sta
1573
+ self.artifacts["power_signoff"] = power
1574
+ if metrics:
1575
+ self.artifacts["metrics"] = metrics
1576
+
1577
+ wns = float(sta.get("worst_setup", 0.0)) if not sta.get("error") else -999.0
1578
+ tns = 0.0  # TNS not yet parsed from STA reports; placeholder
1579
+ area_um2 = float(metrics.get("chip_area_um2", 0.0)) if metrics else 0.0
1580
+ power_w = float(power.get("total_power_w", 0.0))
1581
+ cong_pct = float(congestion.get("total_usage_pct", 0.0))
1582
+ snap = ConvergenceSnapshot(
1583
+ iteration=len(self.convergence_history) + 1,
1584
+ wns=wns,
1585
+ tns=tns,
1586
+ congestion=cong_pct,
1587
+ area_um2=area_um2,
1588
+ power_w=power_w,
1589
+ )
1590
+ self.convergence_history.append(snap)
1591
+ self.artifacts["convergence_history"] = [asdict(x) for x in self.convergence_history]
1592
+
1593
+ self.log(
1594
+ f"Convergence snapshot: WNS={wns:.3f}ns, congestion={cong_pct:.2f}%, area={area_um2:.1f}um^2, power={power_w:.6f}W",
1595
+ refined=True,
1596
+ )
1597
+
1598
+ # Congestion-driven loop
1599
+ if cong_pct > self.congestion_threshold:
1600
+ self.log(
1601
+ f"Congestion {cong_pct:.2f}% exceeds threshold {self.congestion_threshold:.2f}%.",
1602
+ refined=True,
1603
+ )
1604
+ # area expansion up to 2 times, then pivot logic
1605
+ area_expansions = int(self.artifacts.get("area_expansions", 0))
1606
+ if area_expansions < 2:
1607
+ self.artifacts["area_expansions"] = area_expansions + 1
1608
+ self.artifacts["area_scale"] = round(float(self.artifacts.get("area_scale", 1.0)) * 1.15, 3)
1609
+ self.log("Applying +15% area expansion due to congestion.", refined=True)
1610
+ self.transition(BuildState.FLOORPLAN, preserve_retries=True)
1611
+ return
1612
+ self._pivot_strategy("congestion persisted after area expansions")
1613
+ return
1614
+
1615
+ # WNS stagnation logic
1616
+ if len(self.convergence_history) >= 3:
1617
+ w_prev = self.convergence_history[-2].wns
1618
+ w_curr = self.convergence_history[-1].wns
1619
+ w_prev2 = self.convergence_history[-3].wns
1620
+ improve1 = w_curr - w_prev
1621
+ improve2 = w_prev - w_prev2
1622
+ if improve1 < 0.01 and improve2 < 0.01:
1623
+ self._pivot_strategy("WNS stagnated for 2 iterations (<0.01ns)")
1624
+ return
1625
+
1626
+ self.transition(BuildState.SIGNOFF)
1627
+
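The stagnation test in `do_convergence_review` reduces to a check over the last three WNS samples; sketched standalone below (`wns_stagnated` is an illustrative name):

```python
def wns_stagnated(history: list, min_gain: float = 0.01) -> bool:
    """True when WNS improved by less than `min_gain` ns in each of the
    last two iterations, i.e. further hardening loops look futile."""
    if len(history) < 3:
        return False
    improve1 = history[-1] - history[-2]
    improve2 = history[-2] - history[-3]
    return improve1 < min_gain and improve2 < min_gain

print(wns_stagnated([-0.50, -0.495, -0.493]))  # -> True
```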
1628
+ def do_eco_patch(self):
1629
+ """Dual-mode ECO: attempt gate patch first, fallback to RTL micro-patch."""
1630
+ self.eco_attempts += 1
1631
+ self.log(f"Running ECO attempt {self.eco_attempts}...", refined=True)
1632
+ strategy = "gate" if self.eco_attempts == 1 else "rtl"
1633
+ ok, patch_result = apply_eco_patch(self.name, strategy=strategy)
1634
+ self.artifacts["eco_patch"] = patch_result
1635
+ if not ok:
1636
+ self.log(f"ECO patch failed: {patch_result}", refined=True)
1637
+ self.state = BuildState.FAIL
1638
+ return
1639
+ # For now, ECO patch is represented as artifact + rerun hardening/signoff.
1640
+ self.log(f"ECO artifact generated: {patch_result}", refined=True)
1641
+ self.transition(BuildState.HARDENING, preserve_retries=True)
1642
 
1643
  def do_hardening(self):
1644
  # 1. Generate config.tcl (CRITICAL: Required for OpenLane)
 
1658
  # Modern OpenLane Config Template
1659
  # Note: We use GRT_ADJUSTMENT instead of deprecated GLB_RT_ADJUSTMENT
1660
 
1661
+ std_cell_lib = self.pdk_profile.get("std_cell_library", "sky130_fd_sc_hd")
1662
+ pdk_name = self.pdk_profile.get("pdk", PDK)
1663
+ clock_period = str(self.artifacts.get("clock_period_override", self.pdk_profile.get("default_clock_period", "10.0")))
1664
+ floor_meta = self.artifacts.get("floorplan_meta", {})
1665
+ die = int(floor_meta.get("die_area", 500))
1666
+ util = 40 if die >= 500 else 50
1667
 
1668
  config_tcl = f"""
1669
  # User config
1670
  set ::env(DESIGN_NAME) "{self.name}"
1671
 
1672
  # PDK Setup
1673
+ set ::env(PDK) "{pdk_name}"
1674
  set ::env(STD_CELL_LIBRARY) "{std_cell_lib}"
1675
 
1676
  # Verilog Files
 
1679
  # Clock Configuration
1680
  set ::env(CLOCK_PORT) "{clock_port}"
1681
  set ::env(CLOCK_NET) "{clock_port}"
1682
+ set ::env(CLOCK_PERIOD) "{clock_period}"
1683
 
1684
  # Synthesis
1685
  set ::env(SYNTH_STRATEGY) "AREA 0"
1686
  set ::env(SYNTH_SIZING) 1
1687
 
1688
  # Floorplanning
1689
+ set ::env(FP_SIZING) "absolute"
1690
+ set ::env(DIE_AREA) "0 0 {die} {die}"
1691
+ set ::env(FP_CORE_UTIL) {util}
1692
+ set ::env(PL_TARGET_DENSITY) {util / 100 + 0.05:.2f}
1693
 
1694
  # Routing
1695
  set ::env(GRT_ADJUSTMENT) 0.15
 
1707
  return
1708
 
1709
  # 2. Run OpenLane
1710
+ run_tag = f"agentrun_{self.global_step_count}"
1711
+ floorplan_tcl = self.artifacts.get("floorplan_tcl", "")
1712
  with console.status("[bold blue]Hardening Layout (OpenLane)...[/bold blue]"):
1713
+ success, result = run_openlane(
1714
+ self.name,
1715
+ background=False,
1716
+ run_tag=run_tag,
1717
+ floorplan_tcl=floorplan_tcl,
1718
+ pdk_name=pdk_name,
1719
+ )
1720
 
1721
  if success:
1722
  self.artifacts['gds'] = result
1723
+ self.artifacts['run_tag'] = run_tag
1724
  self.log(f"GDSII generated: {result}", refined=True)
1725
+ self.transition(BuildState.CONVERGENCE_REVIEW)
 
1726
  else:
1727
  self.log(f"Hardening Failed: {result}")
1728
  self.state = BuildState.FAIL
 
1731
  """Performs full fabrication-readiness signoff: DRC/LVS, timing closure, power analysis."""
1732
  self.log("Running Fabrication Readiness Signoff...", refined=True)
1733
  fab_ready = True
1734
+ run_tag = self.artifacts.get("run_tag", "agentrun")
1735
+ gate_netlist = f"{OPENLANE_ROOT}/designs/{self.name}/runs/{run_tag}/results/final/verilog/gl/{self.name}.v"
1736
+ rtl_path = self.artifacts.get("rtl_path", f"{OPENLANE_ROOT}/designs/{self.name}/src/{self.name}.v")
1737
+ if os.path.exists(rtl_path) and os.path.exists(gate_netlist):
1738
+ lec_ok, lec_log = run_eqy_lec(self.name, rtl_path, gate_netlist)
1739
+ self.artifacts["lec_result"] = "PASS" if lec_ok else "FAIL"
1740
+ self.logger.info(f"LEC RESULT:\n{lec_log}")
1741
+ if lec_ok:
1742
+ self.log("LEC: PASS", refined=True)
1743
+ else:
1744
+ self.log("LEC: FAIL", refined=True)
1745
+ fab_ready = False
1746
+ else:
1747
+ self.artifacts["lec_result"] = "SKIP"
1748
+ self.log("LEC: skipped (missing RTL or gate netlist)", refined=True)
1749
 
1750
  # ── 1. DRC / LVS ──
1751
  with console.status("[bold blue]Checking DRC/LVS Reports...[/bold blue]"):
 
1773
 
1774
  if sta.get('error'):
1775
  self.log(f"STA: {sta['error']}", refined=True)
1776
+ fab_ready = False
1777
  else:
1778
  for c in sta['corners']:
1779
  status = "✓" if (c['setup_slack'] >= 0 and c['hold_slack'] >= 0) else "✗"
 
1861
  self.log(f"Datasheet generated: {doc_path}", refined=True)
1862
  except Exception as e:
1863
  self.log(f"Error writing datasheet: {e}", refined=True)
1864
+
1865
+ try:
1866
+ self._write_ip_manifest()
1867
+ except Exception as e:
1868
+ self.log(f"IP manifest generation warning: {e}", refined=True)
1869
+
1870
+ if self.strict_gates:
1871
+ if str(self.artifacts.get("formal_result", "")).startswith(("FAIL", "ERROR")):
1872
+ fab_ready = False
1873
+ cov = self.artifacts.get("coverage", {})
1874
+ if cov and float(cov.get("line_pct", 0.0)) < float(self.min_coverage):
1875
+ fab_ready = False
1876
 
1877
  # FINAL VERDICT
1878
  timing_status = "MET" if sta.get('timing_met') else "FAILED" if not sta.get('error') else "N/A"
 
1887
  f"Timing: {timing_status} (WNS={sta.get('worst_setup', 0):.2f}ns)\n"
1888
  f"Power: {power_status}\n"
1889
  f"IR-Drop: {irdrop_status}\n"
1890
+ f"LEC: {self.artifacts.get('lec_result', 'N/A')}\n"
1891
  f"Coverage: {self.artifacts.get('coverage', {}).get('line_pct', 'N/A')}%\n"
1892
  f"Formal: {self.artifacts.get('formal_result', 'SKIPPED')}\n\n"
1893
  f"{'[bold green]FABRICATION READY ✓[/]' if fab_ready else '[bold red]NOT FABRICATION READY ✗[/]'}",
 
1899
  self.artifacts['signoff_result'] = 'PASS'
1900
  self.transition(BuildState.SUCCESS)
1901
  else:
1902
+ # Trigger ECO path before final fail when strict gates are enabled.
1903
+ if self.strict_gates and self.eco_attempts < 2:
1904
+ self.log("Signoff failed. Triggering ECO patch stage.", refined=True)
1905
+ self.transition(BuildState.ECO_PATCH, preserve_retries=True)
1906
+ return
1907
  self.log("❌ SIGNOFF FAILED (Violations Found)", refined=True)
1908
  self.artifacts['signoff_result'] = 'FAIL'
1909
  self.errors.append("Signoff failed (see report).")
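Under `strict_gates`, the signoff verdict above folds the formal result and coverage into `fab_ready`. A minimal sketch of that fail-closed gating (function and parameter names are illustrative):

```python
def strict_verdict(fab_ready: bool, formal_result: str,
                   line_pct: float, min_coverage: float = 80.0) -> bool:
    """Fail closed: any formal FAIL/ERROR or coverage below threshold
    blocks fabrication readiness, regardless of earlier checks."""
    if str(formal_result).startswith(("FAIL", "ERROR")):
        return False
    if line_pct < min_coverage:
        return False
    return fab_ready

print(strict_verdict(True, "PASS", 92.5))  # -> True
```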
src/agentic/tools/vlsi_tools.py CHANGED
@@ -1,11 +1,26 @@
1
  # tools/vlsi_tools.py
2
- import os
3
- import re
4
- import subprocess
5
- from crewai.tools import tool
6
- from ..config import OPENLANE_ROOT, SCRIPTS_DIR, PDK_ROOT, PDK, OPENLANE_IMAGE, SBY_BIN
 
7
 
8
- def SecurityCheck(rtl_code: str) -> tuple:
9
  """
10
  Performs a static security analysis on the generated RTL.
11
  Returns (True, "Safe") if safe, or (False, "reason") if malicious patterns detected.
@@ -21,7 +36,68 @@ def SecurityCheck(rtl_code: str) -> tuple:
21
  if re.search(pattern, rtl_code, re.IGNORECASE):
22
  return False, f"Detected potentially malicious pattern: {pattern}"
23
 
24
- return True, "Safe"
25
 
26
  def write_config(design_name: str, code: str) -> str:
27
  """Writes config.tcl to the OpenLane design directory."""
@@ -187,7 +263,7 @@ def run_syntax_check(file_path: str) -> tuple:
187
  except FileNotFoundError:
188
  return False, "Verilator not found. Please install Verilator 5.0+."
189
 
190
- def run_lint_check(file_path: str) -> tuple:
191
  """
192
  Runs Verilator --lint-only for stricter static analysis.
193
  Returns: (True, "OK") or (False, ErrorLog)
@@ -217,8 +293,73 @@ def run_lint_check(file_path: str) -> tuple:
217
 
218
  except FileNotFoundError:
219
  return True, "Verilator not found (Skipping Lint)"
220
- except subprocess.TimeoutExpired:
221
- return False, "Lint check timed out."
222
 
223
 
224
  def validate_rtl_for_synthesis(file_path: str) -> tuple:
@@ -522,7 +663,7 @@ def run_formal_verification(design_name):
522
  return False, "SBY configuration file not found."
523
 
524
  # Run SBY (using bundled binary)
525
- sby_cmd = SBY_BIN if os.path.exists(SBY_BIN) else "sby"
526
  try:
527
  result = subprocess.run(
528
  [sby_cmd, "-f", f"{design_name}.sby"],
@@ -579,10 +720,10 @@ def check_physical_metrics(design_name):
579
  if not os.path.exists(metrics_path):
580
  return None, "Metrics file not found. OpenLane might have failed."
581
 
582
- try:
583
- with open(metrics_path, 'r') as f:
584
- reader = csv.DictReader(f)
585
- data = next(reader) # Only one row usually
586
 
587
  # Extract key metrics safely handling both OpenLane 1 and 2 keys
588
  area = float(data.get("Total_Physical_Cells", data.get("synth_cell_count", 0)))
@@ -605,19 +746,17 @@ def check_physical_metrics(design_name):
605
  if utilization < 1.0: # OL1 might report as 0.45 instead of 45%
606
  utilization *= 100
607
 
608
- metrics = {
609
- "area": area,
610
- "chip_area_um2": chip_area_um2,
611
- "timing_tns": tns, # Total Negative Slack
612
- "timing_wns": wns, # Worst Negative Slack
613
- "power_total": power_total,
614
- "utilization": utilization
615
- }
616
- return metrics, "OK"
617
- except Exception as e:
618
- return metrics, "OK"
619
- except Exception as e:
620
- return None, f"Error parsing metrics: {str(e)}"
621
 
622
  @tool("Signoff Checker")
623
  def signoff_check_tool(design_name: str):
@@ -728,12 +867,19 @@ def run_simulation(design_name: str) -> tuple:
728
 
729
  return False, sim_text
730
 
731
- def run_openlane(design_name: str, background: bool = False):
732
- """Triggers the OpenLane flow via Docker."""
 
 
 
 
 
 
733
 
734
  # --- Autonomous Environment Fix ---
735
  # If PDK_ROOT is not set, try to find it in common locations
736
- effective_pdk_root = PDK_ROOT
 
737
  if not effective_pdk_root or not os.path.exists(effective_pdk_root):
738
  common_paths = [
739
  os.path.expanduser("~/.ciel"),
@@ -745,10 +891,10 @@ def run_openlane(design_name: str, background: bool = False):
745
  found = False
746
  for path in common_paths:
747
  # Check for generic PDK structure, not just sky130A
748
- if os.path.exists(path) and (os.path.exists(os.path.join(path, PDK)) or os.path.exists(os.path.join(path, "sky130A"))):
749
- effective_pdk_root = path
750
- found = True
751
- break
752
 
753
  if not found:
754
  return False, f"PDK_ROOT not found in environment or common paths ({common_paths}). Please set PDK_ROOT."
@@ -760,16 +906,18 @@ def run_openlane(design_name: str, background: bool = False):
760
 
761
  # Direct Docker command (non-interactive)
762
  # Using the configured PDK variable
763
- cmd = [
764
- "docker", "run", "--rm",
765
- "-v", f"{OPENLANE_ROOT}:/openlane",
766
- "-v", f"{effective_pdk_root}:{effective_pdk_root}",
767
- "-e", f"PDK_ROOT={effective_pdk_root}",
768
- "-e", f"PDK={PDK}",
769
- "-e", "PWD=/openlane",
770
- OPENLANE_IMAGE,
771
- "./flow.tcl", "-design", design_name, "-tag", "agentrun", "-overwrite", "-ignore_mismatches"
772
- ]
 
 
773
 
774
  if background:
775
  log_file_path = os.path.join(design_dir, "harden.log")
@@ -797,7 +945,7 @@ def run_openlane(design_name: str, background: bool = False):
797
  return False, "OpenLane Hardening Timed Out (Exceeded 60 mins)."
798
 
799
  # Check if GDS was created
800
- gds_path = f"{OPENLANE_ROOT}/designs/{design_name}/runs/agentrun/results/final/gds/{design_name}.gds"
801
  success = os.path.exists(gds_path)
802
 
803
  if success:
@@ -839,7 +987,7 @@ def run_verification(design_name: str) -> str:
839
  except Exception as e:
840
  return f"Error running verification: {str(e)}"
841
 
842
- def run_gls_simulation(design_name: str) -> tuple:
843
  """Compiles and runs the Gate-Level Simulation (GLS) for the design."""
844
  src_dir = f"{OPENLANE_ROOT}/designs/{design_name}/src"
845
  tb_file = f"{src_dir}/{design_name}_tb.v"
@@ -914,8 +1062,240 @@ def run_gls_simulation(design_name: str) -> tuple:
914
  if "TEST PASSED" in sim_text:
915
  return True, f"GLS Simulation PASSED.\n{sim_text}"
916
  return False, f"GLS Simulation FAILED or missing PASS marker.\n{sim_text}"
917
- except subprocess.TimeoutExpired:
918
- return False, "GLS Simulation Timed Out."
919
 
920
  # ============================================================
921
  # INDUSTRY-STANDARD TOOLS (Coverage, CDC, DRC/LVS, Documentation)
@@ -1453,73 +1833,83 @@ def generate_design_doc(design_name: str, spec: str = "", metrics: dict = None)
1453
  except Exception as e:
1454
  return f"Error writing documentation: {e}"
1455
 
1456
- def parse_sta_signoff(design_name: str) -> dict:
1457
- """Parses OpenLane STA (Static Timing Analysis) reports for signoff.
1458
-
1459
- Args:
1460
- design_name (str): Name of the design.
1461
-
1462
- Returns:
1463
- dict: A dictionary containing timing metrics and violation status.
1464
- """
1465
- try:
1466
- # Locate the latest run directory
1467
- runs_dir = os.path.join(OPENLANE_ROOT, "designs", design_name, "runs")
1468
- if not os.path.exists(runs_dir):
1469
- return {"error": "No runs directory found", "timing_met": False}
1470
-
1471
- # Get the latest run
1472
- latest_run = sorted([d for d in os.listdir(runs_dir) if os.path.isdir(os.path.join(runs_dir, d))])[-1]
1473
- report_dir = os.path.join(runs_dir, latest_run, "reports", "signoff")
1474
-
1475
- # Check for STA report
1476
- sta_report = None
1477
- if os.path.exists(report_dir):
1478
- for f in os.listdir(report_dir):
1479
- if f.endswith(".sta.rpt") or "sta" in f:
1480
- sta_report = os.path.join(report_dir, f)
1481
- break
1482
-
1483
- if not sta_report:
1484
- return {"error": "STA report not found", "timing_met": False}
1485
-
1486
- with open(sta_report, 'r') as f:
1487
- content = f.read()
1488
-
1489
- # Parse slack (WNS = Worst Negative Slack -> Setup)
1490
- wns_match = re.search(r'wns\s+([-\d.]+)', content, re.IGNORECASE)
1491
- tns_match = re.search(r'tns\s+([-\d.]+)', content, re.IGNORECASE)
1492
-
1493
- wns = float(wns_match.group(1)) if wns_match else 0.0
1494
- tns = float(tns_match.group(1)) if tns_match else 0.0
1495
-
1496
- # In a real flow, we'd parse hold slack too. For now assume hold is OK if wns is OK, or parse if available.
1497
- # OpenLane often puts hold analysis in a separate Min/Fast corner file.
1498
- # For simplicity/robustness, we'll map WNS to setup slack.
1499
-
1500
- setup_slack = wns
1501
- hold_slack = 0.0 # Placeholder if not parsed
1502
-
1503
- timing_met = (setup_slack >= 0.0)
1504
-
1505
- return {
1506
- "timing_met": timing_met,
1507
- "worst_setup": setup_slack,
1508
- "worst_hold": hold_slack,
1509
- "corners": [
1510
- {
1511
- "name": "Typical",
1512
- "setup_slack": setup_slack,
1513
- "hold_slack": hold_slack
1514
- }
1515
- ],
1516
- "report_path": sta_report
1517
- }
1518
 
1519
- except Exception as e:
1520
- return {"error": str(e), "timing_met": False}
1521
-
1522
- def parse_power_signoff(design_name: str) -> dict:
1523
  """Parses OpenLane Power Signoff reports.
1524
 
1525
  Args:
@@ -1528,58 +1918,92 @@ def parse_power_signoff(design_name: str) -> dict:
1528
  Returns:
1529
  dict: A dictionary containing power metrics.
1530
  """
1531
- try:
1532
- # Default empty result
1533
- result = {
1534
- "total_power_w": 0.0,
1535
- "internal_power_w": 0.0,
1536
- "switching_power_w": 0.0,
1537
- "leakage_power_w": 0.0,
1538
- "sequential_pct": 0.0,
1539
- "combinational_pct": 0.0,
1540
- "irdrop_max_vpwr": 0.0,
1541
- "irdrop_max_vgnd": 0.0,
1542
- "power_ok": True
1543
- }
1544
-
1545
- runs_dir = os.path.join(OPENLANE_ROOT, "designs", design_name, "runs")
1546
- if not os.path.exists(runs_dir):
1547
- return result
1548
-
1549
- latest_run = sorted([d for d in os.listdir(runs_dir) if os.path.isdir(os.path.join(runs_dir, d))])[-1]
1550
- report_dir = os.path.join(runs_dir, latest_run, "reports", "signoff")
1551
-
1552
- # 1. Parse Power Report (e.g., .power.rpt)
1553
- power_report = None
1554
- if os.path.exists(report_dir):
1555
- for f in os.listdir(report_dir):
1556
- if "power" in f and f.endswith(".rpt"):
1557
- power_report = os.path.join(report_dir, f)
1558
- break
1559
-
1560
- if power_report:
1561
- with open(power_report, 'r') as f:
1562
- content = f.read()
1563
- # Simple regex parsing for Total Power
1564
- # Format often: "Total Power: 1.23e-03"
1565
- total_match = re.search(r'Total Power.*?([\d.e+-]+)', content, re.IGNORECASE)
1566
- if total_match:
1567
- result["total_power_w"] = float(total_match.group(1))
1568
-
1569
- # Breakdowns (simplified)
1570
- internal_match = re.search(r'Internal Power.*?([\d.e+-]+)', content, re.IGNORECASE)
1571
- if internal_match: result["internal_power_w"] = float(internal_match.group(1))
1572
-
1573
- switching_match = re.search(r'Switching Power.*?([\d.e+-]+)', content, re.IGNORECASE)
1574
- if switching_match: result["switching_power_w"] = float(switching_match.group(1))
1575
-
1576
- leakage_match = re.search(r'Leakage Power.*?([\d.e+-]+)', content, re.IGNORECASE)
1577
- if leakage_match: result["leakage_power_w"] = float(leakage_match.group(1))
1578
-
1579
- # 2. Parse IR Drop (Simplified placeholder)
1580
- # Real flow would parse openroad reports. for now assume safe.
1581
- return result
1582
-
1583
- except Exception as e:
1584
- return result
1585
 
 
1
  # tools/vlsi_tools.py
2
+ import os
3
+ import re
4
+ import json
5
+ import hashlib
6
+ import subprocess
7
+ from collections import Counter, defaultdict, deque
8
+ from typing import Dict, Any, List, Tuple
9
+ import shutil
10
+ from crewai.tools import tool
11
+ from ..config import (
12
+ OPENLANE_ROOT,
13
+ SCRIPTS_DIR,
14
+ PDK_ROOT,
15
+ PDK,
16
+ OPENLANE_IMAGE,
17
+ SBY_BIN,
18
+ YOSYS_BIN,
19
+ EQY_BIN,
20
+ get_pdk_profile,
21
+ )
22
 
23
+ def SecurityCheck(rtl_code: str) -> tuple:
24
  """
25
  Performs a static security analysis on the generated RTL.
26
  Returns (True, "Safe") if safe, or (False, "reason") if malicious patterns detected.
 
36
  if re.search(pattern, rtl_code, re.IGNORECASE):
37
  return False, f"Detected potentially malicious pattern: {pattern}"
38
 
39
+ return True, "Safe"
40
+
41
+
42
+ def _resolve_binary(bin_hint: str) -> str:
43
+ """Resolve a tool path from hint/path/PATH."""
44
+ if not bin_hint:
45
+ return ""
46
+ if os.path.isabs(bin_hint) and os.path.exists(bin_hint):
47
+ return bin_hint
48
+ found = shutil.which(bin_hint)
49
+ if found:
50
+ return found
51
+ return bin_hint
52
+
53
+
54
+ def startup_self_check() -> Dict[str, Any]:
55
+ """Validate required tooling and environment before running the flow."""
56
+ checks: List[Dict[str, Any]] = []
57
+ required_bins = {
58
+ "verilator": "verilator",
59
+ "iverilog": "iverilog",
60
+ "vvp": "vvp",
61
+ "docker": "docker",
62
+ "yosys": YOSYS_BIN,
63
+ "sby": SBY_BIN,
64
+ "eqy": EQY_BIN,
65
+ }
66
+ all_pass = True
67
+
68
+ for name, hint in required_bins.items():
69
+ resolved = _resolve_binary(hint)
70
+ exists = bool(resolved and ((os.path.isabs(resolved) and os.path.exists(resolved)) or shutil.which(resolved)))
71
+ checks.append(
72
+ {
73
+ "tool": name,
74
+ "hint": hint,
75
+ "resolved": resolved,
76
+ "ok": exists,
77
+ }
78
+ )
79
+ if not exists:
80
+ all_pass = False
81
+
82
+ env_checks = {
83
+ "OPENLANE_ROOT": OPENLANE_ROOT,
84
+ "PDK_ROOT": PDK_ROOT,
85
+ "PDK": PDK,
86
+ }
87
+ env_status = {
88
+ key: {"value": value, "exists": os.path.exists(value) if key.endswith("_ROOT") else True}
89
+ for key, value in env_checks.items()
90
+ }
91
+ for key, info in env_status.items():
92
+ if not info["exists"]:
93
+ all_pass = False
94
+ checks.append({"tool": key, "hint": info["value"], "resolved": info["value"], "ok": False})
95
+
96
+ return {
97
+ "ok": all_pass,
98
+ "checks": checks,
99
+ "env": env_status,
100
+ }
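`_resolve_binary` and `startup_self_check` lean on `shutil.which`; the resolution order can be demonstrated in isolation (a standalone sketch, not the module's function):

```python
import os
import shutil

def resolve_binary(hint: str) -> str:
    """Prefer an existing absolute path, then a PATH lookup,
    and fall back to the raw hint so callers can report it."""
    if not hint:
        return ""
    if os.path.isabs(hint) and os.path.exists(hint):
        return hint
    return shutil.which(hint) or hint

# An obviously bogus tool name falls through to the hint itself.
print(resolve_binary("no-such-tool-xyz"))  # -> no-such-tool-xyz
```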
101
 
102
  def write_config(design_name: str, code: str) -> str:
103
  """Writes config.tcl to the OpenLane design directory."""
 
263
  except FileNotFoundError:
264
  return False, "Verilator not found. Please install Verilator 5.0+."
265
 
266
+ def run_lint_check(file_path: str) -> tuple:
267
  """
268
  Runs Verilator --lint-only for stricter static analysis.
269
  Returns: (True, "OK") or (False, ErrorLog)
 
293
 
294
  except FileNotFoundError:
295
  return True, "Verilator not found (Skipping Lint)"
296
+ except subprocess.TimeoutExpired:
297
+ return False, "Lint check timed out."
298
+
299
+
300
+ def run_semantic_rigor_check(file_path: str) -> Tuple[bool, Dict[str, Any]]:
301
+ """Deterministic semantic preflight for width-safety and port-shadowing."""
302
+ report: Dict[str, Any] = {
303
+ "ok": True,
304
+ "width_issues": [],
305
+ "port_shadowing": [],
306
+ "details": "",
307
+ }
308
+
309
+ if not os.path.exists(file_path):
310
+ report["ok"] = False
311
+ report["details"] = f"File not found: {file_path}"
312
+ return False, report
313
+
314
+ with open(file_path, "r") as f:
315
+ code = f.read()
316
+
317
+ # --- Port shadowing detection ---
318
+ port_names = set()
319
+ module_match = re.search(r"module\s+\w+\s*(?:#\s*\(.*?\))?\s*\((.*?)\)\s*;", code, re.DOTALL)
320
+ if module_match:
321
+ port_block = module_match.group(1)
322
+ for m in re.finditer(r"\b(?:input|output|inout)\b[^;,\)]*\b([A-Za-z_]\w*)\b", port_block):
323
+ port_names.add(m.group(1))
324
+
325
+ shadowing = []
326
+ for m in re.finditer(
327
+ r"^\s*(?:reg|wire|logic)\s+(?:signed\s+)?(?:\[[^]]+\]\s+)?([A-Za-z_]\w*)\b",
328
+ code,
329
+ re.MULTILINE,
330
+ ):
331
+ sig = m.group(1)
332
+ if sig in port_names:
333
+ shadowing.append(sig)
334
+ if shadowing:
335
+ report["port_shadowing"] = sorted(set(shadowing))
336
+
337
+ # --- Width mismatch detection via Verilator diagnostics ---
338
+ width_patterns = (
339
+ "WIDTHTRUNC",
340
+ "WIDTHEXPAND",
341
+ "WIDTH",
342
+ "UNSIGNED",
343
+ "signed",
344
+ "truncat",
345
+ )
346
+ cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wall", file_path]
347
+ try:
348
+ result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
349
+ stderr = result.stderr or ""
350
+ width_lines = []
351
+ for line in stderr.splitlines():
352
+ upper = line.upper()
353
+ if any(p.upper() in upper for p in width_patterns):
354
+ width_lines.append(line.strip())
355
+ if width_lines:
356
+ report["width_issues"] = width_lines[:20]
357
+ report["details"] = "\n".join(width_lines[:20])
358
+ except Exception as exc:
359
+ report["details"] = f"Semantic width scan fallback triggered: {exc}"
360
+
361
+ report["ok"] = not report["port_shadowing"] and not report["width_issues"]
362
+ return report["ok"], report
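The port-shadowing half of `run_semantic_rigor_check` is a pair of regexes; here they are exercised on a tiny module, standalone and without Verilator (`find_shadowed_ports` is an illustrative wrapper name):

```python
import re

def find_shadowed_ports(code: str) -> list:
    """Report internal reg/wire/logic declarations that reuse a port name."""
    port_names = set()
    m = re.search(r"module\s+\w+\s*(?:#\s*\(.*?\))?\s*\((.*?)\)\s*;", code, re.DOTALL)
    if m:
        for p in re.finditer(r"\b(?:input|output|inout)\b[^;,\)]*\b([A-Za-z_]\w*)\b", m.group(1)):
            port_names.add(p.group(1))
    decls = re.finditer(
        r"^\s*(?:reg|wire|logic)\s+(?:signed\s+)?(?:\[[^]]+\]\s+)?([A-Za-z_]\w*)\b",
        code, re.MULTILINE)
    return sorted({d.group(1) for d in decls if d.group(1) in port_names})

rtl = """module adder(input clk, input [7:0] a, output [7:0] sum);
  reg [7:0] sum;
  wire [7:0] tmp;
endmodule"""
print(find_shadowed_ports(rtl))  # -> ['sum']
```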
363
 
364
 
365
  def validate_rtl_for_synthesis(file_path: str) -> tuple:
 
663
  return False, "SBY configuration file not found."
664
 
665
  # Run SBY (using bundled binary)
666
+ sby_cmd = _resolve_binary(SBY_BIN)
667
  try:
668
  result = subprocess.run(
669
  [sby_cmd, "-f", f"{design_name}.sby"],
 
720
  if not os.path.exists(metrics_path):
721
  return None, "Metrics file not found. OpenLane might have failed."
722
 
723
+ try:
724
+ with open(metrics_path, 'r') as f:
725
+ reader = csv.DictReader(f)
726
+ data = next(reader) # Only one row usually
727
 
728
  # Extract key metrics safely handling both OpenLane 1 and 2 keys
729
  area = float(data.get("Total_Physical_Cells", data.get("synth_cell_count", 0)))
 
746
  if utilization < 1.0: # OL1 might report as 0.45 instead of 45%
747
  utilization *= 100
748
 
749
+ metrics = {
750
+ "area": area,
751
+ "chip_area_um2": chip_area_um2,
752
+ "timing_tns": tns, # Total Negative Slack
753
+ "timing_wns": wns, # Worst Negative Slack
754
+ "power_total": power_total,
755
+ "utilization": utilization
756
+ }
757
+ return metrics, "OK"
758
+ except Exception as e:
759
+ return None, f"Error parsing metrics: {str(e)}"
 
 
760
 
761
  @tool("Signoff Checker")
762
  def signoff_check_tool(design_name: str):
 
867
 
868
  return False, sim_text
869
 
870
+ def run_openlane(
871
+ design_name: str,
872
+ background: bool = False,
873
+ run_tag: str = "agentrun",
874
+ floorplan_tcl: str = "",
875
+ pdk_name: str = "",
876
+ ):
877
+ """Triggers the OpenLane flow via Docker."""
878
 
879
  # --- Autonomous Environment Fix ---
880
  # If PDK_ROOT is not set, try to find it in common locations
881
+ effective_pdk_root = PDK_ROOT
882
+ selected_pdk = pdk_name or PDK
883
  if not effective_pdk_root or not os.path.exists(effective_pdk_root):
884
  common_paths = [
885
  os.path.expanduser("~/.ciel"),
 
891
  found = False
892
  for path in common_paths:
893
  # Check for generic PDK structure, not just sky130A
894
+ if os.path.exists(path) and (os.path.exists(os.path.join(path, selected_pdk)) or os.path.exists(os.path.join(path, "sky130A"))):
895
+ effective_pdk_root = path
896
+ found = True
897
+ break
898
 
899
  if not found:
900
  return False, f"PDK_ROOT not found in environment or common paths ({common_paths}). Please set PDK_ROOT."
 
906
 
907
  # Direct Docker command (non-interactive)
908
  # Using the configured PDK variable
909
+ cmd = [
910
+ "docker", "run", "--rm",
911
+ "-v", f"{OPENLANE_ROOT}:/openlane",
912
+ "-v", f"{effective_pdk_root}:{effective_pdk_root}",
913
+ "-e", f"PDK_ROOT={effective_pdk_root}",
914
+ "-e", f"PDK={selected_pdk}",
915
+ "-e", "PWD=/openlane",
916
+ OPENLANE_IMAGE,
917
+ "./flow.tcl", "-design", design_name, "-tag", run_tag, "-overwrite", "-ignore_mismatches"
918
+ ]
919
+ if floorplan_tcl:
920
+ cmd.extend(["-config_file", floorplan_tcl])
921
 
922
  if background:
923
  log_file_path = os.path.join(design_dir, "harden.log")
 
945
  return False, "OpenLane Hardening Timed Out (Exceeded 60 mins)."
946
 
947
  # Check if GDS was created
948
+ gds_path = f"{OPENLANE_ROOT}/designs/{design_name}/runs/{run_tag}/results/final/gds/{design_name}.gds"
949
  success = os.path.exists(gds_path)
950
 
951
  if success:
 
987
  except Exception as e:
988
  return f"Error running verification: {str(e)}"
989
 
990
+ def run_gls_simulation(design_name: str) -> tuple:
991
  """Compiles and runs the Gate-Level Simulation (GLS) for the design."""
992
  src_dir = f"{OPENLANE_ROOT}/designs/{design_name}/src"
993
  tb_file = f"{src_dir}/{design_name}_tb.v"
 
         if "TEST PASSED" in sim_text:
             return True, f"GLS Simulation PASSED.\n{sim_text}"
         return False, f"GLS Simulation FAILED or missing PASS marker.\n{sim_text}"
+    except subprocess.TimeoutExpired:
+        return False, "GLS Simulation Timed Out."
+
+
+def parse_eda_log_summary(log_path: str, kind: str, top_n: int = 10) -> Dict[str, Any]:
+    """Stream parse EDA logs and return normalized top issues for LLM-safe context."""
+    summary: Dict[str, Any] = {
+        "kind": kind,
+        "path": log_path,
+        "top_issues": [],
+        "counts": {},
+        "total_lines": 0,
+        "error": "",
+    }
+    if not os.path.exists(log_path):
+        summary["error"] = f"log not found: {log_path}"
+        return summary
+
+    patterns = {
+        "timing": [
+            (r"\bwns\b|\btns\b|slack|setup|hold", "timing_violation", "high", "timing_tune"),
+            (r"unconstrained|no clock", "constraint_issue", "medium", "constraints"),
+        ],
+        "routing": [
+            (r"overflow|congestion|gcell|resource|usage", "routing_congestion", "high", "area_or_floorplan"),
+            (r"antenna", "antenna_issue", "medium", "routing_rule_fix"),
+        ],
+        "drc": [
+            (r"violation|error|drc", "drc_violation", "high", "layout_fix"),
+        ],
+        "lvs": [
+            (r"mismatch|lvs|error", "lvs_mismatch", "high", "netlist_match_fix"),
+        ],
+        "cdc": [
+            (r"cdc|clock domain|metastab|sync", "cdc_warning", "medium", "synchronizer_fix"),
+        ],
+        "formal": [
+            (r"assert|prove|fail|counterexample", "formal_failure", "high", "property_or_logic_fix"),
+        ],
+    }
+    selected = patterns.get(kind.lower(), patterns["timing"])
+    counters: Counter = Counter()
+    examples: Dict[str, deque] = defaultdict(lambda: deque(maxlen=3))
+    fixes: Dict[str, str] = {}
+    severities: Dict[str, str] = {}
+
+    with open(log_path, "r", errors="ignore") as f:
+        for line in f:
+            summary["total_lines"] += 1
+            text = line.strip()
+            if not text:
+                continue
+            for pattern, issue_type, severity, fix_cat in selected:
+                if re.search(pattern, text, re.IGNORECASE):
+                    counters[issue_type] += 1
+                    examples[issue_type].append(text[:240])
+                    fixes[issue_type] = fix_cat
+                    severities[issue_type] = severity
+                    break
+
+    summary["counts"] = dict(counters)
+    for issue_type, count in counters.most_common(top_n):
+        ex = next(iter(examples[issue_type]), "")
+        summary["top_issues"].append(
+            {
+                "issue_type": issue_type,
+                "severity": severities.get(issue_type, "medium"),
+                "count": count,
+                "representative_snippet": ex,
+                "probable_fix_category": fixes.get(issue_type, "general_fix"),
+            }
+        )
+    return summary
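`parse_eda_log_summary` stays memory-flat on arbitrarily large logs because it keeps only a `Counter` of issue types plus a bounded `deque` of example snippets per type. A standalone sketch of that pattern (the rule list and log lines here are made up for illustration, not the AgentIC API):

```python
import re
from collections import Counter, defaultdict, deque

def triage_lines(lines, rules):
    """Count issue types and keep at most 3 example snippets per type."""
    counts = Counter()
    examples = defaultdict(lambda: deque(maxlen=3))  # bounded: old snippets roll off
    for line in lines:
        for pattern, issue_type in rules:
            if re.search(pattern, line, re.IGNORECASE):
                counts[issue_type] += 1
                examples[issue_type].append(line[:240])
                break  # first matching rule wins, as in parse_eda_log_summary
    return counts, examples

rules = [(r"overflow|congestion", "routing_congestion"), (r"antenna", "antenna_issue")]
log = ["[INFO GRT] overflow on met2"] * 100 + ["[WARN] antenna violation"] * 4
counts, examples = triage_lines(log, rules)
```

The bounded deque is the key design choice: the summary handed to the LLM grows with the number of issue *types*, never with the number of log lines.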
+
+
+def extract_top_sta_paths(sta_report_path: str, top_n: int = 10) -> List[Dict[str, Any]]:
+    """Extract top failing paths/endpoints from STA report text."""
+    results: List[Dict[str, Any]] = []
+    if not os.path.exists(sta_report_path):
+        return results
+
+    slack_re = re.compile(r"slack\s*\(?VIOLATED\)?\s*([-\d.]+)", re.IGNORECASE)
+    end_re = re.compile(r"endpoint:\s*(\S+)", re.IGNORECASE)
+    start_re = re.compile(r"startpoint:\s*(\S+)", re.IGNORECASE)
+    current: Dict[str, Any] = {}
+
+    with open(sta_report_path, "r", errors="ignore") as f:
+        for line in f:
+            text = line.strip()
+            m = start_re.search(text)
+            if m:
+                current["startpoint"] = m.group(1)
+            m = end_re.search(text)
+            if m:
+                current["endpoint"] = m.group(1)
+            m = slack_re.search(text)
+            if m:
+                try:
+                    current["slack"] = float(m.group(1))
+                except ValueError:
+                    current["slack"] = 0.0
+                if current:
+                    results.append(dict(current))
+                    current = {}
+
+    results.sort(key=lambda x: x.get("slack", 0.0))
+    return results[:top_n]
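`extract_top_sta_paths` accumulates a path record across lines and flushes it once a slack value appears; note the slack regex only fires on `VIOLATED` lines, since `VIOLATED` is a literal and only the parentheses are optional. A self-contained sketch using the same three regexes on an invented report fragment:

```python
import re

slack_re = re.compile(r"slack\s*\(?VIOLATED\)?\s*([-\d.]+)", re.IGNORECASE)
start_re = re.compile(r"startpoint:\s*(\S+)", re.IGNORECASE)
end_re = re.compile(r"endpoint:\s*(\S+)", re.IGNORECASE)

# Invented report fragment for illustration only
report = """\
Startpoint: u_core/r0
Endpoint: u_core/r9
slack (VIOLATED) -0.42
Startpoint: u_io/r1
Endpoint: u_io/r2
slack (VIOLATED) -0.07
"""

paths = []
current = {}
for line in report.splitlines():
    if (m := start_re.search(line)):
        current["startpoint"] = m.group(1)
    if (m := end_re.search(line)):
        current["endpoint"] = m.group(1)
    if (m := slack_re.search(line)):
        # slack closes out the current path record
        current["slack"] = float(m.group(1))
        paths.append(current)
        current = {}

paths.sort(key=lambda p: p["slack"])  # most negative (worst) slack first
```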
+
+
+def parse_congestion_metrics(design_name: str, run_tag: str = "agentrun") -> Dict[str, Any]:
+    """Parse global routing congestion from OpenLane routing logs."""
+    log_path = os.path.join(
+        OPENLANE_ROOT,
+        "designs",
+        design_name,
+        "runs",
+        run_tag,
+        "logs",
+        "routing",
+        "19-global.log",
+    )
+    result = {
+        "log_path": log_path,
+        "total_usage_pct": 0.0,
+        "total_overflow": 0,
+        "layers": [],
+        "error": "",
+    }
+    if not os.path.exists(log_path):
+        result["error"] = "global routing log missing"
+        return result
+
+    line_re = re.compile(
+        r"^(?P<layer>[A-Za-z0-9_]+)\s+(?P<resource>\d+)\s+(?P<demand>\d+)\s+(?P<usage>[\d.]+)%\s+(?P<overflow>\d+\s*/\s*\d+\s*/\s*\d+)$"
+    )
+    total_re = re.compile(
+        r"^Total\s+(?P<resource>\d+)\s+(?P<demand>\d+)\s+(?P<usage>[\d.]+)%\s+(?P<overflow>\d+\s*/\s*\d+\s*/\s*\d+)$"
+    )
+    with open(log_path, "r", errors="ignore") as f:
+        for raw in f:
+            text = raw.strip()
+            m = total_re.match(text)
+            if m:
+                overflow_triplet = m.group("overflow").split("/")
+                result["total_usage_pct"] = float(m.group("usage"))
+                result["total_overflow"] = int(overflow_triplet[-1].strip())
+                continue
+            m = line_re.match(text)
+            if m:
+                if m.group("layer").lower() == "total":
+                    continue
+                overflow_triplet = m.group("overflow").split("/")
+                total_over = int(overflow_triplet[-1].strip())
+                result["layers"].append(
+                    {
+                        "layer": m.group("layer"),
+                        "usage_pct": float(m.group("usage")),
+                        "overflow_total": total_over,
+                    }
+                )
+                continue
+    return result
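The `Total` summary row format that `parse_congestion_metrics` expects can be checked in isolation; the sample line below mirrors the one used in `test_congestion_parser`:

```python
import re

total_re = re.compile(
    r"^Total\s+(?P<resource>\d+)\s+(?P<demand>\d+)\s+(?P<usage>[\d.]+)%\s+(?P<overflow>\d+\s*/\s*\d+\s*/\s*\d+)$"
)

line = "Total 16378 1624 9.91% 5 / 2 / 7"
m = total_re.match(line)
usage_pct = float(m.group("usage"))
# the overflow triplet is three slash-separated counts; the parser keeps the last one
total_overflow = int(m.group("overflow").split("/")[-1].strip())
```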
+
+
+def run_eqy_lec(design_name: str, gold_rtl: str, gate_netlist: str) -> Tuple[bool, str]:
+    """Run EQY logical equivalence between reference RTL and gate netlist."""
+    if not os.path.exists(gold_rtl):
+        return False, f"gold rtl missing: {gold_rtl}"
+    if not os.path.exists(gate_netlist):
+        return False, f"gate netlist missing: {gate_netlist}"
+
+    eqy_bin = _resolve_binary(EQY_BIN)
+    yosys_bin = _resolve_binary(YOSYS_BIN)
+    if not shutil.which(eqy_bin) and not os.path.exists(eqy_bin):
+        return False, f"eqy not found ({EQY_BIN})"
+    if not shutil.which(yosys_bin) and not os.path.exists(yosys_bin):
+        return False, f"yosys not found ({YOSYS_BIN})"
+
+    src_dir = os.path.join(OPENLANE_ROOT, "designs", design_name, "src")
+    os.makedirs(src_dir, exist_ok=True)
+    eqy_cfg = os.path.join(src_dir, f"{design_name}.eqy")
+    top = design_name
+    cfg = f"""[options]
+multiclock on
+
+[gold]
+read_verilog {gold_rtl}
+prep -top {top}
+
+[gate]
+read_verilog {gate_netlist}
+prep -top {top}
+
+[strategy simple]
+"""
+    with open(eqy_cfg, "w") as f:
+        f.write(cfg)
+
+    try:
+        result = subprocess.run(
+            [eqy_bin, eqy_cfg],
+            cwd=src_dir,
+            capture_output=True,
+            text=True,
+            timeout=900,
+        )
+    except subprocess.TimeoutExpired:
+        return False, "EQY timed out (>900s)"
+    except FileNotFoundError:
+        return False, "EQY binary not executable"
+
+    text = (result.stdout or "") + ("\n" + result.stderr if result.stderr else "")
+    if result.returncode == 0 and re.search(r"PASS|equivalent|success", text, re.IGNORECASE):
+        return True, text[-2000:]
+    return False, text[-4000:]
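`run_eqy_lec` drives EQY entirely through a generated config file. A minimal sketch of just the config-rendering step (the module name and file paths below are placeholders, not real files):

```python
def build_eqy_config(top, gold_rtl, gate_netlist):
    """Render a minimal EQY config with gold/gate sections and one strategy."""
    return (
        "[options]\n"
        "multiclock on\n\n"
        "[gold]\n"
        f"read_verilog {gold_rtl}\n"
        f"prep -top {top}\n\n"
        "[gate]\n"
        f"read_verilog {gate_netlist}\n"
        f"prep -top {top}\n\n"
        "[strategy simple]\n"
    )

cfg = build_eqy_config("chip", "chip.v", "chip_gate.v")
```

Keeping the config a pure string function makes this step testable without the `eqy` binary installed.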
+
+
+def apply_eco_patch(design_name: str, target_net: str = "", strategy: str = "gate") -> Tuple[bool, str]:
+    """Apply a localized ECO patch placeholder; returns patch artifact path."""
+    src_dir = os.path.join(OPENLANE_ROOT, "designs", design_name, "src")
+    os.makedirs(src_dir, exist_ok=True)
+    patch_path = os.path.join(src_dir, f"{design_name}_eco_patch.tcl")
+    patch_note = (
+        f"# ECO patch strategy={strategy}\n"
+        f"# target_net={target_net or 'AUTO_SELECT'}\n"
+        "# This patch is generated by AgentIC and intended for incremental routing/repair.\n"
+        "puts \"Applying localized ECO patch\"\n"
+    )
+    try:
+        with open(patch_path, "w") as f:
+            f.write(patch_note)
+        return True, patch_path
+    except OSError as exc:
+        return False, f"ECO patch write failed: {exc}"
 # ============================================================
 # INDUSTRY-STANDARD TOOLS (Coverage, CDC, DRC/LVS, Documentation)

     except Exception as e:
         return f"Error writing documentation: {e}"

+def parse_sta_signoff(design_name: str) -> dict:
+    """Parse multi-corner STA summary reports and aggregate worst setup/hold."""
+    try:
+        runs_dir = os.path.join(OPENLANE_ROOT, "designs", design_name, "runs")
+        if not os.path.exists(runs_dir):
+            return {"error": "No runs directory found", "timing_met": False}
+
+        latest_run = sorted([d for d in os.listdir(runs_dir) if os.path.isdir(os.path.join(runs_dir, d))])[-1]
+        signoff_dir = os.path.join(runs_dir, latest_run, "reports", "signoff")
+        if not os.path.exists(signoff_dir):
+            return {"error": "Signoff report directory not found", "timing_met": False}
+
+        summary_reports: List[str] = []
+        for root, _, files in os.walk(signoff_dir):
+            for fname in files:
+                if fname.endswith(".summary.rpt") and "sta" in fname.lower():
+                    summary_reports.append(os.path.join(root, fname))
+                elif fname.endswith(".sta.rpt"):
+                    summary_reports.append(os.path.join(root, fname))
+        summary_reports = sorted(set(summary_reports))
+        if not summary_reports:
+            return {"error": "STA report not found", "timing_met": False}
+
+        corners = []
+        worst_setup = float("inf")
+        worst_hold = float("inf")
+        top_paths: List[Dict[str, Any]] = []
+
+        for sta_report in summary_reports:
+            corner_name = os.path.basename(os.path.dirname(sta_report))
+            if corner_name == "signoff":
+                corner_name = os.path.basename(sta_report).replace(".summary.rpt", "")
+
+            with open(sta_report, "r", errors="ignore") as f:
+                content = f.read()
+
+            setup_match = re.search(r"report_worst_slack -max.*?worst slack\s+([-\d.]+)", content, re.IGNORECASE | re.DOTALL)
+            hold_match = re.search(r"report_worst_slack -min.*?worst slack\s+([-\d.]+)", content, re.IGNORECASE | re.DOTALL)
+            wns_match = re.search(r"\bwns\s+([-\d.]+)", content, re.IGNORECASE)
+            all_worst = re.findall(r"worst slack\s+([-\d.]+)", content, re.IGNORECASE)
+
+            setup_slack = float(setup_match.group(1)) if setup_match else (float(wns_match.group(1)) if wns_match else (float(all_worst[0]) if all_worst else 0.0))
+            hold_slack = float(hold_match.group(1)) if hold_match else (float(all_worst[1]) if len(all_worst) > 1 else 0.0)
+
+            worst_setup = min(worst_setup, setup_slack)
+            worst_hold = min(worst_hold, hold_slack)
+
+            corners.append(
+                {
+                    "name": corner_name,
+                    "setup_slack": setup_slack,
+                    "hold_slack": hold_slack,
+                    "report_path": sta_report,
+                }
+            )
+            top_paths.extend(extract_top_sta_paths(sta_report, top_n=3))
+
+        if worst_setup == float("inf"):
+            worst_setup = 0.0
+        if worst_hold == float("inf"):
+            worst_hold = 0.0
+
+        timing_met = all((c["setup_slack"] >= 0.0 and c["hold_slack"] >= 0.0) for c in corners)
+        top_paths = sorted(top_paths, key=lambda x: x.get("slack", 0.0))[:10]
+
+        return {
+            "timing_met": timing_met,
+            "worst_setup": worst_setup,
+            "worst_hold": worst_hold,
+            "corners": corners,
+            "top_paths": top_paths,
+            "report_dir": signoff_dir,
+        }
+    except Exception as e:
+        return {"error": str(e), "timing_met": False}
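`parse_sta_signoff` reduces multi-corner timing closure to a `min` over per-corner slacks plus an all-corners-non-negative gate. The same aggregation on the invented corner data used by the unit test:

```python
corners = [
    {"name": "26-mca", "setup_slack": 5.20, "hold_slack": 0.11},
    {"name": "28-mca", "setup_slack": 5.00, "hold_slack": 0.09},
    {"name": "30-mca", "setup_slack": 4.90, "hold_slack": 0.08},
]

# worst-case slack across corners is the minimum per metric
worst_setup = min(c["setup_slack"] for c in corners)
worst_hold = min(c["hold_slack"] for c in corners)
# timing is met only if every corner closes both setup and hold
timing_met = all(c["setup_slack"] >= 0.0 and c["hold_slack"] >= 0.0 for c in corners)
```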
1911
 
1912
+ def parse_power_signoff(design_name: str) -> dict:
 
 
 
1913
  """Parses OpenLane Power Signoff reports.
1914
 
1915
  Args:
 
1918
  Returns:
1919
  dict: A dictionary containing power metrics.
1920
  """
1921
+ # Default empty result
1922
+ result = {
1923
+ "total_power_w": 0.0,
1924
+ "internal_power_w": 0.0,
1925
+ "switching_power_w": 0.0,
1926
+ "leakage_power_w": 0.0,
1927
+ "sequential_pct": 0.0,
1928
+ "combinational_pct": 0.0,
1929
+ "irdrop_max_vpwr": 0.0,
1930
+ "irdrop_max_vgnd": 0.0,
1931
+ "power_ok": True,
1932
+ "power_report": "",
1933
+ }
1934
+ try:
1935
+ runs_dir = os.path.join(OPENLANE_ROOT, "designs", design_name, "runs")
1936
+ if not os.path.exists(runs_dir):
1937
+ return result
1938
+
1939
+ latest_run = sorted([d for d in os.listdir(runs_dir) if os.path.isdir(os.path.join(runs_dir, d))])[-1]
1940
+ report_dir = os.path.join(runs_dir, latest_run, "reports", "signoff")
1941
+ if not os.path.exists(report_dir):
1942
+ return result
1943
+
1944
+ # Parse *.power.rpt
1945
+ power_report = None
1946
+ for root, _, files in os.walk(report_dir):
1947
+ for f in files:
1948
+ if "power" in f.lower() and f.endswith(".rpt"):
1949
+ power_report = os.path.join(root, f)
1950
+ break
1951
+ if power_report:
1952
+ break
1953
+
1954
+ if power_report and os.path.exists(power_report):
1955
+ result["power_report"] = power_report
1956
+ with open(power_report, "r", errors="ignore") as f:
1957
+ content = f.read()
1958
+ total_match = re.search(r"Total\\s+([\\d.eE+\\-]+)\\s+([\\d.eE+\\-]+)\\s+([\\d.eE+\\-]+)\\s+([\\d.eE+\\-]+)", content)
1959
+ if total_match:
1960
+ result["internal_power_w"] = float(total_match.group(1))
1961
+ result["switching_power_w"] = float(total_match.group(2))
1962
+ result["leakage_power_w"] = float(total_match.group(3))
1963
+ result["total_power_w"] = float(total_match.group(4))
1964
+ else:
1965
+ total_match = re.search(r"Total\\s+Power.*?([\\d.eE+\\-]+)", content, re.IGNORECASE)
1966
+ if total_match:
1967
+ result["total_power_w"] = float(total_match.group(1))
1968
+
1969
+ seq_match = re.search(r"Sequential.*?\\s([\\d.]+)%", content)
1970
+ comb_match = re.search(r"Combinational.*?\\s([\\d.]+)%", content)
1971
+ if seq_match:
1972
+ result["sequential_pct"] = float(seq_match.group(1))
1973
+ if comb_match:
1974
+ result["combinational_pct"] = float(comb_match.group(1))
1975
+
1976
+ # Parse IR-drop reports
1977
+ def _parse_irdrop(path: str) -> float:
1978
+ max_drop = 0.0
1979
+ if not os.path.exists(path):
1980
+ return max_drop
1981
+ with open(path, "r", errors="ignore") as f:
1982
+ header = f.readline()
1983
+ for line in f:
1984
+ parts = [p.strip() for p in line.split(",")]
1985
+ if len(parts) < 4:
1986
+ continue
1987
+ try:
1988
+ v = float(parts[3])
1989
+ except ValueError:
1990
+ continue
1991
+ if "VPWR" in os.path.basename(path):
1992
+ drop = max(0.0, 1.8 - v) if v > 0.1 else 0.0
1993
+ else:
1994
+ drop = abs(v)
1995
+ if drop > max_drop:
1996
+ max_drop = drop
1997
+ return max_drop
1998
+
1999
+ vpwr_path = os.path.join(report_dir, "32-irdrop-VPWR.rpt")
2000
+ vgnd_path = os.path.join(report_dir, "32-irdrop-VGND.rpt")
2001
+ result["irdrop_max_vpwr"] = _parse_irdrop(vpwr_path)
2002
+ result["irdrop_max_vgnd"] = _parse_irdrop(vgnd_path)
2003
+
2004
+ # 5% of 1.8V ~= 90mV
2005
+ result["power_ok"] = result["irdrop_max_vpwr"] <= 0.09 and result["irdrop_max_vgnd"] <= 0.09
2006
+ return result
2007
+ except Exception:
2008
+ return result
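The `power_ok` gate compares the worst IR drop on each rail against 5% of the 1.8 V sky130 nominal supply, i.e. a 90 mV budget. Sketched standalone:

```python
VDD_NOMINAL = 1.8
IRDROP_BUDGET = 0.05 * VDD_NOMINAL  # 90 mV, the threshold used by parse_power_signoff

def power_ok(irdrop_max_vpwr, irdrop_max_vgnd, budget=IRDROP_BUDGET):
    """Pass only if both the power and ground rails stay inside the IR-drop budget."""
    return irdrop_max_vpwr <= budget and irdrop_max_vgnd <= budget

# e.g. 10 mV on VPWR and 20 mV on VGND is comfortably within budget
ok = power_ok(0.01, 0.02)
```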
tests/test_tier1_upgrade.py ADDED
@@ -0,0 +1,203 @@
+import os
+import re
+import sys
+import shutil
+import tempfile
+import textwrap
+import unittest
+
+REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
+SRC_ROOT = os.path.join(REPO_ROOT, "src")
+if SRC_ROOT not in sys.path:
+    sys.path.insert(0, SRC_ROOT)
+
+from agentic.tools import vlsi_tools  # noqa: E402
+from agentic.orchestrator import BuildOrchestrator  # noqa: E402
+
+
+class SyntaxIntegrityTests(unittest.TestCase):
+    def test_no_merge_conflict_markers(self):
+        base = os.path.join(REPO_ROOT, "src", "agentic")
+        bad = []
+        for root, _, files in os.walk(base):
+            for fname in files:
+                if not fname.endswith(".py"):
+                    continue
+                path = os.path.join(root, fname)
+                with open(path, "r", errors="ignore") as f:
+                    for idx, line in enumerate(f, start=1):
+                        if line.startswith("<<<<<<<") or line.startswith(">>>>>>>"):
+                            bad.append(f"{path}:{idx}")
+        self.assertEqual([], bad, msg=f"Found conflict markers: {bad}")
+
+
+class SemanticGateTests(unittest.TestCase):
+    def _write_tmp(self, code: str) -> str:
+        tmpdir = tempfile.mkdtemp(prefix="tier1_sem_")
+        path = os.path.join(tmpdir, "dut.sv")
+        with open(path, "w") as f:
+            f.write(code)
+        self.addCleanup(lambda: shutil.rmtree(tmpdir, ignore_errors=True))
+        return path
+
+    def test_port_shadowing_rejected(self):
+        code = textwrap.dedent(
+            """
+            module dut(
+                input logic clk,
+                input logic a,
+                output logic y
+            );
+                logic a;
+                always_comb y = a;
+            endmodule
+            """
+        )
+        path = self._write_tmp(code)
+        ok, report = vlsi_tools.run_semantic_rigor_check(path)
+        self.assertFalse(ok)
+        self.assertIn("a", report.get("port_shadowing", []))
+
+    def test_clean_semantics_pass(self):
+        code = textwrap.dedent(
+            """
+            module dut(
+                input logic clk,
+                input logic [3:0] a,
+                output logic [3:0] y
+            );
+                always_comb y = a;
+            endmodule
+            """
+        )
+        path = self._write_tmp(code)
+        ok, report = vlsi_tools.run_semantic_rigor_check(path)
+        self.assertTrue(ok, msg=str(report))
+
+
+class ParserTests(unittest.TestCase):
+    def test_log_summary_stream_parser(self):
+        tmpdir = tempfile.mkdtemp(prefix="tier1_log_")
+        self.addCleanup(lambda: shutil.rmtree(tmpdir, ignore_errors=True))
+        log = os.path.join(tmpdir, "routing.log")
+        with open(log, "w") as f:
+            for _ in range(5000):
+                f.write("[INFO GRT] overflow on met2 congestion\n")
+            for _ in range(200):
+                f.write("[WARN] antenna violation\n")
+        summary = vlsi_tools.parse_eda_log_summary(log, kind="routing", top_n=10)
+        self.assertEqual(summary.get("total_lines"), 5200)
+        self.assertTrue(summary.get("top_issues"))
+        self.assertIn("routing_congestion", summary.get("counts", {}))
+
+    def test_multi_corner_sta_parse(self):
+        tmp = tempfile.mkdtemp(prefix="tier1_sta_")
+        self.addCleanup(lambda: shutil.rmtree(tmp, ignore_errors=True))
+        original = vlsi_tools.OPENLANE_ROOT
+        vlsi_tools.OPENLANE_ROOT = tmp
+        self.addCleanup(lambda: setattr(vlsi_tools, "OPENLANE_ROOT", original))
+
+        base = os.path.join(tmp, "designs", "chip", "runs", "run1", "reports", "signoff")
+        for corner, setup, hold in [
+            ("26-mca", "5.20", "0.11"),
+            ("28-mca", "5.00", "0.09"),
+            ("30-mca", "4.90", "0.08"),
+        ]:
+            os.makedirs(os.path.join(base, corner), exist_ok=True)
+            path = os.path.join(base, corner, f"{corner}_sta.summary.rpt")
+            with open(path, "w") as f:
+                f.write(
+                    textwrap.dedent(
+                        f"""
+                        report_wns
+                        wns {setup}
+                        report_worst_slack -max (Setup)
+                        worst slack {setup}
+                        report_worst_slack -min (Hold)
+                        worst slack {hold}
+                        """
+                    )
+                )
+        sta = vlsi_tools.parse_sta_signoff("chip")
+        self.assertFalse(sta.get("error"))
+        self.assertEqual(3, len(sta.get("corners", [])))
+        self.assertAlmostEqual(4.90, sta.get("worst_setup"), places=2)
+        self.assertAlmostEqual(0.08, sta.get("worst_hold"), places=2)
+
+    def test_congestion_parser(self):
+        tmp = tempfile.mkdtemp(prefix="tier1_cong_")
+        self.addCleanup(lambda: shutil.rmtree(tmp, ignore_errors=True))
+        original = vlsi_tools.OPENLANE_ROOT
+        vlsi_tools.OPENLANE_ROOT = tmp
+        self.addCleanup(lambda: setattr(vlsi_tools, "OPENLANE_ROOT", original))
+
+        log_dir = os.path.join(tmp, "designs", "chip", "runs", "agentrun", "logs", "routing")
+        os.makedirs(log_dir, exist_ok=True)
+        log_path = os.path.join(log_dir, "19-global.log")
+        with open(log_path, "w") as f:
+            f.write("met1 8342 44 0.53% 0 / 0 / 0\n")
+            f.write("met2 8036 1580 19.66% 5 / 2 / 7\n")
+            f.write("Total 16378 1624 9.91% 5 / 2 / 7\n")
+        data = vlsi_tools.parse_congestion_metrics("chip")
+        self.assertAlmostEqual(9.91, data.get("total_usage_pct"), places=2)
+        self.assertEqual(7, data.get("total_overflow"))
+
+
+class OrchestratorSafetyTests(unittest.TestCase):
+    def test_failure_fingerprint_repetition(self):
+        orch = BuildOrchestrator(
+            name="fingerprint_demo",
+            desc="demo",
+            llm=None,
+            strict_gates=True,
+        )
+        first = orch._record_failure_fingerprint("same failure")
+        second = orch._record_failure_fingerprint("same failure")
+        self.assertFalse(first)
+        self.assertTrue(second)
+
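`_record_failure_fingerprint` itself lives in the orchestrator, outside this diff. One plausible shape for such repeat detection is hashing the normalized failure text and flagging duplicates; the class below is an illustrative sketch only, not the actual implementation:

```python
import hashlib

class FingerprintTracker:
    """Flag when the same failure message is seen more than once."""

    def __init__(self):
        self._seen = set()

    def record(self, message: str) -> bool:
        # normalize then hash, so trivially re-worded whitespace doesn't evade detection
        fp = hashlib.sha256(message.strip().encode()).hexdigest()
        repeated = fp in self._seen
        self._seen.add(fp)
        return repeated  # True means the repair loop is stuck on an identical failure

tracker = FingerprintTracker()
first = tracker.record("same failure")
second = tracker.record("same failure")
```

Fail-closed orchestration can use the `True` return to abort retries instead of burning iterations on an unchanging error.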
+    def test_hierarchy_auto_threshold(self):
+        orch = BuildOrchestrator(
+            name="hier_demo",
+            desc="demo",
+            llm=None,
+            hierarchical_mode="auto",
+        )
+        rtl = "\n".join([
+            "module top(input logic clk, output logic y); assign y = 1'b0; endmodule",
+            "module blk_a(input logic i, output logic o); assign o = i; endmodule",
+            "module blk_b(input logic i, output logic o); assign o = i; endmodule",
+        ] + ["// filler"] * 650)
+        orch._evaluate_hierarchy(rtl)
+        plan = orch.artifacts.get("hierarchy_plan", {})
+        self.assertTrue(plan.get("enabled"), msg=str(plan))
+
+    def test_benchmark_metrics_written_to_metircs(self):
+        import agentic.orchestrator as orch_mod
+
+        # Note: the benchmark exporter writes to a directory literally spelled
+        # "metircs"; this test intentionally mirrors that spelling.
+        tmp = tempfile.mkdtemp(prefix="tier1_metircs_")
+        self.addCleanup(lambda: shutil.rmtree(tmp, ignore_errors=True))
+        old_workspace = orch_mod.WORKSPACE_ROOT
+        orch_mod.WORKSPACE_ROOT = tmp
+        self.addCleanup(lambda: setattr(orch_mod, "WORKSPACE_ROOT", old_workspace))
+
+        orch = BuildOrchestrator(name="metric_chip", desc="demo", llm=None)
+        orch.state = orch.state.SUCCESS
+        orch.artifacts["signoff_result"] = "PASS"
+        orch.artifacts["metrics"] = {"chip_area_um2": 1234.5, "area": 321, "utilization": 42.0, "timing_tns": 0.0, "timing_wns": 0.1}
+        orch.artifacts["sta_signoff"] = {"worst_setup": 0.1, "worst_hold": 0.05}
+        orch.artifacts["power_signoff"] = {"total_power_w": 1e-3, "internal_power_w": 5e-4, "switching_power_w": 4e-4, "leakage_power_w": 1e-5, "irdrop_max_vpwr": 0.01, "irdrop_max_vgnd": 0.02}
+        orch.artifacts["signoff"] = {"drc_violations": 0, "lvs_errors": 0, "antenna_violations": 0}
+        orch.artifacts["coverage"] = {"line_pct": 90.0}
+        orch.artifacts["formal_result"] = "PASS"
+        orch.artifacts["lec_result"] = "PASS"
+        orch._save_industry_benchmark_metrics()
+
+        metircs_dir = os.path.join(tmp, "metircs", "metric_chip")
+        self.assertTrue(os.path.isdir(metircs_dir))
+        self.assertTrue(os.path.isfile(os.path.join(metircs_dir, "latest.json")))
+        self.assertTrue(os.path.isfile(os.path.join(metircs_dir, "latest.md")))
+
+
+if __name__ == "__main__":
+    unittest.main()