vxkyyy committed on
Commit f8d709b · 1 Parent(s): eb51568

feat: add skip coverage option to Design Studio and Human In Loop Build pages


- Introduced a new state variable `skipCoverage` in both Design Studio and Human In Loop Build components.
- Added a checkbox to toggle the `skipCoverage` option in the UI for both components.
- Updated the API request payload to include `skip_coverage` based on the new state.
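The payload change can be sketched as follows. The `skip_coverage` field name matches the `BuildRequest` model in `server/api.py`; the surrounding values are illustrative placeholders, not the actual frontend code.

```python
import json

# Illustrative build-request payload (field names follow the BuildRequest
# model in server/api.py; the values here are placeholders).
payload = {
    "design_name": "my_design",
    "description": "32-bit APB timer with interrupt",
    "skip_openlane": True,
    "skip_coverage": True,  # new flag introduced by this commit
}

body = json.dumps(payload)  # what the frontend would POST to the backend
```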

README.md CHANGED
@@ -1,421 +1,344 @@
1
- <div align="center">
2
 
3
- <br/>
4
 
5
- <!-- ──────────────── WORDMARK ──────────────── -->
6
- <h1>
7
- <img src="https://readme-typing-svg.demolab.com?font=Fira+Code&weight=700&size=42&pause=1000&color=C9643E&center=true&vCenter=true&width=600&lines=AgentIC" alt="AgentIC" />
8
- </h1>
9
 
10
- <h3><em>Describe a chip. Get silicon.</em></h3>
11
 
12
- <br/>
13
 
14
- <!-- ──────────────── BRAND BADGES ──────────────── -->
 
 
 
15
 
16
- ![](https://img.shields.io/badge/Autonomous%20Silicon%20Compiler-informational?style=for-the-badge&labelColor=1C1A17&color=C9643E)
17
- ![](https://img.shields.io/badge/All%205%20Core%20Modules%20Active-success?style=for-the-badge&labelColor=1C1A17&color=3A7856)
18
- ![](https://img.shields.io/badge/Fail--Closed%20by%20Default-critical?style=for-the-badge&labelColor=1C1A17&color=B83030)
19
 
20
- <br/>
21
 
22
- ![](https://img.shields.io/badge/Python-3.10%2B-C9643E?style=flat-square&logo=python&logoColor=white&labelColor=1C1A17)
23
- ![](https://img.shields.io/badge/PDK-Sky130%20%7C%20GF180-C9643E?style=flat-square&labelColor=1C1A17)
24
- ![](https://img.shields.io/badge/Formal%20Verification-SymbiYosys-C9643E?style=flat-square&labelColor=1C1A17)
25
- ![](https://img.shields.io/badge/Physical%20Flow-OpenLane-C9643E?style=flat-square&labelColor=1C1A17)
26
- ![](https://img.shields.io/badge/License-Proprietary-A67828?style=flat-square&labelColor=1C1A17)
27
 
28
- <br/><br/>
29
 
30
- <table>
31
- <tr>
32
- <td align="center" width="220"><b>Natural Language In ⟢</b></td>
33
- <td align="center" width="60">→</td>
34
- <td align="center" width="220"><b>⟢ Verified RTL + GDS Out</b></td>
35
- </tr>
36
- </table>
37
 
38
- <br/>
 
 
 
 
39
 
40
- </div>
41
 
42
- ---
43
 
44
- ## What is AgentIC?
45
 
46
- AgentIC is a **fully autonomous hardware compiler** that takes a plain-English description of a digital circuit and produces a complete, verified, physically implemented chip design — with no human in the loop unless you want one.
47
 
48
- It is not a code-generation copilot. It is not a template filler. It is an **end-to-end autonomous agent system** that reasons, verifies, debugs, repairs, and re-verifies until the design meets every quality gate — then hands you the GDS.
49
 
50
- > *"You wrote the spec. We wrote the chip."*
51
 
52
- ---
53
 
54
- ## The Problem It Solves
55
 
56
- Traditional hardware design carries several unavoidable costs:
 
 
 
 
 
 
 
 
 
 
 
57
 
58
- | Problem | Industry Reality | AgentIC's Answer |
59
- |---------|-----------------|-----------------|
60
- | **Iteration time** | Hours per RTL-to-sim cycle | Fully automated multi-stage pipeline |
61
- | **Silent bugs** | Weak checks ship bad silicon | Every gate is fail-closed — if it cannot prove correctness, it does not proceed |
62
- | **Expert bottleneck** | Needs senior RTL + verification + physical engineers | One prompt, autonomous resolution |
63
- | **Infinite churn** | Teams retry the same broken strategy | Loop budgets, loop-identity detection, and strategy pivots are baked in |
64
 
65
- ---
66
 
67
- <div align="center">
 
 
 
 
 
 
 
 
 
 
 
68
 
69
- ## Pipeline at a Glance
70
 
71
- </div>
72
 
73
- ```
74
- Your Prompt
75
-      │
76
-      ▼
77
- ┌──────────────────────────────────────────────────────────────┐
78
- │                       AGENTIC PIPELINE                       │
79
- │                                                              │
80
- │  ① Specification   → Structured design contract              │
81
- │  ② RTL Generation  → Architecture-aware Verilog              │
82
- │  ③ RTL Hardening   → Iterative syntax · lint · semantic fix  │
83
- │  ④ Verification    → Testbench compile · simulation          │
84
- │  ⑤ Formal Proof    → SVA assertions · SymbiYosys bounded MC  │
85
- │  ⑥ Coverage        → Profile-driven closure · anti-regression│
86
- │  ⑦ Regression      → Directed corner-case execution          │
87
- │  ⑧ Physical Flow   → Floorplan · Hardening · Convergence     │
88
- │  ⑨ Signoff         → DRC · LVS · STA · Power · IR · LEC      │
89
- │                                                              │
90
- │  SUCCESS ──────────────────── GDS + Reports                  │
91
- └──────────────────────────────────────────────────────────────┘
92
- ```
93
 
94
- Every transition between stages is gated. Nothing proceeds until the previous stage passes cleanly.
95
-
96
- ---
97
-
98
- ## Five Core Intelligence Modules
99
-
100
- AgentIC's reasoning layer is built on five proprietary modules — all active in the current production pipeline.
101
-
102
- <br/>
103
-
104
- <table>
105
- <thead>
106
- <tr>
107
- <th width="220">Module</th>
108
- <th>Capability</th>
109
- <th width="200">When It Activates</th>
110
- </tr>
111
- </thead>
112
- <tbody>
113
- <tr>
114
- <td><b>Specification Architect</b></td>
115
- <td>Converts ambiguous prose into a precise, structured design contract shared by all downstream agents — preventing conflicting interpretations before a single line of RTL is written</td>
116
- <td>Before RTL generation</td>
117
- </tr>
118
- <tr>
119
- <td><b>Iterative Reasoning Agent</b></td>
120
- <td>Applies a multi-step Think β†’ Act β†’ Observe loop to reason through RTL issues before committing any repair, reducing unnecessary edits and preserving design intent</td>
121
- <td>Every RTL repair cycle</td>
122
- </tr>
123
- <tr>
124
- <td><b>Waveform Intelligence</b></td>
125
- <td>Reads simulation waveforms and traces every incorrectly-driven output back to the exact RTL construct responsible — giving the repair layer deterministic evidence instead of inference</td>
126
- <td>On any simulation failure</td>
127
- </tr>
128
- <tr>
129
- <td><b>Formal Causal Debugger</b></td>
130
- <td>Builds a signal causality graph from the failing formal property, applies balanced for-and-against analysis, and returns the root-cause signal, source line, and a confidence score</td>
131
- <td>On any formal failure</td>
132
- </tr>
133
- <tr>
134
- <td><b>Self-Reflection Recovery</b></td>
135
- <td>Categorises physical implementation failures, reflects on convergence history, proposes corrective actions, applies them, and tracks whether metrics are improving or stagnating before retrying</td>
136
- <td>On any hardening failure</td>
137
- </tr>
138
- </tbody>
139
- </table>
140
-
141
- <br/>
142
-
143
- > **Intellectual Property Notice** — The internal algorithms, decision logic, prompt architecture, repair heuristics, scoring mechanisms, and module interfaces are proprietary and confidential. The table above describes *capabilities*, not *implementations*. No part of AgentIC's core reasoning design is disclosed in this document.
144
-
145
- ---
146
-
147
- ## Quality Architecture
148
-
149
- AgentIC is engineered around one principle: **trust nothing, verify everything.**
150
-
151
- ### Fail-Closed Gates
152
-
153
- Every stage either **passes** or **halts with a diagnosis**. There is no silent forwarding of a broken artifact.
154
 
155
- ```
156
- RTL FIX ──► [Syntax Gate] ──► [Lint Gate] ──► [Semantic Gate] ──► Next Stage
157
-                  │                 │                  │
158
-                 FAIL              FAIL               FAIL
159
-                  └─────────────────┴──────────────────┘
160
-                                    │
161
-                          Repair Loop (budgeted)
162
-                                    │
163
-                  [Loop-Identity Guard] ◄── prevents infinite churn
164
- ```
165
 
166
- ### Layered Autonomous Repair
167
 
168
- At every gate, the system applies repair in strict priority order:
 
 
 
169
 
170
- 1. **Deterministic pass** — machine-precise corrections with zero LLM involvement
171
- 2. **Reasoned pass** — multi-step agentic reasoning with hardware-specific tool use
172
- 3. **Generative pass** — LLM-guided surgical correction, minimum diff enforced
173
- 4. **Strategy pivot** — if all passes exhaust their budget, the build fails closed; it does not ship a broken artifact
174
 
175
- ### Loop Safety
176
 
177
- Every repair loop has a hard step budget. Identical repeated artifacts are detected and rejected before an LLM call is made. If the system cannot demonstrate measurable forward progress, it escalates to fail-closed rather than spinning.
178
 
179
- ---
180
 
181
- ## Multi-Agent Collaboration
182
 
183
- The generative layer is a collaborative crew of specialised AI agents, each scoped to a single responsibility and equipped with hardware-specific tooling:
184
 
185
- | Agent Role | Responsibility |
186
- |------------|---------------|
187
- | **RTL Designer** | Architecture-aware Verilog generation with pre-submission self-verification |
188
- | **Testbench Designer** | Simulator-safe testbenches with stimulus integrity guarantees |
189
- | **Failure Analyst** | Signal-level diagnosis — always cites specific RTL line, construct, and expected vs. actual values |
190
- | **Verification Engineer** | SVA assertion generation tuned for the open-source formal toolchain |
191
- | **Regression Architect** | Directed corner-case scenario planning |
192
- | **Physical Constraints** | SDC timing constraint synthesis |
193
- | **Documentation** | Design specification and IP declaration |
194
 
195
- All agents operate from the structured design contract established at the Specification stage — eliminating the divergence that occurs when different agents interpret the same prose spec independently.
196
 
197
- ---
198
 
199
- ## Human-in-the-Loop Web Interface
200
 
201
- <div align="center">
 
 
202
 
203
- ![](https://img.shields.io/badge/Real--time%20SSE%20Streaming-C9643E?style=for-the-badge&labelColor=1C1A17)
204
- ![](https://img.shields.io/badge/Approval%20Gates-C9643E?style=for-the-badge&labelColor=1C1A17)
205
- ![](https://img.shields.io/badge/Live%20Agent%20Reasoning%20View-C9643E?style=for-the-badge&labelColor=1C1A17)
206
 
207
- </div>
208
 
209
- AgentIC ships with a production-grade web application (React 19 + Vite frontend, FastAPI backend). Every pipeline event streams to the UI in real time. Three build modes are available:
210
 
211
- | Mode | Description |
212
- |------|-------------|
213
- | **Autonomous** | Zero human checkpoints — fully hands-off |
214
- | **Supervised** | Pause and approve at user-defined stages |
215
- | **Interactive** | Full step-by-step control with per-decision approval |
216
 
217
- Design artifacts, agent reasoning steps, signal traces, formal proof results, and physical convergence metrics are all visible in the interface as they are produced.
218
 
219
- ---
220
 
221
- ## Signoff Coverage
222
 
223
- | Domain | What Is Verified |
224
- |--------|-----------------|
225
- | **Functional** | Simulation correctness across all generated and directed stimuli |
226
- | **Formal** | Bounded model checking with SVA property coverage |
227
- | **Structural** | DRC — design rule compliance for the target PDK |
228
- | **Physical** | LVS — layout-versus-schematic equivalence |
229
- | **Timing** | Multi-corner STA — setup and hold across all paths |
230
- | **Power** | Peak and average power estimation |
231
- | **IR Drop** | Supply integrity validation |
232
- | **Equivalence** | LEC — RTL-to-GDS logical equivalence |
 
 
 
 
233
 
234
- ---
 
 
 
 
 
235
 
236
- ## Getting Started
237
 
238
  ### Prerequisites
239
 
240
- ```bash
241
- # Core
242
- Python 3.10+, Verilator 5.x, Icarus Verilog (iverilog + vvp)
 
 
 
 
 
243
 
244
- # Formal verification
245
- oss-cad-suite — provides sby, yosys, eqy
246
 
247
- # Physical implementation (optional, skip with --skip-openlane)
248
- OpenLane + Docker
 
 
249
  ```
250
 
251
- ### Install
252
 
253
  ```bash
254
  git clone https://github.com/Vickyrrrrrr/AgentIC.git
255
  cd AgentIC
256
- python3 -m venv .venv && source .venv/bin/activate
 
257
  pip install -r requirements.txt
258
  ```
259
 
260
- ### Configure `.env`
 
 
261
 
262
  ```bash
263
- # LLM backend — cloud
264
  NVIDIA_API_KEY="your-key-here"
265
-
266
- # LLM backend — local
267
  LLM_BASE_URL="http://localhost:11434"
268
-
269
- # Physical flow roots (only needed for --full-signoff builds)
270
  OPENLANE_ROOT="/path/to/OpenLane"
271
  PDK_ROOT="/path/to/pdk"
272
  ```
273
 
274
- ---
 
 
 
 
275
 
276
- ## CLI Reference
 
 
277
 
278
  ```bash
279
- # Functional verification only — fast, no physical tools needed
280
  python3 main.py build \
281
  --name my_design \
282
  --desc "32-bit APB timer with interrupt" \
283
  --skip-openlane
 
 
 
284
 
285
- # Full build through physical signoff
286
  python3 main.py build \
287
  --name my_design \
288
  --desc "32-bit APB timer with interrupt" \
289
  --full-signoff \
290
  --pdk-profile sky130
 
291
 
292
- # Exploration mode — relaxed gates
 
 
293
  python3 main.py build \
294
  --name my_design \
295
- --desc "32-bit APB timer with interrupt" \
296
  --skip-openlane \
297
- --no-strict-gates
298
  ```
299
 
300
- ### All build flags
301
-
302
- ```
303
- --strict-gates / --no-strict-gates Enforce all quality gates (default: strict)
304
- --skip-openlane Stop after formal/coverage signoff
305
- --pdk-profile {sky130, gf180} Target PDK (default: sky130)
306
- --full-signoff Run full DRC/LVS/STA/Power/LEC suite
307
- --max-retries N Per-stage LLM repair budget
308
- --min-coverage N Coverage closure threshold (%)
309
- --max-pivots N Physical flow strategy pivot limit
310
- --congestion-threshold FLOAT Routing congestion abort threshold
311
- --hierarchical {auto, off, on} Hierarchical flow mode
 
312
  ```
313
 
314
- ---
 
 
 
 
 
 
315
 
316
  ## Generated Artifacts
317
 
318
- Every completed build produces a full artifact set:
319
 
320
- ```
321
  designs/<name>/
322
├── src/
323
- │   ├── <name>.v              # Production RTL
324
- │   ├── <name>_tb.v           # Verified testbench
325
- │   ├── <name>_sva.sv         # SVA property suite
326
- │   └── <name>.sdc            # Timing constraints


327
├── formal/
328
- │   └── <name>.sby            # SymbiYosys config + results
329
- ├── <name>.eqy                # Equivalence check config
330
- ├── <name>_eco_patch.tcl      # ECO patch (when signoff required it)
331
- ├── config.tcl                # OpenLane configuration
332
- ├── macro_placement.tcl       # Floorplan
333
- └── ip_manifest.json          # IP declaration manifest
334
-
335
- metircs/<name>/
336
- ├── latest.json               # Machine-readable benchmark snapshot
337
- └── latest.md                 # Human-readable signoff report
338
  ```
339
 
340
- ---
341
 
342
- ## CI
343
 
344
- ```bash
345
- # PR gate — syntax + unit tests (fast, ~2 min)
346
- bash scripts/ci/smoke.sh
347
 
348
- # Nightly — full build + signoff
349
- bash scripts/ci/nightly_full.sh
350
  ```
351
 
352
- Workflow definition: `.github/workflows/ci.yml`
353
 
354
- ---
355
 
356
- ## Supported PDKs
357
 
358
- | PDK | Process Node | Status |
359
- |-----|-------------|--------|
360
- | SkyWater Sky130 | 130 nm | Production |
361
- | GlobalFoundries GF180MCU | 180 nm | Production |
362
 
363
- ---
 
 
 
364
 
365
- ## Practical Scope
366
 
367
- AgentIC is designed for **OSS PDK prototype tape-out and research-grade autonomous hardware design**. It is not a replacement for a certified commercial foundry sign-off flow. ECO and hierarchical flows produce concrete, functional artifacts but are not tuned for production process corners.
 
368
 
369
- ---
370
 
371
- <div align="center">
372
 
373
- ## Design Philosophy
374
 
375
- <br/>
 
 
 
376
 
377
- > *The system is only as trustworthy as its most lenient gate.*
378
- >
379
- > Every component is designed to fail loudly, repair precisely,
380
- > and proceed only when correctness is demonstrated — not assumed.
381
 
382
- <br/>
383
 
384
- | Principle | What It Means in Practice |
385
- |-----------|--------------------------|
386
- | **Fail closed** | No stage silently degrades quality |
387
- | **Minimum diff** | Repairs change the least possible — intent is preserved |
388
- | **Bounded loops** | Every retry has a hard budget |
389
- | **Determinism first** | Machine-precise fixes are always attempted before LLM fixes |
390
- | **Evidence-driven** | Every diagnosis cites signal names and line numbers — never guesses |
391
 
392
- </div>
393
 
394
- ---
395
 
396
  ## License
397
 
398
- **Proprietary and Confidential.**
399
 
400
Copyright © 2026 Vicky Nishad. All rights reserved.
401
 
402
- This software, its architecture, algorithms, agent designs, internal logic, prompt methodologies, repair heuristics, and all associated intellectual property are the exclusive property of the author. No part of this system — in whole or in part — may be reproduced, decompiled, reverse-engineered, distributed, sublicensed, or used in any derivative work without explicit written permission from the copyright holder.
403
-
404
- Unauthorised use is a violation of applicable intellectual property law.
405
-
406
- ---
407
-
408
- <div align="center">
409
-
410
- <br/>
411
-
412
- ![](https://img.shields.io/badge/Built%20with%20intention-C9643E?style=for-the-badge&labelColor=1C1A17)
413
- ![](https://img.shields.io/badge/Designed%20to%20last-C9643E?style=for-the-badge&labelColor=1C1A17)
414
-
415
- <br/>
416
-
417
- *AgentIC — from words to wafers.*
418
-
419
- <br/>
420
-
421
- </div>
 
1
+ # AgentIC
2
 
3
+ AgentIC is an autonomous digital design pipeline that takes a natural-language chip specification and drives it through RTL generation, verification, formal checks, coverage, regression, and optional physical implementation. The system is built as a gated flow around standard EDA tools and AI-assisted generation and debugging.
4
 
5
+ This README is written for both technical readers and non-specialists. It explains what the system does, what the build stages mean, and what the repository contains, without exposing internal proprietary logic.
 
 
 
6
 
7
+ ## What AgentIC Actually Does
8
 
9
+ At a high level, AgentIC performs four jobs:
10
 
11
+ 1. Turn an ambiguous prose specification into a structured hardware task.
12
+ 2. Generate candidate RTL, testbenches, properties, and constraints.
13
+ 3. Push those artifacts through quality gates with controlled repair loops.
14
+ 4. Stop only when the design either passes the configured flow or fails with a concrete diagnosis.
15
 
16
+ The system is not a simple code generator. It is a build-and-check pipeline that keeps testing what it generates.
 
 
17
 
18
+ ## System Model
19
 
20
+ AgentIC is organized around three layers:
 
 
 
 
21
 
22
+ ### 1. Orchestration Layer
23
 
24
+ The orchestrator owns stage transitions, retry budgets, artifact routing, and failure handling. It is the source of truth for:
 
 
 
 
 
 
25
 
26
+ - which stage runs next
27
+ - what inputs each stage is allowed to consume
28
+ - what counts as pass, fail, skip, or tool error
29
+ - how many retries are allowed
30
+ - when the build must halt instead of spinning
31
 
32
+ Core implementation: [orchestrator.py](src/agentic/orchestrator.py)
33
 
34
+ ### 2. Tool Layer
35
 
36
+ EDA tool wrappers execute Verilator, Icarus Verilog, Yosys, and SymbiYosys. Intermediate work is staged in temporary directories; human-relevant artifacts remain in the design tree.
37
 
38
+ Core implementation: [vlsi_tools.py](src/agentic/tools/vlsi_tools.py)
39
 
40
+ ### 3. Agent Layer
41
 
42
+ Specialist AI components assist specific tasks such as RTL generation, testbench creation, failure analysis, and formal-property generation. These components do not bypass the tool checks; their output must still pass the relevant stage gates.
43
 
44
+ ## End-to-End Pipeline
45
 
46
+ The full build pipeline is:
47
 
48
+ ```text
49
+ Specification
50
+ -> RTL_GEN
51
+ -> RTL_FIX
52
+ -> VERIFICATION
53
+ -> FORMAL_VERIFY
54
+ -> COVERAGE_CHECK
55
+ -> REGRESSION
56
+ -> HARDENING
57
+ -> CONVERGENCE
58
+ -> SIGNOFF
59
+ ```
60
 
61
+ Each stage is gated. A stage can only advance if its required checks pass. There is no silent forwarding of a broken artifact.
 
 
 
 
 
62
 
63
+ ### Stage Intent
64
 
65
+ | Stage | Purpose | Typical Outputs |
66
+ |------|---------|-----------------|
67
+ | `SPECIFICATION` | Normalize the user prompt into a design contract | structured prompt context |
68
+ | `RTL_GEN` | Generate initial RTL and supporting files | `<name>.v`, supporting metadata |
69
+ | `RTL_FIX` | Enforce syntax, lint, and semantic rigor; repair failures | corrected RTL, diagnostics |
70
+ | `VERIFICATION` | Build TB, run simulation, analyze functional failures | `<name>_tb.v`, sim logs, VCD |
71
+ | `FORMAL_VERIFY` | Generate SVA, preflight it, run formal checks | `<name>_sva.sv`, formal summaries |
72
+ | `COVERAGE_CHECK` | Improve or validate coverage after functional success | coverage metrics, coverage JSON |
73
+ | `REGRESSION` | Run directed corner-case validation | regression results |
74
+ | `HARDENING` | Invoke OpenLane flow if enabled | layout flow outputs |
75
+ | `CONVERGENCE` | Recover from physical-flow failures | updated configs or constraints |
76
+ | `SIGNOFF` | Run DRC/LVS/STA/power/equivalence style checks | signoff reports |
77
 
78
+ ## Reliability Model
79
 
80
+ AgentIC is designed around one rule: every stage must provide enough evidence to justify the next stage.
81
 
82
+ ### Fail-Closed By Default
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
83
 
84
+ If syntax, lint, simulation, formal checks, or physical checks fail, the build does not continue unless a replacement artifact passes the relevant gate.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
85
 
86
+ ### Deterministic Before Generative
 
 
 
 
 
 
 
 
 
87
 
88
+ Repair is layered:
89
 
90
+ 1. deterministic cleanup or mechanical fix
91
+ 2. targeted analysis
92
+ 3. constrained regeneration
93
+ 4. strategy pivot or fail-closed halt
94
 
95
+ This ordering matters. The system tries to preserve design intent and minimize uncontrolled rewrites.
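A minimal sketch of this layered ordering, purely illustrative; the real repair passes and budgets are proprietary:

```python
def layered_repair(artifact, passes, gate, budget=2):
    """Apply repair passes in priority order, each bounded by a retry budget.

    `passes` is ordered deterministic-first; `gate` returns True when the
    artifact passes the stage check. Halts fail-closed once every pass is
    exhausted. Toy code, not the project's actual repair engine.
    """
    for repair in passes:
        attempts = 0
        while not gate(artifact) and attempts < budget:
            artifact = repair(artifact)
            attempts += 1
        if gate(artifact):
            return artifact, "pass"
    return artifact, "fail-closed"

# Toy example: a deterministic whitespace fix, then a generative-style edit
# that adds the missing semicolon.
fixed, status = layered_repair(
    "assign y = a & b ",
    [str.rstrip, lambda s: s + ";"],
    lambda s: s.endswith(";"),
)
```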
 
 
 
96
 
97
+ ### Budgeted Loops
98
 
99
+ Each repair path is budgeted. The system tracks repeated failures and retry exhaustion to avoid infinite churn.
100
 
101
+ ## Core Reasoning Components
102
 
103
+ AgentIC uses multiple specialized components rather than one undifferentiated "agent".
104
 
105
+ These components are used for tasks such as:
106
 
107
+ - RTL generation
108
+ - testbench generation
109
+ - SVA generation
110
+ - constraint generation
111
+ - documentation
 
 
 
 
112
 
113
+ These outputs are still stage-gated. AgentIC does not assume that generated code is valid just because an AI model produced it.
114
 
115
+ ## How Reliability Is Managed
116
 
117
+ Reliability work in AgentIC focuses on three practical areas:
118
 
119
+ - clear stage boundaries
120
+ - explicit artifact passing between stages
121
+ - replayable testing from benchmark failures
122
 
123
+ The goal is to make failures diagnosable and repeatable, not to hide them behind optimistic retries.
 
 
124
 
125
+ ## Toolchain
126
 
127
+ AgentIC is built around open-source digital design tools:
128
 
129
+ - Verilator
130
+ - Icarus Verilog (`iverilog`, `vvp`)
131
+ - Yosys
132
+ - SymbiYosys (`sby`)
133
+ - OpenLane for physical implementation
134
 
135
+ Formal and physical-flow stages are optional depending on the selected build mode and installed environment.
136
 
137
+ ## Repository Structure
138
 
139
+ Top-level layout:
140
 
141
+ ```text
142
+ AgentIC/
143
+ β”œβ”€β”€ src/agentic/ # orchestrator, tool wrappers, agents, CLI
144
+ β”œβ”€β”€ tests/ # unit and reliability tests
145
+ β”œβ”€β”€ benchmark/ # benchmark runner and reports
146
+ β”œβ”€β”€ docs/ # supporting documentation
147
+ β”œβ”€β”€ web/ # frontend
148
+ β”œβ”€β”€ server/ # backend/service layer
149
+ β”œβ”€β”€ scripts/ # helper and CI scripts
150
+ β”œβ”€β”€ artifacts/ # generated runtime artifacts
151
+ └── metircs/ # benchmark and design metrics
152
+ ```
153
+
154
+ Key files:
155
 
156
+ - [README.md](README.md)
157
+ - [main.py](main.py)
158
+ - [cli.py](src/agentic/cli.py)
159
+ - [orchestrator.py](src/agentic/orchestrator.py)
160
+ - [vlsi_tools.py](src/agentic/tools/vlsi_tools.py)
161
+ - [USER_GUIDE.md](docs/USER_GUIDE.md)
162
 
163
+ ## Installation
164
 
165
  ### Prerequisites
166
 
167
+ Minimum verification flow:
168
+
169
+ ```text
170
+ Python 3.10+
171
+ Verilator 5.x
172
+ Icarus Verilog
173
+ Yosys / SymbiYosys via oss-cad-suite
174
+ ```
175
 
176
+ Optional physical flow:
 
177
 
178
+ ```text
179
+ OpenLane
180
+ Docker
181
+ Installed PDK (for example sky130 or gf180)
182
  ```
183
 
184
+ ### Setup
185
 
186
  ```bash
187
  git clone https://github.com/Vickyrrrrrr/AgentIC.git
188
  cd AgentIC
189
+ python3 -m venv .venv
190
+ source .venv/bin/activate
191
  pip install -r requirements.txt
192
  ```
193
 
194
+ ### Environment
195
+
196
+ Typical `.env` values:
197
 
198
  ```bash
 
199
  NVIDIA_API_KEY="your-key-here"
 
 
200
  LLM_BASE_URL="http://localhost:11434"
 
 
201
  OPENLANE_ROOT="/path/to/OpenLane"
202
  PDK_ROOT="/path/to/pdk"
203
  ```
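In code, the backend would typically resolve these values from the environment, for example as below. The variable names come from the block above; the fallback values and the `config` structure are assumptions, not the project's documented defaults.

```python
import os

# Resolve configuration from the environment (sketch only).
config = {
    "api_key": os.environ.get("NVIDIA_API_KEY"),             # cloud backend
    "llm_base_url": os.environ.get("LLM_BASE_URL",
                                   "http://localhost:11434"),  # local backend
    "openlane_root": os.environ.get("OPENLANE_ROOT"),        # physical flow only
    "pdk_root": os.environ.get("PDK_ROOT"),
}

# The physical flow can only run when both roots are configured.
physical_flow_ready = bool(config["openlane_root"] and config["pdk_root"])
```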
204
 
205
+ See [USER_GUIDE.md](docs/USER_GUIDE.md) for model backend selection details.
206
+
207
+ ## CLI Usage
208
+
209
+ All commands are invoked through `main.py`.
210
 
211
+ ### Build
212
+
213
+ Fast RTL and verification flow:
214
 
215
  ```bash
 
216
  python3 main.py build \
217
  --name my_design \
218
  --desc "32-bit APB timer with interrupt" \
219
  --skip-openlane
220
+ ```
221
+
222
+ Full flow with signoff-oriented stages:
223
 
224
+ ```bash
225
  python3 main.py build \
226
  --name my_design \
227
  --desc "32-bit APB timer with interrupt" \
228
  --full-signoff \
229
  --pdk-profile sky130
230
+ ```
231
 
232
+ Skip coverage while still continuing from formal to regression:
233
+
234
+ ```bash
235
  python3 main.py build \
236
  --name my_design \
237
+ --desc "UART transmitter with programmable baud divisor" \
238
  --skip-openlane \
239
+ --skip-coverage
240
  ```
241
 
242
+ Important build flags:
243
+
244
+ ```text
245
+ --skip-openlane
246
+ --skip-coverage
247
+ --full-signoff
248
+ --strict-gates / --no-strict-gates
249
+ --min-coverage
250
+ --max-retries
251
+ --max-pivots
252
+ --pdk-profile {sky130,gf180}
253
+ --hierarchical {auto,off,on}
254
+ --congestion-threshold
255
  ```
256
 
257
+ ### Other Commands
258
+
259
+ ```bash
260
+ python3 main.py simulate --name <design>
261
+ python3 main.py harden --name <design>
262
+ python3 main.py verify <design>
263
+ ```
264
 
265
  ## Generated Artifacts
266
 
267
+ A typical design directory contains:
268
 
269
+ ```text
270
  designs/<name>/
271
├── src/
272
+ │   ├── <name>.v
273
+ │   ├── <name>_tb.v
274
+ │   ├── <name>_sva.sv
275
+ │   ├── *_formal_result.json
276
+ │   ├── *_coverage_result.json
277
+ │   └── *.vcd
278
├── formal/
279
+ ├── config.tcl
280
+ ├── macro_placement.tcl
281
+ └── ip_manifest.json
 
 
 
 
 
 
 
282
  ```
283
 
284
+ Design-local `src/` is intended to keep permanent, human-useful artifacts. Tool intermediates such as Verilator build trees, compiled simulators, `.sby` working directories, coverage work products, and Yosys scratch outputs are staged in temporary directories and cleaned automatically.
285
 
286
+ ## Benchmarking
287
 
288
+ The repository includes a benchmark runner for multi-design evaluation:
 
 
289
 
290
+ ```bash
291
+ python3 benchmark/run_benchmark.py --design counter8 --attempts 1 --skip-openlane
292
  ```
293
 
294
+ Generated summaries live under [benchmark/results](benchmark/results).
295
 
296
+ Benchmarking matters here because repeated failures usually point to pipeline issues, validation gaps, or repair-routing problems. Those failures are used to improve the system over time.
297
 
298
+ ## Web Interface
299
 
300
+ AgentIC includes a frontend and backend for interactive execution and live streaming of pipeline events. The UI is useful when you want:
 
 
 
301
 
302
+ - stage-by-stage visibility
303
+ - human approval gates
304
+ - real-time log streaming
305
+ - artifact inspection during a build
306
 
307
+ Frontend and service code:
308
 
309
+ - [web](web)
310
+ - [server](server)
311
 
312
+ ## Scope And Limits
313
 
314
+ AgentIC is aimed at autonomous digital design exploration, verification-heavy iteration, and open-source PDK implementation flows. It is not yet a replacement for a certified commercial signoff stack, nor for a production ASIC team with a foundry-qualified internal methodology.
315
 
316
+ Practical implications:
317
 
318
+ - benchmark pass rate still matters more than demo quality
319
+ - hierarchical repair is harder than single-module repair and is treated explicitly
320
+ - formal and coverage stages are valuable, but must be routed correctly to be useful
321
+ - "industry-grade" here means constrained, diagnosable, replayable, and fail-closed
322
 
323
+ ## Design Principles
 
 
 
324
 
325
+ The project is built around a small number of non-negotiable rules:
326
 
327
+ - fail closed
328
+ - prefer deterministic fixes before LLM fixes
329
+ - preserve design intent with minimum-diff repair
330
+ - validate every generated artifact before downstream use
331
+ - treat routing bugs as seriously as model bugs
332
+ - turn observed benchmark failures into regression tests
 
333
 
334
+ ## IP Note
335
 
336
+ This README describes capabilities and workflow at a high level. It does not document the internal prompt architecture, private heuristics, decision policies, or proprietary reasoning logic used inside the system.
337
 
338
  ## License
339
 
340
+ Proprietary and Confidential.
341
 
342
Copyright © 2026 Vicky Nishad. All rights reserved.
343
 
344
+ This repository, including its architecture, algorithms, prompts, agent logic, repair heuristics, and associated intellectual property, may not be reproduced, distributed, reverse-engineered, or used in derivative works without explicit written permission.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
server/api.py CHANGED
@@ -199,6 +199,7 @@ class BuildRequest(BaseModel):
199
  design_name: str
200
  description: str
201
  skip_openlane: bool = False
 
202
  full_signoff: bool = False
203
  max_retries: int = 5
204
  show_thinking: bool = False
@@ -309,6 +310,7 @@ def _run_agentic_build(job_id: str, req: BuildRequest):
309
  max_retries=req.max_retries,
310
  verbose=req.show_thinking,
311
  skip_openlane=req.skip_openlane,
 
312
  full_signoff=req.full_signoff,
313
  min_coverage=req.min_coverage,
314
  strict_gates=req.strict_gates,
@@ -554,15 +556,12 @@ def _run_with_approval_gates(job_id: str, orchestrator, req, llm):
554
 
555
  if not approved:
556
  # User rejected — loop back to retry the CURRENT state
557
- # (which is now new_state after transition)
558
- # Actually, set state back to the stage that just completed so it retries
559
  _emit_agent_thought(job_id, "Orchestrator", "decision",
560
  f"Stage {completed_stage} rejected by user. Retrying...",
561
  new_state.name)
562
- # The state already transitioned. If rejected, we need to figure out
563
- # what to do. For most stages, retrying means going back.
564
- # However, the rejection feedback was already stored and will be
565
- # picked up at the top of the next iteration.
566
  continue
567
  else:
568
  # State didn't change β€” this can happen for retry loops within a stage
@@ -809,6 +808,7 @@ def get_build_options_contract():
809
  {"key": "strict_gates", "type": "boolean", "default": True, "description": "Enable strict gate enforcement with bounded self-healing."},
810
  {"key": "full_signoff", "type": "boolean", "default": False, "description": "Run full physical signoff checks when available."},
811
  {"key": "skip_openlane", "type": "boolean", "default": False, "description": "Skip physical implementation stages for faster RTL-only iteration."},
 
812
  {"key": "max_retries", "type": "int", "default": 5, "min": 1, "max": 12, "description": "Max repair retries per stage."},
813
  ],
814
  },
 
199
  design_name: str
200
  description: str
201
  skip_openlane: bool = False
202
+ skip_coverage: bool = False
203
  full_signoff: bool = False
204
  max_retries: int = 5
205
  show_thinking: bool = False
 
310
  max_retries=req.max_retries,
311
  verbose=req.show_thinking,
312
  skip_openlane=req.skip_openlane,
313
+ skip_coverage=req.skip_coverage,
314
  full_signoff=req.full_signoff,
315
  min_coverage=req.min_coverage,
316
  strict_gates=req.strict_gates,
 
556
 
557
  if not approved:
558
  # User rejected β€” loop back to retry the CURRENT state
559
+ # Reset state back to the completed stage so the next loop iteration
560
+ # actually reruns it with the stored rejection feedback.
561
  _emit_agent_thought(job_id, "Orchestrator", "decision",
562
  f"Stage {completed_stage} rejected by user. Retrying...",
563
  new_state.name)
564
+ orchestrator.state = prev_state
 
 
 
565
  continue
566
  else:
567
  # State didn't change β€” this can happen for retry loops within a stage
 
808
  {"key": "strict_gates", "type": "boolean", "default": True, "description": "Enable strict gate enforcement with bounded self-healing."},
809
  {"key": "full_signoff", "type": "boolean", "default": False, "description": "Run full physical signoff checks when available."},
810
  {"key": "skip_openlane", "type": "boolean", "default": False, "description": "Skip physical implementation stages for faster RTL-only iteration."},
811
+ {"key": "skip_coverage", "type": "boolean", "default": False, "description": "Skip the coverage stage and continue from formal verification to regression."},
812
  {"key": "max_retries", "type": "int", "default": 5, "min": 1, "max": 12, "description": "Max repair retries per stage."},
813
  ],
814
  },
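The new `skip_coverage` field travels in the same JSON body as the other build options in `BuildRequest`. A minimal stdlib-only sketch of the payload a client would post (the field names and defaults come from the diff above; the helper name and any endpoint path a client would target are hypothetical):

```python
import json

# Defaults mirror the BuildRequest model in server/api.py;
# make_build_payload is an illustrative helper, not part of the project.
def make_build_payload(design_name, description, **overrides):
    payload = {
        "design_name": design_name,
        "description": description,
        "skip_openlane": False,
        "skip_coverage": False,  # field introduced by this commit
        "full_signoff": False,
        "max_retries": 5,
        "show_thinking": False,
    }
    payload.update(overrides)
    return payload

# Request body for an RTL-focused run that bypasses the coverage stage.
body = json.dumps(make_build_payload(
    "fifo8", "8-entry synchronous FIFO", skip_coverage=True))
print(body)
```

Because Pydantic fills unspecified fields from their defaults, a client may also omit `skip_coverage` entirely and get the old behavior.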
src/agentic/cli.py CHANGED
@@ -438,6 +438,7 @@ def build(
     desc: str = typer.Option(..., "--desc", "-d", help="Natural language description"),
     max_retries: int = typer.Option(5, "--max-retries", "-r", min=0, help="Max auto-fix retries for RTL/TB/sim failures"),
     skip_openlane: bool = typer.Option(False, "--skip-openlane", help="Stop after simulation (no RTL→GDSII hardening)"),
+    skip_coverage: bool = typer.Option(False, "--skip-coverage", help="Bypass COVERAGE_CHECK and continue from formal verification to regression"),
     show_thinking: bool = typer.Option(False, "--show-thinking", help="Print DeepSeek <think> reasoning for each generation/fix step"),
     full_signoff: bool = typer.Option(False, "--full-signoff", help="Run full industry signoff (formal + coverage + regression + DRC/LVS)"),
     min_coverage: float = typer.Option(80.0, "--min-coverage", help="Minimum line coverage percentage to pass verification"),
@@ -496,6 +497,7 @@ def build(
     max_retries=max_retries,
     verbose=show_thinking,
     skip_openlane=skip_openlane,
+    skip_coverage=skip_coverage,
     full_signoff=full_signoff,
     min_coverage=min_coverage,
     strict_gates=strict_gates,
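The `--skip-coverage` flag is a plain boolean option that defaults to off and is forwarded unchanged into the orchestrator constructor. As a stand-in illustration of that flag semantics using only the standard library (the real CLI uses Typer, so this argparse parser is not the project's actual interface):

```python
import argparse

# Illustrative argparse equivalent of the typer.Option wiring above;
# "agentic-build" and this parser are assumptions, not the shipped CLI.
parser = argparse.ArgumentParser(prog="agentic-build")
parser.add_argument("--desc", "-d", required=True,
                    help="Natural language description")
parser.add_argument("--skip-coverage", action="store_true",
                    help="Bypass COVERAGE_CHECK and continue from formal "
                         "verification to regression")

# Passing the flag flips the boolean; omitting it keeps the False default.
args = parser.parse_args(["--desc", "8-bit counter", "--skip-coverage"])
print(args.skip_coverage)  # → True
```

As with `--skip-openlane`, the flag only changes routing between pipeline stages; it does not alter the coverage thresholds used when the stage does run.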
src/agentic/orchestrator.py CHANGED
@@ -10,7 +10,7 @@ import difflib
  import subprocess
  import threading
  from dataclasses import dataclass, asdict
- from typing import Optional, Dict, Any, List
  from rich.console import Console
  from rich.panel import Panel
  from crewai import Agent, Task, Crew, LLM
@@ -34,6 +34,18 @@ from .agents.verifier import get_verification_agent, get_error_analyst_agent, ge
  from .agents.doc_agent import get_doc_agent
  from .agents.sdc_agent import get_sdc_agent
  from .core import ArchitectModule, SelfReflectPipeline, ReActAgent, WaveformExpertModule, DeepDebuggerModule
  from .tools.vlsi_tools import (
      write_verilog,
      run_syntax_check,
@@ -157,6 +169,7 @@ class BuildOrchestrator:
      max_retries: int = 5,
      verbose: bool = True,
      skip_openlane: bool = False,
      full_signoff: bool = False,
      min_coverage: float = 80.0,
      strict_gates: bool = True,
@@ -182,6 +195,7 @@ class BuildOrchestrator:
      self.max_retries = max_retries
      self.verbose = verbose
      self.skip_openlane = skip_openlane
      self.full_signoff = full_signoff
      self.min_coverage = min_coverage
      self.strict_gates = strict_gates
@@ -230,6 +244,14 @@ class BuildOrchestrator:
      self.tb_failure_fingerprint_history: Dict[str, int] = {}
      self.tb_recovery_counts: Dict[str, int] = {}
      self.artifacts: Dict[str, Any] = {}  # Store paths to gathered files
      self.history: List[Dict[str, Any]] = []  # Log of state transitions and errors
      self.errors: List[str] = []  # List of error messages
@@ -283,6 +305,7 @@ class BuildOrchestrator:
      self.log(f"Transitioning: {self.state.name} -> {new_state.name}", refined=True)
      self.state = new_state
      if not preserve_retries:
          self.state_retry_counts[new_state.name] = 0
      # Emit a dedicated transition event for the web UI checkpoint timeline
      if self.event_sink is not None:
@@ -348,6 +371,308 @@ class BuildOrchestrator:
      fp = hashlib.sha256(base.encode("utf-8", errors="ignore")).hexdigest()
      self.failure_fingerprint_history.pop(fp, None)
      def _build_llm_context(self, include_rtl: bool = True, max_rtl_chars: int = 15000) -> str:
          """Build cumulative context string for LLM calls.

@@ -823,6 +1148,97 @@ SPECIFICATION SECTIONS (Markdown):
          )
          return ports

      def _tb_gate_strict_enforced(self) -> bool:
          return self.strict_gates or self.tb_gate_mode == "strict"

@@ -1089,7 +1505,14 @@ SPECIFICATION SECTIONS (Markdown):
          return

      if cycle == 2:
-         self.artifacts["tb_regen_context"] = json.dumps(report, indent=2, default=str)[:5000]
          tb_path = self.artifacts.get("tb_path")
          if tb_path and os.path.exists(tb_path):
              try:
@@ -1763,41 +2186,69 @@ ALWAYS return the COMPLETE code in ```verilog``` fences.
      self.logger.info(f"SEMANTIC RIGOR: {sem_report}")
      if not sem_ok:
          if self.strict_gates:
-             # --- Mechanical width auto-fix (no LLM) ---
-             self.log("Semantic rigor gate failed. Attempting mechanical width auto-fix.", refined=True)
-             fix_ok, fix_report = auto_fix_width_warnings(path)
-             self.logger.info(f"WIDTH AUTO-FIX: fixed={fix_report.get('fixed_count', 0)}, "
-                              f"remaining={fix_report.get('remaining_count', 0)}")
-             if fix_ok:
-                 self.log(f"Width auto-fix resolved all {fix_report['fixed_count']} warnings.", refined=True)
-                 # Re-read the patched RTL into artifacts
-                 with open(path, 'r') as f:
-                     self.artifacts['rtl_code'] = f.read()
-                 # Loop back to re-check syntax/lint on the patched file
-                 return
-             elif fix_report.get("fixed_count", 0) > 0:
-                 self.log(f"Width auto-fix resolved {fix_report['fixed_count']} warnings; "
-                          f"{fix_report['remaining_count']} remain. Re-checking.", refined=True)
-                 with open(path, 'r') as f:
-                     self.artifacts['rtl_code'] = f.read()
-                 # Loop back — the remaining warnings may resolve after re-lint
-                 return
-             # Post-processor couldn't fix anything — fall through to LLM
-             self.log("Mechanical auto-fix could not resolve width warnings. Routing to LLM fixer.", refined=True)
-             # If the post-processor gathered rich context for unfixable warnings,
-             # build a detailed prompt giving the LLM everything it needs.
-             unfixable = fix_report.get("unfixable_context", [])
-             if unfixable:
-                 errors = self._format_unfixable_width_errors(unfixable)
-             else:
              errors = self._format_semantic_rigor_errors(sem_report)
          else:
              self.log("Semantic rigor warnings detected (non-blocking).", refined=True)
              self.artifacts["semantic_report"] = sem_report
              self.transition(BuildState.VERIFICATION)
              return
      else:
          self.artifacts["semantic_report"] = sem_report
          self.transition(BuildState.VERIFICATION)
          return
@@ -1868,15 +2319,31 @@ ALWAYS return the COMPLETE code in ```verilog``` fences.
          ),
          context=_react_context,
      )
-     if _react_trace.success and _react_trace.final_answer:
-         _vlog = re.search(r'```verilog\s*(.*?)```', _react_trace.final_answer, re.DOTALL)
-         if _vlog:
-             _react_fixed_code = _vlog.group(1).strip()
-             self.logger.info(
-                 f"[ReAct] RTL fix done in {_react_trace.total_steps} steps "
-                 f"({_react_trace.total_duration_s:.1f}s)"
-             )
      if not _react_fixed_code:
          self.logger.info(
              f"[ReAct] No valid code produced "
              f"(success={_react_trace.success}, steps={_react_trace.total_steps}). "
@@ -1939,6 +2406,11 @@ You explain what you changed and why.""",
      new_code = str(result)
      # --- Universal code output validation (RTL fix) ---
      if not validate_llm_code_output(new_code):
          self.log("RTL fix returned prose instead of code. Retrying once.", refined=True)
          self.logger.warning(f"RTL FIX VALIDATION FAIL (prose detected):\n{new_code[:500]}")
          new_code = str(Crew(agents=[fixer], tasks=[task]).kickoff())
@@ -1956,6 +2428,20 @@ You explain what you changed and why.""",
          return

      self.logger.info(f"FIXED RTL:\n{new_code}")

      # --- Inner retry loop for LLM parse errors ---
      # If write_verilog fails (LLM didn't output valid code), re-prompt immediately
@@ -2021,10 +2507,11 @@ Original errors to fix:
      # 1. Generate Testbench (Only if missing)
      # We reuse existing TB to ensure consistent verification targets
      tb_exists = 'tb_path' in self.artifacts and os.path.exists(self.artifacts['tb_path'])

      if not tb_exists:
          # Check if we have a golden testbench from template matching
-         if self.artifacts.get('golden_tb'):
              self.log("Using Golden Reference Testbench (pre-verified).", refined=True)
              tb_code = self.artifacts['golden_tb']
              # Replace template module name with actual design name
@@ -2040,7 +2527,6 @@ Original errors to fix:
          tb_agent = get_testbench_agent(self.llm, f"Verify {self.name}", verbose=self.verbose, strategy=self.strategy.name)

          tb_strategy_prompt = self._get_tb_strategy_prompt()
-         regen_context = self.artifacts.pop("tb_regen_context", "")

          # --- Extract module port signature from RTL ---
          # This prevents the most common TB failure: port name mismatches
@@ -2116,6 +2602,11 @@ Before returning any testbench code, mentally compile it with strict SystemVeril
          )
          # --- Universal code output validation (TB gen) ---
          if not validate_llm_code_output(tb_code):
              self.log("TB generation returned prose instead of code. Retrying once.", refined=True)
              self.logger.warning(f"TB VALIDATION FAIL (prose detected):\n{tb_code[:500]}")
              tb_code = self._kickoff_with_timeout(
@@ -2131,6 +2622,20 @@ Before returning any testbench code, mentally compile it with strict SystemVeril
          if "module" not in tb_code or "endmodule" not in tb_code:
              self.log("TB generation returned invalid code. Using deterministic fallback TB.", refined=True)
              tb_code = self._deterministic_tb_fallback(self.artifacts.get("rtl_code", ""))
          self.logger.info(f"GENERATED TESTBENCH:\n{tb_code}")

          tb_path = write_verilog(self.name, tb_code, is_testbench=True)
@@ -2143,6 +2648,15 @@ Before returning any testbench code, mentally compile it with strict SystemVeril
              self.state = BuildState.FAIL
              return
          self.artifacts['tb_path'] = tb_path
          self._clear_tb_fingerprints()  # New TB → fresh gate attempts
      else:
          self.log(f"Verifying with existing Testbench (Attempt {self.retry_count}).", refined=True)
@@ -2284,11 +2798,8 @@ Before returning any testbench code, mentally compile it with strict SystemVeril
          "rtl_path",
          os.path.join(OPENLANE_ROOT, "designs", self.name, "src", f"{self.name}.v"),
      )
-     if (
-         os.path.exists(_vcd_path)
-         and os.path.getsize(_vcd_path) > 200
-         and os.path.exists(_rtl_path)
-     ):
          _waveform_mod = WaveformExpertModule()
          _diagnosis = _waveform_mod.analyze_failure(
              rtl_path=_rtl_path,
@@ -2296,6 +2807,13 @@ Before returning any testbench code, mentally compile it with strict SystemVeril
              sim_log=output,  # 'output' is the sim stdout/stderr
              design_name=self.name,
          )
          if _diagnosis is not None:
              _waveform_context = (
                  f"\n\n## WAVEFORM + AST ANALYSIS\n"
@@ -2310,6 +2828,19 @@ Before returning any testbench code, mentally compile it with strict SystemVeril
              self.logger.info("[WaveformExpert] No signal mismatch found in VCD")
      else:
          _vcd_size = os.path.getsize(_vcd_path) if os.path.exists(_vcd_path) else 0
          self.logger.info(
              f"[WaveformExpert] Skipping — VCD exists={os.path.exists(_vcd_path)}, "
              f"size={_vcd_size}, rtl_exists={os.path.exists(_rtl_path)}"
@@ -2332,21 +2863,23 @@ CURRENT TESTBENCH (first 3000 chars):
  Use your read_file tool to read the full RTL and TB files if needed.

  Classify the failure as ONE of:
- A) TESTBENCH_SYNTAX — TB compilation/syntax error (missing semicolons, undeclared signals, class errors)
- B) RTL_LOGIC_BUG — Functional error in RTL design (wrong state transitions, bad arithmetic, logic errors)
- C) PORT_MISMATCH — TB and RTL have incompatible port names, widths, or missing connections
- D) TIMING_RACE — Clock/reset timing issue in TB stimulus (setup/hold violations, race conditions)
- E) ARCHITECTURAL — Design spec is ambiguous, contradictory, or fundamentally flawed
-
- Reply in this EXACT format (one field per line):
- CLASS: <letter A-E>
- FAILING_OUTPUT: <the exact $display message from the simulation log that indicates failure, e.g. "Data mismatch at pop 0">
- FAILING_SIGNALS: <comma-separated list of signal names whose values are wrong>
- EXPECTED_VS_ACTUAL: <expected value vs actual value if determinable, otherwise "undetermined">
- RESPONSIBLE_CONSTRUCT: <the specific always_ff/always_comb/assign statement and its line number in the RTL that most likely causes the wrong value, e.g. "always_ff block at line 23 driving write_ptr">
- ROOT_CAUSE: <1-line description naming the specific signal and logic error, e.g. "write_ptr increments on push but the counter does not gate on the full flag">
- FIX_HINT: <surgical fix instruction referencing specific line numbers or signal names, e.g. "Change line 25: gate the write_ptr increment with !full">''',
- expected_output='Structured signal-level failure classification with CLASS, FAILING_OUTPUT, FAILING_SIGNALS, EXPECTED_VS_ACTUAL, RESPONSIBLE_CONSTRUCT, ROOT_CAUSE, and FIX_HINT',
      agent=analyst
  )

@@ -2354,33 +2887,63 @@ FIX_HINT: <surgical fix instruction referencing specific line numbers or signal
      analysis = str(Crew(agents=[analyst], tasks=[analysis_task]).kickoff()).strip()

      self.logger.info(f"FAILURE ANALYSIS:\n{analysis}")
-
-     # Parse structured response
-     failure_class = "A"  # default fallback
-     root_cause = ""
-     fix_hint = ""
-     failing_output = ""
-     failing_signals = ""
-     expected_vs_actual = ""
-     responsible_construct = ""
-     for line in analysis.split("\n"):
-         line_stripped = line.strip()
-         if line_stripped.startswith("CLASS:"):
-             letter = line_stripped.replace("CLASS:", "").strip().upper()
-             if letter and letter[0] in "ABCDE":
-                 failure_class = letter[0]
-         elif line_stripped.startswith("ROOT_CAUSE:"):
-             root_cause = line_stripped.replace("ROOT_CAUSE:", "").strip()
-         elif line_stripped.startswith("FIX_HINT:"):
-             fix_hint = line_stripped.replace("FIX_HINT:", "").strip()
-         elif line_stripped.startswith("FAILING_OUTPUT:"):
-             failing_output = line_stripped.replace("FAILING_OUTPUT:", "").strip()
-         elif line_stripped.startswith("FAILING_SIGNALS:"):
-             failing_signals = line_stripped.replace("FAILING_SIGNALS:", "").strip()
-         elif line_stripped.startswith("EXPECTED_VS_ACTUAL:"):
-             expected_vs_actual = line_stripped.replace("EXPECTED_VS_ACTUAL:", "").strip()
-         elif line_stripped.startswith("RESPONSIBLE_CONSTRUCT:"):
-             responsible_construct = line_stripped.replace("RESPONSIBLE_CONSTRUCT:", "").strip()

      # Build structured diagnosis string for downstream fix prompts
      structured_diagnosis = (
@@ -2654,10 +3217,27 @@ Return the complete module with ONLY the minimal fix applied.

      # Bug 3: Inject DeepDebugger diagnostic context into the SVA generation prompt
      formal_debug = self.artifacts.get("formal_debug_context", "")
      if formal_debug:
          formal_debug_str = f"\n\nPREVIOUS FORMAL VERIFICATION FAILURE DIAGNOSIS:\n{formal_debug}\n\nPlease use this diagnosis to correct the flawed assertions.\n"
      else:
          formal_debug_str = ""

      verif_agent = get_verification_agent(self.llm, verbose=self.verbose)
      sva_task = Task(
@@ -2667,8 +3247,12 @@ Generate SVA assertions that are compatible with the Yosys formal verification e

  RTL Code:
  ```verilog
- {self.artifacts.get('rtl_code', '')}
  ```

  SPECIFICATION:
  {self.artifacts.get('spec', '')}
@@ -2700,6 +3284,11 @@ Generate SVA assertions that are compatible with the Yosys formal verification e

      # --- Universal code output validation (SVA) ---
      if not validate_llm_code_output(sva_result):
          self.log("SVA generation returned prose instead of code. Retrying once.", refined=True)
          self.logger.warning(f"SVA VALIDATION FAIL (prose detected):\n{sva_result[:500]}")
          sva_result = str(Crew(agents=[verif_agent], tasks=[sva_task]).kickoff())
@@ -2707,6 +3296,23 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
          self.log("SVA retry also returned invalid output. Skipping formal.", refined=True)
          self.transition(BuildState.COVERAGE_CHECK)
          return

      self.logger.info(f"GENERATED SVA:\n{sva_result}")
@@ -2739,8 +3345,20 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
          json.dump(preflight_report, f, indent=2)
      self.artifacts["formal_preflight"] = preflight_report
      self.artifacts["formal_preflight_path"] = formal_diag_path

      if not preflight_ok:
          self.log(f"Formal preflight failed: {preflight_report.get('issue_count', 0)} issue(s).", refined=True)
          self.artifacts['formal_result'] = 'FAIL'
          if self.strict_gates:
@@ -2763,15 +3381,36 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
          pf = subprocess.run(preflight_cmd, capture_output=True, text=True, timeout=30)
          if pf.returncode != 0:
              yosys_err = (pf.stderr or pf.stdout or "").strip()
-             self.log(f"Yosys SVA preflight failed. Regenerating SVA with error context.", refined=True)
              self.logger.info(f"YOSYS SVA PREFLIGHT FAIL:\n{yosys_err}")
              # Remove stale SVA files so the next iteration regenerates
              for stale in (sva_path, sby_check_path):
                  if os.path.exists(stale):
                      os.remove(stale)
-             self.artifacts["sva_preflight_error"] = yosys_err[:2000]
              # Stay in FORMAL_VERIFY — will regenerate SVA on re-entry
              return
      except Exception as pf_exc:
          self.logger.warning(f"Yosys SVA preflight exception: {pf_exc}")

@@ -2815,6 +3454,13 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
          design_name=self.name,
          rtl_code=self.artifacts.get("rtl_code", ""),
      )
      if _verdict is not None:
          _formal_debug_context = (
              f"\n\nFVDEBUG ROOT CAUSE:\n"
@@ -2836,7 +3482,12 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
          f"sby_cfg_exists={os.path.exists(_sby_cfg)}, "
          f"rtl_exists={os.path.exists(_rtl_path_fv)}"
      )
-     self.artifacts["formal_debug_context"] = _formal_debug_context

      self.artifacts['formal_result'] = 'FAIL'
      if self.strict_gates:
@@ -2866,6 +3517,11 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
          self.state = BuildState.FAIL
          return

      self.transition(BuildState.COVERAGE_CHECK)

  def do_coverage_check(self):
@@ -2912,6 +3568,15 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
      self.artifacts["coverage"] = coverage_data
      self.artifacts["coverage_backend_used"] = coverage_data.get("backend", self.coverage_backend)
      self.artifacts["coverage_mode"] = coverage_data.get("coverage_mode", "unknown")

      src_dir = os.path.join(OPENLANE_ROOT, "designs", self.name, "src")
      os.makedirs(src_dir, exist_ok=True)
@@ -3048,6 +3713,8 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
      improve_prompt = f"""The current testbench for "{self.name}" does not meet coverage thresholds.
  TARGET: Branch >={branch_target:.1f}%, Line >={float(thresholds['line']):.1f}%.
  Current Coverage Data: {coverage_data}

  Current RTL:
  ```verilog
@@ -3090,9 +3757,28 @@ Generate SVA assertions that are compatible with the Yosys formal verification e
      improved_tb = str(result)
      # --- Universal code output validation (coverage TB improvement) ---
      if not validate_llm_code_output(improved_tb):
          self.log("Coverage TB improvement returned prose instead of code. Retrying once.", refined=True)
          self.logger.warning(f"COVERAGE TB VALIDATION FAIL (prose detected):\n{improved_tb[:500]}")
          improved_tb = str(Crew(agents=[tb_agent], tasks=[improve_task]).kickoff())
      self.logger.info(f"IMPROVED TB:\n{improved_tb}")

      tb_path = write_verilog(self.name, improved_tb, is_testbench=True)
  import subprocess
  import threading
  from dataclasses import dataclass, asdict
+ from typing import Optional, Dict, Any, List, Tuple
  from rich.console import Console
  from rich.panel import Panel
  from crewai import Agent, Task, Crew, LLM

  from .agents.doc_agent import get_doc_agent
  from .agents.sdc_agent import get_sdc_agent
  from .core import ArchitectModule, SelfReflectPipeline, ReActAgent, WaveformExpertModule, DeepDebuggerModule
+ from .contracts import (
+     AgentResult,
+     ArtifactRef,
+     FailureClass,
+     FailureRecord,
+     StageResult,
+     StageStatus,
+     extract_json_object,
+     infer_failure_class,
+     materially_changed,
+     validate_agent_payload,
+ )
  from .tools.vlsi_tools import (
      write_verilog,
      run_syntax_check,

      max_retries: int = 5,
      verbose: bool = True,
      skip_openlane: bool = False,
+     skip_coverage: bool = False,
      full_signoff: bool = False,
      min_coverage: float = 80.0,
      strict_gates: bool = True,

      self.max_retries = max_retries
      self.verbose = verbose
      self.skip_openlane = skip_openlane
+     self.skip_coverage = skip_coverage
      self.full_signoff = full_signoff
      self.min_coverage = min_coverage
      self.strict_gates = strict_gates

      self.tb_failure_fingerprint_history: Dict[str, int] = {}
      self.tb_recovery_counts: Dict[str, int] = {}
      self.artifacts: Dict[str, Any] = {}  # Store paths to gathered files
+     self.artifact_bus: Dict[str, ArtifactRef] = {}
+     self.stage_contract_history: List[Dict[str, Any]] = []
+     self.retry_metadata: Dict[str, int] = {
+         "stage_retry": 0,
+         "regeneration_retry": 0,
+         "format_retry": 0,
+         "infrastructure_retry": 0,
+     }
      self.history: List[Dict[str, Any]] = []  # Log of state transitions and errors
      self.errors: List[str] = []  # List of error messages

      self.log(f"Transitioning: {self.state.name} -> {new_state.name}", refined=True)
      self.state = new_state
      if not preserve_retries:
+         self.retry_count = 0
          self.state_retry_counts[new_state.name] = 0
      # Emit a dedicated transition event for the web UI checkpoint timeline
      if self.event_sink is not None:
 
371
  fp = hashlib.sha256(base.encode("utf-8", errors="ignore")).hexdigest()
372
  self.failure_fingerprint_history.pop(fp, None)
373
 
374
+ def _set_artifact(
375
+ self,
376
+ key: str,
377
+ value: Any,
378
+ *,
379
+ producer: str,
380
+ consumer: str = "",
381
+ required: bool = False,
382
+ blocking: bool = False,
383
+ ) -> None:
384
+ self.artifacts[key] = value
385
+ self.artifact_bus[key] = ArtifactRef(
386
+ key=key,
387
+ producer=producer,
388
+ consumer=consumer,
389
+ required=required,
390
+ blocking=blocking,
391
+ value=value,
392
+ )
393
+
394
+ def _get_artifact(self, key: str, default: Any = None) -> Any:
395
+ return self.artifacts.get(key, default)
396
+
397
+ def _require_artifact(self, key: str, *, consumer: str, message: str) -> Any:
398
+ if key in self.artifacts and self.artifacts[key] not in (None, "", {}):
399
+ ref = self.artifact_bus.get(key)
400
+ if ref is not None:
401
+ ref.consumer = consumer
402
+ return self.artifacts[key]
403
+ self._record_stage_contract(
404
+ StageResult(
405
+ stage=self.state.name,
406
+ status=StageStatus.ERROR,
407
+ producer=consumer,
408
+ failure_class=FailureClass.ORCHESTRATOR_ROUTING_ERROR,
409
+ diagnostics=[message],
410
+ next_action="fail_closed",
411
+ )
412
+ )
413
+ raise RuntimeError(message)
414
+
415
+ def _consume_handoff(self, key: str, *, consumer: str, required: bool = False) -> Any:
416
+ value = self.artifacts.get(key)
417
+ if value in (None, "", {}):
418
+ if required:
419
+ msg = f"Missing artifact handoff '{key}' for {consumer}."
420
+ self._record_stage_contract(
421
+ StageResult(
422
+ stage=self.state.name,
423
+ status=StageStatus.ERROR,
424
+ producer=consumer,
425
+ failure_class=FailureClass.ORCHESTRATOR_ROUTING_ERROR,
426
+ diagnostics=[msg],
427
+ next_action="fail_closed",
428
+ )
429
+ )
430
+ raise RuntimeError(msg)
431
+ return None
432
+ ref = self.artifact_bus.get(key)
433
+ if ref is not None:
434
+ ref.consumer = consumer
435
+ return value
436
+
437
+ def _record_stage_contract(self, result: StageResult) -> None:
438
+ payload = result.to_dict()
439
+ self.stage_contract_history.append(payload)
440
+ self.artifacts["last_stage_result"] = payload
441
+ if hasattr(self, "logger"):
442
+ self.logger.info(f"STAGE RESULT:\n{json.dumps(payload, indent=2, default=str)}")
443
+
444
+ def _record_retry(self, bucket: str, *, consume_global: bool = False) -> int:
445
+ count = int(self.retry_metadata.get(bucket, 0)) + 1
446
+ self.retry_metadata[bucket] = count
447
+ self.artifacts["retry_metadata"] = dict(self.retry_metadata)
448
+ if consume_global:
449
+ self.global_retry_count += 1
450
+ return count
451
+
452
+ def _record_non_consumable_output(self, producer: str, raw_output: str, diagnostics: List[str]) -> None:
453
+ self._record_retry("format_retry", consume_global=False)
454
+ self._record_stage_contract(
455
+ StageResult(
456
+ stage=self.state.name,
457
+ status=StageStatus.RETRY,
458
+ producer=producer,
459
+ failure_class=FailureClass.LLM_FORMAT_ERROR,
460
+ diagnostics=diagnostics or ["LLM output could not be consumed."],
461
+ next_action="retry_generation",
462
+ )
463
+ )
464
+ self._set_artifact(
465
+ "last_non_consumable_output",
466
+ {
467
+ "producer": producer,
468
+ "raw_output": raw_output[:4000],
469
+ "diagnostics": diagnostics,
470
+ },
471
+ producer=producer,
472
+ consumer=self.state.name,
473
+ required=False,
474
+ blocking=False,
475
+ )
476
+
477
+ @staticmethod
478
+ def _extract_module_names(code: str) -> List[str]:
479
+ return re.findall(r"\bmodule\s+([A-Za-z_]\w*)", code or "")
480
+
481
+ def _is_hierarchical_design(self, code: str) -> bool:
482
+ return len(self._extract_module_names(code)) > 1
483
+
484
+ def _validate_rtl_candidate(self, candidate_code: str, previous_code: str) -> List[str]:
485
+ issues: List[str] = []
486
+ if not validate_llm_code_output(candidate_code):
487
+ issues.append("RTL candidate is not valid Verilog/SystemVerilog code output.")
488
+ return issues
489
+ modules = self._extract_module_names(candidate_code)
490
+ if self.name not in modules:
491
+ issues.append(f"RTL candidate is missing top module '{self.name}'.")
492
+ prev_modules = self._extract_module_names(previous_code)
493
+ if prev_modules and len(prev_modules) > 1:
494
+ if sorted(prev_modules) != sorted(modules):
495
+ issues.append(
496
+ "Hierarchical RTL repair changed the module inventory; module-scoped preservation failed."
497
+ )
498
+ prev_ports = self._extract_module_interface(previous_code)
499
+ new_ports = self._extract_module_interface(candidate_code)
500
+ if prev_ports and new_ports and prev_ports != new_ports:
501
+ issues.append("RTL candidate changed the top-module interface.")
502
+ return issues
503
+
504
+ def _validate_tb_candidate(self, tb_code: str) -> List[str]:
505
+ issues: List[str] = []
506
+ if not validate_llm_code_output(tb_code):
507
+ issues.append("TB candidate is not valid Verilog/SystemVerilog code output.")
508
+ return issues
509
+ module_match = re.search(r"\bmodule\s+([A-Za-z_]\w*)", tb_code)
510
+ if not module_match or module_match.group(1) != f"{self.name}_tb":
511
+ issues.append(f"TB module name must be '{self.name}_tb'.")
512
+ if f'$dumpfile("{self.name}_wave.vcd")' not in tb_code:
513
+ issues.append("TB candidate is missing the required VCD dumpfile block.")
514
+ if "$dumpvars(0," not in tb_code:
515
+ issues.append("TB candidate is missing the required dumpvars block.")
516
+ if "TEST PASSED" not in tb_code or "TEST FAILED" not in tb_code:
517
+ issues.append("TB candidate must include TEST PASSED and TEST FAILED markers.")
518
+ return issues
519
+
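A standalone sketch of the same structural TB checks, assuming the `validate_llm_code_output` prose filter has already passed (it is skipped here); `validate_tb_candidate` is a hypothetical free-function mirror of the method above:

```python
import re

def validate_tb_candidate(tb_code: str, design_name: str) -> list:
    # Structural checks only: module naming, VCD dump hooks, pass/fail markers.
    issues = []
    m = re.search(r"\bmodule\s+([A-Za-z_]\w*)", tb_code)
    if not m or m.group(1) != f"{design_name}_tb":
        issues.append(f"TB module name must be '{design_name}_tb'.")
    if f'$dumpfile("{design_name}_wave.vcd")' not in tb_code:
        issues.append("Missing VCD dumpfile block.")
    if "$dumpvars(0," not in tb_code:
        issues.append("Missing dumpvars block.")
    if "TEST PASSED" not in tb_code or "TEST FAILED" not in tb_code:
        issues.append("Missing TEST PASSED / TEST FAILED markers.")
    return issues

good_tb = '''
module adder_tb;
  initial begin
    $dumpfile("adder_wave.vcd");
    $dumpvars(0, adder_tb);
    $display("TEST PASSED");
    $display("TEST FAILED");
  end
endmodule
'''
```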
520
+ def _validate_sva_candidate(self, sva_code: str, rtl_code: str) -> List[str]:
521
+ issues: List[str] = []
522
+ if not validate_llm_code_output(sva_code):
523
+ issues.append("SVA candidate is not valid SystemVerilog code output.")
524
+ return issues
525
+ if f"module {self.name}_sva" not in sva_code:
526
+ issues.append(f"SVA candidate is missing module '{self.name}_sva'.")
527
+ yosys_code = convert_sva_to_yosys(sva_code, self.name)
528
+ if not yosys_code:
529
+ issues.append("SVA candidate could not be translated to Yosys-compatible assertions.")
530
+ return issues
531
+ ok, report = validate_yosys_sby_check(yosys_code)
532
+ if not ok:
533
+ for issue in report.get("issues", []):
534
+ issues.append(issue.get("message", "Invalid Yosys preflight assertion output."))
535
+ signal_inventory = self._format_signal_inventory_for_prompt(rtl_code)
536
+ if "No signal inventory could be extracted" in signal_inventory:
537
+ issues.append("RTL signal inventory is unavailable for SVA validation.")
538
+ return issues
539
+
540
+ @staticmethod
541
+ def _simulation_capabilities(sim_output: str, vcd_path: str) -> Dict[str, Any]:
542
+ trace_enabled = "without --trace" not in (sim_output or "")
543
+ waveform_generated = bool(vcd_path and os.path.exists(vcd_path) and os.path.getsize(vcd_path) > 200)
544
+ return {
545
+ "trace_enabled": trace_enabled,
546
+ "waveform_generated": waveform_generated,
547
+ }
548
+
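A minimal sketch of the capability gate, assuming a Verilator-style "without --trace" log message; `simulation_capabilities` is a hypothetical free-function mirror of the static method above:

```python
import os
import tempfile

def simulation_capabilities(sim_output: str, vcd_path: str) -> dict:
    # Tracing is assumed on unless the simulator log says otherwise, and a
    # waveform only counts if the VCD file is non-trivially sized (> 200 bytes).
    trace_enabled = "without --trace" not in (sim_output or "")
    waveform_generated = bool(
        vcd_path and os.path.exists(vcd_path) and os.path.getsize(vcd_path) > 200
    )
    return {"trace_enabled": trace_enabled, "waveform_generated": waveform_generated}

with tempfile.NamedTemporaryFile(suffix=".vcd", delete=False) as f:
    f.write(b"$date today $end\n" * 20)  # 340 bytes of fake VCD content
    vcd = f.name
caps = simulation_capabilities("simulation finished", vcd)
```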
549
+ def _normalize_react_result(self, trace: Any) -> AgentResult:
550
+ final_answer = getattr(trace, "final_answer", "") or ""
551
+ code_match = re.search(r"```verilog\s*(.*?)```", final_answer, re.DOTALL)
552
+ payload = {
553
+ "code": code_match.group(1).strip() if code_match else "",
554
+ "self_check_status": "verified" if getattr(trace, "success", False) else "unverified",
555
+ "tool_observations": [getattr(step, "observation", "") for step in getattr(trace, "steps", []) if getattr(step, "observation", "")],
556
+ "final_decision": "accept" if code_match else "fallback",
557
+ }
558
+ failure_class = FailureClass.UNKNOWN if code_match else FailureClass.LLM_FORMAT_ERROR
559
+ return AgentResult(
560
+ agent="ReAct",
561
+ ok=bool(code_match),
562
+ producer="agent_react",
563
+ payload=payload,
564
+ diagnostics=[] if code_match else ["ReAct did not return fenced Verilog code."],
565
+ failure_class=failure_class,
566
+ raw_output=final_answer,
567
+ )
568
+
569
+ def _normalize_waveform_result(self, diagnosis: Any, raw_output: str = "") -> AgentResult:
570
+ if diagnosis is None:
571
+ return AgentResult(
572
+ agent="WaveformExpert",
573
+ ok=False,
574
+ producer="agent_waveform",
575
+ payload={"fallback_reason": "No waveform diagnosis available."},
576
+ diagnostics=["WaveformExpert returned no diagnosis."],
577
+ failure_class=FailureClass.UNKNOWN,
578
+ raw_output=raw_output,
579
+ )
580
+ payload = {
581
+ "failing_signal": diagnosis.failing_signal,
582
+ "mismatch_time": diagnosis.mismatch_time,
583
+ "expected_value": diagnosis.expected_value,
584
+ "actual_value": diagnosis.actual_value,
585
+ "trace_roots": [
586
+ {
587
+ "signal_name": trace.signal_name,
588
+ "source_file": trace.source_file,
589
+ "source_line": trace.source_line,
590
+ "assignment_type": trace.assignment_type,
591
+ }
592
+ for trace in diagnosis.root_cause_traces
593
+ ],
594
+ "suggested_fix_area": diagnosis.suggested_fix_area,
595
+ "fallback_reason": "" if diagnosis.root_cause_traces else "No AST trace roots found.",
596
+ }
597
+ return AgentResult(
598
+ agent="WaveformExpert",
599
+ ok=True,
600
+ producer="agent_waveform",
601
+ payload=payload,
602
+ diagnostics=[],
603
+ failure_class=FailureClass.UNKNOWN,
604
+ raw_output=raw_output,
605
+ )
606
+
607
+ def _normalize_deepdebug_result(self, verdict: Any, raw_output: str = "") -> AgentResult:
608
+ if verdict is None:
609
+ return AgentResult(
610
+ agent="DeepDebugger",
611
+ ok=False,
612
+ producer="agent_deepdebug",
613
+ payload={"usable_for_regeneration": False},
614
+ diagnostics=["DeepDebugger returned no verdict."],
615
+ failure_class=FailureClass.UNKNOWN,
616
+ raw_output=raw_output,
617
+ )
618
+ payload = {
619
+ "root_cause_signal": verdict.root_cause_signal,
620
+ "root_cause_line": verdict.root_cause_line,
621
+ "root_cause_file": verdict.root_cause_file,
622
+ "fix_description": verdict.fix_description,
623
+ "confidence": verdict.confidence,
624
+ "balanced_analysis_log": verdict.balanced_analysis_log,
625
+ "usable_for_regeneration": bool(verdict.root_cause_signal and verdict.fix_description),
626
+ }
627
+ return AgentResult(
628
+ agent="DeepDebugger",
629
+ ok=True,
630
+ producer="agent_deepdebug",
631
+ payload=payload,
632
+ diagnostics=[],
633
+ failure_class=FailureClass.UNKNOWN,
634
+ raw_output=raw_output,
635
+ )
636
+
637
+ def _parse_structured_agent_json(
638
+ self,
639
+ *,
640
+ agent_name: str,
641
+ raw_output: str,
642
+ required_keys: List[str],
643
+ ) -> AgentResult:
644
+ payload = extract_json_object(raw_output)
645
+ if payload is None:
646
+ return AgentResult(
647
+ agent=agent_name,
648
+ ok=False,
649
+ producer=f"agent_{agent_name.lower()}",
650
+ payload={},
651
+ diagnostics=["LLM output is not valid JSON."],
652
+ failure_class=FailureClass.LLM_FORMAT_ERROR,
653
+ raw_output=raw_output,
654
+ )
655
+ errors = validate_agent_payload(payload, required_keys)
656
+ if errors:
657
+ return AgentResult(
658
+ agent=agent_name,
659
+ ok=False,
660
+ producer=f"agent_{agent_name.lower()}",
661
+ payload=payload,
662
+ diagnostics=errors,
663
+ failure_class=FailureClass.LLM_FORMAT_ERROR,
664
+ raw_output=raw_output,
665
+ )
666
+ return AgentResult(
667
+ agent=agent_name,
668
+ ok=True,
669
+ producer=f"agent_{agent_name.lower()}",
670
+ payload=payload,
671
+ diagnostics=[],
672
+ failure_class=FailureClass.UNKNOWN,
673
+ raw_output=raw_output,
674
+ )
675
+
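The parse-then-validate contract above can be sketched end to end. Both helpers below (`extract_json_object`, `validate_agent_payload`) are simplified stand-ins for the real utilities, not their actual implementations:

```python
import json
import re

def extract_json_object(raw: str):
    # Grab the outermost {...} region from mixed prose/JSON output and parse it.
    m = re.search(r"\{.*\}", raw, re.DOTALL)
    if not m:
        return None
    try:
        return json.loads(m.group(0))
    except json.JSONDecodeError:
        return None

def validate_agent_payload(payload: dict, required_keys: list) -> list:
    # One diagnostic per missing required key, as the orchestrator expects.
    return [f"Missing required key '{k}'." for k in required_keys if k not in payload]

raw = 'Here is my analysis: {"class": "B", "root_cause": "off-by-one index"}'
payload = extract_json_object(raw)
errors = validate_agent_payload(payload, ["class", "root_cause", "fix_hint"])
```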
676
  def _build_llm_context(self, include_rtl: bool = True, max_rtl_chars: int = 15000) -> str:
677
  """Build cumulative context string for LLM calls.
678
 
 
1148
  )
1149
  return ports
1150
 
1151
+ @staticmethod
1152
+ def _extract_rtl_signal_inventory(rtl_code: str) -> List[Dict[str, str]]:
1153
+ """Extract DUT-visible signals and widths for downstream prompt grounding."""
1154
+ text = rtl_code or ""
1155
+ signals: List[Dict[str, str]] = []
1156
+
1157
+ param_defaults: Dict[str, str] = {}
1158
+ param_pattern = re.compile(
1159
+ r"parameter\s+(?:\w+\s+)?([A-Za-z_]\w*)\s*=\s*([^,;\)\n]+)",
1160
+ re.IGNORECASE,
1161
+ )
1162
+ for pname, pval in param_pattern.findall(text):
1163
+ param_defaults[pname.strip()] = pval.strip()
1164
+
1165
+ def _resolve_width(width: str) -> str:
1166
+ resolved = (width or "").strip()
1167
+ if not resolved:
1168
+ return "[0:0]"
1169
+ for pname, pval in param_defaults.items():
1170
+ if pname not in resolved:
1171
+ continue
1172
+ try:
1173
+ expr = resolved[1:-1]
1174
+ expr = expr.replace(pname, str(pval))
1175
+ parts = expr.split(":")
1176
+ evaluated = []
1177
+ for part in parts:
1178
+ part = part.strip()
1179
+ if re.match(r'^[\d\s\+\-\*\/\(\)]+$', part):
1180
+ evaluated.append(str(int(eval(part)))) # noqa: S307
1181
+ else:
1182
+ evaluated.append(part)
1183
+ resolved = f"[{':'.join(evaluated)}]"
1184
+ except Exception:
1185
+ pass
1186
+ return resolved
1187
+
1188
+ seen = set()
1189
+ for port in BuildOrchestrator._extract_module_ports(text):
1190
+ key = (port["name"], port["direction"])
1191
+ if key in seen:
1192
+ continue
1193
+ seen.add(key)
1194
+ signals.append(
1195
+ {
1196
+ "name": port["name"],
1197
+ "category": port["direction"],
1198
+ "width": _resolve_width(port.get("width", "")),
1199
+ }
1200
+ )
1201
+
1202
+ scrubbed = re.sub(r"//.*", "", text)
1203
+ scrubbed = re.sub(r"/\*[\s\S]*?\*/", "", scrubbed)
1204
+ internal_pattern = re.compile(
1205
+ r"^\s*(wire|reg|logic)\s*(?:signed\s+)?(\[[^\]]+\])?\s*([^;]+);",
1206
+ re.IGNORECASE | re.MULTILINE,
1207
+ )
1208
+ for kind, width, names_blob in internal_pattern.findall(scrubbed):
1209
+ resolved_width = _resolve_width(width)
1210
+ for raw_name in names_blob.split(","):
1211
+ candidate = raw_name.strip()
1212
+ if not candidate:
1213
+ continue
1214
+ candidate = candidate.split("=")[0].strip()
1215
+ candidate = re.sub(r"\[[^\]]+\]", "", candidate).strip()
1216
+ if not re.fullmatch(r"[A-Za-z_]\w*", candidate):
1217
+ continue
1218
+ key = (candidate, kind.lower())
1219
+ if key in seen:
1220
+ continue
1221
+ seen.add(key)
1222
+ signals.append(
1223
+ {
1224
+ "name": candidate,
1225
+ "category": kind.lower(),
1226
+ "width": resolved_width,
1227
+ }
1228
+ )
1229
+ return signals
1230
+
1231
+ @staticmethod
1232
+ def _format_signal_inventory_for_prompt(rtl_code: str) -> str:
1233
+ signals = BuildOrchestrator._extract_rtl_signal_inventory(rtl_code)
1234
+ if not signals:
1235
+ return "No signal inventory could be extracted from the RTL. Use only identifiers explicitly declared in the RTL."
1236
+ lines = [
1237
+ f"- {sig['name']}: category={sig['category']}, width={sig['width']}"
1238
+ for sig in signals
1239
+ ]
1240
+ return "\n".join(lines)
1241
+
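A simplified sketch of the inventory extraction and prompt formatting, assuming ANSI-style port declarations; the real method additionally resolves parameterized widths and de-duplicates by (name, category):

```python
import re

def extract_signal_inventory(rtl_code: str) -> list:
    # Ports via a crude ANSI-header scan, plus single-name internal
    # wire/reg/logic declarations. Default width is the scalar "[0:0]".
    signals = []
    port_pat = re.compile(
        r"\b(input|output|inout)\s+(?:wire\s+|reg\s+|logic\s+)?(\[[^\]]+\])?\s*([A-Za-z_]\w*)"
    )
    for direction, width, name in port_pat.findall(rtl_code):
        signals.append({"name": name, "category": direction, "width": width or "[0:0]"})
    net_pat = re.compile(
        r"^\s*(wire|reg|logic)\s*(\[[^\]]+\])?\s*([A-Za-z_]\w*)\s*;", re.MULTILINE
    )
    for kind, width, name in net_pat.findall(rtl_code):
        signals.append({"name": name, "category": kind, "width": width or "[0:0]"})
    return signals

rtl = """
module counter (input wire clk, input wire rst, output reg [7:0] count);
  wire [7:0] next;
endmodule
"""
inventory = extract_signal_inventory(rtl)
prompt_lines = [f"- {s['name']}: category={s['category']}, width={s['width']}" for s in inventory]
```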
1242
  def _tb_gate_strict_enforced(self) -> bool:
1243
  return self.strict_gates or self.tb_gate_mode == "strict"
1244
 
 
1505
  return
1506
 
1507
  if cycle == 2:
1508
+ self._set_artifact(
1509
+ "tb_regen_context",
1510
+ json.dumps(report, indent=2, default=str)[:5000],
1511
+ producer="orchestrator_tb_gate",
1512
+ consumer="VERIFICATION",
1513
+ required=True,
1514
+ blocking=True,
1515
+ )
1516
  tb_path = self.artifacts.get("tb_path")
1517
  if tb_path and os.path.exists(tb_path):
1518
  try:
 
2186
  self.logger.info(f"SEMANTIC RIGOR: {sem_report}")
2187
  if not sem_ok:
2188
  if self.strict_gates:
2189
+ width_issues = sem_report.get("width_issues", []) if isinstance(sem_report, dict) else []
2190
+ if not width_issues:
2191
+ self.log(
2192
+ "Semantic rigor failed on non-width issues. Routing directly to LLM fixer.",
2193
+ refined=True,
2194
+ )
2195
  errors = self._format_semantic_rigor_errors(sem_report)
2196
+ else:
2197
+ # --- Mechanical width auto-fix (no LLM) ---
2198
+ self.log("Semantic rigor gate failed. Attempting mechanical width auto-fix.", refined=True)
2199
+ fix_ok, fix_report = auto_fix_width_warnings(path)
2200
+ self.logger.info(f"WIDTH AUTO-FIX: fixed={fix_report.get('fixed_count', 0)}, "
2201
+ f"remaining={fix_report.get('remaining_count', 0)}")
2202
+ if fix_ok:
2203
+ self.log(f"Width auto-fix resolved all {fix_report['fixed_count']} warnings.", refined=True)
2204
+ # Re-read the patched RTL into artifacts
2205
+ with open(path, 'r') as f:
2206
+ self.artifacts['rtl_code'] = f.read()
2207
+ # Loop back to re-check syntax/lint on the patched file
2208
+ return
2209
+ elif fix_report.get("fixed_count", 0) > 0:
2210
+ self.log(f"Width auto-fix resolved {fix_report['fixed_count']} warnings; "
2211
+ f"{fix_report['remaining_count']} remain. Re-checking.", refined=True)
2212
+ with open(path, 'r') as f:
2213
+ self.artifacts['rtl_code'] = f.read()
2214
+ # Loop back; the remaining warnings may resolve after re-lint
2215
+ return
2216
+ # Post-processor couldn't fix anything; fall through to LLM
2217
+ self.log("Mechanical auto-fix could not resolve width warnings. Routing to LLM fixer.", refined=True)
2218
+ # If the post-processor gathered rich context for unfixable warnings,
2219
+ # build a detailed prompt giving the LLM everything it needs.
2220
+ unfixable = fix_report.get("unfixable_context", [])
2221
+ if unfixable:
2222
+ errors = self._format_unfixable_width_errors(unfixable)
2223
+ else:
2224
+ errors = self._format_semantic_rigor_errors(sem_report)
2225
  else:
2226
  self.log("Semantic rigor warnings detected (non-blocking).", refined=True)
2227
  self.artifacts["semantic_report"] = sem_report
2228
+ self._record_stage_contract(
2229
+ StageResult(
2230
+ stage=self.state.name,
2231
+ status=StageStatus.PASS,
2232
+ producer="orchestrator_rtl_fix",
2233
+ consumable_payload={"semantic_report": bool(sem_report)},
2234
+ artifacts_written=["semantic_report"],
2235
+ next_action=BuildState.VERIFICATION.name,
2236
+ )
2237
+ )
2238
  self.transition(BuildState.VERIFICATION)
2239
  return
2240
  else:
2241
  self.artifacts["semantic_report"] = sem_report
2242
+ self._record_stage_contract(
2243
+ StageResult(
2244
+ stage=self.state.name,
2245
+ status=StageStatus.PASS,
2246
+ producer="orchestrator_rtl_fix",
2247
+ consumable_payload={"semantic_report": True},
2248
+ artifacts_written=["semantic_report"],
2249
+ next_action=BuildState.VERIFICATION.name,
2250
+ )
2251
+ )
2252
  self.transition(BuildState.VERIFICATION)
2253
  return
2254
 
 
2319
  ),
2320
  context=_react_context,
2321
  )
2322
+ react_result = self._normalize_react_result(_react_trace)
2323
+ self._set_artifact(
2324
+ "react_last_result",
2325
+ react_result.to_dict(),
2326
+ producer="agent_react",
2327
+ consumer="RTL_FIX",
2328
+ )
2329
+ if react_result.ok:
2330
+ _react_fixed_code = react_result.payload.get("code", "")
2331
+ self.logger.info(
2332
+ f"[ReAct] RTL fix done in {_react_trace.total_steps} steps "
2333
+ f"({_react_trace.total_duration_s:.1f}s)"
2334
+ )
2335
  if not _react_fixed_code:
2336
+ self._record_stage_contract(
2337
+ StageResult(
2338
+ stage=self.state.name,
2339
+ status=StageStatus.RETRY,
2340
+ producer="agent_react",
2341
+ failure_class=react_result.failure_class,
2342
+ diagnostics=react_result.diagnostics,
2343
+ artifacts_written=["react_last_result"],
2344
+ next_action="fallback_to_single_shot",
2345
+ )
2346
+ )
2347
  self.logger.info(
2348
  f"[ReAct] No valid code produced "
2349
  f"(success={_react_trace.success}, steps={_react_trace.total_steps}). "
 
2406
  new_code = str(result)
2407
  # --- Universal code output validation (RTL fix) ---
2408
  if not validate_llm_code_output(new_code):
2409
+ self._record_non_consumable_output(
2410
+ "llm_rtl_fix",
2411
+ new_code,
2412
+ ["RTL fix returned prose instead of code."],
2413
+ )
2414
  self.log("RTL fix returned prose instead of code. Retrying once.", refined=True)
2415
  self.logger.warning(f"RTL FIX VALIDATION FAIL (prose detected):\n{new_code[:500]}")
2416
  new_code = str(Crew(agents=[fixer], tasks=[task]).kickoff())
 
2428
  return
2429
 
2430
  self.logger.info(f"FIXED RTL:\n{new_code}")
2431
+ rtl_validation_issues = self._validate_rtl_candidate(new_code, self.artifacts.get("rtl_code", ""))
2432
+ if rtl_validation_issues:
2433
+ self._record_stage_contract(
2434
+ StageResult(
2435
+ stage=self.state.name,
2436
+ status=StageStatus.RETRY,
2437
+ producer="orchestrator_rtl_validator",
2438
+ failure_class=FailureClass.LLM_SEMANTIC_ERROR,
2439
+ diagnostics=rtl_validation_issues,
2440
+ next_action="retry_rtl_fix",
2441
+ )
2442
+ )
2443
+ self.log(f"RTL candidate rejected: {rtl_validation_issues[0]}", refined=True)
2444
+ return
2445
 
2446
  # --- Inner retry loop for LLM parse errors ---
2447
  # If write_verilog fails (LLM didn't output valid code), re-prompt immediately
 
2507
  # 1. Generate Testbench (Only if missing)
2508
  # We reuse existing TB to ensure consistent verification targets
2509
  tb_exists = 'tb_path' in self.artifacts and os.path.exists(self.artifacts['tb_path'])
2510
+ regen_context = self._consume_handoff("tb_regen_context", consumer="VERIFICATION", required=False) or ""
2511
 
2512
  if not tb_exists:
2513
  # Check if we have a golden testbench from template matching
2514
+ if self.artifacts.get('golden_tb') and not regen_context:
2515
  self.log("Using Golden Reference Testbench (pre-verified).", refined=True)
2516
  tb_code = self.artifacts['golden_tb']
2517
  # Replace template module name with actual design name
 
2527
  tb_agent = get_testbench_agent(self.llm, f"Verify {self.name}", verbose=self.verbose, strategy=self.strategy.name)
2528
 
2529
  tb_strategy_prompt = self._get_tb_strategy_prompt()
2530
 
2531
  # --- Extract module port signature from RTL ---
2532
  # This prevents the most common TB failure: port name mismatches
 
2602
  )
2603
  # --- Universal code output validation (TB gen) ---
2604
  if not validate_llm_code_output(tb_code):
2605
+ self._record_non_consumable_output(
2606
+ "llm_tb_generation",
2607
+ tb_code,
2608
+ ["TB generation returned prose instead of code."],
2609
+ )
2610
  self.log("TB generation returned prose instead of code. Retrying once.", refined=True)
2611
  self.logger.warning(f"TB VALIDATION FAIL (prose detected):\n{tb_code[:500]}")
2612
  tb_code = self._kickoff_with_timeout(
 
2622
  if "module" not in tb_code or "endmodule" not in tb_code:
2623
  self.log("TB generation returned invalid code. Using deterministic fallback TB.", refined=True)
2624
  tb_code = self._deterministic_tb_fallback(self.artifacts.get("rtl_code", ""))
2625
+ tb_validation_issues = self._validate_tb_candidate(tb_code)
2626
+ if tb_validation_issues:
2627
+ self._record_stage_contract(
2628
+ StageResult(
2629
+ stage=self.state.name,
2630
+ status=StageStatus.RETRY,
2631
+ producer="orchestrator_tb_validator",
2632
+ failure_class=FailureClass.LLM_SEMANTIC_ERROR,
2633
+ diagnostics=tb_validation_issues,
2634
+ next_action="deterministic_tb_fallback",
2635
+ )
2636
+ )
2637
+ self.log(f"TB candidate rejected: {tb_validation_issues[0]}. Using deterministic fallback TB.", refined=True)
2638
+ tb_code = self._deterministic_tb_fallback(self.artifacts.get("rtl_code", ""))
2639
  self.logger.info(f"GENERATED TESTBENCH:\n{tb_code}")
2640
 
2641
  tb_path = write_verilog(self.name, tb_code, is_testbench=True)
 
2648
  self.state = BuildState.FAIL
2649
  return
2650
  self.artifacts['tb_path'] = tb_path
2651
+ self._set_artifact(
2652
+ "tb_candidate",
2653
+ {
2654
+ "tb_path": tb_path,
2655
+ "regen_context_used": bool(regen_context),
2656
+ },
2657
+ producer="orchestrator_verification",
2658
+ consumer="VERIFICATION",
2659
+ )
2660
  self._clear_tb_fingerprints() # New TB -> fresh gate attempts
2661
  else:
2662
  self.log(f"Verifying with existing Testbench (Attempt {self.retry_count}).", refined=True)
 
2798
  "rtl_path",
2799
  os.path.join(OPENLANE_ROOT, "designs", self.name, "src", f"{self.name}.v"),
2800
  )
2801
+ sim_caps = self._simulation_capabilities(output, _vcd_path)
2802
+ if sim_caps["trace_enabled"] and sim_caps["waveform_generated"] and os.path.exists(_rtl_path):
2803
  _waveform_mod = WaveformExpertModule()
2804
  _diagnosis = _waveform_mod.analyze_failure(
2805
  rtl_path=_rtl_path,
 
2807
  sim_log=output, # 'output' is the sim stdout/stderr
2808
  design_name=self.name,
2809
  )
2810
+ waveform_result = self._normalize_waveform_result(_diagnosis, output)
2811
+ self._set_artifact(
2812
+ "waveform_diagnosis",
2813
+ waveform_result.to_dict(),
2814
+ producer="agent_waveform",
2815
+ consumer="VERIFICATION",
2816
+ )
2817
  if _diagnosis is not None:
2818
  _waveform_context = (
2819
  f"\n\n## WAVEFORM + AST ANALYSIS\n"
 
2828
  self.logger.info("[WaveformExpert] No signal mismatch found in VCD")
2829
  else:
2830
  _vcd_size = os.path.getsize(_vcd_path) if os.path.exists(_vcd_path) else 0
2831
+ self._record_stage_contract(
2832
+ StageResult(
2833
+ stage=self.state.name,
2834
+ status=StageStatus.RETRY,
2835
+ producer="orchestrator_waveform_gate",
2836
+ failure_class=FailureClass.ORCHESTRATOR_ROUTING_ERROR,
2837
+ diagnostics=[
2838
+ f"WaveformExpert gated off: trace_enabled={sim_caps['trace_enabled']}, "
2839
+ f"waveform_generated={sim_caps['waveform_generated']}, rtl_exists={os.path.exists(_rtl_path)}"
2840
+ ],
2841
+ next_action="continue_without_waveform",
2842
+ )
2843
+ )
2844
  self.logger.info(
2845
  f"[WaveformExpert] Skipping β€” VCD exists={os.path.exists(_vcd_path)}, "
2846
  f"size={_vcd_size}, rtl_exists={os.path.exists(_rtl_path)}"
 
2863
  Use your read_file tool to read the full RTL and TB files if needed.
2864
 
2865
  Classify the failure as ONE of:
2866
+ A) TESTBENCH_SYNTAX
2867
+ B) RTL_LOGIC_BUG
2868
+ C) PORT_MISMATCH
2869
+ D) TIMING_RACE
2870
+ E) ARCHITECTURAL
2871
+
2872
+ Reply with JSON only, no prose, using this exact schema:
2873
+ {{
2874
+ "class": "A|B|C|D|E",
2875
+ "failing_output": "exact failing display or summary",
2876
+ "failing_signals": ["sig1", "sig2"],
2877
+ "expected_vs_actual": "expected vs actual or undetermined",
2878
+ "responsible_construct": "specific RTL construct and line number",
2879
+ "root_cause": "1-line root cause",
2880
+ "fix_hint": "surgical fix hint"
2881
+ }}''',
2882
+ expected_output='JSON object with class, failing_output, failing_signals, expected_vs_actual, responsible_construct, root_cause, and fix_hint',
2883
  agent=analyst
2884
  )
2885
 
 
2887
  analysis = str(Crew(agents=[analyst], tasks=[analysis_task]).kickoff()).strip()
2888
 
2889
  self.logger.info(f"FAILURE ANALYSIS:\n{analysis}")
2890
+ analyst_result = self._parse_structured_agent_json(
2891
+ agent_name="VerificationAnalyst",
2892
+ raw_output=analysis,
2893
+ required_keys=[
2894
+ "class",
2895
+ "failing_output",
2896
+ "failing_signals",
2897
+ "expected_vs_actual",
2898
+ "responsible_construct",
2899
+ "root_cause",
2900
+ "fix_hint",
2901
+ ],
2902
+ )
2903
+ if not analyst_result.ok:
2904
+ self._record_non_consumable_output(
2905
+ "agent_verificationanalyst",
2906
+ analysis,
2907
+ analyst_result.diagnostics,
2908
+ )
2909
+ with console.status("[bold red]Retrying Failure Analysis (JSON)...[/bold red]"):
2910
+ analysis = str(Crew(agents=[analyst], tasks=[analysis_task]).kickoff()).strip()
2911
+ self.logger.info(f"FAILURE ANALYSIS RETRY:\n{analysis}")
2912
+ analyst_result = self._parse_structured_agent_json(
2913
+ agent_name="VerificationAnalyst",
2914
+ raw_output=analysis,
2915
+ required_keys=[
2916
+ "class",
2917
+ "failing_output",
2918
+ "failing_signals",
2919
+ "expected_vs_actual",
2920
+ "responsible_construct",
2921
+ "root_cause",
2922
+ "fix_hint",
2923
+ ],
2924
+ )
2925
+ if not analyst_result.ok:
2926
+ self.log("Verification analysis returned invalid JSON twice. Failing closed.", refined=True)
2927
+ self.state = BuildState.FAIL
2928
+ return
2929
+ self._set_artifact(
2930
+ "verification_analysis",
2931
+ analyst_result.to_dict(),
2932
+ producer="agent_verificationanalyst",
2933
+ consumer="VERIFICATION",
2934
+ )
2935
+ analysis_payload = analyst_result.payload
2936
+ failure_class = str(analysis_payload.get("class", "A")).upper()[:1] or "A"
2937
+ root_cause = str(analysis_payload.get("root_cause", "")).strip()
2938
+ fix_hint = str(analysis_payload.get("fix_hint", "")).strip()
2939
+ failing_output = str(analysis_payload.get("failing_output", "")).strip()
2940
+ failing_signals_list = analysis_payload.get("failing_signals", [])
2941
+ if isinstance(failing_signals_list, list):
2942
+ failing_signals = ", ".join(str(x) for x in failing_signals_list)
2943
+ else:
2944
+ failing_signals = str(failing_signals_list)
2945
+ expected_vs_actual = str(analysis_payload.get("expected_vs_actual", "")).strip()
2946
+ responsible_construct = str(analysis_payload.get("responsible_construct", "")).strip()
2947
 
2948
  # Build structured diagnosis string for downstream fix prompts
2949
  structured_diagnosis = (
 
3217
 
3218
  # Bug 3: Inject DeepDebugger diagnostic context into the SVA generation prompt
3219
  formal_debug = self.artifacts.get("formal_debug_context", "")
3220
+ formal_preflight_error = self._consume_handoff(
3221
+ "formal_preflight_error",
3222
+ consumer="FORMAL_VERIFY",
3223
+ required=False,
3224
+ ) or self.artifacts.get("sva_preflight_error", "")
3225
  if formal_debug:
3226
  formal_debug_str = f"\n\nPREVIOUS FORMAL VERIFICATION FAILURE DIAGNOSIS:\n{formal_debug}\n\nPlease use this diagnosis to correct the flawed assertions.\n"
3227
  else:
3228
  formal_debug_str = ""
3229
+ if formal_preflight_error:
3230
+ formal_debug_str += (
3231
+ "\n\nPREVIOUS YOSYS/SVA PREFLIGHT FAILURE:\n"
3232
+ f"{formal_preflight_error}\n"
3233
+ "You must correct the assertions so this exact failure does not recur.\n"
3234
+ )
3235
+ try:
3236
+ with open(rtl_path, "r") as rtl_file:
3237
+ rtl_for_sva = rtl_file.read()
3238
+ except OSError:
3239
+ rtl_for_sva = self.artifacts.get("rtl_code", "")
3240
+ signal_inventory = self._format_signal_inventory_for_prompt(rtl_for_sva)
3241
 
3242
  verif_agent = get_verification_agent(self.llm, verbose=self.verbose)
3243
  sva_task = Task(
 
3247
 
3248
  RTL Code:
3249
  ```verilog
3250
+ {rtl_for_sva}
3251
  ```
3252
+
3253
+ The DUT has the following signals with these exact widths:
3254
+ {signal_inventory}
3255
+ Use only these signals and these exact widths in every assertion. Do not invent signals, aliases, or widths.
3256
 
3257
  SPECIFICATION:
3258
  {self.artifacts.get('spec', '')}
 
3284
 
3285
  # --- Universal code output validation (SVA) ---
3286
  if not validate_llm_code_output(sva_result):
3287
+ self._record_non_consumable_output(
3288
+ "llm_sva_generation",
3289
+ sva_result,
3290
+ ["SVA generation returned prose instead of code."],
3291
+ )
3292
  self.log("SVA generation returned prose instead of code. Retrying once.", refined=True)
3293
  self.logger.warning(f"SVA VALIDATION FAIL (prose detected):\n{sva_result[:500]}")
3294
  sva_result = str(Crew(agents=[verif_agent], tasks=[sva_task]).kickoff())
 
3296
  self.log("SVA retry also returned invalid output. Skipping formal.", refined=True)
3297
  self.transition(BuildState.COVERAGE_CHECK)
3298
  return
3299
+ sva_validation_issues = self._validate_sva_candidate(sva_result, rtl_for_sva)
3300
+ if sva_validation_issues:
3301
+ self._record_stage_contract(
3302
+ StageResult(
3303
+ stage=self.state.name,
3304
+ status=StageStatus.RETRY,
3305
+ producer="orchestrator_sva_validator",
3306
+ failure_class=FailureClass.LLM_SEMANTIC_ERROR,
3307
+ diagnostics=sva_validation_issues,
3308
+ next_action="retry_sva_generation",
3309
+ )
3310
+ )
3311
+ self.log(f"SVA candidate rejected: {sva_validation_issues[0]}", refined=True)
3312
+ for stale in (sva_path,):
3313
+ if os.path.exists(stale):
3314
+ os.remove(stale)
3315
+ return
3316
 
3317
  self.logger.info(f"GENERATED SVA:\n{sva_result}")
3318
 
 
3345
  json.dump(preflight_report, f, indent=2)
3346
  self.artifacts["formal_preflight"] = preflight_report
3347
  self.artifacts["formal_preflight_path"] = formal_diag_path
3348
+ self._set_artifact(
3349
+ "formal_preflight_report",
3350
+ preflight_report,
3351
+ producer="orchestrator_formal_preflight",
3352
+ consumer="FORMAL_VERIFY",
3353
+ )
3354
 
3355
  if not preflight_ok:
3356
+ self._set_artifact(
3357
+ "formal_preflight_error",
3358
+ json.dumps(preflight_report, indent=2)[:2000],
3359
+ producer="orchestrator_formal_preflight",
3360
+ consumer="FORMAL_VERIFY",
3361
+ )
3362
  self.log(f"Formal preflight failed: {preflight_report.get('issue_count', 0)} issue(s).", refined=True)
3363
  self.artifacts['formal_result'] = 'FAIL'
3364
  if self.strict_gates:
 
3381
  pf = subprocess.run(preflight_cmd, capture_output=True, text=True, timeout=30)
3382
  if pf.returncode != 0:
3383
  yosys_err = (pf.stderr or pf.stdout or "").strip()
 
3384
  self.logger.info(f"YOSYS SVA PREFLIGHT FAIL:\n{yosys_err}")
3385
+ prev_err = self.artifacts.get("sva_preflight_error_last", "")
3386
+ if prev_err == yosys_err:
3387
+ streak = int(self.artifacts.get("sva_preflight_error_streak", 0)) + 1
3388
+ else:
3389
+ streak = 1
3390
+ self.artifacts["sva_preflight_error_last"] = yosys_err
3391
+ self.artifacts["sva_preflight_error_streak"] = streak
3392
+ self.artifacts["sva_preflight_error"] = yosys_err[:2000]
3393
+ self._set_artifact(
3394
+ "formal_preflight_error",
3395
+ yosys_err[:2000],
3396
+ producer="yosys_preflight",
3397
+ consumer="FORMAL_VERIFY",
3398
+ )
3399
+ if streak >= 2:
3400
+ self.log("Repeated Yosys SVA preflight failure detected. Skipping formal instead of regenerating again.", refined=True)
3401
+ self.artifacts["formal_result"] = "SKIP"
3402
+ self.artifacts["sva_preflight_skip_reason"] = yosys_err[:2000]
3403
+ self.transition(BuildState.COVERAGE_CHECK)
3404
+ return
3405
+ self.log("Yosys SVA preflight failed. Regenerating SVA with error context.", refined=True)
3406
  # Remove stale SVA files so the next iteration regenerates
3407
  for stale in (sva_path, sby_check_path):
3408
  if os.path.exists(stale):
3409
  os.remove(stale)
 
3410
  # Stay in FORMAL_VERIFY; will regenerate SVA on re-entry
3411
  return
3412
+ self.artifacts.pop("sva_preflight_error_last", None)
3413
+ self.artifacts.pop("sva_preflight_error_streak", None)
3414
  except Exception as pf_exc:
3415
  self.logger.warning(f"Yosys SVA preflight exception: {pf_exc}")
3416
 
 
3454
  design_name=self.name,
3455
  rtl_code=self.artifacts.get("rtl_code", ""),
3456
  )
3457
+ debug_result = self._normalize_deepdebug_result(_verdict, result)
3458
+ self._set_artifact(
3459
+ "formal_debug_result",
3460
+ debug_result.to_dict(),
3461
+ producer="agent_deepdebug",
3462
+ consumer="FORMAL_VERIFY",
3463
+ )
3464
  if _verdict is not None:
3465
  _formal_debug_context = (
3466
  f"\n\nFVDEBUG ROOT CAUSE:\n"
 
3482
  f"sby_cfg_exists={os.path.exists(_sby_cfg)}, "
3483
  f"rtl_exists={os.path.exists(_rtl_path_fv)}"
3484
  )
3485
+ self._set_artifact(
3486
+ "formal_debug_context",
3487
+ _formal_debug_context,
3488
+ producer="agent_deepdebug",
3489
+ consumer="FORMAL_VERIFY",
3490
+ )
3491
 
3492
  self.artifacts['formal_result'] = 'FAIL'
3493
  if self.strict_gates:
 
3517
  self.state = BuildState.FAIL
3518
  return
3519
 
3520
+ if self.skip_coverage:
3521
+ self.log("Skipping Coverage Analysis (--skip-coverage).", refined=True)
3522
+ self.transition(BuildState.REGRESSION)
3523
+ return
3524
+
3525
  self.transition(BuildState.COVERAGE_CHECK)
3526
 
3527
  def do_coverage_check(self):
 
3568
  self.artifacts["coverage"] = coverage_data
3569
  self.artifacts["coverage_backend_used"] = coverage_data.get("backend", self.coverage_backend)
3570
  self.artifacts["coverage_mode"] = coverage_data.get("coverage_mode", "unknown")
3571
+ self._set_artifact(
3572
+ "coverage_improvement_context",
3573
+ {
3574
+ "coverage_data": coverage_data,
3575
+ "sim_output": sim_output[:4000] if isinstance(sim_output, str) else str(sim_output),
3576
+ },
3577
+ producer="orchestrator_coverage",
3578
+ consumer="COVERAGE_CHECK",
3579
+ )
3580
 
3581
  src_dir = os.path.join(OPENLANE_ROOT, "designs", self.name, "src")
3582
  os.makedirs(src_dir, exist_ok=True)
 
3713
  improve_prompt = f"""The current testbench for "{self.name}" does not meet coverage thresholds.
3714
  TARGET: Branch >={branch_target:.1f}%, Line >={float(thresholds['line']):.1f}%.
3715
  Current Coverage Data: {coverage_data}
3716
+ PREVIOUS FAILED ATTEMPTS:
3717
+ {self._format_failure_history()}
3718
 
3719
  Current RTL:
3720
  ```verilog
 
         improved_tb = str(result)
         # --- Universal code output validation (coverage TB improvement) ---
         if not validate_llm_code_output(improved_tb):
+            self._record_non_consumable_output(
+                "llm_coverage_tb",
+                improved_tb,
+                ["Coverage TB improvement returned prose instead of code."],
+            )
             self.log("Coverage TB improvement returned prose instead of code. Retrying once.", refined=True)
             self.logger.warning(f"COVERAGE TB VALIDATION FAIL (prose detected):\n{improved_tb[:500]}")
             improved_tb = str(Crew(agents=[tb_agent], tasks=[improve_task]).kickoff())
+        tb_validation_issues = self._validate_tb_candidate(improved_tb)
+        if tb_validation_issues:
+            self._record_stage_contract(
+                StageResult(
+                    stage=self.state.name,
+                    status=StageStatus.RETRY,
+                    producer="orchestrator_coverage_tb_validator",
+                    failure_class=FailureClass.LLM_SEMANTIC_ERROR,
+                    diagnostics=tb_validation_issues,
+                    next_action="keep_previous_tb",
+                )
+            )
+            self.log(f"Coverage TB candidate rejected: {tb_validation_issues[0]}", refined=True)
+            return
         self.logger.info(f"IMPROVED TB:\n{improved_tb}")

         tb_path = write_verilog(self.name, improved_tb, is_testbench=True)
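`validate_llm_code_output` itself is not part of this diff. One plausible prose-vs-code heuristic for Verilog output, purely illustrative (the function name and prose prefixes are assumptions, not the project's actual implementation):

```python
def looks_like_verilog(text: str) -> bool:
    """Crude prose-vs-code check: usable RTL/TB output should contain a
    module...endmodule pair and not open with chatty filler."""
    t = text.strip()
    chatty_prefixes = ("sure", "here is", "certainly")  # assumed prose markers
    return (
        "module" in t
        and "endmodule" in t
        and not t.lower().startswith(chatty_prefixes)
    )
```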
src/agentic/tools/vlsi_tools.py CHANGED
@@ -4,6 +4,8 @@ import re
 import json
 import hashlib
 import subprocess
 from collections import Counter, defaultdict, deque
 from typing import Dict, Any, List, Tuple
 import shutil
@@ -55,6 +57,162 @@ def _resolve_binary(bin_hint: str) -> str:
     return bin_hint


 def startup_self_check() -> Dict[str, Any]:
     """Validate required tooling and environment before running the flow."""
     checks: List[Dict[str, Any]] = []
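The roughly 160 added lines in this hunk are not captured by this view; they sit alongside `startup_self_check`, which builds a list of tooling checks. A generic fail-closed preflight sketched under that assumption (names other than the pattern itself are hypothetical):

```python
import shutil
from typing import Any, Dict, List

def startup_tool_check(required: List[str]) -> Dict[str, Any]:
    """Report which required binaries resolve on PATH; ok only if all do."""
    checks = [{"tool": t, "found": shutil.which(t) is not None} for t in required]
    return {"ok": all(c["found"] for c in checks), "checks": checks}
```

A fail-closed flow would refuse to start when `ok` is False rather than degrading silently.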
@@ -349,35 +507,37 @@ def run_syntax_check(file_path: str) -> tuple:
     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
-
-    try:
-        # Gather all RTL files to support multi-file compilation
-        import glob
-        src_dir = os.path.dirname(file_path)
-        rtl_files = [f for f in glob.glob(os.path.join(src_dir, "*.v")) + glob.glob(os.path.join(src_dir, "*.sv"))
-                     if not f.endswith("_tb.v") and "regression" not in f]
-        if file_path not in rtl_files and os.path.exists(file_path):
-            rtl_files.append(file_path)
-
-        # --lint-only: check syntax and basic semantics
-        # --sv: force SystemVerilog parsing
-        # --timing: support delays
-        # -Wno-fatal: don't crash on warnings (unless they are errors)
-        cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wno-fatal"] + rtl_files
-
-        result = subprocess.run(
-            cmd,
-            capture_output=True, text=True,
-            timeout=60
-        )
-        # Verilator prints errors/warnings to stderr
-        if result.returncode == 0:
-            return True, "Syntax OK (Verilator)"
-        return False, f"Verilator Syntax Errors:\n{result.stderr}"
-    except subprocess.TimeoutExpired:
-        return False, "Syntax check timed out (>60s)."
-    except FileNotFoundError:
-        return False, "Verilator not found. Please install Verilator 5.0+."

 def run_lint_check(file_path: str) -> tuple:
     """
@@ -388,67 +548,65 @@ def run_lint_check(file_path: str) -> tuple:
     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
-
-    import glob
     src_dir = os.path.dirname(file_path)
-    rtl_files = [f for f in glob.glob(os.path.join(src_dir, "*.v")) + glob.glob(os.path.join(src_dir, "*.sv"))
-                 if not f.endswith("_tb.v") and "regression" not in f]
-    if file_path not in rtl_files and os.path.exists(file_path):
         rtl_files.append(file_path)

     # --sv: force SystemVerilog parsing (critical for typedef, logic, always_comb)
     # -Wno-fatal: don't exit on warnings — let us separate real errors from warnings
     # Suppress informational warnings that are not bugs:
-    cmd = [
-        "verilator", "--lint-only", "--sv", "--timing",
-        "-Wno-fatal",          # warnings don't cause non-zero exit
-        "-Wno-UNUSED",         # unused signals (common in AI-generated code)
-        "-Wno-PINMISSING",     # missing port connections
-        "-Wno-CASEINCOMPLETE", # incomplete case (handled by default)
-        "-Wno-WIDTHEXPAND",    # zero-extension (harmless implicit widening)
-        "-Wno-WIDTHTRUNC",     # truncation (flag separately in semantic check)
-    ] + rtl_files
-
-    try:
-        result = subprocess.run(
-            cmd,
-            capture_output=True, text=True,
-            timeout=30
-        )
-        stderr = result.stderr.strip()
-
-        if result.returncode == 0:
-            # Check for remaining warnings (non-fatal)
-            if stderr:
-                # Parse for LATCH warnings — these are fixable and important
-                has_latch = bool(re.search(r'%Warning-LATCH:', stderr))
-                if has_latch:
-                    # LATCH is a real design issue — fail so the LLM can fix it
-                    return False, f"Verilator Lint Errors:\n{stderr}"
-                # Other warnings are informational, pass with report
-                return True, f"Lint OK (with warnings):\n{stderr}"
-            return True, "Lint OK"
-
-        # Non-zero exit: check if there are REAL %Error lines (not just "Exiting due to N warning(s)")
-        real_errors = [
-            line for line in stderr.splitlines()
-            if line.strip().startswith("%Error") and "Exiting due to" not in line
-        ]
-
-        if not real_errors:
-            # Only warnings caused the exit — try iverilog fallback
-            iverilog_ok, iverilog_report = run_iverilog_lint(file_path)
-            if iverilog_ok:
-                return True, f"Lint OK (Verilator warnings only, iverilog passed):\n{stderr}"
-            else:
                 return False, f"Verilator Lint Errors:\n{stderr}\n\niverilog also failed:\n{iverilog_report}"
-
-        return False, f"Verilator Lint Errors:\n{stderr}"
-
-    except FileNotFoundError:
-        return True, "Verilator not found (Skipping Lint)"
-    except subprocess.TimeoutExpired:
-        return False, "Lint check timed out."


 def run_iverilog_lint(file_path: str) -> tuple:
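The removed lint path distinguished genuine `%Error` lines from Verilator's "Exiting due to N warning(s)" footer before deciding to fail. That filter, extracted from the old code as a standalone sketch:

```python
from typing import List

def real_error_lines(stderr: str) -> List[str]:
    """Keep only genuine %Error lines, ignoring the warning-count exit footer."""
    return [
        line for line in stderr.splitlines()
        if line.strip().startswith("%Error") and "Exiting due to" not in line
    ]
```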
@@ -460,44 +618,54 @@
     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
-
-    import glob
     src_dir = os.path.dirname(file_path)
-    rtl_files = [f for f in glob.glob(os.path.join(src_dir, "*.v")) if not f.endswith("_tb.v") and "regression" not in f]
-    if file_path not in rtl_files and os.path.exists(file_path):
         rtl_files.append(file_path)

-    # -g2012: IEEE 1800-2012 SystemVerilog standard
-    # -Wall: enable all warnings
-    # -o /dev/null: don't produce output binary (lint-only mode)
-    cmd = ["iverilog", "-g2012", "-Wall", "-o", "/dev/null"] + rtl_files
-
-    try:
-        result = subprocess.run(
-            cmd,
-            capture_output=True, text=True,
-            timeout=30
-        )
-        combined = (result.stdout + "\n" + result.stderr).strip()
-
-        # iverilog returns 0 on success, non-zero on errors
-        if result.returncode == 0:
-            if combined:
-                return True, f"iverilog OK (with warnings):\n{combined}"
-            return True, "iverilog OK"
-
-        return False, f"iverilog Lint Errors:\n{combined}"
-
-    except FileNotFoundError:
-        return False, "iverilog not found (install with: apt install iverilog)"
-    except subprocess.TimeoutExpired:
-        return False, "iverilog lint check timed out."

 def run_semantic_rigor_check(file_path: str) -> Tuple[bool, Dict[str, Any]]:
     """Deterministic semantic preflight for width-safety and port-shadowing."""
     report: Dict[str, Any] = {
         "ok": True,
         "width_issues": [],
         "port_shadowing": [],
         "details": "",
@@ -540,20 +708,36 @@ def run_semantic_rigor_check(file_path: str) -> Tuple[bool, Dict[str, Any]]:
         "signed",
         "truncat",
     )
-    cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wall", file_path]
-    try:
-        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
-        stderr = result.stderr or ""
-        width_lines = []
-        for line in stderr.splitlines():
-            upper = line.upper()
-            if any(p.upper() in upper for p in width_patterns):
-                width_lines.append(line.strip())
-        if width_lines:
-            report["width_issues"] = width_lines[:20]
-            report["details"] = "\n".join(width_lines[:20])
-    except Exception as exc:
-        report["details"] = f"Semantic width scan fallback triggered: {exc}"

     report["ok"] = not report["port_shadowing"] and not report["width_issues"]
     return report["ok"], report
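The width scan keeps only stderr lines matching width-safety keywords, case-insensitively. The same filter as a small standalone function (the pattern list is taken from the diff; the function name is illustrative):

```python
from typing import List, Tuple

def filter_width_lines(stderr: str,
                       patterns: Tuple[str, ...] = ("WIDTH", "signed", "truncat")) -> List[str]:
    """Case-insensitive scan of lint stderr for width-safety diagnostics."""
    out = []
    for line in stderr.splitlines():
        upper = line.upper()
        if any(p.upper() in upper for p in patterns):
            out.append(line.strip())
    return out
```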
@@ -666,20 +850,24 @@ def auto_fix_width_warnings(file_path: str) -> Tuple[bool, Dict[str, Any]]:

 def _collect_width_warnings(file_path: str) -> List[str]:
     """Run Verilator -Wall and return only WIDTH-related warning lines."""
-    cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wall", file_path]
-    try:
-        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
-        stderr = result.stderr or ""
-    except Exception:
-        return []

-    hit_keys = ("WIDTHTRUNC", "WIDTHEXPAND", "WIDTH")
-    out = []
-    for line in stderr.splitlines():
-        upper = line.upper()
-        if any(k in upper for k in hit_keys):
-            out.append(line.strip())
-    return out


 def _parse_width_warning_record(warning: str) -> dict | None:
@@ -1281,38 +1469,21 @@ bind {module_name} {module_name}_sby_check sby_inst (.*);
 '''
     return yosys_code

-def write_sby_config(design_name, use_sby_check: bool = True):
-    """Writes a default SBY config for the design.
-
-    Args:
-        design_name: Name of the design
-        use_sby_check: If True, use the Yosys-compatible _sby_check.sv file
-    """
-    import glob as _glob
     src_dir = f"{OPENLANE_ROOT}/designs/{design_name}/src"
-    formal_dir = f"{OPENLANE_ROOT}/designs/{design_name}/formal"
-    os.makedirs(formal_dir, exist_ok=True)
-    path = f"{formal_dir}/{design_name}.sby"
-
     sva_file = f"{design_name}_sby_check.sv" if use_sby_check else f"{design_name}_sva.sv"
     sva_abs = f"{src_dir}/{sva_file}"
-
-    # Glob all RTL files from src/ — same pattern as Verilator multi-file fix
-    # Exclude _sva.sv (raw LLM SVA — not Yosys-compatible) and _tb.v testbenches
     sva_raw = f"{src_dir}/{design_name}_sva.sv"
     rtl_files = sorted(
-        f for f in _glob.glob(os.path.join(src_dir, "*.v")) + _glob.glob(os.path.join(src_dir, "*.sv"))
-        if not f.endswith("_tb.v") and "regression" not in f
-        and f != sva_abs and f != sva_raw
     )
-    # Ensure the Yosys-compatible SVA check file is included
     if os.path.exists(sva_abs):
         rtl_files.append(sva_abs)
-
-    # Build [script] read commands and [files] entries from the globbed list
     read_cmds = "\n".join(f"read -formal {os.path.basename(f)}" for f in rtl_files)
-    files_entries = "\n".join(rtl_files)
-
     config = f"""[options]
 mode prove

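The SBY `[script]` section is assembled by mapping each globbed RTL file to a `read -formal` command, one per line. That assembly, isolated from the diff's `write_sby_config`:

```python
import os
from typing import List

def build_read_cmds(rtl_files: List[str]) -> str:
    """Turn a list of RTL paths into SBY [script] read commands, one per file."""
    return "\n".join(f"read -formal {os.path.basename(f)}" for f in rtl_files)
```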
@@ -1326,36 +1497,57 @@ prep -top {design_name}
 [files]
 {files_entries}
 """
-    with open(path, "w") as f:
-        f.write(config)
-    return path

 def run_formal_verification(design_name):
     """Runs SymbiYosys (SBY) for formal verification."""
-    formal_dir = f"{OPENLANE_ROOT}/designs/{design_name}/formal"
-    sby_file = f"{formal_dir}/{design_name}.sby"
-
-    if not os.path.exists(sby_file):
         return False, "SBY configuration file not found."

-    # Run SBY from formal/ directory to avoid polluting src/
-    sby_cmd = _resolve_binary(SBY_BIN)
-    try:
-        result = subprocess.run(
-            [sby_cmd, "-f", f"{design_name}.sby"],
-            cwd=formal_dir,
-            capture_output=True,
-            text=True,
-            timeout=600  # 10 minute timeout for formal verification
-        )
-        if result.returncode == 0:
-            return True, f"Formal Verification PASSED.\n{result.stdout}"
-        else:
-            return False, f"Formal Verification FAILED:\n{result.stdout}\n{result.stderr}"
-    except subprocess.TimeoutExpired:
-        return False, "Formal Verification timed out (>10 mins). Design may be too complex for bounded model checking."
-    except FileNotFoundError:
-        return False, "SymbiYosys (sby) tool not installed/found in path."

 def read_file_content(file_path: str):
     """
@@ -1608,9 +1800,14 @@ def run_tb_compile_gate(design_name: str, tb_path: str, rtl_path: str) -> Tuple[
         "design_name": design_name,
         "tb_path": tb_path,
         "rtl_path": rtl_path,
         "returncode": -1,
         "issue_categories": [],
         "diagnostics": [],
         "compile_output": "",
         "timeout": False,
         "fingerprint": "",
@@ -1627,87 +1824,86 @@ def run_tb_compile_gate(design_name: str, tb_path: str, rtl_path: str) -> Tuple[
         report["fingerprint"] = hashlib.sha256(report["compile_output"].encode("utf-8")).hexdigest()[:16]
         return False, report

-    import glob
     src_dir = os.path.dirname(rtl_path)
-    all_rtl = [f for f in glob.glob(os.path.join(src_dir, "*.v")) + glob.glob(os.path.join(src_dir, "*.sv"))
-               if not f.endswith("_tb.v") and "regression" not in f]
-    if rtl_path not in all_rtl and os.path.exists(rtl_path):
         all_rtl.append(rtl_path)

-    cmd = [
-        "verilator",
-        "--lint-only",
-        "--sv",
-        "--timing",
-        "-Wno-fatal",
-        *all_rtl,
-        tb_path,
-        "--top-module",
-        f"{design_name}_tb",
-    ]
-    report["command"] = cmd
-
-    try:
-        result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
-    except subprocess.TimeoutExpired:
-        report["timeout"] = True
-        report["compile_output"] = "TB compile gate timed out (>120s)."
-        report["issue_categories"] = ["compile_timeout"]
-        report["fingerprint"] = hashlib.sha256(report["compile_output"].encode("utf-8")).hexdigest()[:16]
-        return False, report
-    except FileNotFoundError:
-        report["compile_output"] = "Verilator binary not found."
-        report["issue_categories"] = ["verilator_missing"]
-        report["fingerprint"] = hashlib.sha256(report["compile_output"].encode("utf-8")).hexdigest()[:16]
-        return False, report
-
-    raw = ((result.stdout or "") + ("\n" + result.stderr if result.stderr else "")).strip()
-    report["returncode"] = result.returncode
-    report["compile_output"] = raw[:16000]
-
-    diag_lines: List[str] = []
-    for line in raw.splitlines():
-        s = line.strip()
-        if not s:
-            continue
-        if s.startswith("%Error") or s.startswith("%Warning") or "syntax error" in s.lower() or "Internal Error" in s:
-            diag_lines.append(s)
-    if not diag_lines:
-        diag_lines = [x.strip() for x in raw.splitlines() if x.strip()][:12]
-    report["diagnostics"] = diag_lines[:12]

-    categories = set()
-    low = raw.lower()
-    if result.returncode == 0:
-        categories.add("compile_ok")
-    else:
-        if "internal error" in low:
-            categories.add("parser_internal_state_error")
-        if "syntax error" in low:
-            categories.add("syntax_error")
-        if ("_if" in raw and ("unexpected IDENTIFIER" in raw or "expecting ')'" in raw)) or (
-            "unexpected identifier" in low and "expecting ')'" in low
-        ):
-            categories.add("interface_typing_error")
-        if "function new" in low and "_if" in low:
-            categories.add("constructor_interface_type_error")
-        if "covergroup" in low or "coverpoint" in low:
-            categories.add("covergroup_scope_error")
-        if "pin not found" in low or "pinnotfound" in low:
-            categories.add("pin_mismatch")
-        # Missing interface definition (e.g. UVM-lite fallback references _if not in design)
-        if "cannot find" in low and "interface" in low:
-            categories.add("missing_interface")
-        # Dotted references to missing interfaces (cascade from above)
-        if "dotted reference" in low and ("missing module" in low or "missing interface" in low):
-            categories.add("dotted_ref_missing_interface")
-    if not categories:
-        categories.add("compile_error")
-    report["issue_categories"] = sorted(categories)
-
-    fp_base = "|".join(report["issue_categories"]) + "|" + "\n".join(report["diagnostics"][:6])
-    report["fingerprint"] = hashlib.sha256(fp_base.encode("utf-8", errors="ignore")).hexdigest()[:16]
-    report["ok"] = result.returncode == 0

     # --- iverilog fallback ---
     # If Verilator rejects the TB (especially for interface/class issues),
@@ -1732,23 +1928,39 @@ def run_tb_compile_gate(design_name: str, tb_path: str, rtl_path: str) -> Tuple[

 def _iverilog_compile_tb(tb_path: str, rtl_path: str, design_name: str) -> Tuple[bool, str]:
     """Try compiling TB + RTL with iverilog as a Verilator fallback."""
-    import glob
     src_dir = os.path.dirname(rtl_path)
-    all_rtl = [f for f in glob.glob(os.path.join(src_dir, "*.v")) + glob.glob(os.path.join(src_dir, "*.sv"))
-               if not f.endswith("_tb.v") and "regression" not in f]
-    if rtl_path not in all_rtl and os.path.exists(rtl_path):
         all_rtl.append(rtl_path)
-    cmd = ["iverilog", "-g2012", "-Wall", "-o", "/dev/null", *all_rtl, tb_path]
-    try:
-        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
-        combined = (result.stdout + "\n" + result.stderr).strip()
-        if result.returncode == 0:
-            return True, f"iverilog compile OK: {combined[:500]}" if combined else "iverilog compile OK"
-        return False, f"iverilog compile failed:\n{combined[:2000]}"
-    except FileNotFoundError:
-        return False, "iverilog not found"
-    except subprocess.TimeoutExpired:
-        return False, "iverilog compile timed out"


 # ---------------------------------------------------------------------------
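The compile gate above hashed the sorted issue categories plus the first diagnostics into a short fingerprint, so repeated identical failures can be deduplicated across retries. The same idea in isolation:

```python
import hashlib
from typing import Iterable, List

def failure_fingerprint(categories: Iterable[str], diagnostics: List[str]) -> str:
    """Stable 16-hex-char fingerprint over issue categories and top diagnostics."""
    base = "|".join(sorted(categories)) + "|" + "\n".join(diagnostics[:6])
    return hashlib.sha256(base.encode("utf-8", errors="ignore")).hexdigest()[:16]
```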
@@ -2285,57 +2497,87 @@ def run_simulation(design_name: str) -> tuple:
         return False, f"RTL file not found: {rtl_file}"
     if not os.path.exists(tb_file):
         return False, f"Testbench file not found: {tb_file}"
-
-    import glob
-    rtl_files = [f for f in glob.glob(os.path.join(src_dir, "*.v")) + glob.glob(os.path.join(src_dir, "*.sv"))
-                 if not f.endswith("_tb.v") and "regression" not in f]

-    # Compile & Build using Verilator --binary
-    # --binary: Build a binary executable
-    # -j 0: Use all cores
-    # --timing: Enable timing support (essential for delays like #5)
-    # --assert: Enable assertions
-    cmd = [
-        "verilator",
-        "--binary",
-        "--sv",
-        "-j", "0",
-        "--timing",
-        "--assert",
-        "-Wno-fatal",  # Don't error out on warnings
-        *rtl_files, tb_file,
-        "--top-module", f"{design_name}_tb",
-        "--Mdir", obj_dir,
-        "-o", "sim_exec"
-    ]
-
-    try:
-        compile_result = subprocess.run(
-            cmd,
-            capture_output=True, text=True,
-            timeout=120
         )
-    except subprocess.TimeoutExpired:
-        return False, "Compilation timed out (>120s)."
-    except FileNotFoundError:
-        return False, "Verilator not found. Please install Verilator 5.0+."
-
-    if compile_result.returncode != 0:
-        return False, f"Verilator Compilation Failed:\n{compile_result.stderr}"
-
-    # Run the generated binary
-    sim_exec_path = f"{obj_dir}/sim_exec"
-    try:
-        run_result = subprocess.run(
-            [sim_exec_path],
-            capture_output=True,
-            text=True,
-            timeout=300
         )
-    except subprocess.TimeoutExpired:
-        return False, "Simulation Timed Out (Exceeded 300s). Infinite loop likely."
-
-    sim_text = (run_result.stdout or "") + ("\n" + run_result.stderr if run_result.stderr else "")

     if "TEST PASSED" in sim_text:
         return True, sim_text
@@ -2343,7 +2585,7 @@ def run_simulation(design_name: str) -> tuple:
     if "TEST FAILED" in sim_text:
         return False, sim_text

-    if run_result.returncode != 0:
         return False, f"Simulation Crashed:\n{sim_text}"

     return False, sim_text
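Simulation verdicts follow a marker convention: explicit `TEST PASSED` / `TEST FAILED` strings in the output win over the process exit code. A sketch of that classification as a pure function:

```python
from typing import Tuple

def classify_sim_output(sim_text: str, returncode: int) -> Tuple[bool, str]:
    """Markers first, exit code second; no marker and clean exit means FAIL by default."""
    if "TEST PASSED" in sim_text:
        return True, sim_text
    if "TEST FAILED" in sim_text:
        return False, sim_text
    if returncode != 0:
        return False, f"Simulation Crashed:\n{sim_text}"
    return False, sim_text
```

Defaulting to failure when no marker is present keeps the gate fail-closed.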
@@ -2524,33 +2766,70 @@ def run_gls_simulation(design_name: str) -> tuple:

     primitives_v = os.path.join(os.path.dirname(pdk_v_path), "primitives.v")

-    # Compile GLS
-    try:
-        cmd = ["iverilog", "-g2012", "-DFUNCTIONAL", "-DUNIT_DELAY=#1", "-o", sim_out, tb_file, gl_netlist, pdk_v_path, primitives_v]
-        compile_result = subprocess.run(
-            cmd,
-            capture_output=True, text=True,
-            timeout=300
-        )
-        if compile_result.returncode != 0:
-            return False, f"GLS Compilation failed:\n{compile_result.stderr}"
-    except subprocess.TimeoutExpired:
-        return False, "GLS Compilation timed out."

-    # Run GLS Simulation
-    try:
-        run_result = subprocess.run(
-            ["vvp", sim_out],
-            capture_output=True,
-            text=True,
-            timeout=600
-        )
-        sim_text = (run_result.stdout or "") + ("\n" + run_result.stderr if run_result.stderr else "")
-        if "TEST PASSED" in sim_text:
-            return True, f"GLS Simulation PASSED.\n{sim_text}"
-        return False, f"GLS Simulation FAILED or missing PASS marker.\n{sim_text}"
-    except subprocess.TimeoutExpired:
-        return False, "GLS Simulation Timed Out."


 def parse_eda_log_summary(log_path: str, kind: str, top_n: int = 10) -> Dict[str, Any]:
@@ -2802,6 +3081,12 @@ def get_coverage_thresholds(profile: str) -> Dict[str, float]:
 def _coverage_shell(design_name: str, backend: str, coverage_mode: str = "full_oss") -> Dict[str, Any]:
     return {
         "ok": False,
         "backend": backend,
         "coverage_mode": coverage_mode,
         "infra_failure": False,
@@ -2853,42 +3138,43 @@
     data = {"line_pct": 0.0, "toggle_pct": 0.0, "branch_pct": 0.0, "overall_pct": 0.0}
     if not os.path.exists(cov_dat):
         return data
-    annotate_dir = os.path.join(src_dir, "cov_annotate")
-    try:
-        os.makedirs(annotate_dir, exist_ok=True)
-        subprocess.run(
-            ["verilator_coverage", "--annotate", annotate_dir, cov_dat],
-            capture_output=True,
-            text=True,
-            timeout=60,
-        )
-    except Exception:
-        pass
-
-    total_points = 0
-    hit_points = 0
-    toggle_points = 0
-    toggle_hit = 0
-    if os.path.exists(annotate_dir):
-        for root, _, files in os.walk(annotate_dir):
-            for fname in files:
-                if not fname.endswith((".v", ".sv")):
-                    continue
-                with open(os.path.join(root, fname), "r", errors="ignore") as f:
-                    for line in f:
-                        s = line.strip()
-                        if not s:
-                            continue
-                        m = re.match(r"^(\d+)\s+", s)
-                        if m:
-                            total_points += 1
-                            if int(m.group(1)) > 0:
-                                hit_points += 1
-                        if s.startswith("%"):
-                            toggle_points += 1
-                            p = re.match(r"%0*(\d+)", s)
-                            if p and int(p.group(1)) > 0:
-                                toggle_hit += 1

     if total_points > 0:
         data["line_pct"] = round((hit_points / total_points) * 100.0, 2)
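`verilator_coverage --annotate` output prefixes each annotated source line with its hit count, so a leading integer marks a coverage point and a count above zero marks a hit. The counting loop from the removed parser, reduced to a pure function over a list of lines:

```python
import re
from typing import Iterable, Tuple

def count_annotate_points(lines: Iterable[str]) -> Tuple[int, int]:
    """Return (total_points, hit_points) for annotate-style lines."""
    total = hit = 0
    for line in lines:
        s = line.strip()
        m = re.match(r"^(\d+)\s+", s)
        if m:
            total += 1
            if int(m.group(1)) > 0:
                hit += 1
    return total, hit
```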
@@ -2903,12 +3189,9 @@ def _parse_verilator_coverage_dat(cov_dat: str, src_dir: str) -> Dict[str, float

 def run_verilator_coverage(design_name: str, rtl_file: str, tb_file: str, coverage_mode: str = "full_oss") -> Tuple[bool, str, Dict[str, Any]]:
     src_dir = os.path.dirname(rtl_file)
-    obj_dir = os.path.join(src_dir, "obj_dir_cov")
     sim_exec = "sim_cov_exec"
-    cov_dat = os.path.join(src_dir, "coverage.dat")
-    diag_path = os.path.join(src_dir, f"{design_name}_coverage_verilator.log")
     result = _coverage_shell(design_name, backend="verilator", coverage_mode=coverage_mode)
-    result["raw_diag_path"] = diag_path

     if not os.path.exists(rtl_file):
         result["infra_failure"] = True
@@ -2924,126 +3207,165 @@ def run_verilator_coverage(design_name: str, rtl_file: str, tb_file: str, covera
     signals, rtl_line_count = _read_rtl_signal_stats(rtl_file)
     result["total_signals"] = len(signals)
     signal_set = set(signals)
-
-    if os.path.exists(cov_dat):
         try:
-            os.remove(cov_dat)
-        except OSError:
-            pass
-
-    import glob
-    rtl_files = [f for f in glob.glob(os.path.join(src_dir, "*.v")) + glob.glob(os.path.join(src_dir, "*.sv"))
-                 if not f.endswith("_tb.v") and "regression" not in f]
-
-    compile_cmd = [
-        "verilator",
-        "--binary",
-        "--coverage",
-        "--sv",
-        "--timing",
-        "-Wno-fatal",
-        *rtl_files,
-        tb_file,
-        "--top-module",
-        f"{design_name}_tb",
-        "--Mdir",
-        obj_dir,
-        "-o",
-        sim_exec,
-    ]
-    run_cmd = [os.path.join(obj_dir, sim_exec), f"+verilator+coverage+file+{cov_dat}"]
-    try:
-        comp = subprocess.run(compile_cmd, capture_output=True, text=True, timeout=240, cwd=src_dir)
-    except FileNotFoundError:
-        result["infra_failure"] = True
-        result["error_kind"] = "tool_missing"
-        result["diagnostics"] = ["verilator binary not found."]
-        return False, result["diagnostics"][0], result
-    except subprocess.TimeoutExpired:
-        result["infra_failure"] = True
-        result["error_kind"] = "compile_timeout"
-        result["diagnostics"] = ["Verilator coverage compile timed out (>240s)."]
-        return False, result["diagnostics"][0], result
-
-    if comp.returncode != 0:
-        result["infra_failure"] = True
-        result["error_kind"] = "compile_error"
-        result["diagnostics"] = [x.strip() for x in (comp.stderr or comp.stdout or "").splitlines() if x.strip()][:12]
-        with open(diag_path, "w") as f:
-            f.write(f"COMMAND: {' '.join(compile_cmd)}\n\n{comp.stdout}\n{comp.stderr}\n")
-        return False, (comp.stderr or comp.stdout or "Verilator compile failed")[:1200], result
-
-    try:
-        run = subprocess.run(run_cmd, capture_output=True, text=True, timeout=300, cwd=src_dir)
-    except subprocess.TimeoutExpired:
-        result["infra_failure"] = True
-        result["error_kind"] = "run_timeout"
-        result["diagnostics"] = ["Verilator coverage simulation timed out (>300s)."]
-        return False, result["diagnostics"][0], result

-    sim_text = (run.stdout or "") + ("\n" + run.stderr if run.stderr else "")
-    sim_passed = "TEST PASSED" in sim_text
-    with open(diag_path, "w") as f:
-        f.write(f"COMPILE: {' '.join(compile_cmd)}\n")
-        f.write(f"RUN: {' '.join(run_cmd)}\n\n")
-        f.write(sim_text[:20000])

-    metrics = _parse_verilator_coverage_dat(cov_dat, src_dir)
-    if not os.path.exists(cov_dat):
-        result["infra_failure"] = True
-        result["error_kind"] = "parse_error"
-        result["diagnostics"] = ["coverage.dat not generated by Verilator run."]
         return sim_passed, sim_text, result

-    vcd_candidates = [
-        os.path.join(src_dir, f"{design_name}_cov.vcd"),
-        os.path.join(src_dir, f"{design_name}.vcd"),
-        os.path.join(src_dir, "dump.vcd"),
-    ]
-    toggled = 0
-    for vcd in vcd_candidates:
-        if os.path.exists(vcd):
-            toggled = max(toggled, _extract_vcd_toggles(vcd, signal_set))
-    result["signals_toggled"] = toggled
-
-    line_pct = metrics["line_pct"]
-    toggle_pct = metrics["toggle_pct"]
-    branch_pct = metrics["branch_pct"]
-    if toggle_pct <= 0.0 and result["total_signals"] > 0:
-        toggle_pct = round((toggled / result["total_signals"]) * 100.0, 2)
-    functional_pct = round((line_pct * 0.6 + toggle_pct * 0.4), 2) if sim_passed else round((line_pct * 0.3), 2)
-    assertion_pct = 100.0 if sim_passed else 0.0
-
-    result.update(
-        {
-            "ok": True,
-            "line_pct": max(0.0, min(100.0, line_pct)),
-            "branch_pct": max(0.0, min(100.0, branch_pct)),
-            "toggle_pct": max(0.0, min(100.0, toggle_pct)),
-            "functional_pct": max(0.0, min(100.0, functional_pct)),
-            "assertion_pct": assertion_pct,
-            "report_path": cov_dat,
-        }
-    )
-    if run.returncode != 0 and not sim_passed:
-        result["ok"] = False
-        result["infra_failure"] = True
-        result["error_kind"] = "run_error"
-        result["diagnostics"] = [x.strip() for x in sim_text.splitlines() if x.strip()][:10]
-    elif rtl_line_count > 0 and result["line_pct"] <= 0.0 and sim_passed:
-        result["ok"] = False
-        result["infra_failure"] = True
-        result["error_kind"] = "parse_error"
-        result["diagnostics"] = ["Coverage metrics are empty despite passing simulation."]
-    return sim_passed, sim_text, result
-

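When the native Verilator toggle metric comes back empty, the removed code fell back to VCD-observed signal activity over the total signal count. That fallback arithmetic as a small sketch:

```python
def toggle_fallback_pct(toggle_pct: float, toggled: int, total_signals: int) -> float:
    """Use the VCD-derived toggle ratio only when the native metric is empty."""
    if toggle_pct <= 0.0 and total_signals > 0:
        return round((toggled / total_signals) * 100.0, 2)
    return toggle_pct
```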
 def run_iverilog_coverage(design_name: str, rtl_file: str, tb_file: str, coverage_mode: str = "full_oss") -> Tuple[bool, str, Dict[str, Any]]:
     src_dir = os.path.dirname(rtl_file)
-    sim_out = os.path.join(src_dir, "sim_cov")
-    diag_path = os.path.join(src_dir, f"{design_name}_coverage_iverilog.log")
     result = _coverage_shell(design_name, backend="iverilog", coverage_mode=coverage_mode)
-    result["raw_diag_path"] = diag_path

     with open(tb_file, "r", errors="ignore") as f:
         tb_code = f.read()
@@ -3052,96 +3374,150 @@ def run_iverilog_coverage(design_name: str, rtl_file: str, tb_file: str, coverag
     result["total_signals"] = len(signals)
     signal_set = set(signals)

-    import glob
-    rtl_files = [f for f in glob.glob(os.path.join(src_dir, "*.v")) if not f.endswith("_tb.v") and "regression" not in f]
-
-    compile_cmd = ["iverilog", "-g2012", "-o", sim_out, *rtl_files, tb_file]
-    try:
-        comp = subprocess.run(compile_cmd, capture_output=True, text=True, timeout=120, cwd=src_dir)
-    except FileNotFoundError:
-        result["infra_failure"] = True
-        result["error_kind"] = "tool_missing"
-        result["diagnostics"] = ["iverilog binary not found."]
-        return False, result["diagnostics"][0], result
-    except subprocess.TimeoutExpired:
-        result["infra_failure"] = True
-        result["error_kind"] = "compile_timeout"
-        result["diagnostics"] = ["Icarus compile timed out (>120s)."]
-        return False, result["diagnostics"][0], result

-    if comp.returncode != 0:
-        result["infra_failure"] = True
-        result["error_kind"] = "compile_error"
-        result["diagnostics"] = [x.strip() for x in (comp.stderr or comp.stdout or "").splitlines() if x.strip()][:12]
-        if tb_style == "sv_class_based":
-            result["error_kind"] = "unsupported_tb_style"
-            result["diagnostics"].insert(0, "Class-based SV testbench is not supported by iVerilog coverage backend.")
-        with open(diag_path, "w") as f:
-            f.write(f"COMMAND: {' '.join(compile_cmd)}\n\n{comp.stdout}\n{comp.stderr}\n")
-        return False, (comp.stderr or comp.stdout or "Icarus compile failed")[:1200], result

-    try:
-        run = subprocess.run(["vvp", sim_out], capture_output=True, text=True, timeout=300, cwd=src_dir)
-    except subprocess.TimeoutExpired:
-        result["infra_failure"] = True
-        result["error_kind"] = "run_timeout"
-        result["diagnostics"] = ["Icarus simulation timed out (>300s)."]
-        return False, result["diagnostics"][0], result
-    except FileNotFoundError:
-        result["infra_failure"] = True
-        result["error_kind"] = "tool_missing"
-        result["diagnostics"] = ["vvp binary not found."]
-        return False, result["diagnostics"][0], result

-    sim_text = (run.stdout or "") + ("\n" + run.stderr if run.stderr else "")
-    sim_passed = "TEST PASSED" in sim_text
-    with open(diag_path, "w") as f:
-        f.write(sim_text[:20000])
-
-    toggled = 0
-    displayed_signals = set(re.findall(r'(\w+)\s*=\s*[0-9a-fxzXZhHbB_\']+', sim_text))
-    toggled = len(displayed_signals.intersection(signal_set))
-    vcd_candidates = [
-        os.path.join(src_dir, f"{design_name}_cov.vcd"),
-        os.path.join(src_dir, f"{design_name}.vcd"),
-        os.path.join(src_dir, "dump.vcd"),
-    ]
-    for vcd in vcd_candidates:
-        if os.path.exists(vcd):
-            toggled = max(toggled, _extract_vcd_toggles(vcd, signal_set))
-            break
-    result["signals_toggled"] = toggled
-
-    line_pct = 85.0 if sim_passed else 20.0
-    if result["total_signals"] > 0:
-        line_pct += (toggled / result["total_signals"]) * 15.0
-    line_pct = max(0.0, min(100.0, round(line_pct, 2)))
-    toggle_pct = round((toggled / result["total_signals"]) * 100.0, 2) if result["total_signals"] > 0 else 0.0
-    branch_pct = round(line_pct * 0.9, 2) if line_pct > 0 else 0.0
-    functional_pct = round((line_pct * 0.65 + toggle_pct * 0.35), 2) if sim_passed else round(line_pct * 0.3, 2)
-    assertion_pct = 100.0 if sim_passed else 0.0
-    result.update(
-        {
-            "ok": True,
-            "line_pct": line_pct,
-            "branch_pct": max(0.0, min(100.0, branch_pct)),
- "toggle_pct": max(0.0, min(100.0, toggle_pct)),
3129
- "functional_pct": max(0.0, min(100.0, functional_pct)),
3130
- "assertion_pct": assertion_pct,
3131
- "report_path": diag_path,
3132
- }
3133
- )
3134
- if rtl_line_count > 0 and line_pct <= 0.0 and sim_passed:
3135
- result["ok"] = False
3136
- result["infra_failure"] = True
3137
- result["error_kind"] = "parse_error"
3138
- result["diagnostics"] = ["Coverage estimate collapsed to zero despite passing simulation."]
3139
- if run.returncode != 0 and not sim_passed:
3140
- result["ok"] = False
3141
- result["infra_failure"] = True
3142
- result["error_kind"] = "run_error"
3143
- result["diagnostics"] = [x.strip() for x in sim_text.splitlines() if x.strip()][:10]
3144
- return sim_passed, sim_text, result
3145
 
3146
 
3147
  def run_simulation_with_coverage(
@@ -3450,49 +3826,63 @@ def run_cdc_check(file_path: str) -> tuple:
     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
-
-    cmd = [
-        "verilator", "--lint-only", "--timing",
-        "-Wall",
-        "-Wwarn-CDCRSTLOGIC",  # CDC reset logic warnings
-        file_path
-    ]
-
-    try:
-        result = subprocess.run(
-            cmd,
-            capture_output=True, text=True,
-            timeout=60
-        )
-
-        stderr = result.stderr or ""
-
-        # Filter for CDC-specific warnings
-        cdc_warnings = []
-        all_warnings = []
-        for line in stderr.split('\n'):
-            if line.strip():
-                all_warnings.append(line)
-                if any(kw in line.upper() for kw in ['CDC', 'CLOCK', 'DOMAIN', 'SYNC', 'METASTAB', 'CDCRSTLOGIC']):
-                    cdc_warnings.append(line)
-
-        if not cdc_warnings and result.returncode == 0:
-            return True, f"CDC Analysis: CLEAN (no clock domain crossing issues detected)\nFull lint output:\n{stderr[:1000]}"
-        elif cdc_warnings:
-            report = "CDC Analysis: WARNINGS FOUND\n\n"
-            report += "CDC-Related Issues:\n"
-            for w in cdc_warnings:
-                report += f"  - {w}\n"
-            report += f"\nTotal lint warnings: {len(all_warnings)}"
-            return False, report
-        else:
-            # Non-CDC lint errors
             return True, f"CDC Analysis: CLEAN (lint has non-CDC warnings)\n{stderr[:1000]}"
-
-    except FileNotFoundError:
-        return True, "Verilator not found (Skipping CDC Check)"
-    except subprocess.TimeoutExpired:
-        return False, "CDC check timed out."


 def generate_design_doc(design_name: str, spec: str = "", metrics: dict = None) -> str:
@@ -3839,4 +4229,3 @@ def parse_power_signoff(design_name: str) -> dict:
         return result
     except Exception:
         return result
-
 import json
 import hashlib
 import subprocess
+import tempfile
+import glob
 from collections import Counter, defaultdict, deque
 from typing import Dict, Any, List, Tuple
 import shutil
 
     return bin_hint


+def _build_tool_result(
+    tool: str,
+    *,
+    ok: bool,
+    result: str,
+    returncode: int = -1,
+    stdout: str = "",
+    stderr: str = "",
+    diagnostics: List[str] | None = None,
+    metrics: Dict[str, Any] | None = None,
+) -> Dict[str, Any]:
+    """Build the canonical structured tool result."""
+    return {
+        "ok": bool(ok),
+        "tool": tool,
+        "returncode": int(returncode),
+        "stdout": stdout or "",
+        "stderr": stderr or "",
+        "result": result,
+        "diagnostics": list(diagnostics or []),
+        "metrics": dict(metrics or {}),
+    }
+
+
+def _collect_design_rtl(src_dir: str, include_sv: bool = True) -> List[str]:
+    patterns = [os.path.join(src_dir, "*.v")]
+    if include_sv:
+        patterns.append(os.path.join(src_dir, "*.sv"))
+    rtl_files: List[str] = []
+    for pattern in patterns:
+        rtl_files.extend(glob.glob(pattern))
+    seen = set()
+    ordered = []
+    for path in sorted(rtl_files):
+        if path.endswith("_tb.v") or "regression" in path:
+            continue
+        if path in seen:
+            continue
+        seen.add(path)
+        ordered.append(path)
+    return ordered
+
+
+def _stage_inputs(tmpdir: str, paths: List[str]) -> Dict[str, str]:
+    """Copy required inputs to tmpdir and return original->staged mapping."""
+    staged: Dict[str, str] = {}
+    used_names: set[str] = set()
+    for path in paths:
+        if not path or not os.path.exists(path) or path in staged:
+            continue
+        base = os.path.basename(path)
+        stem, ext = os.path.splitext(base)
+        candidate = base
+        counter = 1
+        while candidate in used_names:
+            candidate = f"{stem}_{counter}{ext}"
+            counter += 1
+        used_names.add(candidate)
+        dst = os.path.join(tmpdir, candidate)
+        shutil.copy2(path, dst)
+        staged[path] = dst
+    return staged
+
+
+def _stage_path(path: str, staged_map: Dict[str, str]) -> str:
+    return staged_map.get(path, path)
+
+
+def _temp_roots_from_stage_map(staged_map: Dict[str, str]) -> Tuple[str, str]:
+    if not staged_map:
+        return "", ""
+    temp_root = os.path.commonpath([os.path.dirname(path) for path in staged_map.values()])
+    original_root = os.path.commonpath([os.path.dirname(path) for path in staged_map.keys()])
+    return temp_root, original_root
+
+
+def _rewrite_temp_paths(text: str, staged_map: Dict[str, str]) -> str:
+    """Rewrite temp paths in diagnostic text back to original source paths."""
+    if not text or not staged_map:
+        return text
+
+    rewritten = text
+    # Exact staged path replacement first.
+    for original, staged in sorted(staged_map.items(), key=lambda item: len(item[1]), reverse=True):
+        rewritten = rewritten.replace(staged, original)
+        staged_norm = os.path.normpath(staged)
+        if staged_norm != staged:
+            rewritten = rewritten.replace(staged_norm, original)
+        basename = os.path.basename(staged)
+        original_base = os.path.basename(original)
+        if basename == original_base:
+            basename_re = re.compile(rf"(?<![\w./-]){re.escape(basename)}(?=(?::\d)|\b)")
+            rewritten = basename_re.sub(original, rewritten)
+
+    temp_root, original_root = _temp_roots_from_stage_map(staged_map)
+    if temp_root and original_root:
+        rewritten = rewritten.replace(temp_root + os.sep, original_root + os.sep)
+        rewritten = rewritten.replace(temp_root, original_root)
+
+    return rewritten
+
+
+def _assert_no_temp_paths(text: str, staged_map: Dict[str, str]):
+    temp_root, _ = _temp_roots_from_stage_map(staged_map)
+    if temp_root and text and temp_root in text:
+        raise AssertionError(f"Temp path leak detected in tool diagnostics: {temp_root}")
+
+
+def _rewrite_result_paths(result_dict: Dict[str, Any], staged_map: Dict[str, str]) -> Dict[str, Any]:
+    """Sanitize tool result payloads so no temp paths leak upstream."""
+    sanitized = dict(result_dict)
+    for key in ("stdout", "stderr"):
+        value = sanitized.get(key, "")
+        if isinstance(value, str):
+            sanitized[key] = _rewrite_temp_paths(value, staged_map)
+            _assert_no_temp_paths(sanitized[key], staged_map)
+
+    diagnostics = sanitized.get("diagnostics", [])
+    if isinstance(diagnostics, list):
+        clean_diags = []
+        for entry in diagnostics:
+            if isinstance(entry, str):
+                clean = _rewrite_temp_paths(entry, staged_map)
+                _assert_no_temp_paths(clean, staged_map)
+                clean_diags.append(clean)
+            else:
+                clean_diags.append(entry)
+        sanitized["diagnostics"] = clean_diags
+    return sanitized
+
+
+def _promote_vcd_artifacts(tmpdir: str, src_dir: str):
+    for entry in os.listdir(tmpdir):
+        if not entry.endswith(".vcd"):
+            continue
+        src = os.path.join(tmpdir, entry)
+        dst = os.path.join(src_dir, entry)
+        try:
+            shutil.copy2(src, dst)
+        except OSError:
+            continue
+
+
+def _collect_diag_lines(raw: str, limit: int = 12) -> List[str]:
+    diag_lines: List[str] = []
+    for line in raw.splitlines():
+        s = line.strip()
+        if not s:
+            continue
+        if s.startswith("%Error") or s.startswith("%Warning") or "syntax error" in s.lower() or "internal error" in s.lower():
+            diag_lines.append(s)
+    if not diag_lines:
+        diag_lines = [x.strip() for x in raw.splitlines() if x.strip()]
+    return diag_lines[:limit]
+
+
 def startup_self_check() -> Dict[str, Any]:
     """Validate required tooling and environment before running the flow."""
     checks: List[Dict[str, Any]] = []
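The staging helpers in this hunk copy sources into a scratch directory under collision-free names, then rewrite any scratch paths in tool output back to the originals before results leave the wrapper. A minimal standalone sketch of that round trip (simplified: name mapping only, no file copying, and plain string replacement without the regex basename handling used above):

```python
import os

def stage_inputs(tmpdir: str, paths: list[str]) -> dict[str, str]:
    # Map each original path to a unique basename under tmpdir (no copying here).
    staged, used = {}, set()
    for path in paths:
        stem, ext = os.path.splitext(os.path.basename(path))
        candidate, n = stem + ext, 1
        while candidate in used:
            candidate = f"{stem}_{n}{ext}"
            n += 1
        used.add(candidate)
        staged[path] = os.path.join(tmpdir, candidate)
    return staged

def rewrite_temp_paths(text: str, staged_map: dict[str, str]) -> str:
    # Longest staged path first so nested/overlapping paths never partially match.
    for original, staged in sorted(staged_map.items(), key=lambda kv: len(kv[1]), reverse=True):
        text = text.replace(staged, original)
    return text

# Two files with the same basename get distinct staged names.
staged = stage_inputs("/tmp/x", ["/proj/a/top.v", "/proj/b/top.v"])
diag = f"%Error: {staged['/proj/b/top.v']}:3: syntax error"
print(rewrite_temp_paths(diag, staged))  # %Error: /proj/b/top.v:3: syntax error
```

The collision counter matters because Verilog designs routinely reuse basenames across directories, and the downstream diagnostics would otherwise point at the wrong source file.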
 
     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
+
+    src_dir = os.path.dirname(file_path)
+    rtl_files = _collect_design_rtl(src_dir)
+    if file_path not in rtl_files:
+        rtl_files.append(file_path)
+
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files)
+        cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wno-fatal"] + [
+            os.path.basename(_stage_path(path, staged_map)) for path in rtl_files
+        ]
+        try:
+            completed = subprocess.run(cmd, capture_output=True, text=True, timeout=60, cwd=tmpdir)
+            tool_result = _build_tool_result(
+                "verilator",
+                ok=completed.returncode == 0,
+                result="PASS" if completed.returncode == 0 else "FAIL",
+                returncode=completed.returncode,
+                stdout=completed.stdout,
+                stderr=completed.stderr,
+                diagnostics=_collect_diag_lines((completed.stderr or completed.stdout or "").strip()),
+                metrics={"mode": "syntax_check"},
+            )
+            tool_result = _rewrite_result_paths(tool_result, staged_map)
+            if tool_result["ok"]:
+                return True, "Syntax OK (Verilator)"
+            return False, f"Verilator Syntax Errors:\n{tool_result['stderr']}"
+        except subprocess.TimeoutExpired:
+            return False, "Syntax check timed out (>60s)."
+        except FileNotFoundError:
+            return False, "Verilator not found. Please install Verilator 5.0+."
 def run_lint_check(file_path: str) -> tuple:
     """

     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
+
     src_dir = os.path.dirname(file_path)
+    rtl_files = _collect_design_rtl(src_dir)
+    if file_path not in rtl_files:
         rtl_files.append(file_path)

     # --sv: force SystemVerilog parsing (critical for typedef, logic, always_comb)
     # -Wno-fatal: don't exit on warnings -- let us separate real errors from warnings
     # Suppress informational warnings that are not bugs:
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files)
+        cmd = [
+            "verilator", "--lint-only", "--sv", "--timing",
+            "-Wno-fatal",
+            "-Wno-UNUSED",
+            "-Wno-PINMISSING",
+            "-Wno-CASEINCOMPLETE",
+            "-Wno-WIDTHEXPAND",
+            "-Wno-WIDTHTRUNC",
+        ] + [os.path.basename(_stage_path(path, staged_map)) for path in rtl_files]
+
+        try:
+            completed = subprocess.run(cmd, capture_output=True, text=True, timeout=30, cwd=tmpdir)
+            tool_result = _build_tool_result(
+                "verilator",
+                ok=completed.returncode == 0,
+                result="PASS" if completed.returncode == 0 else "FAIL",
+                returncode=completed.returncode,
+                stdout=completed.stdout,
+                stderr=completed.stderr,
+                diagnostics=_collect_diag_lines((completed.stderr or completed.stdout or "").strip()),
+                metrics={"mode": "lint_check"},
+            )
+            tool_result = _rewrite_result_paths(tool_result, staged_map)
+            stderr = tool_result["stderr"].strip()
+
+            if tool_result["returncode"] == 0:
+                if stderr:
+                    has_latch = bool(re.search(r'%Warning-LATCH:', stderr))
+                    if has_latch:
+                        return False, f"Verilator Lint Errors:\n{stderr}"
+                    return True, f"Lint OK (with warnings):\n{stderr}"
+                return True, "Lint OK"
+
+            real_errors = [
+                line for line in stderr.splitlines()
+                if line.strip().startswith("%Error") and "Exiting due to" not in line
+            ]
+            if not real_errors:
+                iverilog_ok, iverilog_report = run_iverilog_lint(file_path)
+                if iverilog_ok:
+                    return True, f"Lint OK (Verilator warnings only, iverilog passed):\n{stderr}"
                 return False, f"Verilator Lint Errors:\n{stderr}\n\niverilog also failed:\n{iverilog_report}"
+
+            return False, f"Verilator Lint Errors:\n{stderr}"
+        except FileNotFoundError:
+            return True, "Verilator not found (Skipping Lint)"
+        except subprocess.TimeoutExpired:
+            return False, "Lint check timed out."
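The lint wrappers funnel raw tool output through `_collect_diag_lines`, which keeps only high-signal lines (Verilator's `%Error`/`%Warning` prefixes, plus syntax/internal errors) and falls back to all non-blank lines when nothing matches. A standalone sketch of that filter:

```python
def collect_diag_lines(raw: str, limit: int = 12) -> list[str]:
    # Prefer lines Verilator marks as errors or warnings; otherwise keep
    # every non-blank line so an unrecognized failure is never silently empty.
    keep = []
    for line in raw.splitlines():
        s = line.strip()
        if not s:
            continue
        if s.startswith(("%Error", "%Warning")) or "syntax error" in s.lower():
            keep.append(s)
    if not keep:
        keep = [l.strip() for l in raw.splitlines() if l.strip()]
    return keep[:limit]

out = collect_diag_lines("info line\n%Error: top.v:5: syntax error\n\n%Warning-LATCH: top.v:9\n")
print(out)  # ['%Error: top.v:5: syntax error', '%Warning-LATCH: top.v:9']
```

Capping at `limit` lines keeps the structured result small enough to feed back into an LLM repair loop without flooding the context.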
 


 def run_iverilog_lint(file_path: str) -> tuple:

     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
+
     src_dir = os.path.dirname(file_path)
+    rtl_files = _collect_design_rtl(src_dir, include_sv=False)
+    if file_path not in rtl_files:
         rtl_files.append(file_path)

+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files)
+        out_path = os.path.join(tmpdir, "iverilog_lint.out")
+        cmd = ["iverilog", "-g2012", "-Wall", "-o", out_path] + [
+            os.path.basename(_stage_path(path, staged_map)) for path in rtl_files
+        ]
+        try:
+            completed = subprocess.run(cmd, capture_output=True, text=True, timeout=30, cwd=tmpdir)
+            tool_result = _build_tool_result(
+                "iverilog",
+                ok=completed.returncode == 0,
+                result="PASS" if completed.returncode == 0 else "FAIL",
+                returncode=completed.returncode,
+                stdout=completed.stdout,
+                stderr=completed.stderr,
+                diagnostics=_collect_diag_lines(((completed.stdout or "") + "\n" + (completed.stderr or "")).strip()),
+                metrics={"mode": "lint_check"},
+            )
+            tool_result = _rewrite_result_paths(tool_result, staged_map)
+            combined = ((tool_result["stdout"] or "") + "\n" + (tool_result["stderr"] or "")).strip()
+            if tool_result["ok"]:
+                if combined:
+                    return True, f"iverilog OK (with warnings):\n{combined}"
+                return True, "iverilog OK"
+            return False, f"iverilog Lint Errors:\n{combined}"
+        except FileNotFoundError:
+            return False, "iverilog not found (install with: apt install iverilog)"
+        except subprocess.TimeoutExpired:
+            return False, "iverilog lint check timed out."
 

 def run_semantic_rigor_check(file_path: str) -> Tuple[bool, Dict[str, Any]]:
     """Deterministic semantic preflight for width-safety and port-shadowing."""
     report: Dict[str, Any] = {
         "ok": True,
+        "tool": "verilator",
+        "returncode": -1,
+        "stdout": "",
+        "stderr": "",
+        "result": "ERROR",
+        "diagnostics": [],
+        "metrics": {},
         "width_issues": [],
         "port_shadowing": [],
         "details": "",

         "signed",
         "truncat",
     )
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, [file_path])
+        staged_file = _stage_path(file_path, staged_map)
+        cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wall", os.path.basename(staged_file)]
+        try:
+            completed = subprocess.run(cmd, capture_output=True, text=True, timeout=60, cwd=tmpdir)
+            tool_result = _build_tool_result(
+                "verilator",
+                ok=completed.returncode == 0,
+                result="PASS" if completed.returncode == 0 else "FAIL",
+                returncode=completed.returncode,
+                stdout=completed.stdout,
+                stderr=completed.stderr,
+                diagnostics=_collect_diag_lines((completed.stderr or completed.stdout or "").strip()),
+                metrics={"mode": "semantic_rigor"},
+            )
+            tool_result = _rewrite_result_paths(tool_result, staged_map)
+            report.update(tool_result)
+            stderr = tool_result["stderr"] or ""
+            width_lines = []
+            for line in stderr.splitlines():
+                upper = line.upper()
+                if any(p.upper() in upper for p in width_patterns):
+                    width_lines.append(line.strip())
+            if width_lines:
+                report["width_issues"] = width_lines[:20]
+                report["details"] = "\n".join(width_lines[:20])
+            report["tool_result"] = tool_result
+        except Exception as exc:
+            report["details"] = f"Semantic width scan fallback triggered: {exc}"

     report["ok"] = not report["port_shadowing"] and not report["width_issues"]
     return report["ok"], report
 

 def _collect_width_warnings(file_path: str) -> List[str]:
     """Run Verilator -Wall and return only WIDTH-related warning lines."""
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, [file_path])
+        staged_file = _stage_path(file_path, staged_map)
+        cmd = ["verilator", "--lint-only", "--sv", "--timing", "-Wall", os.path.basename(staged_file)]
+        try:
+            result = subprocess.run(cmd, capture_output=True, text=True, timeout=60, cwd=tmpdir)
+            stderr = _rewrite_temp_paths(result.stderr or "", staged_map)
+            _assert_no_temp_paths(stderr, staged_map)
+        except Exception:
+            return []

+    hit_keys = ("WIDTHTRUNC", "WIDTHEXPAND", "WIDTH")
+    out = []
+    for line in stderr.splitlines():
+        upper = line.upper()
+        if any(k in upper for k in hit_keys):
+            out.append(line.strip())
+    return out


 def _parse_width_warning_record(warning: str) -> dict | None:
 
     '''
     return yosys_code


+def _render_sby_config(design_name: str, use_sby_check: bool = True) -> Tuple[str, List[str]]:
     src_dir = f"{OPENLANE_ROOT}/designs/{design_name}/src"
     sva_file = f"{design_name}_sby_check.sv" if use_sby_check else f"{design_name}_sva.sv"
     sva_abs = f"{src_dir}/{sva_file}"
     sva_raw = f"{src_dir}/{design_name}_sva.sv"
     rtl_files = sorted(
+        f for f in _collect_design_rtl(src_dir)
+        if f != sva_abs and f != sva_raw
     )
     if os.path.exists(sva_abs):
         rtl_files.append(sva_abs)
+
     read_cmds = "\n".join(f"read -formal {os.path.basename(f)}" for f in rtl_files)
+    files_entries = "\n".join(os.path.basename(f) for f in rtl_files)
     config = f"""[options]
 mode prove

 [files]
 {files_entries}
 """
+    return config, rtl_files
+
+
+def write_sby_config(design_name, use_sby_check: bool = True):
+    """Render the default SBY config for compatibility.
+
+    Args:
+        design_name: Name of the design
+        use_sby_check: If True, use the Yosys-compatible _sby_check.sv file
+    """
+    _render_sby_config(design_name, use_sby_check=use_sby_check)
+    return f"{OPENLANE_ROOT}/designs/{design_name}/formal/{design_name}.sby"
 
 def run_formal_verification(design_name):
     """Runs SymbiYosys (SBY) for formal verification."""
+    sby_cmd = _resolve_binary(SBY_BIN)
+    config_text, rtl_files = _render_sby_config(design_name, use_sby_check=True)
+    if not rtl_files:
         return False, "SBY configuration file not found."

+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files)
+        sby_file = os.path.join(tmpdir, f"{design_name}.sby")
+        with open(sby_file, "w") as f:
+            f.write(config_text)
+        try:
+            completed = subprocess.run(
+                [sby_cmd, "-f", os.path.basename(sby_file)],
+                cwd=tmpdir,
+                capture_output=True,
+                text=True,
+                timeout=600,
+            )
+            tool_result = _build_tool_result(
+                "sby",
+                ok=completed.returncode == 0,
+                result="PASS" if completed.returncode == 0 else "FAIL",
+                returncode=completed.returncode,
+                stdout=completed.stdout,
+                stderr=completed.stderr,
+                diagnostics=_collect_diag_lines(((completed.stdout or "") + "\n" + (completed.stderr or "")).strip()),
+                metrics={"mode": "formal_verification"},
+            )
+            tool_result = _rewrite_result_paths(tool_result, staged_map)
+            if tool_result["ok"]:
+                return True, f"Formal Verification PASSED.\n{tool_result['stdout']}"
+            return False, f"Formal Verification FAILED:\n{tool_result['stdout']}\n{tool_result['stderr']}"
+        except subprocess.TimeoutExpired:
+            return False, "Formal Verification timed out (>10 mins). Design may be too complex for bounded model checking."
+        except FileNotFoundError:
+            return False, "SymbiYosys (sby) tool not installed/found in path."

 def read_file_content(file_path: str):
     """
 
         "design_name": design_name,
         "tb_path": tb_path,
         "rtl_path": rtl_path,
+        "tool": "verilator",
         "returncode": -1,
+        "stdout": "",
+        "stderr": "",
+        "result": "ERROR",
         "issue_categories": [],
         "diagnostics": [],
+        "metrics": {},
         "compile_output": "",
         "timeout": False,
         "fingerprint": "",

         report["fingerprint"] = hashlib.sha256(report["compile_output"].encode("utf-8")).hexdigest()[:16]
         return False, report

     src_dir = os.path.dirname(rtl_path)
+    all_rtl = _collect_design_rtl(src_dir)
+    if rtl_path not in all_rtl:
         all_rtl.append(rtl_path)
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, all_rtl + [tb_path])
+        cmd = [
+            "verilator",
+            "--lint-only",
+            "--sv",
+            "--timing",
+            "-Wno-fatal",
+            *[os.path.basename(_stage_path(path, staged_map)) for path in all_rtl],
+            os.path.basename(_stage_path(tb_path, staged_map)),
+            "--top-module",
+            f"{design_name}_tb",
+        ]
+        report["command"] = cmd

+        try:
+            completed = subprocess.run(cmd, capture_output=True, text=True, timeout=120, cwd=tmpdir)
+        except subprocess.TimeoutExpired:
+            report["timeout"] = True
+            report["compile_output"] = "TB compile gate timed out (>120s)."
+            report["issue_categories"] = ["compile_timeout"]
+            report["fingerprint"] = hashlib.sha256(report["compile_output"].encode("utf-8")).hexdigest()[:16]
+            return False, report
+        except FileNotFoundError:
+            report["compile_output"] = "Verilator binary not found."
+            report["issue_categories"] = ["verilator_missing"]
+            report["fingerprint"] = hashlib.sha256(report["compile_output"].encode("utf-8")).hexdigest()[:16]
+            return False, report

+        tool_result = _build_tool_result(
+            "verilator",
+            ok=completed.returncode == 0,
+            result="PASS" if completed.returncode == 0 else "FAIL",
+            returncode=completed.returncode,
+            stdout=completed.stdout,
+            stderr=completed.stderr,
+            diagnostics=_collect_diag_lines(((completed.stdout or "") + "\n" + (completed.stderr or "")).strip()),
+            metrics={"mode": "tb_compile_gate"},
+        )
+        tool_result = _rewrite_result_paths(tool_result, staged_map)
+        raw = ((tool_result["stdout"] or "") + ("\n" + tool_result["stderr"] if tool_result["stderr"] else "")).strip()
+        report.update(tool_result)
+        report["returncode"] = tool_result["returncode"]
+        report["compile_output"] = raw[:16000]
+        report["diagnostics"] = list(tool_result["diagnostics"])[:12]
+
+        categories = set()
+        low = raw.lower()
+        if completed.returncode == 0:
+            categories.add("compile_ok")
+        else:
+            if "internal error" in low:
+                categories.add("parser_internal_state_error")
+            if "syntax error" in low:
+                categories.add("syntax_error")
+            if ("_if" in raw and ("unexpected IDENTIFIER" in raw or "expecting ')'" in raw)) or (
+                "unexpected identifier" in low and "expecting ')'" in low
+            ):
+                categories.add("interface_typing_error")
+            if "function new" in low and "_if" in low:
+                categories.add("constructor_interface_type_error")
+            if "covergroup" in low or "coverpoint" in low:
+                categories.add("covergroup_scope_error")
+            if "pin not found" in low or "pinnotfound" in low:
+                categories.add("pin_mismatch")
+            if "cannot find" in low and "interface" in low:
+                categories.add("missing_interface")
+            if "dotted reference" in low and ("missing module" in low or "missing interface" in low):
+                categories.add("dotted_ref_missing_interface")
+            if not categories:
+                categories.add("compile_error")
+        report["issue_categories"] = sorted(categories)
+
+        fp_base = "|".join(report["issue_categories"]) + "|" + "\n".join(report["diagnostics"][:6])
+        report["fingerprint"] = hashlib.sha256(fp_base.encode("utf-8", errors="ignore")).hexdigest()[:16]
+        report["ok"] = completed.returncode == 0

     # --- iverilog fallback ---
     # If Verilator rejects the TB (especially for interface/class issues),

 def _iverilog_compile_tb(tb_path: str, rtl_path: str, design_name: str) -> Tuple[bool, str]:
     """Try compiling TB + RTL with iverilog as a Verilator fallback."""
     src_dir = os.path.dirname(rtl_path)
+    all_rtl = _collect_design_rtl(src_dir)
+    if rtl_path not in all_rtl:
         all_rtl.append(rtl_path)
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, all_rtl + [tb_path])
+        out_path = os.path.join(tmpdir, f"{design_name}_tb_compile.out")
+        cmd = [
+            "iverilog", "-g2012", "-Wall", "-o", out_path,
+            *[os.path.basename(_stage_path(path, staged_map)) for path in all_rtl],
+            os.path.basename(_stage_path(tb_path, staged_map)),
+        ]
+        try:
+            completed = subprocess.run(cmd, capture_output=True, text=True, timeout=60, cwd=tmpdir)
+            tool_result = _build_tool_result(
+                "iverilog",
+                ok=completed.returncode == 0,
+                result="PASS" if completed.returncode == 0 else "FAIL",
+                returncode=completed.returncode,
+                stdout=completed.stdout,
+                stderr=completed.stderr,
+                diagnostics=_collect_diag_lines(((completed.stdout or "") + "\n" + (completed.stderr or "")).strip()),
+                metrics={"mode": "tb_compile_gate"},
+            )
+            tool_result = _rewrite_result_paths(tool_result, staged_map)
+            combined = ((tool_result["stdout"] or "") + "\n" + (tool_result["stderr"] or "")).strip()
+            if tool_result["ok"]:
+                return True, f"iverilog compile OK: {combined[:500]}" if combined else "iverilog compile OK"
+            return False, f"iverilog compile failed:\n{combined[:2000]}"
+        except FileNotFoundError:
+            return False, "iverilog not found"
+        except subprocess.TimeoutExpired:
+            return False, "iverilog compile timed out"


 # ---------------------------------------------------------------------------
 
         return False, f"RTL file not found: {rtl_file}"
     if not os.path.exists(tb_file):
         return False, f"Testbench file not found: {tb_file}"

+    rtl_files = _collect_design_rtl(src_dir)
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files + [tb_file])
+        cmd = [
+            "verilator",
+            "--binary",
+            "--sv",
+            "-j", "0",
+            "--timing",
+            "--trace",
+            "--assert",
+            "-Wno-fatal",
+            *[os.path.basename(_stage_path(path, staged_map)) for path in rtl_files],
+            os.path.basename(_stage_path(tb_file, staged_map)),
+            "--top-module", f"{design_name}_tb",
+            "--Mdir", "obj_dir",
+            "-o", "sim_exec",
+        ]
+
+        try:
+            compile_result = subprocess.run(
+                cmd,
+                capture_output=True,
+                text=True,
+                timeout=120,
+                cwd=tmpdir,
+            )
+        except subprocess.TimeoutExpired:
+            return False, "Compilation timed out (>120s)."
+        except FileNotFoundError:
+            return False, "Verilator not found. Please install Verilator 5.0+."
+
+        compile_tool = _rewrite_result_paths(
+            _build_tool_result(
+                "verilator",
+                ok=compile_result.returncode == 0,
+                result="PASS" if compile_result.returncode == 0 else "FAIL",
+                returncode=compile_result.returncode,
+                stdout=compile_result.stdout,
+                stderr=compile_result.stderr,
+                diagnostics=_collect_diag_lines((compile_result.stderr or compile_result.stdout or "").strip()),
+                metrics={"mode": "simulation_compile"},
+            ),
+            staged_map,
         )
+        if compile_result.returncode != 0:
+            return False, f"Verilator Compilation Failed:\n{compile_tool['stderr']}"
+
+        sim_exec_path = os.path.join(tmpdir, "obj_dir", "sim_exec")
+        try:
+            run_result = subprocess.run(
+                [sim_exec_path],
+                capture_output=True,
+                text=True,
+                timeout=300,
+                cwd=tmpdir,
+            )
+        except subprocess.TimeoutExpired:
+            return False, "Simulation Timed Out (Exceeded 300s). Infinite loop likely."
+
+        _promote_vcd_artifacts(tmpdir, src_dir)
+        promoted_wave = os.path.join(src_dir, f"{design_name}_wave.vcd")
+        run_tool = _rewrite_result_paths(
+            _build_tool_result(
+                "verilator",
+                ok=run_result.returncode == 0,
+                result="PASS" if run_result.returncode == 0 else "FAIL",
+                returncode=run_result.returncode,
+                stdout=run_result.stdout,
+                stderr=run_result.stderr,
+                diagnostics=_collect_diag_lines(((run_result.stdout or "") + "\n" + (run_result.stderr or "")).strip(), limit=20),
+                metrics={
+                    "mode": "simulation_run",
+                    "trace_enabled": True,
+                    "waveform_generated": os.path.exists(promoted_wave),
+                },
+            ),
+            staged_map,
         )
+        sim_text = (run_tool["stdout"] or "") + ("\n" + run_tool["stderr"] if run_tool["stderr"] else "")

         if "TEST PASSED" in sim_text:
             return True, sim_text

         if "TEST FAILED" in sim_text:
             return False, sim_text

+        if run_tool["returncode"] != 0:
             return False, f"Simulation Crashed:\n{sim_text}"

         return False, sim_text
 
     primitives_v = os.path.join(os.path.dirname(pdk_v_path), "primitives.v")
 
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, [tb_file, gl_netlist])
+        sim_out = os.path.join(tmpdir, "gls_sim")
+        try:
+            cmd = [
+                "iverilog", "-g2012", "-DFUNCTIONAL", "-DUNIT_DELAY=#1", "-o", sim_out,
+                os.path.basename(_stage_path(tb_file, staged_map)),
+                os.path.basename(_stage_path(gl_netlist, staged_map)),
+                pdk_v_path,
+                primitives_v,
+            ]
+            compile_result = subprocess.run(
+                cmd,
+                capture_output=True,
+                text=True,
+                timeout=300,
+                cwd=tmpdir,
+            )
+            compile_tool = _rewrite_result_paths(
+                _build_tool_result(
+                    "iverilog",
+                    ok=compile_result.returncode == 0,
+                    result="PASS" if compile_result.returncode == 0 else "FAIL",
+                    returncode=compile_result.returncode,
+                    stdout=compile_result.stdout,
+                    stderr=compile_result.stderr,
+                    diagnostics=_collect_diag_lines(((compile_result.stdout or "") + "\n" + (compile_result.stderr or "")).strip()),
+                    metrics={"mode": "gls_compile"},
+                ),
+                staged_map,
+            )
+            if compile_result.returncode != 0:
+                return False, f"GLS Compilation failed:\n{compile_tool['stderr']}"
+        except subprocess.TimeoutExpired:
+            return False, "GLS Compilation timed out."
 
+        try:
+            run_result = subprocess.run(
+                ["vvp", sim_out],
+                capture_output=True,
+                text=True,
+                timeout=600,
+                cwd=tmpdir,
+            )
+            _promote_vcd_artifacts(tmpdir, src_dir)
+            run_tool = _rewrite_result_paths(
+                _build_tool_result(
+                    "vvp",
+                    ok=run_result.returncode == 0,
+                    result="PASS" if run_result.returncode == 0 else "FAIL",
+                    returncode=run_result.returncode,
+                    stdout=run_result.stdout,
+                    stderr=run_result.stderr,
+                    diagnostics=_collect_diag_lines(((run_result.stdout or "") + "\n" + (run_result.stderr or "")).strip(), limit=20),
+                    metrics={"mode": "gls_run"},
+                ),
+                staged_map,
+            )
+            sim_text = (run_tool["stdout"] or "") + ("\n" + run_tool["stderr"] if run_tool["stderr"] else "")
+            if "TEST PASSED" in sim_text:
+                return True, f"GLS Simulation PASSED.\n{sim_text}"
+            return False, f"GLS Simulation FAILED or missing PASS marker.\n{sim_text}"
+        except subprocess.TimeoutExpired:
+            return False, "GLS Simulation Timed Out."
 
  def parse_eda_log_summary(log_path: str, kind: str, top_n: int = 10) -> Dict[str, Any]:
 
 def _coverage_shell(design_name: str, backend: str, coverage_mode: str = "full_oss") -> Dict[str, Any]:
     return {
         "ok": False,
+        "tool": backend,
+        "returncode": -1,
+        "stdout": "",
+        "stderr": "",
+        "result": "ERROR",
+        "metrics": {},
         "backend": backend,
         "coverage_mode": coverage_mode,
         "infra_failure": False,
 
     data = {"line_pct": 0.0, "toggle_pct": 0.0, "branch_pct": 0.0, "overall_pct": 0.0}
     if not os.path.exists(cov_dat):
         return data
+    with tempfile.TemporaryDirectory() as tmpdir:
+        annotate_dir = os.path.join(tmpdir, "cov_annotate")
+        try:
+            os.makedirs(annotate_dir, exist_ok=True)
+            subprocess.run(
+                ["verilator_coverage", "--annotate", annotate_dir, cov_dat],
+                capture_output=True,
+                text=True,
+                timeout=60,
+            )
+        except Exception:
+            pass
+
+        total_points = 0
+        hit_points = 0
+        toggle_points = 0
+        toggle_hit = 0
+        if os.path.exists(annotate_dir):
+            for root, _, files in os.walk(annotate_dir):
+                for fname in files:
+                    if not fname.endswith((".v", ".sv")):
+                        continue
+                    with open(os.path.join(root, fname), "r", errors="ignore") as f:
+                        for line in f:
+                            s = line.strip()
+                            if not s:
+                                continue
+                            m = re.match(r"^(\d+)\s+", s)
+                            if m:
+                                total_points += 1
+                                if int(m.group(1)) > 0:
+                                    hit_points += 1
+                            if s.startswith("%"):
+                                toggle_points += 1
+                                p = re.match(r"%0*(\d+)", s)
+                                if p and int(p.group(1)) > 0:
+                                    toggle_hit += 1
 
     if total_points > 0:
         data["line_pct"] = round((hit_points / total_points) * 100.0, 2)
 
 def run_verilator_coverage(design_name: str, rtl_file: str, tb_file: str, coverage_mode: str = "full_oss") -> Tuple[bool, str, Dict[str, Any]]:
     src_dir = os.path.dirname(rtl_file)
     sim_exec = "sim_cov_exec"
     result = _coverage_shell(design_name, backend="verilator", coverage_mode=coverage_mode)
+    result["raw_diag_path"] = ""
 
     if not os.path.exists(rtl_file):
         result["infra_failure"] = True
 
     signals, rtl_line_count = _read_rtl_signal_stats(rtl_file)
     result["total_signals"] = len(signals)
     signal_set = set(signals)
+    rtl_files = _collect_design_rtl(src_dir)
+
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files + [tb_file])
+        cov_dat = os.path.join(tmpdir, "coverage.dat")
+        compile_cmd = [
+            "verilator",
+            "--binary",
+            "--coverage",
+            "--trace",
+            "--sv",
+            "--timing",
+            "-Wno-fatal",
+            *[os.path.basename(_stage_path(path, staged_map)) for path in rtl_files],
+            os.path.basename(_stage_path(tb_file, staged_map)),
+            "--top-module",
+            f"{design_name}_tb",
+            "--Mdir",
+            "obj_dir_cov",
+            "-o",
+            sim_exec,
+        ]
+        run_cmd = [os.path.join(tmpdir, "obj_dir_cov", sim_exec), f"+verilator+coverage+file+{cov_dat}"]
         try:
+            comp = subprocess.run(compile_cmd, capture_output=True, text=True, timeout=240, cwd=tmpdir)
+        except FileNotFoundError:
+            result["infra_failure"] = True
+            result["error_kind"] = "tool_missing"
+            result["diagnostics"] = ["verilator binary not found."]
+            return False, result["diagnostics"][0], result
+        except subprocess.TimeoutExpired:
+            result["infra_failure"] = True
+            result["error_kind"] = "compile_timeout"
+            result["diagnostics"] = ["Verilator coverage compile timed out (>240s)."]
+            return False, result["diagnostics"][0], result
+
+        comp_tool = _rewrite_result_paths(
+            _build_tool_result(
+                "verilator",
+                ok=comp.returncode == 0,
+                result="PASS" if comp.returncode == 0 else "FAIL",
+                returncode=comp.returncode,
+                stdout=comp.stdout,
+                stderr=comp.stderr,
+                diagnostics=_collect_diag_lines((comp.stderr or comp.stdout or "").strip()),
+                metrics={"mode": "coverage_compile"},
+            ),
+            staged_map,
+        )
+        result.update(
+            {
+                "tool": comp_tool["tool"],
+                "returncode": comp_tool["returncode"],
+                "stdout": comp_tool["stdout"],
+                "stderr": comp_tool["stderr"],
+                "result": comp_tool["result"],
+                "metrics": dict(comp_tool["metrics"]),
+                "trace_enabled": True,
+            }
+        )
+        if comp.returncode != 0:
+            result["infra_failure"] = True
+            result["error_kind"] = "compile_error"
+            result["diagnostics"] = list(comp_tool["diagnostics"])[:12]
+            return False, ((comp_tool["stderr"] or comp_tool["stdout"] or "Verilator compile failed")[:1200]), result
+
+        try:
+            run = subprocess.run(run_cmd, capture_output=True, text=True, timeout=300, cwd=tmpdir)
+        except subprocess.TimeoutExpired:
+            result["infra_failure"] = True
+            result["error_kind"] = "run_timeout"
+            result["diagnostics"] = ["Verilator coverage simulation timed out (>300s)."]
+            return False, result["diagnostics"][0], result
+
+        _promote_vcd_artifacts(tmpdir, src_dir)
+        result["waveform_generated"] = os.path.exists(os.path.join(src_dir, f"{design_name}_wave.vcd"))
+        run_tool = _rewrite_result_paths(
+            _build_tool_result(
+                "verilator",
+                ok=run.returncode == 0,
+                result="PASS" if run.returncode == 0 else "FAIL",
+                returncode=run.returncode,
+                stdout=run.stdout,
+                stderr=run.stderr,
+                diagnostics=_collect_diag_lines(((run.stdout or "") + "\n" + (run.stderr or "")).strip(), limit=20),
+                metrics={"mode": "coverage_run"},
+            ),
+            staged_map,
+        )
+        sim_text = (run_tool["stdout"] or "") + ("\n" + run_tool["stderr"] if run_tool["stderr"] else "")
+        sim_passed = "TEST PASSED" in sim_text
+        result.update(
+            {
+                "tool": run_tool["tool"],
+                "returncode": run_tool["returncode"],
+                "stdout": run_tool["stdout"],
+                "stderr": run_tool["stderr"],
+                "result": "PASS" if sim_passed else ("FAIL" if run.returncode != 0 else "ERROR"),
+                "metrics": dict(run_tool["metrics"]),
+                "coverage_metrics_valid": False,
+            }
+        )
 
+        metrics = _parse_verilator_coverage_dat(cov_dat, tmpdir)
+        if not os.path.exists(cov_dat):
+            result["infra_failure"] = True
+            result["error_kind"] = "parse_error"
+            result["diagnostics"] = ["coverage.dat not generated by Verilator run."]
+            return sim_passed, sim_text, result
+
+        vcd_candidates = [
+            os.path.join(src_dir, f"{design_name}_cov.vcd"),
+            os.path.join(src_dir, f"{design_name}.vcd"),
+            os.path.join(src_dir, "dump.vcd"),
+        ]
+        toggled = 0
+        for vcd in vcd_candidates:
+            if os.path.exists(vcd):
+                toggled = max(toggled, _extract_vcd_toggles(vcd, signal_set))
+        result["signals_toggled"] = toggled
+
+        line_pct = metrics["line_pct"]
+        toggle_pct = metrics["toggle_pct"]
+        branch_pct = metrics["branch_pct"]
+        if toggle_pct <= 0.0 and result["total_signals"] > 0:
+            toggle_pct = round((toggled / result["total_signals"]) * 100.0, 2)
+        functional_pct = round((line_pct * 0.6 + toggle_pct * 0.4), 2) if sim_passed else round((line_pct * 0.3), 2)
+        assertion_pct = 100.0 if sim_passed else 0.0
+
+        result.update(
+            {
+                "ok": True,
+                "line_pct": max(0.0, min(100.0, line_pct)),
+                "branch_pct": max(0.0, min(100.0, branch_pct)),
+                "toggle_pct": max(0.0, min(100.0, toggle_pct)),
+                "functional_pct": max(0.0, min(100.0, functional_pct)),
+                "assertion_pct": assertion_pct,
+                "report_path": "",
+            }
+        )
+        if run.returncode != 0 and not sim_passed:
+            result["ok"] = False
+            result["infra_failure"] = True
+            result["error_kind"] = "run_error"
+            result["diagnostics"] = [x.strip() for x in sim_text.splitlines() if x.strip()][:10]
+        elif rtl_line_count > 0 and result["line_pct"] <= 0.0 and sim_passed:
+            result["ok"] = False
+            result["infra_failure"] = True
+            result["error_kind"] = "parse_error"
+            result["diagnostics"] = ["Coverage metrics are empty despite passing simulation."]
+        else:
+            result["coverage_metrics_valid"] = True
     return sim_passed, sim_text, result
3363
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
3364
 
 def run_iverilog_coverage(design_name: str, rtl_file: str, tb_file: str, coverage_mode: str = "full_oss") -> Tuple[bool, str, Dict[str, Any]]:
     src_dir = os.path.dirname(rtl_file)
     result = _coverage_shell(design_name, backend="iverilog", coverage_mode=coverage_mode)
+    result["raw_diag_path"] = ""
 
     with open(tb_file, "r", errors="ignore") as f:
         tb_code = f.read()
 
     result["total_signals"] = len(signals)
     signal_set = set(signals)
 
+    rtl_files = _collect_design_rtl(src_dir, include_sv=False)
 
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, rtl_files + [tb_file])
+        sim_out = os.path.join(tmpdir, "sim_cov")
+        compile_cmd = [
+            "iverilog", "-g2012", "-o", sim_out,
+            *[os.path.basename(_stage_path(path, staged_map)) for path in rtl_files],
+            os.path.basename(_stage_path(tb_file, staged_map)),
+        ]
+        try:
+            comp = subprocess.run(compile_cmd, capture_output=True, text=True, timeout=120, cwd=tmpdir)
+        except FileNotFoundError:
+            result["infra_failure"] = True
+            result["error_kind"] = "tool_missing"
+            result["diagnostics"] = ["iverilog binary not found."]
+            return False, result["diagnostics"][0], result
+        except subprocess.TimeoutExpired:
+            result["infra_failure"] = True
+            result["error_kind"] = "compile_timeout"
+            result["diagnostics"] = ["Icarus compile timed out (>120s)."]
+            return False, result["diagnostics"][0], result
+
+        comp_tool = _rewrite_result_paths(
+            _build_tool_result(
+                "iverilog",
+                ok=comp.returncode == 0,
+                result="PASS" if comp.returncode == 0 else "FAIL",
+                returncode=comp.returncode,
+                stdout=comp.stdout,
+                stderr=comp.stderr,
+                diagnostics=_collect_diag_lines(((comp.stdout or "") + "\n" + (comp.stderr or "")).strip()),
+                metrics={"mode": "coverage_compile"},
+            ),
+            staged_map,
+        )
+        result.update(
+            {
+                "tool": comp_tool["tool"],
+                "returncode": comp_tool["returncode"],
+                "stdout": comp_tool["stdout"],
+                "stderr": comp_tool["stderr"],
+                "result": comp_tool["result"],
+                "metrics": dict(comp_tool["metrics"]),
+                "trace_enabled": True,
+            }
+        )
+        if comp.returncode != 0:
+            result["infra_failure"] = True
+            result["error_kind"] = "compile_error"
+            result["diagnostics"] = list(comp_tool["diagnostics"])[:12]
+            if tb_style == "sv_class_based":
+                result["error_kind"] = "unsupported_tb_style"
+                result["diagnostics"].insert(0, "Class-based SV testbench is not supported by iVerilog coverage backend.")
+            return False, ((comp_tool["stderr"] or comp_tool["stdout"] or "Icarus compile failed")[:1200]), result
 
+        try:
+            run = subprocess.run(["vvp", sim_out], capture_output=True, text=True, timeout=300, cwd=tmpdir)
+        except subprocess.TimeoutExpired:
+            result["infra_failure"] = True
+            result["error_kind"] = "run_timeout"
+            result["diagnostics"] = ["Icarus simulation timed out (>300s)."]
+            return False, result["diagnostics"][0], result
+        except FileNotFoundError:
+            result["infra_failure"] = True
+            result["error_kind"] = "tool_missing"
+            result["diagnostics"] = ["vvp binary not found."]
+            return False, result["diagnostics"][0], result
+
+        _promote_vcd_artifacts(tmpdir, src_dir)
+        result["waveform_generated"] = os.path.exists(os.path.join(src_dir, f"{design_name}_wave.vcd"))
+        run_tool = _rewrite_result_paths(
+            _build_tool_result(
+                "iverilog",
+                ok=run.returncode == 0,
+                result="PASS" if run.returncode == 0 else "FAIL",
+                returncode=run.returncode,
+                stdout=run.stdout,
+                stderr=run.stderr,
+                diagnostics=_collect_diag_lines(((run.stdout or "") + "\n" + (run.stderr or "")).strip(), limit=20),
+                metrics={"mode": "coverage_run"},
+            ),
+            staged_map,
+        )
+        sim_text = (run_tool["stdout"] or "") + ("\n" + run_tool["stderr"] if run_tool["stderr"] else "")
+        sim_passed = "TEST PASSED" in sim_text
+        result.update(
+            {
+                "tool": run_tool["tool"],
+                "returncode": run_tool["returncode"],
+                "stdout": run_tool["stdout"],
+                "stderr": run_tool["stderr"],
+                "result": "PASS" if sim_passed else ("FAIL" if run.returncode != 0 else "ERROR"),
+                "metrics": dict(run_tool["metrics"]),
+                "coverage_metrics_valid": False,
+            }
+        )
 
+        toggled = 0
+        displayed_signals = set(re.findall(r'(\w+)\s*=\s*[0-9a-fxzXZhHbB_\']+', sim_text))
+        toggled = len(displayed_signals.intersection(signal_set))
+        vcd_candidates = [
+            os.path.join(src_dir, f"{design_name}_cov.vcd"),
+            os.path.join(src_dir, f"{design_name}.vcd"),
+            os.path.join(src_dir, "dump.vcd"),
+        ]
+        for vcd in vcd_candidates:
+            if os.path.exists(vcd):
+                toggled = max(toggled, _extract_vcd_toggles(vcd, signal_set))
+                break
+        result["signals_toggled"] = toggled
+
+        line_pct = 85.0 if sim_passed else 20.0
+        if result["total_signals"] > 0:
+            line_pct += (toggled / result["total_signals"]) * 15.0
+        line_pct = max(0.0, min(100.0, round(line_pct, 2)))
+        toggle_pct = round((toggled / result["total_signals"]) * 100.0, 2) if result["total_signals"] > 0 else 0.0
+        branch_pct = round(line_pct * 0.9, 2) if line_pct > 0 else 0.0
+        functional_pct = round((line_pct * 0.65 + toggle_pct * 0.35), 2) if sim_passed else round(line_pct * 0.3, 2)
+        assertion_pct = 100.0 if sim_passed else 0.0
+        result.update(
+            {
+                "ok": True,
+                "line_pct": line_pct,
+                "branch_pct": max(0.0, min(100.0, branch_pct)),
+                "toggle_pct": max(0.0, min(100.0, toggle_pct)),
+                "functional_pct": max(0.0, min(100.0, functional_pct)),
+                "assertion_pct": assertion_pct,
+                "report_path": "",
+            }
+        )
+        if rtl_line_count > 0 and line_pct <= 0.0 and sim_passed:
+            result["ok"] = False
+            result["infra_failure"] = True
+            result["error_kind"] = "parse_error"
+            result["diagnostics"] = ["Coverage estimate collapsed to zero despite passing simulation."]
+        else:
+            result["coverage_metrics_valid"] = True
+        if run.returncode != 0 and not sim_passed:
+            result["ok"] = False
+            result["infra_failure"] = True
+            result["error_kind"] = "run_error"
+            result["diagnostics"] = [x.strip() for x in sim_text.splitlines() if x.strip()][:10]
+        return sim_passed, sim_text, result
 
 
 
 def run_simulation_with_coverage(
 
     """
     if not os.path.exists(file_path):
         return False, f"File not found: {file_path}"
+
+    with tempfile.TemporaryDirectory() as tmpdir:
+        staged_map = _stage_inputs(tmpdir, [file_path])
+        staged_file = _stage_path(file_path, staged_map)
+        cmd = [
+            "verilator", "--lint-only", "--timing",
+            "-Wall",
+            "-Wwarn-CDCRSTLOGIC",
+            os.path.basename(staged_file),
+        ]
+
+        try:
+            completed = subprocess.run(
+                cmd,
+                capture_output=True,
+                text=True,
+                timeout=60,
+                cwd=tmpdir,
+            )
+            tool_result = _rewrite_result_paths(
+                _build_tool_result(
+                    "verilator",
+                    ok=completed.returncode == 0,
+                    result="PASS" if completed.returncode == 0 else "FAIL",
+                    returncode=completed.returncode,
+                    stdout=completed.stdout,
+                    stderr=completed.stderr,
+                    diagnostics=_collect_diag_lines((completed.stderr or completed.stdout or "").strip(), limit=20),
+                    metrics={"mode": "cdc_check"},
+                ),
+                staged_map,
+            )
+            stderr = tool_result["stderr"] or ""
+
+            cdc_warnings = []
+            all_warnings = []
+            for line in stderr.split('\n'):
+                if line.strip():
+                    all_warnings.append(line)
+                    if any(kw in line.upper() for kw in ['CDC', 'CLOCK', 'DOMAIN', 'SYNC', 'METASTAB', 'CDCRSTLOGIC']):
+                        cdc_warnings.append(line)
+
+            if not cdc_warnings and completed.returncode == 0:
+                return True, f"CDC Analysis: CLEAN (no clock domain crossing issues detected)\nFull lint output:\n{stderr[:1000]}"
+            if cdc_warnings:
+                report = "CDC Analysis: WARNINGS FOUND\n\n"
+                report += "CDC-Related Issues:\n"
+                for warning in cdc_warnings:
+                    report += f" - {warning}\n"
+                report += f"\nTotal lint warnings: {len(all_warnings)}"
+                return False, report
             return True, f"CDC Analysis: CLEAN (lint has non-CDC warnings)\n{stderr[:1000]}"
+
+        except FileNotFoundError:
+            return True, "Verilator not found (Skipping CDC Check)"
+        except subprocess.TimeoutExpired:
+            return False, "CDC check timed out."
 
 
 def generate_design_doc(design_name: str, spec: str = "", metrics: dict = None) -> str:
 
         return result
     except Exception:
         return result
 
web/src/pages/DesignStudio.tsx CHANGED
@@ -48,6 +48,7 @@ export const DesignStudio = () => {
 
   // Build Options
   const [skipOpenlane, setSkipOpenlane] = useState(false);
+  const [skipCoverage, setSkipCoverage] = useState(false);
   const [showAdvanced, setShowAdvanced] = useState(false);
   const [fullSignoff, setFullSignoff] = useState(false);
   const [maxRetries, setMaxRetries] = useState(5);
@@ -83,6 +84,7 @@ export const DesignStudio = () => {
         design_name: designName || slugify(prompt),
         description: prompt,
         skip_openlane: skipOpenlane,
+        skip_coverage: skipCoverage,
         full_signoff: fullSignoff,
         max_retries: maxRetries,
         show_thinking: showThinking,
@@ -357,6 +359,10 @@ export const DesignStudio = () => {
           </div>
 
           <div style={{ display: 'flex', flexWrap: 'wrap', gap: '1.5rem', marginTop: '0.5rem', background: 'var(--bg)', padding: '1rem', borderRadius: 'var(--radius)', border: '1px solid var(--border-mid)' }}>
+            <label className="toggle-label" style={{ display: 'flex', alignItems: 'center' }}>
+              <input type="checkbox" checked={skipCoverage} onChange={e => setSkipCoverage(e.target.checked)} />
+              <span style={{ marginLeft: '0.5rem', color: 'var(--text)', fontWeight: 500 }}>Skip Coverage</span>
+            </label>
             <label className="toggle-label" style={{ display: 'flex', alignItems: 'center' }}>
               <input type="checkbox" checked={fullSignoff} onChange={e => setFullSignoff(e.target.checked)} />
               <span style={{ marginLeft: '0.5rem', color: 'var(--text)', fontWeight: 500 }}>Full Signoff</span>
web/src/pages/HumanInLoopBuild.tsx CHANGED
@@ -96,6 +96,7 @@ export const HumanInLoopBuild = () => {
96
 
97
  // Build options
98
  const [skipOpenlane, setSkipOpenlane] = useState(false);
 
99
  const [showAdvanced, setShowAdvanced] = useState(false);
100
  const [maxRetries, setMaxRetries] = useState(5);
101
  const [showThinking, setShowThinking] = useState(false);
@@ -133,11 +134,13 @@ export const HumanInLoopBuild = () => {
133
  setError('');
134
  // Quick RTL mode implies skip_openlane
135
  const effectiveSkipOpenlane = buildMode === 'quick' || skipOpenlane;
 
136
  try {
137
  const res = await axios.post(`${API}/build`, {
138
  design_name: designName || slugify(prompt),
139
  description: prompt,
140
  skip_openlane: effectiveSkipOpenlane,
 
141
  max_retries: maxRetries,
142
  show_thinking: showThinking,
143
  min_coverage: minCoverage,
@@ -354,6 +357,7 @@ export const HumanInLoopBuild = () => {
354
  setThinkingData(null);
355
  setBuildMode('verified');
356
  setSkipStages(new Set(BUILD_MODE_SKIPS.verified));
 
357
  setShowStageToggles(false);
358
  setPartialArtifacts([]);
359
  setShowFullLog(false);
@@ -491,6 +495,10 @@ export const HumanInLoopBuild = () => {
491
  <input type="checkbox" checked={skipOpenlane} onChange={e => setSkipOpenlane(e.target.checked)} />
492
  <span>Skip OpenLane (RTL + Verify only)</span>
493
  </label>
 
 
 
 
494
  <button
495
  className="hitl-advanced-toggle"
496
  onClick={() => setShowAdvanced(!showAdvanced)}
 
96
 
97
  // Build options
98
  const [skipOpenlane, setSkipOpenlane] = useState(false);
99
+ const [skipCoverage, setSkipCoverage] = useState(false);
100
  const [showAdvanced, setShowAdvanced] = useState(false);
101
  const [maxRetries, setMaxRetries] = useState(5);
102
  const [showThinking, setShowThinking] = useState(false);
 
134
  setError('');
135
  // Quick RTL mode implies skip_openlane
136
  const effectiveSkipOpenlane = buildMode === 'quick' || skipOpenlane;
137
+ const effectiveSkipCoverage = skipCoverage || skipStages.has('COVERAGE_CHECK');
138
  try {
139
  const res = await axios.post(`${API}/build`, {
140
  design_name: designName || slugify(prompt),
141
  description: prompt,
142
  skip_openlane: effectiveSkipOpenlane,
143
+ skip_coverage: effectiveSkipCoverage,
144
  max_retries: maxRetries,
145
  show_thinking: showThinking,
146
  min_coverage: minCoverage,
 
357
  setThinkingData(null);
358
  setBuildMode('verified');
359
  setSkipStages(new Set(BUILD_MODE_SKIPS.verified));
360
+ setSkipCoverage(false);
361
  setShowStageToggles(false);
362
  setPartialArtifacts([]);
363
  setShowFullLog(false);
 
495
  <input type="checkbox" checked={skipOpenlane} onChange={e => setSkipOpenlane(e.target.checked)} />
496
  <span>Skip OpenLane (RTL + Verify only)</span>
497
  </label>
498
+ <label className="hitl-toggle">
499
+ <input type="checkbox" checked={skipCoverage} onChange={e => setSkipCoverage(e.target.checked)} />
500
+ <span>Skip Coverage</span>
501
+ </label>
502
  <button
503
  className="hitl-advanced-toggle"
504
  onClick={() => setShowAdvanced(!showAdvanced)}