Commit 1754798 by tostido · 1 parent: a279232

feat(diagnostics): Add code bug exposure system v0.6.0


NEW: cascade.diagnostics module
- BugDetector: Static analysis for Python code
- CodeTracer: Runtime execution tracing with causation chains
- ExecutionMonitor: Live monitoring with anomaly detection
- DiagnosticEngine: Unified reporting with markdown output

Bug patterns detected:
- Division by zero, null pointer access, infinite loops (Critical)
- Bare except, resource leaks, race conditions (High)
- Unused variables, dead code, type mismatches (Medium)
- Style issues, naming conventions (Low)

Usage:
from cascade.diagnostics import diagnose, BugDetector
report = diagnose('path/to/code.py')
issues = BugDetector().scan_directory('./project')

Also: updated README with diagnostics docs and added keywords to pyproject.toml
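The bare-except pattern flagged above can be reproduced with nothing but the stdlib `ast` module; this is a standalone sketch of the idea, not the cascade-lattice implementation itself:

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Return line numbers of bare 'except:' clauses in Python source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # -> [4]: the bare except is on line 4 of the snippet
```

The same walk-and-filter shape extends to any of the other AST-based patterns listed above.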

README.md CHANGED
@@ -1,6 +1,6 @@
 # cascade-lattice
 
-**Universal AI provenance + inference intervention. See what AI sees. Choose what AI chooses.**
+**Universal AI provenance + inference intervention + code diagnostics. See what AI sees. Choose what AI chooses. Find bugs before they find you.**
 
 [![PyPI](https://img.shields.io/pypi/v/cascade-lattice.svg)](https://pypi.org/project/cascade-lattice/)
 [![Python](https://img.shields.io/pypi/pyversions/cascade-lattice.svg)](https://pypi.org/project/cascade-lattice/)
@@ -66,6 +66,45 @@ resolution = hold.yield_point(
 action = resolution.action
 ```
 
+### 3. DIAGNOSE - Find bugs before they find you
+
+```python
+from cascade.diagnostics import diagnose, BugDetector
+
+# Quick one-liner analysis
+report = diagnose("path/to/your/code.py")
+print(report)  # Markdown-formatted bug report
+
+# Deep scan a whole project
+detector = BugDetector()
+issues = detector.scan_directory("./my_project")
+
+for issue in issues:
+    print(f"[{issue.severity}] {issue.file}:{issue.line}")
+    print(f"  {issue.message}")
+    print(f"  Pattern: {issue.pattern.name}")
+```
+
+**What it catches:**
+- 🔴 **Critical**: Division by zero, null pointer access, infinite loops
+- 🟠 **High**: Bare except clauses, resource leaks, race conditions
+- 🟡 **Medium**: Unused variables, dead code, type mismatches
+- 🔵 **Low**: Style issues, naming conventions, complexity warnings
+
+**Runtime tracing:**
+```python
+from cascade.diagnostics import CodeTracer
+
+tracer = CodeTracer()
+
+@tracer.trace
+def my_function(x):
+    return x / (x - 1)  # Potential div by zero when x=1
+
+# After execution, trace root causes
+tracer.find_root_causes("error_event_id")
+```
+
 ---
 
 ## Quick Start
cascade/__init__.py CHANGED
@@ -34,7 +34,7 @@ Quick Start:
 >>> monitor.trace_forwards("learning_rate_spike")
 """
 
-__version__ = "0.5.8"
+__version__ = "0.6.0"
 __author__ = "Cascade Team"
 __license__ = "MIT"
 
@@ -242,6 +242,9 @@ from cascade.hold import (
     ArcadeFeedback,
 )
 
+# DIAGNOSTICS - Code Bug Exposure System
+from cascade import diagnostics
+
 
 __all__ = [
     # SDK - Primary Interface
@@ -286,5 +289,7 @@ __all__ = [
     "HoldSession",
     "ArcadeFeedback",
     "hold_module",
+    # Diagnostics - Code Bug Exposure
+    "diagnostics",
     "__version__",
 ]
cascade/diagnostics/__init__.py ADDED
@@ -0,0 +1,73 @@
+"""
+CASCADE DIAGNOSTICS - Code Bug Exposure System
+
+A novel application of cascade-lattice: instead of tracing AI inference,
+we trace CODE EXECUTION to expose bugs - known and unknown.
+
+Core Insight:
+- cascade-lattice traces causation chains (what caused what)
+- Forensics module extracts artifacts (evidence of processing)
+- System module ingests and analyzes repositories
+- Monitor adapts to ANY signal format (symbiotic)
+
+For DEBUGGING, we repurpose these:
+- Events = Code execution points, function calls, variable states
+- CausationLinks = Control flow, data dependencies
+- Artifacts = Bug signatures, anomaly patterns
+- GhostLog = Inferred sequence of execution failures
+- Tracer = Backtrack from crash/bug to root cause
+
+This creates a "debugger on steroids" that:
+1. OBSERVES code execution at any granularity
+2. TRACES causation chains to find root causes
+3. EXPOSES hidden bugs through pattern recognition
+4. PREDICTS cascading failures before they complete
+
+Usage:
+    from cascade.diagnostics import diagnose, CodeTracer, BugDetector
+
+    # Quick analysis of a file
+    report = diagnose("path/to/file.py")
+    print(report)
+
+    # Trace function execution
+    tracer = CodeTracer()
+
+    @tracer.trace
+    def my_function(x):
+        return x * 2
+
+    # After execution, find root causes
+    tracer.find_root_causes(error_event_id)
+
+    # Static bug detection
+    detector = BugDetector()
+    issues = detector.scan_directory("./my_project")
+"""
+
+from cascade.diagnostics.code_tracer import CodeTracer, CodeEvent, BugSignature
+from cascade.diagnostics.bug_detector import BugDetector, BugPattern, DetectedIssue
+from cascade.diagnostics.execution_monitor import ExecutionMonitor, ExecutionFrame, Anomaly, monitor
+from cascade.diagnostics.report import DiagnosticReport, DiagnosticFinding, DiagnosticEngine, diagnose
+
+__all__ = [
+    # Main classes
+    "CodeTracer",
+    "BugDetector",
+    "ExecutionMonitor",
+    "DiagnosticReport",
+    "DiagnosticEngine",
+
+    # Data classes
+    "CodeEvent",
+    "BugSignature",
+    "BugPattern",
+    "DetectedIssue",
+    "ExecutionFrame",
+    "Anomaly",
+    "DiagnosticFinding",
+
+    # Convenience
+    "diagnose",
+    "monitor",
+]
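The `CodeTracer.trace` decorator exported above boils down to recording an entry event and an exit event around each call. A minimal stdlib-only sketch of that idea (the `events` list and `trace` name here are illustrative, not the cascade API):

```python
import functools

events = []  # (kind, function_name, payload) tuples recorded in call order

def trace(fn):
    """Record entry/exit events around each call, mirroring the decorator idea."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        events.append(("call", fn.__name__, args))
        try:
            result = fn(*args, **kwargs)
            events.append(("return", fn.__name__, result))
            return result
        except Exception as e:
            # Record the failure, then re-raise so behavior is unchanged
            events.append(("exception", fn.__name__, repr(e)))
            raise
    return wrapper

@trace
def double(x):
    return x * 2

double(21)
print(events)  # [('call', 'double', (21,)), ('return', 'double', 42)]
```

The real module additionally links these events into a causation graph so that `find_root_causes` can walk backwards from an exception event.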
cascade/diagnostics/bug_detector.py ADDED
@@ -0,0 +1,590 @@
+"""
+CASCADE Bug Detector - Automatic bug detection using pattern matching.
+
+Uses cascade-lattice's forensic capabilities:
+- GhostLog for inferring missing execution (what *should* have run)
+- Artifact patterns for detecting anomalies
+- SymbioticAdapter for interpreting signals
+"""
+
+import time
+import hashlib
+from typing import Any, Dict, List, Optional, Set, Callable, Tuple
+from dataclasses import dataclass, field
+from pathlib import Path
+import ast
+import re
+
+from cascade.core.adapter import SymbioticAdapter
+from cascade.forensics.artifacts import ArtifactDetector, TimestampArtifacts
+
+
+@dataclass
+class BugPattern:
+    """A detectable bug pattern."""
+    name: str
+    description: str
+    severity: str  # "critical", "error", "warning", "info"
+    detector: Callable[[str, ast.AST], List[Dict[str, Any]]]
+    category: str = "general"
+
+
+@dataclass
+class DetectedIssue:
+    """A detected code issue."""
+    issue_id: str
+    pattern_name: str
+    severity: str
+    file_path: str
+    line_number: int
+    column: int
+    code_snippet: str
+    message: str
+    suggestion: Optional[str] = None
+    confidence: float = 1.0
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "id": self.issue_id,
+            "pattern": self.pattern_name,
+            "severity": self.severity,
+            "location": {
+                "file": self.file_path,
+                "line": self.line_number,
+                "column": self.column,
+            },
+            "snippet": self.code_snippet,
+            "message": self.message,
+            "suggestion": self.suggestion,
+            "confidence": self.confidence,
+        }
+
+
+class BugDetector:
+    """
+    Static analysis bug detector using AST patterns.
+
+    Usage:
+        detector = BugDetector()
+        issues = detector.scan_file("path/to/file.py")
+        issues = detector.scan_directory("path/to/project")
+    """
+
+    def __init__(self):
+        self.patterns: List[BugPattern] = []
+        self._detected_issues: List[DetectedIssue] = []
+        self._scanned_files: Set[str] = set()
+
+        # For signal interpretation
+        self.adapter = SymbioticAdapter()
+
+        # Register built-in patterns
+        self._register_builtin_patterns()
+
+    def _register_builtin_patterns(self):
+        """Register built-in bug detection patterns."""
+        patterns = [
+            # Null checks
+            BugPattern(
+                name="potential_none_access",
+                description="Accessing attribute on potentially None value",
+                severity="warning",
+                detector=self._detect_none_access,
+                category="null_safety",
+            ),
+
+            # Exception handling
+            BugPattern(
+                name="bare_except",
+                description="Bare except clause catches all exceptions",
+                severity="warning",
+                detector=self._detect_bare_except,
+                category="exception_handling",
+            ),
+            BugPattern(
+                name="empty_except",
+                description="Empty except block silently swallows exceptions",
+                severity="error",
+                detector=self._detect_empty_except,
+                category="exception_handling",
+            ),
+
+            # Resource management
+            BugPattern(
+                name="unclosed_resource",
+                description="File/resource opened but not closed",
+                severity="warning",
+                detector=self._detect_unclosed_resource,
+                category="resource_management",
+            ),
+
+            # Common mistakes
+            BugPattern(
+                name="mutable_default_arg",
+                description="Mutable default argument in function",
+                severity="warning",
+                detector=self._detect_mutable_default,
+                category="common_mistakes",
+            ),
+            BugPattern(
+                name="comparison_to_none",
+                description="Using == instead of 'is' for None comparison",
+                severity="info",
+                detector=self._detect_none_comparison,
+                category="common_mistakes",
+            ),
+            BugPattern(
+                name="unreachable_code",
+                description="Code that can never be executed",
+                severity="warning",
+                detector=self._detect_unreachable_code,
+                category="common_mistakes",
+            ),
+
+            # Security
+            BugPattern(
+                name="hardcoded_secret",
+                description="Potential hardcoded secret or password",
+                severity="error",
+                detector=self._detect_hardcoded_secret,
+                category="security",
+            ),
+            BugPattern(
+                name="sql_injection_risk",
+                description="Potential SQL injection vulnerability",
+                severity="critical",
+                detector=self._detect_sql_injection,
+                category="security",
+            ),
+
+            # Performance
+            BugPattern(
+                name="loop_invariant",
+                description="Computation inside loop that could be moved outside",
+                severity="info",
+                detector=self._detect_loop_invariant,
+                category="performance",
+            ),
+        ]
+
+        self.patterns.extend(patterns)
+
+    def register_pattern(self, pattern: BugPattern):
+        """Register a custom bug pattern."""
+        self.patterns.append(pattern)
+
+    def scan_file(self, file_path: str) -> List[DetectedIssue]:
+        """Scan a Python file for bugs."""
+        issues = []
+
+        try:
+            with open(file_path, 'r', encoding='utf-8') as f:
+                source = f.read()
+        except Exception as e:
+            return [DetectedIssue(
+                issue_id=self._generate_id(file_path, 0, "read_error"),
+                pattern_name="file_read_error",
+                severity="error",
+                file_path=file_path,
+                line_number=0,
+                column=0,
+                code_snippet="",
+                message=f"Could not read file: {e}",
+            )]
+
+        try:
+            tree = ast.parse(source)
+        except SyntaxError as e:
+            return [DetectedIssue(
+                issue_id=self._generate_id(file_path, e.lineno or 0, "syntax_error"),
+                pattern_name="syntax_error",
+                severity="critical",
+                file_path=file_path,
+                line_number=e.lineno or 0,
+                column=e.offset or 0,
+                code_snippet=e.text or "",
+                message=f"Syntax error: {e.msg}",
+            )]
+
+        # Run all patterns
+        lines = source.splitlines()
+        for pattern in self.patterns:
+            try:
+                matches = pattern.detector(source, tree)
+                for match in matches:
+                    line_num = match.get("line", 0)
+                    snippet = lines[line_num - 1] if 0 < line_num <= len(lines) else ""
+
+                    issues.append(DetectedIssue(
+                        issue_id=self._generate_id(file_path, line_num, pattern.name),
+                        pattern_name=pattern.name,
+                        severity=pattern.severity,
+                        file_path=file_path,
+                        line_number=line_num,
+                        column=match.get("column", 0),
+                        code_snippet=snippet.strip(),
+                        message=match.get("message", pattern.description),
+                        suggestion=match.get("suggestion"),
+                        confidence=match.get("confidence", 1.0),
+                    ))
+            except Exception as e:
+                print(f"[DIAG] Pattern {pattern.name} failed on {file_path}: {e}")
+
+        self._scanned_files.add(file_path)
+        self._detected_issues.extend(issues)
+
+        return issues
+
+    def scan_directory(self, dir_path: str, recursive: bool = True) -> List[DetectedIssue]:
+        """Scan a directory for Python files and detect bugs."""
+        path = Path(dir_path)
+        issues = []
+
+        pattern = "**/*.py" if recursive else "*.py"
+        for py_file in path.glob(pattern):
+            # Skip __pycache__
+            if "__pycache__" in str(py_file):
+                continue
+            issues.extend(self.scan_file(str(py_file)))
+
+        return issues
+
+    def _generate_id(self, file_path: str, line: int, pattern: str) -> str:
+        """Generate a unique issue ID."""
+        content = f"{file_path}:{line}:{pattern}"
+        return hashlib.sha256(content.encode()).hexdigest()[:16]
+
+    # =========================================================================
+    # PATTERN DETECTORS
+    # =========================================================================
+
+    def _detect_none_access(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect potential None access."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def visit_Attribute(self, node):
+                # Look for patterns like: x.y where x might be None
+                # This is heuristic - check if there's no None check before
+                if isinstance(node.value, ast.Name):
+                    # Simple heuristic: flag if variable name suggests nullable
+                    name = node.value.id
+                    if any(word in name.lower() for word in ["result", "maybe", "optional", "response"]):
+                        matches.append({
+                            "line": node.lineno,
+                            "column": node.col_offset,
+                            "message": f"'{name}' may be None - consider adding a null check",
+                            "confidence": 0.6,
+                        })
+                self.generic_visit(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_bare_except(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect bare except clauses."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def visit_ExceptHandler(self, node):
+                if node.type is None:
+                    matches.append({
+                        "line": node.lineno,
+                        "column": node.col_offset,
+                        "message": "Bare 'except:' catches all exceptions including KeyboardInterrupt",
+                        "suggestion": "Use 'except Exception:' instead",
+                    })
+                self.generic_visit(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_empty_except(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect empty except blocks."""
+        matches = []
+        lines = source.splitlines()
+
+        class Visitor(ast.NodeVisitor):
+            def visit_ExceptHandler(self, node):
+                # Check if body is just 'pass' or empty
+                if len(node.body) == 1:
+                    stmt = node.body[0]
+                    if isinstance(stmt, ast.Pass):
+                        # Check if there's a comment explaining the pass
+                        line_idx = stmt.lineno - 1
+                        if line_idx < len(lines):
+                            line = lines[line_idx]
+                            if '#' in line:
+                                # Has a comment - don't flag as issue
+                                self.generic_visit(node)
+                                return
+                        matches.append({
+                            "line": node.lineno,
+                            "column": node.col_offset,
+                            "message": "Empty except block silently ignores exception",
+                            "suggestion": "At minimum, log the exception",
+                        })
+                self.generic_visit(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_unclosed_resource(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect files opened without context manager."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def __init__(self):
+                self.in_with = False
+
+            def visit_With(self, node):
+                old = self.in_with
+                self.in_with = True
+                self.generic_visit(node)
+                self.in_with = old
+
+            def visit_Call(self, node):
+                if isinstance(node.func, ast.Name) and node.func.id == 'open':
+                    if not self.in_with:
+                        matches.append({
+                            "line": node.lineno,
+                            "column": node.col_offset,
+                            "message": "File opened without 'with' context manager",
+                            "suggestion": "Use 'with open(...) as f:' to ensure file is closed",
+                        })
+                self.generic_visit(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_mutable_default(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect mutable default arguments."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def visit_FunctionDef(self, node):
+                for default in node.args.defaults + node.args.kw_defaults:
+                    if default and isinstance(default, (ast.List, ast.Dict, ast.Set)):
+                        matches.append({
+                            "line": node.lineno,
+                            "column": node.col_offset,
+                            "message": f"Mutable default argument in function '{node.name}'",
+                            "suggestion": "Use None as default and create mutable object inside function",
+                        })
+                        break
+                self.generic_visit(node)
+
+            def visit_AsyncFunctionDef(self, node):
+                self.visit_FunctionDef(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_none_comparison(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect == None instead of 'is None'."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def visit_Compare(self, node):
+                for i, (op, comparator) in enumerate(zip(node.ops, node.comparators)):
+                    if isinstance(op, (ast.Eq, ast.NotEq)):
+                        if isinstance(comparator, ast.Constant) and comparator.value is None:
+                            matches.append({
+                                "line": node.lineno,
+                                "column": node.col_offset,
+                                "message": "Use 'is None' instead of '== None'",
+                                "suggestion": "Replace with 'is None' or 'is not None'",
+                            })
+                self.generic_visit(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_unreachable_code(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect code after return/raise/break/continue."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def check_body(self, body):
+                for i, stmt in enumerate(body):
+                    if isinstance(stmt, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
+                        # Check if there's code after this
+                        if i + 1 < len(body):
+                            next_stmt = body[i + 1]
+                            matches.append({
+                                "line": next_stmt.lineno,
+                                "column": next_stmt.col_offset,
+                                "message": "Unreachable code after return/raise/break/continue",
+                            })
+                    self.visit(stmt)
+
+            def visit_FunctionDef(self, node):
+                self.check_body(node.body)
+
+            def visit_AsyncFunctionDef(self, node):
+                self.check_body(node.body)
+
+            def visit_If(self, node):
+                self.check_body(node.body)
+                self.check_body(node.orelse)
+
+            def visit_For(self, node):
+                self.check_body(node.body)
+
+            def visit_While(self, node):
+                self.check_body(node.body)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_hardcoded_secret(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect potential hardcoded secrets."""
+        matches = []
+
+        # Pattern for secret-like variable names
+        secret_patterns = re.compile(
+            r'\b(password|passwd|pwd|secret|api_key|apikey|token|auth|credential)\s*=\s*["\'][^"\']+["\']',
+            re.IGNORECASE
+        )
+
+        for i, line in enumerate(source.splitlines(), 1):
+            if secret_patterns.search(line):
+                matches.append({
+                    "line": i,
+                    "column": 0,
+                    "message": "Potential hardcoded secret detected",
+                    "suggestion": "Use environment variables or a secrets manager",
+                })
+
+        return matches
+
+    def _detect_sql_injection(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect potential SQL injection vulnerabilities."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def visit_Call(self, node):
+                # Look for .execute() calls with string formatting
+                if isinstance(node.func, ast.Attribute) and node.func.attr == 'execute':
+                    for arg in node.args:
+                        if isinstance(arg, ast.BinOp) and isinstance(arg.op, ast.Mod):
+                            matches.append({
+                                "line": node.lineno,
+                                "column": node.col_offset,
+                                "message": "Potential SQL injection - using string formatting in query",
+                                "suggestion": "Use parameterized queries instead",
+                            })
+                        elif isinstance(arg, ast.JoinedStr):  # f-string
+                            matches.append({
+                                "line": node.lineno,
+                                "column": node.col_offset,
+                                "message": "Potential SQL injection - using f-string in query",
+                                "suggestion": "Use parameterized queries instead",
+                            })
+                self.generic_visit(node)
+
+        Visitor().visit(tree)
+        return matches
+
+    def _detect_loop_invariant(self, source: str, tree: ast.AST) -> List[Dict]:
+        """Detect computations that could be moved outside loops."""
+        matches = []
+
+        class Visitor(ast.NodeVisitor):
+            def visit_For(self, node):
+                # Get loop variable name
+                if isinstance(node.target, ast.Name):
+                    loop_var = node.target.id
+
+                    # Look for calls that don't use the loop variable
+                    for stmt in node.body:
+                        if isinstance(stmt, ast.Assign):
+                            for target in stmt.targets:
+                                if isinstance(target, ast.Name):
+                                    # Check if value doesn't depend on loop var
+                                    value_vars = self._get_names(stmt.value)
+                                    if loop_var not in value_vars and self._is_expensive(stmt.value):
+                                        matches.append({
+                                            "line": stmt.lineno,
+                                            "column": stmt.col_offset,
+                                            "message": "Computation inside loop may be loop-invariant",
+                                            "suggestion": "Consider moving this outside the loop",
+                                            "confidence": 0.5,
+                                        })
+                self.generic_visit(node)
+
+            def _get_names(self, node) -> Set[str]:
+                names = set()
+                for child in ast.walk(node):
+                    if isinstance(child, ast.Name):
+                        names.add(child.id)
+                return names
+
+            def _is_expensive(self, node) -> bool:
+                # Heuristic: calls are potentially expensive
+                for child in ast.walk(node):
+                    if isinstance(child, ast.Call):
+                        return True
+                return False
+
+        Visitor().visit(tree)
+        return matches
+
+    # =========================================================================
+    # REPORTING
+    # =========================================================================
+
+    def get_summary(self) -> Dict[str, Any]:
+        """Get detection summary."""
+        by_severity = {}
+        by_category = {}
+
+        for issue in self._detected_issues:
+            by_severity[issue.severity] = by_severity.get(issue.severity, 0) + 1
+
+            pattern = next((p for p in self.patterns if p.name == issue.pattern_name), None)
+            if pattern:
+                by_category[pattern.category] = by_category.get(pattern.category, 0) + 1
+
+        return {
+            "files_scanned": len(self._scanned_files),
+            "total_issues": len(self._detected_issues),
+            "by_severity": by_severity,
+            "by_category": by_category,
+        }
+
+    def get_report(self) -> str:
+        """Generate a human-readable report."""
+        lines = [
+            "BUG DETECTION REPORT",
+            "=" * 60,
+            f"Files scanned: {len(self._scanned_files)}",
+            f"Issues found: {len(self._detected_issues)}",
+            "",
+        ]
+
+        # Group by severity
+        by_severity: Dict[str, List[DetectedIssue]] = {}
+        for issue in self._detected_issues:
+            if issue.severity not in by_severity:
+                by_severity[issue.severity] = []
+            by_severity[issue.severity].append(issue)
+
+        severity_order = ["critical", "error", "warning", "info"]
+        severity_icons = {"critical": "🔴", "error": "❌", "warning": "⚠️", "info": "ℹ️"}
+
+        for severity in severity_order:
+            if severity in by_severity:
+                issues = by_severity[severity]
+                icon = severity_icons.get(severity, "•")
+                lines.append(f"\n{icon} {severity.upper()} ({len(issues)})")
+                lines.append("-" * 40)
+
+                for issue in issues:
+                    lines.append(f"  {issue.file_path}:{issue.line_number}")
+                    lines.append(f"    {issue.message}")
+                    if issue.suggestion:
+                        lines.append(f"    💡 {issue.suggestion}")
+                    lines.append("")
+
+        return "\n".join(lines)
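The hardcoded-secret detector in this file is essentially one regex scan per line. Isolated from the class for a quick check, using the same pattern the diff introduces (runnable without cascade; `flag_secrets` is an illustrative name, not part of the module's API):

```python
import re

# Same secret-assignment pattern as _detect_hardcoded_secret above
SECRET_RE = re.compile(
    r'\b(password|passwd|pwd|secret|api_key|apikey|token|auth|credential)'
    r'\s*=\s*["\'][^"\']+["\']',
    re.IGNORECASE,
)

def flag_secrets(source: str) -> list:
    """Return 1-based line numbers that look like hardcoded secrets."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if SECRET_RE.search(line)]

sample = 'api_key = "sk-123"\nname = "bob"\nPASSWORD = \'hunter2\''
print(flag_secrets(sample))  # -> [1, 3]
```

Note the `re.IGNORECASE` flag: it is what lets the pattern catch `PASSWORD = '...'` as well as `api_key = "..."`.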
cascade/diagnostics/code_tracer.py ADDED
@@ -0,0 +1,512 @@
+"""
+CASCADE Code Tracer - Trace execution through code to find bugs.
+
+Repurposes cascade-lattice's causation graph for code execution:
+- Each function call = Event
+- Each return/exception = Event
+- Control flow = CausationLinks
+- Data flow = CausationLinks with different type
+
+Enables:
+- "What called this function?" (trace_backwards)
+- "What did this function call?" (trace_forwards)
+- "What was the root cause of this crash?" (find_root_causes)
+- "What will this bug affect?" (analyze_impact)
+"""
+
+import sys
+import time
+import hashlib
+import functools
+import traceback
+import threading
+from typing import Any, Dict, List, Optional, Callable, Set
+from dataclasses import dataclass, field
+from pathlib import Path
+
+from cascade.core.event import Event, CausationLink
+from cascade.core.graph import CausationGraph
+from cascade.analysis.tracer import Tracer, RootCauseAnalysis, ImpactAnalysis
+
+
+@dataclass
+class CodeEvent(Event):
+    """
+    An event in code execution - extends base Event with code-specific data.
+    """
+    # Code location
+    file_path: str = ""
+    line_number: int = 0
+    function_name: str = ""
+    class_name: Optional[str] = None
+    module_name: str = ""
+
+    # Execution context
+    call_stack_depth: int = 0
+    thread_id: int = 0
+
+    # Data snapshot
+    args: Dict[str, Any] = field(default_factory=dict)
+    kwargs: Dict[str, Any] = field(default_factory=dict)
+    return_value: Optional[Any] = None
+    exception: Optional[str] = None
+
+    # Timing
+    duration_ms: float = 0.0
+
+
+@dataclass
+class BugSignature:
+    """
+    A detected bug signature - pattern that indicates a problem.
+    """
+    bug_type: str  # "null_reference", "type_mismatch", "infinite_loop", etc.
+    severity: str  # "critical", "error", "warning", "info"
+    evidence: List[str]
+    affected_events: List[str]  # Event IDs
+    root_cause_event: Optional[str] = None
+    confidence: float = 0.0
+
+    def to_dict(self) -> Dict[str, Any]:
+        return {
+            "bug_type": self.bug_type,
+            "severity": self.severity,
+            "evidence": self.evidence,
+            "affected_events": self.affected_events,
+            "root_cause_event": self.root_cause_event,
+            "confidence": self.confidence,
+        }
+
+
+class CodeTracer:
+    """
+    Trace code execution using cascade-lattice causation graph.
+
+    Usage:
+        tracer = CodeTracer()
+
+        @tracer.trace
+        def my_function(x, y):
+            return x + y
+
+        # After execution:
+        tracer.find_root_causes(error_event_id)
+        tracer.analyze_impact(function_event_id)
+        tracer.detect_bugs()
+    """
+
+    def __init__(self, name: str = "code_tracer"):
+        self.name = name
+        self.graph = CausationGraph()
+        self.tracer = Tracer(self.graph)
+
+        # Execution tracking
+        self._call_stack: List[str] = []  # Stack of event IDs
+        self._current_depth = 0
+        self._lock = threading.RLock()
+
+        # Bug detection patterns
+        self._bug_patterns: List[Callable] = []
+        self._detected_bugs: List[BugSignature] = []
+
+        # Statistics
+        self._event_count = 0
+        self._error_count = 0
+
+        # Register built-in bug detectors
+        self._register_builtin_detectors()
+
+    def _register_builtin_detectors(self):
+        """Register built-in bug detection patterns."""
+        self._bug_patterns.extend([
+            self._detect_null_reference,
+            self._detect_type_errors,
+            self._detect_recursion_depth,
+            self._detect_slow_functions,
+            self._detect_exception_patterns,
+        ])
+
+    def trace(self, func: Callable = None, *, capture_args: bool = True):
+        """
+        Decorator to trace function execution.
+
+        Usage:
+            @tracer.trace
+            def my_function(x, y):
+                return x + y
+
+            # Or with options:
+            @tracer.trace(capture_args=False)
+            def sensitive_function(password):
+                ...
+        """
+        def decorator(fn: Callable) -> Callable:
+            @functools.wraps(fn)
+            def wrapper(*args, **kwargs):
+                return self._traced_call(fn, args, kwargs, capture_args)
+            return wrapper
+
+        if func is not None:
+            # Called as @tracer.trace without parentheses
+            return decorator(func)
+        return decorator
+
+    def _traced_call(self, func: Callable, args: tuple, kwargs: dict, capture_args: bool) -> Any:
+        """Execute a traced function call."""
+        # Create entry event
+        entry_event = self._create_entry_event(func, args, kwargs, capture_args)
+        self.graph.add_event(entry_event)
+
+        # Link to caller
+        with self._lock:
+            if self._call_stack:
+                caller_id = self._call_stack[-1]
+                link = CausationLink(
+                    from_event=caller_id,
+                    to_event=entry_event.event_id,
+                    causation_type="call",
+                    strength=1.0,
+                    explanation=f"{func.__name__} called"
+                )
+                self.graph.add_link(link)
+
+            self._call_stack.append(entry_event.event_id)
+            self._current_depth += 1
+
+        start_time = time.perf_counter()
+        exception_info = None
+        return_value = None
+
+        try:
+            return_value = func(*args, **kwargs)
+            return return_value
+        except Exception as e:
+            exception_info = str(e)
+            # Re-raise after recording
+            raise
+        finally:
+            duration = (time.perf_counter() - start_time) * 1000
+
+            # Create exit event
+            exit_event = self._create_exit_event(
+                func, entry_event, return_value, exception_info, duration
+            )
+            self.graph.add_event(exit_event)
+
+            # Link entry -> exit
+            link = CausationLink(
+                from_event=entry_event.event_id,
+                to_event=exit_event.event_id,
+                causation_type="return" if not exception_info else "exception",
+                strength=1.0,
202
+ explanation=f"{func.__name__} {'returned' if not exception_info else 'raised ' + exception_info}"
203
+ )
204
+ self.graph.add_link(link)
205
+
206
+ with self._lock:
207
+ self._call_stack.pop()
208
+ self._current_depth -= 1
209
+ if exception_info:
210
+ self._error_count += 1
211
+
212
+ def _create_entry_event(self, func: Callable, args: tuple, kwargs: dict, capture_args: bool) -> CodeEvent:
213
+ """Create an event for function entry."""
214
+ # Get source location
215
+ try:
216
+ file_path = func.__code__.co_filename
217
+ line_number = func.__code__.co_firstlineno
218
+ except AttributeError:
219
+ file_path = "unknown"
220
+ line_number = 0
221
+
222
+ # Get function info
223
+ module_name = func.__module__ or ""
224
+ class_name = None
225
+ if hasattr(func, '__qualname__') and '.' in func.__qualname__:
226
+ class_name = func.__qualname__.rsplit('.', 1)[0]
227
+
228
+ # Capture arguments (sanitized)
229
+ captured_args = {}
230
+ captured_kwargs = {}
231
+ if capture_args:
232
+ try:
233
+ # Get parameter names
234
+ import inspect
235
+ sig = inspect.signature(func)
236
+ params = list(sig.parameters.keys())
237
+
238
+ for i, arg in enumerate(args):
239
+ key = params[i] if i < len(params) else f"arg_{i}"
240
+ captured_args[key] = self._sanitize_value(arg)
241
+
242
+ for k, v in kwargs.items():
243
+ captured_kwargs[k] = self._sanitize_value(v)
244
+ except Exception:
245
+ pass # Can't capture args - non-critical
246
+
247
+ self._event_count += 1
248
+
249
+ return CodeEvent(
250
+ timestamp=time.time(),
251
+ component=module_name,
252
+ event_type="function_entry",
253
+ data={
254
+ "function": func.__name__,
255
+ "args_count": len(args),
256
+ "kwargs_count": len(kwargs),
257
+ },
258
+ file_path=file_path,
259
+ line_number=line_number,
260
+ function_name=func.__name__,
261
+ class_name=class_name,
262
+ module_name=module_name,
263
+ call_stack_depth=self._current_depth,
264
+ thread_id=threading.current_thread().ident,
265
+ args=captured_args,
266
+ kwargs=captured_kwargs,
267
+ )
268
+
269
+ def _create_exit_event(self, func: Callable, entry: CodeEvent,
270
+ return_value: Any, exception: Optional[str],
271
+ duration_ms: float) -> CodeEvent:
272
+ """Create an event for function exit."""
273
+ event_type = "function_return" if not exception else "function_exception"
274
+
275
+ return CodeEvent(
276
+ timestamp=time.time(),
277
+ component=entry.module_name,
278
+ event_type=event_type,
279
+ data={
280
+ "function": func.__name__,
281
+ "duration_ms": duration_ms,
282
+ "exception": exception,
283
+ "has_return": return_value is not None,
284
+ },
285
+ file_path=entry.file_path,
286
+ line_number=entry.line_number,
287
+ function_name=func.__name__,
288
+ class_name=entry.class_name,
289
+ module_name=entry.module_name,
290
+ call_stack_depth=entry.call_stack_depth,
291
+ thread_id=entry.thread_id,
292
+ return_value=self._sanitize_value(return_value) if return_value is not None else None,
293
+ exception=exception,
294
+ duration_ms=duration_ms,
295
+ )
296
+
297
+ def _sanitize_value(self, value: Any, max_len: int = 200) -> Any:
298
+ """Sanitize a value for storage (avoid huge objects)."""
299
+ if value is None:
300
+ return None
301
+
302
+ # Numpy arrays
303
+ if hasattr(value, 'shape') and hasattr(value, 'dtype'):
304
+ return f"<array shape={value.shape} dtype={value.dtype}>"
305
+
306
+ # Tensors
307
+ if hasattr(value, 'size') and hasattr(value, 'dtype'):
308
+ return f"<tensor size={value.size()} dtype={value.dtype}>"
309
+
310
+ # Large strings
311
+ if isinstance(value, str) and len(value) > max_len:
312
+ return value[:max_len] + "..."
313
+
314
+ # Lists/dicts
315
+ if isinstance(value, (list, dict)):
316
+ s = str(value)
317
+ if len(s) > max_len:
318
+ return s[:max_len] + "..."
319
+ return value
320
+
321
+ # Primitives
322
+ if isinstance(value, (int, float, bool, str)):
323
+ return value
324
+
325
+ # Fallback: type name
326
+ return f"<{type(value).__name__}>"
327
+
328
+ # =========================================================================
329
+ # CAUSATION TRACING (via cascade-lattice)
330
+ # =========================================================================
331
+
332
+ def find_root_causes(self, event_id: str) -> RootCauseAnalysis:
333
+ """
334
+ Find the root causes of an event (e.g., what caused this crash?).
335
+
336
+ Uses cascade-lattice's backwards tracing to find the origin.
337
+ """
338
+ return self.tracer.find_root_causes(event_id)
339
+
340
+ def analyze_impact(self, event_id: str) -> ImpactAnalysis:
341
+ """
342
+ Analyze the downstream impact of an event.
343
+
344
+ "What did this bug affect?"
345
+ """
346
+ return self.tracer.analyze_impact(event_id)
347
+
348
+ def trace_backwards(self, event_id: str, max_depth: int = 100):
349
+ """Trace what led to this event."""
350
+ return self.tracer.trace_backwards(event_id, max_depth)
351
+
352
+ def trace_forwards(self, event_id: str, max_depth: int = 100):
353
+ """Trace what this event caused."""
354
+ return self.tracer.trace_forwards(event_id, max_depth)
355
+
356
+ # =========================================================================
357
+ # BUG DETECTION
358
+ # =========================================================================
359
+
360
+ def detect_bugs(self) -> List[BugSignature]:
361
+ """
362
+ Run all bug detection patterns on collected events.
363
+ """
364
+ self._detected_bugs = []
365
+
366
+ for detector in self._bug_patterns:
367
+ try:
368
+ bugs = detector()
369
+ self._detected_bugs.extend(bugs)
370
+ except Exception as e:
371
+ print(f"[DIAG] Bug detector {detector.__name__} failed: {e}")
372
+
373
+ return self._detected_bugs
374
+
375
+ def _detect_null_reference(self) -> List[BugSignature]:
376
+ """Detect null/None reference patterns."""
377
+ bugs = []
378
+
379
+ for event in self.graph.get_recent_events(1000):
380
+ if event.event_type == "function_exception":
381
+ exc = event.data.get("exception", "")
382
+ if "NoneType" in exc or "null" in exc.lower():
383
+ bugs.append(BugSignature(
384
+ bug_type="null_reference",
385
+ severity="error",
386
+ evidence=[exc],
387
+ affected_events=[event.event_id],
388
+ confidence=0.9,
389
+ ))
390
+
391
+ return bugs
392
+
393
+ def _detect_type_errors(self) -> List[BugSignature]:
394
+ """Detect type mismatch patterns."""
395
+ bugs = []
396
+
397
+ for event in self.graph.get_recent_events(1000):
398
+ if event.event_type == "function_exception":
399
+ exc = event.data.get("exception", "")
400
+ if "TypeError" in exc or "type" in exc.lower():
401
+ bugs.append(BugSignature(
402
+ bug_type="type_mismatch",
403
+ severity="error",
404
+ evidence=[exc],
405
+ affected_events=[event.event_id],
406
+ confidence=0.85,
407
+ ))
408
+
409
+ return bugs
410
+
411
+ def _detect_recursion_depth(self) -> List[BugSignature]:
412
+ """Detect excessive recursion."""
413
+ bugs = []
414
+
415
+ # Find events with high call depth
416
+ for event in self.graph.get_recent_events(1000):
417
+ if hasattr(event, 'call_stack_depth') and event.call_stack_depth > 50:
418
+ bugs.append(BugSignature(
419
+ bug_type="deep_recursion",
420
+ severity="warning",
421
+ evidence=[f"Call stack depth: {event.call_stack_depth}"],
422
+ affected_events=[event.event_id],
423
+ confidence=0.7,
424
+ ))
425
+
426
+ return bugs
427
+
428
+ def _detect_slow_functions(self) -> List[BugSignature]:
429
+ """Detect unusually slow functions."""
430
+ bugs = []
431
+
432
+ for event in self.graph.get_recent_events(1000):
433
+ if event.event_type == "function_return":
434
+ duration = event.data.get("duration_ms", 0)
435
+ if duration > 1000: # > 1 second
436
+ bugs.append(BugSignature(
437
+ bug_type="slow_function",
438
+ severity="warning",
439
+ evidence=[f"Duration: {duration:.1f}ms"],
440
+ affected_events=[event.event_id],
441
+ confidence=0.8,
442
+ ))
443
+
444
+ return bugs
445
+
446
+ def _detect_exception_patterns(self) -> List[BugSignature]:
447
+ """Detect recurring exception patterns."""
448
+ bugs = []
449
+
450
+ # Group exceptions by type
451
+ exception_counts: Dict[str, List[str]] = {}
452
+
453
+ for event in self.graph.get_recent_events(1000):
454
+ if event.event_type == "function_exception":
455
+ exc = event.data.get("exception", "unknown")
456
+ if exc not in exception_counts:
457
+ exception_counts[exc] = []
458
+ exception_counts[exc].append(event.event_id)
459
+
460
+ # Flag recurring exceptions
461
+ for exc_type, event_ids in exception_counts.items():
462
+ if len(event_ids) > 3:
463
+ bugs.append(BugSignature(
464
+ bug_type="recurring_exception",
465
+ severity="error",
466
+ evidence=[f"{exc_type} occurred {len(event_ids)} times"],
467
+ affected_events=event_ids,
468
+ confidence=0.9,
469
+ ))
470
+
471
+ return bugs
472
+
473
+ # =========================================================================
474
+ # REPORTING
475
+ # =========================================================================
476
+
477
+ def get_summary(self) -> Dict[str, Any]:
478
+ """Get execution summary."""
479
+ return {
480
+ "name": self.name,
481
+ "total_events": self._event_count,
482
+ "error_count": self._error_count,
483
+ "bugs_detected": len(self._detected_bugs),
484
+ "graph_nodes": len(self.graph._events),
485
+ "graph_links": len(self.graph._links),
486
+ }
487
+
488
+ def get_error_chain(self, error_event_id: str) -> str:
489
+ """
490
+ Get a human-readable error chain from root cause to error.
491
+ """
492
+ analysis = self.find_root_causes(error_event_id)
493
+
494
+ if not analysis.chains:
495
+ return "No causal chain found"
496
+
497
+ lines = ["ERROR CHAIN ANALYSIS", "=" * 50]
498
+
499
+ # Use the first chain returned (assumed to be the most significant)
500
+ chain = analysis.chains[0]
501
+
502
+ for i, event in enumerate(chain.events):
503
+ prefix = " " * i
504
+ if event.event_type == "function_exception":
505
+ lines.append(f"{prefix}❌ {event.component}.{event.data.get('function', '?')}")
506
+ lines.append(f"{prefix} Exception: {event.data.get('exception', '?')}")
507
+ elif event.event_type == "function_entry":
508
+ lines.append(f"{prefix}→ {event.component}.{event.data.get('function', '?')}")
509
+ elif event.event_type == "function_return":
510
+ lines.append(f"{prefix}← {event.component}.{event.data.get('function', '?')}")
511
+
512
+ return "\n".join(lines)
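The entry/exit pattern that `CodeTracer._traced_call` implements above can be sketched standalone with only the standard library: record an entry event, time the call, and record the exit (duration plus any exception) in a `finally` block so it fires even when the function raises. The `trace`/`records` names here are illustrative, not part of the cascade API.

```python
import functools
import time

def trace(records):
    """Decorator sketch of the entry/exit tracing pattern: append an entry
    record before the call and an exit record (duration, exception) after."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            records.append(("entry", fn.__name__))
            start = time.perf_counter()
            error = None
            try:
                return fn(*args, **kwargs)
            except Exception as e:
                # Capture the exception for the exit record, then re-raise.
                error = f"{type(e).__name__}: {e}"
                raise
            finally:
                duration_ms = (time.perf_counter() - start) * 1000
                records.append(("exit", fn.__name__, duration_ms, error))
        return wrapper
    return decorator

records = []

@trace(records)
def divide(x, y):
    return x / y

divide(10, 2)
try:
    divide(1, 0)
except ZeroDivisionError:
    pass

entry_count = sum(1 for r in records if r[0] == "entry")
errors = [r[3] for r in records if r[0] == "exit" and r[3]]
```

Because the exit record is written in `finally`, the trace stays balanced (one exit per entry) whether the call returns or raises, which is what lets the causation graph link entry and exit events reliably.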
cascade/diagnostics/demo.py ADDED
@@ -0,0 +1,299 @@
1
+ """
2
+ CASCADE Diagnostics Demo - Exposing bugs with tracing.
3
+
4
+ This demonstrates how to use the cascade.diagnostics module
5
+ to trace through code and expose issues.
6
+ """
7
+
8
+ import sys
9
+ # Local dev path so the demo runs from a source checkout; unnecessary
+ # once cascade-lattice is installed from PyPI.
+ sys.path.insert(0, "F:/End-Game/github-pypi-lattice")
10
+
11
+ from cascade.diagnostics import (
12
+ diagnose,
13
+ CodeTracer,
14
+ BugDetector,
15
+ ExecutionMonitor,
16
+ DiagnosticEngine,
17
+ monitor,
18
+ )
19
+
20
+
21
+ # =============================================================================
22
+ # DEMO 1: Trace function execution and find root causes
23
+ # =============================================================================
24
+
25
+ print("=" * 60)
26
+ print("DEMO 1: Function Execution Tracing")
27
+ print("=" * 60)
28
+
29
+ tracer = CodeTracer(name="demo_tracer")
30
+
31
+
32
+ @tracer.trace
33
+ def calculate_average(numbers):
34
+ """Calculate average - but has a bug with empty lists!"""
35
+ total = sum(numbers)
36
+ return total / len(numbers) # Bug: ZeroDivisionError if empty
37
+
38
+
39
+ @tracer.trace
40
+ def process_data(data):
41
+ """Process data and return stats."""
42
+ averages = []
43
+ for dataset in data:
44
+ avg = calculate_average(dataset)
45
+ averages.append(avg)
46
+ return averages
47
+
48
+
49
+ # Run with normal data
50
+ print("\n1a. Running with valid data...")
51
+ try:
52
+ result = process_data([[1, 2, 3], [4, 5, 6]])
53
+ print(f" Result: {result}")
54
+ except Exception as e:
55
+ print(f" Error: {e}")
56
+
57
+ # Run with buggy data (empty list)
58
+ print("\n1b. Running with empty list (trigger bug)...")
59
+ try:
60
+ result = process_data([[1, 2, 3], []]) # Empty list causes ZeroDivisionError
61
+ print(f" Result: {result}")
62
+ except Exception as e:
63
+ print(f" Error: {e}")
64
+
65
+ # Check detected bugs
66
+ print("\n1c. Detecting bugs from execution trace...")
67
+ bugs = tracer.detect_bugs()
68
+ for bug in bugs:
69
+ print(f" - {bug.bug_type}: {bug.evidence}")
70
+
71
+ # Print summary
72
+ print("\n1d. Tracer Summary:")
73
+ summary = tracer.get_summary()
74
+ for key, value in summary.items():
75
+ print(f" {key}: {value}")
76
+
77
+
78
+ # =============================================================================
79
+ # DEMO 2: Static code analysis with BugDetector
80
+ # =============================================================================
81
+
82
+ print("\n" + "=" * 60)
83
+ print("DEMO 2: Static Code Analysis")
84
+ print("=" * 60)
85
+
86
+ # Create a buggy file for demonstration
87
+ buggy_code = '''
88
+ def process_user(user_data):
89
+ """Process user - has several bugs!"""
90
+ # Bug 1: Bare except
91
+ try:
92
+ name = user_data["name"]
93
+ except:
94
+ pass
95
+
96
+ # Bug 2: Mutable default argument
97
+ def add_item(item, items=[]):
98
+ items.append(item)
99
+ return items
100
+
101
+ # Bug 3: Hardcoded password
102
+ password = "admin123"
103
+
104
+ # Bug 4: SQL injection risk
105
+ cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)
106
+
107
+ # Bug 5: Comparing to None with ==
108
+ if user_data == None:
109
+ return
110
+
111
+ # Bug 6: File without context manager
112
+ f = open("data.txt", "r")
113
+ data = f.read()
114
+
115
+ return name
116
+
117
+ def unreachable_example():
118
+ """Has unreachable code."""
119
+ return 42
120
+ print("This never runs") # Unreachable
121
+ '''
122
+
123
+ # Write buggy code to temp file
124
+ import tempfile
125
+ import os
126
+
127
+ with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
128
+ f.write(buggy_code)
129
+ buggy_file = f.name
130
+
131
+ print(f"\n2a. Scanning buggy code file...")
132
+ detector = BugDetector()
133
+ issues = detector.scan_file(buggy_file)
134
+
135
+ print(f" Found {len(issues)} issues:\n")
136
+ for issue in sorted(issues, key=lambda i: ["critical", "error", "warning", "info"].index(i.severity)
137
+ if i.severity in ["critical", "error", "warning", "info"] else 99):
138
+ severity_icon = {"critical": "🔴", "error": "❌", "warning": "⚠️", "info": "ℹ️"}.get(issue.severity, "•")
139
+ print(f" {severity_icon} [{issue.severity}] Line {issue.line_number}: {issue.message}")
140
+ if issue.suggestion:
141
+ print(f" 💡 {issue.suggestion}")
142
+
143
+ # Clean up temp file
144
+ os.unlink(buggy_file)
145
+
146
+ print("\n2b. Bug Detection Summary:")
147
+ print(detector.get_report())
148
+
149
+
150
+ # =============================================================================
151
+ # DEMO 3: Real-time Execution Monitoring
152
+ # =============================================================================
153
+
154
+ print("\n" + "=" * 60)
155
+ print("DEMO 3: Real-time Execution Monitoring")
156
+ print("=" * 60)
157
+
158
+ print("\n3a. Monitoring execution with anomaly detection...")
159
+
160
+ monitor_instance = ExecutionMonitor()
161
+
162
+
163
+ def recursive_fib(n):
164
+ """Intentionally slow recursive fibonacci."""
165
+ if n <= 1:
166
+ return n
167
+ return recursive_fib(n - 1) + recursive_fib(n - 2)
168
+
169
+
170
+ def raise_error():
171
+ """Function that raises an error."""
172
+ raise ValueError("Intentional test error")
173
+
174
+
175
+ with monitor_instance.monitoring():
176
+ # Normal computation
177
+ result = recursive_fib(15)
178
+
179
+ # Error (intentionally caught and ignored for demo)
180
+ try:
181
+ raise_error()
182
+ except ValueError:
183
+ pass # Expected - demonstrating exception capture
184
+
185
+ print(f" Frames captured: {len(monitor_instance.frames)}")
186
+ print(f" Anomalies detected: {len(monitor_instance.anomalies)}")
187
+
188
+ print("\n3b. Execution Monitoring Report:")
189
+ print(monitor_instance.get_report())
190
+
191
+
192
+ # =============================================================================
193
+ # DEMO 4: Full Diagnostic Engine
194
+ # =============================================================================
195
+
196
+ print("\n" + "=" * 60)
197
+ print("DEMO 4: Full Diagnostic Engine")
198
+ print("=" * 60)
199
+
200
+ print("\n4a. Running full diagnostics on a function...")
201
+
202
+
203
+ def buggy_function(x, y):
204
+ """Function with potential issues."""
205
+ if x is None:
206
+ raise ValueError("x cannot be None")
207
+ result = x / y # Potential ZeroDivisionError
208
+ return result
209
+
210
+
211
+ engine = DiagnosticEngine()
212
+
213
+ # Analyze the function execution
214
+ try:
215
+ report = engine.analyze_execution(buggy_function, 10, 2)
216
+ print(f" Report generated: {report.report_id}")
217
+ print(f" Total findings: {len(report.findings)}")
218
+ except Exception as e:
219
+ print(f" Analysis captured error: {e}")
220
+
221
+ # Analyze with error
222
+ print("\n4b. Analyzing execution that triggers error...")
223
+ try:
224
+ report = engine.analyze_execution(buggy_function, 10, 0)
225
+ except ZeroDivisionError:
226
+ # The error still propagates but we captured the trace
227
+ print(" Error captured in execution trace")
228
+
229
+
230
+ # =============================================================================
231
+ # DEMO 5: The diagnose() convenience function
232
+ # =============================================================================
233
+
234
+ print("\n" + "=" * 60)
235
+ print("DEMO 5: diagnose() Convenience Function")
236
+ print("=" * 60)
237
+
238
+ # Create another temp file
239
+ simple_code = '''
240
+ def hello():
241
+ try:
242
+ pass # Empty except coming
243
+ except:
244
+ pass
245
+ '''
246
+
247
+ with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
248
+ f.write(simple_code)
249
+ simple_file = f.name
250
+
251
+ print(f"\n5a. Quick diagnosis of a file...")
252
+ report = diagnose(simple_file)
253
+ print(f" Issues found: {len(report.findings)}")
254
+
255
+ for finding in report.findings:
256
+ print(f" - {finding.title}: {finding.description}")
257
+
258
+ os.unlink(simple_file)
259
+
260
+
261
+ # =============================================================================
262
+ # SUMMARY
263
+ # =============================================================================
264
+
265
+ print("\n" + "=" * 60)
266
+ print("CASCADE DIAGNOSTICS - Summary")
267
+ print("=" * 60)
268
+
269
+ print("""
270
+ The cascade.diagnostics module repurposes cascade-lattice for debugging:
271
+
272
+ 1. CodeTracer - Trace function execution with causation graph
273
+ - @tracer.trace decorator captures every call
274
+ - find_root_causes() traces backwards from errors
275
+ - analyze_impact() predicts what a bug affects
276
+ - detect_bugs() finds patterns in execution
277
+
278
+ 2. BugDetector - Static analysis with AST pattern matching
279
+ - Detects: null refs, bare excepts, SQL injection, etc.
280
+ - scan_file() / scan_directory() for batch analysis
281
+ - Custom patterns can be registered
282
+
283
+ 3. ExecutionMonitor - Real-time sys.settrace monitoring
284
+ - Captures every function call during execution
285
+ - Detects anomalies: slow functions, deep recursion, repeated errors
286
+ - get_hotspots() finds performance bottlenecks
287
+
288
+ 4. DiagnosticEngine - Unified diagnostic reports
289
+ - Combines all analyzers
290
+ - Markdown/JSON output
291
+ - Severity-ranked findings with suggestions
292
+
293
+ 5. diagnose() - One-line convenience function
294
+ - Works on files, directories, or functions
295
+
296
+ The core insight: Events = execution points, Links = causation.
297
+ Trace backwards from bugs to find root causes.
298
+ Trace forwards from changes to predict impact.
299
+ """)
cascade/diagnostics/execution_monitor.py ADDED
@@ -0,0 +1,473 @@
1
+ """
2
+ CASCADE Execution Monitor - Monitor live code execution.
3
+
4
+ Wraps Python execution similar to how cascade.observe wraps processes.
5
+ Captures execution flow, exceptions, and anomalies in real-time.
6
+ """
7
+
8
+ import sys
9
+ import time
10
+ import threading
11
+ import traceback
12
+ import functools
13
+ from typing import Any, Dict, List, Optional, Callable, Set, Tuple
14
+ from dataclasses import dataclass, field
15
+ from contextlib import contextmanager
16
+ import queue
17
+
18
+ from cascade.core.event import Event, CausationLink
19
+ from cascade.core.graph import CausationGraph
20
+ from cascade.core.adapter import SymbioticAdapter
21
+
22
+
23
+ @dataclass
24
+ class ExecutionFrame:
25
+ """A frame in the execution trace."""
26
+ frame_id: str
27
+ function_name: str
28
+ file_path: str
29
+ line_number: int
30
+ local_vars: Dict[str, str] # Sanitized string representations
31
+ timestamp: float
32
+ duration_ms: Optional[float] = None
33
+ exception: Optional[str] = None
34
+
35
+
36
+ @dataclass
37
+ class Anomaly:
38
+ """An execution anomaly detected during monitoring."""
39
+ anomaly_type: str
40
+ description: str
41
+ severity: str
42
+ frame_id: str
43
+ timestamp: float
44
+ context: Dict[str, Any] = field(default_factory=dict)
45
+
46
+
47
+ class ExecutionMonitor:
48
+ """
49
+ Monitor live code execution and capture anomalies.
50
+
51
+ Uses sys.settrace to capture every function call, return, and exception.
52
+ Integrates with cascade-lattice's causation graph for tracing.
53
+
54
+ Usage:
55
+ monitor = ExecutionMonitor()
56
+
57
+ with monitor.monitoring():
58
+ # Your code here
59
+ result = my_function()
60
+
61
+ # After execution:
62
+ monitor.get_anomalies()
63
+ monitor.get_execution_trace()
64
+ """
65
+
66
+ def __init__(self,
67
+ capture_locals: bool = True,
68
+ max_depth: int = 100,
69
+ exclude_modules: Optional[Set[str]] = None):
70
+ self.capture_locals = capture_locals
71
+ self.max_depth = max_depth
72
+ self.exclude_modules = exclude_modules or {
73
+ 'cascade', 'threading', 'queue', 'logging',
74
+ 'importlib', '_frozen_importlib', 'posixpath', 'genericpath',
75
+ }
76
+
77
+ # Execution tracking
78
+ self.frames: List[ExecutionFrame] = []
79
+ self.anomalies: List[Anomaly] = []
80
+ self.call_stack: List[str] = []
81
+
82
+ # Causation graph
83
+ self.graph = CausationGraph()
84
+ self.adapter = SymbioticAdapter()
85
+
86
+ # Thresholds for anomaly detection
87
+ self.slow_threshold_ms = 100
88
+ self.deep_recursion_threshold = 50
89
+
90
+ # State
91
+ self._monitoring = False
92
+ self._lock = threading.RLock()
93
+ self._frame_counter = 0
94
+ self._function_times: Dict[str, List[float]] = {}
95
+ self._prev_trace = None
96
+
97
+ @contextmanager
98
+ def monitoring(self):
99
+ """Context manager for monitoring execution."""
100
+ self.start()
101
+ try:
102
+ yield self
103
+ finally:
104
+ self.stop()
105
+
106
+ def start(self):
107
+ """Start execution monitoring."""
108
+ if self._monitoring:
109
+ return
110
+
111
+ self._monitoring = True
112
+ self._prev_trace = sys.gettrace()
113
+ sys.settrace(self._trace_calls)
114
+ threading.settrace(self._trace_calls)
115
+
116
+ def stop(self):
117
+ """Stop execution monitoring."""
118
+ if not self._monitoring:
119
+ return
120
+
121
+ self._monitoring = False
122
+ sys.settrace(self._prev_trace)
123
+ threading.settrace(None)
124
+
125
+ # Analyze collected data for anomalies
126
+ self._analyze_for_anomalies()
127
+
128
+ def _trace_calls(self, frame, event, arg):
129
+ """Trace function for sys.settrace."""
130
+ if not self._monitoring:
131
+ return None
132
+
133
+ # Filter out excluded modules
134
+ code = frame.f_code
135
+ module = frame.f_globals.get('__name__', '')
136
+
137
+ if any(module.startswith(exc) for exc in self.exclude_modules):
138
+ return None
139
+
140
+ # Check depth
141
+ if len(self.call_stack) > self.max_depth:
142
+ return None
143
+
144
+ try:
145
+ if event == 'call':
146
+ self._handle_call(frame, code)
147
+ elif event == 'return':
148
+ self._handle_return(frame, code, arg)
149
+ elif event == 'exception':
150
+ self._handle_exception(frame, code, arg)
151
+ except Exception:
152
+ # Don't let tracing errors affect the program
153
+ pass
154
+
155
+ return self._trace_calls
156
+
157
+ def _handle_call(self, frame, code):
158
+ """Handle function call event."""
159
+ with self._lock:
160
+ self._frame_counter += 1
161
+ frame_id = f"frame_{self._frame_counter}"
162
+
163
+ # Capture local variables
164
+ local_vars = {}
165
+ if self.capture_locals:
166
+ for name, value in list(frame.f_locals.items())[:20]:
167
+ local_vars[name] = self._sanitize_value(value)
168
+
169
+ exec_frame = ExecutionFrame(
170
+ frame_id=frame_id,
171
+ function_name=code.co_name,
172
+ file_path=code.co_filename,
173
+ line_number=frame.f_lineno,
174
+ local_vars=local_vars,
175
+ timestamp=time.time(),
176
+ )
177
+
178
+ self.frames.append(exec_frame)
179
+
180
+ # Create causation link to caller
181
+ if self.call_stack:
182
+ caller_id = self.call_stack[-1]
183
+ link = CausationLink(
184
+ from_event=caller_id,
185
+ to_event=frame_id,
186
+ causation_type="call",
187
+ strength=1.0,
188
+ explanation=f"Called {code.co_name}"
189
+ )
190
+ self.graph.add_link(link)
191
+
192
+ self.call_stack.append(frame_id)
+ # Track peak depth while tracing; call_stack is empty again by
+ # the time post-run analysis executes.
+ self._max_depth_seen = max(getattr(self, '_max_depth_seen', 0), len(self.call_stack))
193
+
194
+ # Track function timing
195
+ func_key = f"{code.co_filename}:{code.co_name}"
196
+ if func_key not in self._function_times:
197
+ self._function_times[func_key] = []
198
+
199
+ def _handle_return(self, frame, code, return_value):
200
+ """Handle function return event."""
201
+ with self._lock:
202
+ if not self.call_stack:
203
+ return
204
+
205
+ frame_id = self.call_stack.pop()
206
+
207
+ # Find the frame and update duration
208
+ for exec_frame in reversed(self.frames):
209
+ if exec_frame.frame_id == frame_id:
210
+ exec_frame.duration_ms = (time.time() - exec_frame.timestamp) * 1000
211
+
212
+ # Track timing
213
+ func_key = f"{code.co_filename}:{code.co_name}"
214
+ if func_key in self._function_times:
215
+ self._function_times[func_key].append(exec_frame.duration_ms)
216
+ break
217
+
218
+ def _handle_exception(self, frame, code, arg):
219
+ """Handle exception event."""
220
+ exc_type, exc_value, exc_tb = arg
221
+
222
+ with self._lock:
223
+ frame_id = self.call_stack[-1] if self.call_stack else f"frame_{self._frame_counter}"
224
+
225
+ # Update frame with exception
226
+ for exec_frame in reversed(self.frames):
227
+ if exec_frame.frame_id == frame_id:
228
+ exec_frame.exception = f"{exc_type.__name__}: {exc_value}"
229
+ break
230
+
231
+ # Record as anomaly
232
+ self.anomalies.append(Anomaly(
233
+ anomaly_type="exception",
234
+ description=f"{exc_type.__name__}: {exc_value}",
235
+ severity="error",
236
+ frame_id=frame_id,
237
+ timestamp=time.time(),
238
+ context={
239
+ "exception_type": exc_type.__name__,
240
+ "exception_message": str(exc_value),
241
+ "function": code.co_name,
242
+ "file": code.co_filename,
243
+ "line": frame.f_lineno,
244
+ },
245
+ ))
246
+
247
+ def _analyze_for_anomalies(self):
248
+ """Analyze collected data for additional anomalies."""
249
+ # Check for slow functions
250
+ for func_key, times in self._function_times.items():
251
+ if times:
252
+ avg_time = sum(times) / len(times)
253
+ max_time = max(times)
254
+
255
+ if max_time > self.slow_threshold_ms:
256
+ self.anomalies.append(Anomaly(
257
+ anomaly_type="slow_execution",
258
+ description=f"Slow function: {func_key} (max: {max_time:.1f}ms, avg: {avg_time:.1f}ms)",
259
+ severity="warning",
260
+ frame_id="",
261
+ timestamp=time.time(),
262
+ context={
263
+ "function": func_key,
264
+ "max_time_ms": max_time,
265
+ "avg_time_ms": avg_time,
266
+ "call_count": len(times),
267
+ },
268
+ ))
269
+
270
+ # Check for deep recursion
271
+ # call_stack has already unwound when analysis runs, so use the
+ # peak depth recorded during tracing instead of re-measuring it.
+ max_depth = getattr(self, '_max_depth_seen', 0)
272
+ if max_depth > self.deep_recursion_threshold:
273
+ self.anomalies.append(Anomaly(
274
+ anomaly_type="deep_recursion",
275
+ description=f"Deep call stack detected: {max_depth} frames",
276
+ severity="warning",
277
+ frame_id="",
278
+ timestamp=time.time(),
279
+ context={"max_depth": max_depth},
280
+ ))
281
+
282
+ # Check for repeated exceptions
283
+ exception_counts: Dict[str, int] = {}
284
+ for anomaly in self.anomalies:
285
+ if anomaly.anomaly_type == "exception":
286
+ exc_type = anomaly.context.get("exception_type", "unknown")
287
+ exception_counts[exc_type] = exception_counts.get(exc_type, 0) + 1
288
+
289
+ for exc_type, count in exception_counts.items():
290
+ if count > 3:
291
+ self.anomalies.append(Anomaly(
292
+ anomaly_type="repeated_exception",
293
+ description=f"{exc_type} occurred {count} times",
294
+ severity="error",
295
+ frame_id="",
296
+ timestamp=time.time(),
297
+ context={"exception_type": exc_type, "count": count},
298
+ ))
299
+
300
+ def _sanitize_value(self, value: Any, max_len: int = 100) -> str:
301
+ """Convert value to safe string representation."""
302
+ try:
303
+ if value is None:
304
+ return "None"
305
+
306
+ # Numpy arrays
307
+ if hasattr(value, 'shape'):
308
+ return f"<array {value.shape}>"
309
+
310
+ # Tensors
311
+ if hasattr(value, 'size') and callable(value.size):
312
+ return f"<tensor {value.size()}>"
313
+
314
+ # Large collections
315
+ if isinstance(value, (list, dict, set)):
316
+ s = str(value)
317
+ if len(s) > max_len:
318
+ return s[:max_len] + "..."
319
+ return s
320
+
321
+ # Strings
322
+ if isinstance(value, str):
323
+ if len(value) > max_len:
324
+ return value[:max_len] + "..."
325
+ return repr(value)
326
+
327
+ # Primitives
328
+ if isinstance(value, (int, float, bool)):
329
+ return str(value)
330
+
331
+ # Fallback
332
+ return f"<{type(value).__name__}>"
333
+ except Exception:
334
+ return "<error>"
335
+
336
+ # =========================================================================
337
+ # QUERIES
338
+ # =========================================================================
339
+
340
+ def get_anomalies(self, severity: Optional[str] = None) -> List[Anomaly]:
341
+ """Get detected anomalies, optionally filtered by severity."""
342
+ if severity:
343
+ return [a for a in self.anomalies if a.severity == severity]
344
+ return list(self.anomalies)
345
+
346
+ def get_execution_trace(self) -> List[ExecutionFrame]:
347
+ """Get the execution trace."""
348
+ return list(self.frames)
349
+
350
+ def get_call_graph(self) -> Dict[str, List[str]]:
351
+ """Get the call graph as adjacency list."""
352
+ graph: Dict[str, List[str]] = {}
353
+
354
+ for link in self.graph._links:
355
+ if link.from_event not in graph:
356
+ graph[link.from_event] = []
357
+ graph[link.from_event].append(link.to_event)
358
+
359
+ return graph
360
+
361
+ def get_hotspots(self, top_n: int = 10) -> List[Tuple[str, float, int]]:
362
+ """Get the hottest functions by total time spent."""
363
+ totals: Dict[str, Tuple[float, int]] = {}
364
+
365
+ for func_key, times in self._function_times.items():
366
+ if times:
367
+ totals[func_key] = (sum(times), len(times))
368
+
369
+ sorted_funcs = sorted(totals.items(), key=lambda x: x[1][0], reverse=True)
370
+ return [(func, total, count) for func, (total, count) in sorted_funcs[:top_n]]
371
+
372
+ # =========================================================================
373
+ # REPORTING
374
+ # =========================================================================
375
+
376
+ def get_summary(self) -> Dict[str, Any]:
377
+ """Get monitoring summary."""
378
+ return {
379
+ "total_frames": len(self.frames),
380
+ "total_anomalies": len(self.anomalies),
381
+ "anomalies_by_type": self._count_by_key(self.anomalies, lambda a: a.anomaly_type),
382
+ "anomalies_by_severity": self._count_by_key(self.anomalies, lambda a: a.severity),
383
+ "functions_traced": len(self._function_times),
384
+ }
385
+
386
+ def _count_by_key(self, items, key_func) -> Dict[str, int]:
387
+ """Count items by key function."""
388
+ counts: Dict[str, int] = {}
389
+ for item in items:
390
+ key = key_func(item)
391
+ counts[key] = counts.get(key, 0) + 1
392
+ return counts
393
+
394
+ def get_report(self) -> str:
395
+ """Generate a human-readable report."""
396
+ lines = [
397
+ "EXECUTION MONITORING REPORT",
398
+ "=" * 60,
399
+ f"Frames captured: {len(self.frames)}",
400
+ f"Functions traced: {len(self._function_times)}",
401
+ f"Anomalies detected: {len(self.anomalies)}",
402
+ "",
403
+ ]
404
+
405
+ # Anomalies by severity
406
+ if self.anomalies:
407
+ lines.append("ANOMALIES")
408
+ lines.append("-" * 40)
409
+
410
+ severity_icons = {"critical": "🔴", "error": "❌", "warning": "⚠️", "info": "ℹ️"}
411
+
412
+ for anomaly in sorted(self.anomalies, key=lambda a:
413
+ ["critical", "error", "warning", "info"].index(a.severity)
414
+ if a.severity in ["critical", "error", "warning", "info"] else 99):
415
+ icon = severity_icons.get(anomaly.severity, "•")
416
+ lines.append(f" {icon} [{anomaly.anomaly_type}] {anomaly.description}")
417
+
418
+ lines.append("")
419
+
420
+ # Hotspots
421
+ hotspots = self.get_hotspots(5)
422
+ if hotspots:
423
+ lines.append("PERFORMANCE HOTSPOTS")
424
+ lines.append("-" * 40)
425
+
426
+ for func, total_ms, count in hotspots:
427
+ avg_ms = total_ms / count if count else 0
428
+ lines.append(f" {func}")
429
+ lines.append(f" Total: {total_ms:.1f}ms | Calls: {count} | Avg: {avg_ms:.1f}ms")
430
+
431
+ lines.append("")
432
+
433
+ return "\n".join(lines)
434
+
435
+
436
+ # =============================================================================
437
+ # CONVENIENCE DECORATORS
438
+ # =============================================================================
439
+
440
+ def monitor(func: Callable = None, **kwargs) -> Callable:
441
+ """
442
+ Decorator to monitor a function's execution.
443
+
444
+ Usage:
445
+ @monitor
446
+ def my_function():
447
+ ...
448
+
449
+ # Access monitoring results
450
+ my_function._monitor_results
451
+ """
452
+ def decorator(fn: Callable) -> Callable:
453
+ @functools.wraps(fn)
454
+ def wrapper(*args, **call_kwargs):
455
+ monitor = ExecutionMonitor(**kwargs)
456
+ with monitor.monitoring():
457
+ result = fn(*args, **call_kwargs)
458
+
459
+ # Attach results to function
460
+ wrapper._monitor_results = {
461
+ "anomalies": monitor.get_anomalies(),
462
+ "summary": monitor.get_summary(),
463
+ "report": monitor.get_report(),
464
+ }
465
+
466
+ return result
467
+
468
+ wrapper._monitor_results = None
469
+ return wrapper
470
+
471
+ if func is not None:
472
+ return decorator(func)
473
+ return decorator
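The severity-ranked sort used in `get_report()` above (and again in the report module's Markdown output) can be sketched standalone; the helper name `severity_rank` and the sample data here are illustrative, not part of the module:

```python
# Known severities sort by rank; unknown values sort after all known ones.
SEVERITY_ORDER = ["critical", "error", "warning", "info"]

def severity_rank(severity: str) -> int:
    """Rank a severity string; unknown values get a large sentinel rank."""
    return SEVERITY_ORDER.index(severity) if severity in SEVERITY_ORDER else 99

anomalies = [("warning", "slow"), ("critical", "div0"), ("custom", "x"), ("error", "exc")]
ordered = sorted(anomalies, key=lambda a: severity_rank(a[0]))
print([s for s, _ in ordered])  # → ['critical', 'error', 'warning', 'custom']
```

Because Python's sort is stable, anomalies of equal severity keep their capture order.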
cascade/diagnostics/report.py ADDED
@@ -0,0 +1,432 @@
+"""
+CASCADE Diagnostic Report - Generate comprehensive diagnostic reports.
+
+Combines:
+- CodeTracer execution traces
+- BugDetector static analysis
+- ExecutionMonitor runtime anomalies
+- GhostLog forensic reconstruction
+
+Into a unified diagnostic report.
+"""
+
+import time
+import json
+import hashlib
+from typing import Any, Dict, List, Optional
+from dataclasses import dataclass, field
+from pathlib import Path
+from datetime import datetime
+
+from cascade.core.graph import CausationGraph
+from cascade.forensics.analyzer import DataForensics, GhostLog
+
+
+@dataclass
+class DiagnosticFinding:
+    """A single diagnostic finding."""
+    finding_id: str
+    category: str  # "static", "runtime", "forensic", "trace"
+    severity: str  # "critical", "error", "warning", "info"
+    title: str
+    description: str
+    location: Optional[Dict[str, Any]] = None  # file, line, function
+    evidence: List[str] = field(default_factory=list)
+    related_findings: List[str] = field(default_factory=list)
+    suggestions: List[str] = field(default_factory=list)
+    confidence: float = 1.0
+    timestamp: float = field(default_factory=time.time)
+
+
+@dataclass
+class DiagnosticReport:
+    """
+    A comprehensive diagnostic report.
+
+    Aggregates findings from multiple sources:
+    - Static analysis (BugDetector)
+    - Runtime monitoring (ExecutionMonitor)
+    - Execution tracing (CodeTracer)
+    - Forensic analysis (GhostLog)
+    """
+
+    report_id: str
+    title: str
+    created_at: float
+    target: str  # File, directory, or module analyzed
+
+    findings: List[DiagnosticFinding] = field(default_factory=list)
+    summary: Dict[str, Any] = field(default_factory=dict)
+
+    # Source data
+    static_analysis: Dict[str, Any] = field(default_factory=dict)
+    runtime_analysis: Dict[str, Any] = field(default_factory=dict)
+    trace_analysis: Dict[str, Any] = field(default_factory=dict)
+    forensic_analysis: Dict[str, Any] = field(default_factory=dict)
+
+    def add_finding(self, finding: DiagnosticFinding):
+        """Add a finding to the report."""
+        self.findings.append(finding)
+
+    def get_findings_by_severity(self, severity: str) -> List[DiagnosticFinding]:
+        """Get findings filtered by severity."""
+        return [f for f in self.findings if f.severity == severity]
+
+    def get_findings_by_category(self, category: str) -> List[DiagnosticFinding]:
+        """Get findings filtered by category."""
+        return [f for f in self.findings if f.category == category]
+
+    def compute_summary(self):
+        """Compute summary statistics."""
+        self.summary = {
+            "total_findings": len(self.findings),
+            "by_severity": {},
+            "by_category": {},
+            "critical_count": 0,
+            "has_critical": False,
+        }
+
+        for finding in self.findings:
+            # Count by severity
+            sev = finding.severity
+            self.summary["by_severity"][sev] = self.summary["by_severity"].get(sev, 0) + 1
+
+            # Count by category
+            cat = finding.category
+            self.summary["by_category"][cat] = self.summary["by_category"].get(cat, 0) + 1
+
+        self.summary["critical_count"] = self.summary["by_severity"].get("critical", 0)
+        self.summary["has_critical"] = self.summary["critical_count"] > 0
+
+    def to_dict(self) -> Dict[str, Any]:
+        """Convert to dictionary."""
+        return {
+            "report_id": self.report_id,
+            "title": self.title,
+            "created_at": self.created_at,
+            "target": self.target,
+            "summary": self.summary,
+            "findings": [
+                {
+                    "id": f.finding_id,
+                    "category": f.category,
+                    "severity": f.severity,
+                    "title": f.title,
+                    "description": f.description,
+                    "location": f.location,
+                    "evidence": f.evidence,
+                    "suggestions": f.suggestions,
+                    "confidence": f.confidence,
+                }
+                for f in self.findings
+            ],
+            "static_analysis": self.static_analysis,
+            "runtime_analysis": self.runtime_analysis,
+            "trace_analysis": self.trace_analysis,
+            "forensic_analysis": self.forensic_analysis,
+        }
+
+    def to_json(self, indent: int = 2) -> str:
+        """Convert to JSON string."""
+        return json.dumps(self.to_dict(), indent=indent, default=str)
+
+    def save(self, path: str):
+        """Save report to file."""
+        with open(path, 'w') as f:
+            f.write(self.to_json())
+
+    @classmethod
+    def load(cls, path: str) -> "DiagnosticReport":
+        """Load report from file."""
+        with open(path, 'r') as f:
+            data = json.load(f)
+
+        report = cls(
+            report_id=data["report_id"],
+            title=data["title"],
+            created_at=data["created_at"],
+            target=data["target"],
+        )
+
+        report.summary = data.get("summary", {})
+        report.static_analysis = data.get("static_analysis", {})
+        report.runtime_analysis = data.get("runtime_analysis", {})
+        report.trace_analysis = data.get("trace_analysis", {})
+        report.forensic_analysis = data.get("forensic_analysis", {})
+
+        for f_data in data.get("findings", []):
+            finding = DiagnosticFinding(
+                finding_id=f_data["id"],
+                category=f_data["category"],
+                severity=f_data["severity"],
+                title=f_data["title"],
+                description=f_data["description"],
+                location=f_data.get("location"),
+                evidence=f_data.get("evidence", []),
+                suggestions=f_data.get("suggestions", []),
+                confidence=f_data.get("confidence", 1.0),
+            )
+            report.findings.append(finding)
+
+        return report
+
+
+class DiagnosticEngine:
+    """
+    Engine for running comprehensive diagnostics.
+
+    Usage:
+        engine = DiagnosticEngine()
+
+        # Analyze a file
+        report = engine.analyze_file("path/to/file.py")
+
+        # Analyze a directory
+        report = engine.analyze_directory("path/to/project")
+
+        # Analyze with runtime monitoring
+        report = engine.analyze_execution(my_function, args)
+
+        # Print report
+        print(engine.to_markdown(report))
+    """
+
+    def __init__(self):
+        from .code_tracer import CodeTracer
+        from .bug_detector import BugDetector
+        from .execution_monitor import ExecutionMonitor
+
+        self.tracer = CodeTracer()
+        self.detector = BugDetector()
+        self.monitor_class = ExecutionMonitor
+
+        self._report_counter = 0
+
+    def analyze_file(self, file_path: str) -> DiagnosticReport:
+        """Run static analysis on a single file."""
+        self._report_counter += 1
+
+        report = DiagnosticReport(
+            report_id=self._generate_report_id(file_path),
+            title=f"Diagnostic Report: {Path(file_path).name}",
+            created_at=time.time(),
+            target=file_path,
+        )
+
+        # Run static analysis
+        issues = self.detector.scan_file(file_path)
+
+        for issue in issues:
+            report.add_finding(self._issue_to_finding(issue))
+
+        report.static_analysis = self.detector.get_summary()
+        report.compute_summary()
+
+        return report
+
+    def analyze_directory(self, dir_path: str, recursive: bool = True) -> DiagnosticReport:
+        """Run static analysis on a directory."""
+        self._report_counter += 1
+
+        report = DiagnosticReport(
+            report_id=self._generate_report_id(dir_path),
+            title=f"Diagnostic Report: {Path(dir_path).name}",
+            created_at=time.time(),
+            target=dir_path,
+        )
+
+        # Run static analysis
+        issues = self.detector.scan_directory(dir_path, recursive)
+
+        for issue in issues:
+            report.add_finding(self._issue_to_finding(issue))
+
+        report.static_analysis = self.detector.get_summary()
+        report.compute_summary()
+
+        return report
+
+    def _issue_to_finding(self, issue) -> DiagnosticFinding:
+        """Convert a BugDetector issue into a DiagnosticFinding."""
+        return DiagnosticFinding(
+            finding_id=issue.issue_id,
+            category="static",
+            severity=issue.severity,
+            title=issue.pattern_name.replace("_", " ").title(),
+            description=issue.message,
+            location={
+                "file": issue.file_path,
+                "line": issue.line_number,
+                "column": issue.column,
+            },
+            evidence=[issue.code_snippet] if issue.code_snippet else [],
+            suggestions=[issue.suggestion] if issue.suggestion else [],
+            confidence=issue.confidence,
+        )
+
+    def analyze_execution(self, func, *args, **kwargs) -> DiagnosticReport:
+        """Run diagnostics on function execution."""
+        self._report_counter += 1
+
+        func_name = getattr(func, '__name__', str(func))
+
+        report = DiagnosticReport(
+            report_id=self._generate_report_id(func_name),
+            title=f"Execution Diagnostic: {func_name}",
+            created_at=time.time(),
+            target=func_name,
+        )
+
+        # Create a monitor for this execution
+        monitor = self.monitor_class()
+
+        result = None
+        exception = None
+
+        with monitor.monitoring():
+            try:
+                result = func(*args, **kwargs)
+            except Exception as e:
+                exception = e
+
+        # Convert anomalies to findings
+        for anomaly in monitor.get_anomalies():
+            finding = DiagnosticFinding(
+                finding_id=f"anomaly_{anomaly.frame_id}_{anomaly.timestamp}",
+                category="runtime",
+                severity=anomaly.severity,
+                title=anomaly.anomaly_type.replace("_", " ").title(),
+                description=anomaly.description,
+                location=anomaly.context,
+                confidence=1.0,
+            )
+            report.add_finding(finding)
+
+        # Add execution summary
+        report.runtime_analysis = monitor.get_summary()
+        report.runtime_analysis["hotspots"] = [
+            {"function": f, "total_ms": t, "calls": c}
+            for f, t, c in monitor.get_hotspots(10)
+        ]
+
+        if exception:
+            report.runtime_analysis["exception"] = str(exception)
+
+        report.compute_summary()
+
+        return report
+
+    def _generate_report_id(self, target: str) -> str:
+        """Generate a unique report ID."""
+        content = f"{target}:{time.time()}:{self._report_counter}"
+        return hashlib.sha256(content.encode()).hexdigest()[:16]
+
+    def to_markdown(self, report: DiagnosticReport) -> str:
+        """Convert a report to Markdown format."""
+        lines = [
+            f"# {report.title}",
+            "",
+            f"**Report ID:** `{report.report_id}`",
+            f"**Generated:** {datetime.fromtimestamp(report.created_at).isoformat()}",
+            f"**Target:** `{report.target}`",
+            "",
+            "## Summary",
+            "",
+            f"- **Total Findings:** {report.summary.get('total_findings', 0)}",
+        ]
+
+        severity_order = ["critical", "error", "warning", "info"]
+        icons = {"critical": "🔴", "error": "❌", "warning": "⚠️", "info": "ℹ️"}
+
+        # Severity breakdown
+        by_severity = report.summary.get("by_severity", {})
+        if by_severity:
+            lines.append("")
+            lines.append("### By Severity")
+            lines.append("")
+            for sev in severity_order:
+                count = by_severity.get(sev, 0)
+                if count:
+                    lines.append(f"- {icons.get(sev, '•')} **{sev.title()}:** {count}")
+
+        # Findings
+        if report.findings:
+            lines.extend(["", "## Findings", ""])
+
+            for finding in sorted(
+                report.findings,
+                key=lambda f: severity_order.index(f.severity) if f.severity in severity_order else 99,
+            ):
+                icon = icons.get(finding.severity, "•")
+
+                lines.append(f"### {icon} {finding.title}")
+                lines.append("")
+                lines.append(f"**Severity:** {finding.severity} | **Category:** {finding.category}")
+                lines.append("")
+                lines.append(finding.description)
+
+                if finding.location:
+                    loc = finding.location
+                    if "file" in loc:
+                        lines.append("")
+                        lines.append(f"**Location:** `{loc.get('file', '')}:{loc.get('line', '')}`")
+
+                if finding.evidence:
+                    lines.append("")
+                    lines.append("**Evidence:**")
+                    for ev in finding.evidence:
+                        lines.append("```")
+                        lines.append(ev)
+                        lines.append("```")
+
+                if finding.suggestions:
+                    lines.append("")
+                    lines.append("**Suggestions:**")
+                    for sug in finding.suggestions:
+                        lines.append(f"- {sug}")
+
+                lines.append("")
+
+        return "\n".join(lines)
+
+
+# =============================================================================
+# CONVENIENCE FUNCTION
+# =============================================================================
+
+def diagnose(target, *args, **kwargs) -> DiagnosticReport:
+    """
+    Convenience function to run diagnostics.
+
+    Usage:
+        # Analyze a file
+        report = diagnose("path/to/file.py")
+
+        # Analyze a directory
+        report = diagnose("path/to/project/")
+
+        # Analyze a function
+        report = diagnose(my_function, arg1, arg2)
+    """
+    engine = DiagnosticEngine()
+
+    if callable(target):
+        # It's a function
+        return engine.analyze_execution(target, *args, **kwargs)
+    elif isinstance(target, str):
+        path = Path(target)
+        if path.is_file():
+            return engine.analyze_file(target)
+        elif path.is_dir():
+            return engine.analyze_directory(target)
+
+    raise ValueError(f"Cannot diagnose target: {target}")
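The tallying in `DiagnosticReport.compute_summary()` can be reproduced standalone with plain dicts (no `collections.Counter` dependency); the sample findings below are illustrative:

```python
# Minimal sketch of compute_summary(): tally findings by severity and
# category, then derive the critical-count flags.
findings = [
    {"severity": "critical", "category": "static"},
    {"severity": "warning", "category": "runtime"},
    {"severity": "critical", "category": "static"},
]

summary = {"total_findings": len(findings), "by_severity": {}, "by_category": {}}
for f in findings:
    summary["by_severity"][f["severity"]] = summary["by_severity"].get(f["severity"], 0) + 1
    summary["by_category"][f["category"]] = summary["by_category"].get(f["category"], 0) + 1
summary["critical_count"] = summary["by_severity"].get("critical", 0)
summary["has_critical"] = summary["critical_count"] > 0

print(summary["critical_count"], summary["has_critical"])  # → 2 True
```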
pyproject.toml CHANGED
@@ -5,7 +5,7 @@ build-backend = "hatchling.build"
 [project]
 name = "cascade-lattice"
 dynamic = ["version"]
-description = "Universal AI provenance layer — cryptographic receipts for every call, with HOLD inference halt protocol"
+description = "Universal AI provenance layer — cryptographic receipts for every call, HOLD inference halt protocol, and code diagnostics"
 readme = "README.md"
 license = "MIT"
 authors = [
@@ -13,7 +13,8 @@ authors = [
 ]
 keywords = [
     "ai", "ml", "provenance", "observability", "llm", "tracing",
-    "cryptographic", "receipts", "monitoring", "hold-protocol"
+    "cryptographic", "receipts", "monitoring", "hold-protocol",
+    "diagnostics", "debugging", "bug-detection", "static-analysis"
 ]
 classifiers = [
     "Development Status :: 4 - Beta",