riazmo committed · verified
Commit cfec14d · 1 Parent(s): ae6e181

Upload 17 files
Dockerfile ADDED
@@ -0,0 +1,63 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install system dependencies for Playwright/Chromium
+ RUN apt-get update && apt-get install -y \
+     wget \
+     gnupg \
+     curl \
+     libnss3 \
+     libnspr4 \
+     libatk1.0-0 \
+     libatk-bridge2.0-0 \
+     libcups2 \
+     libdrm2 \
+     libdbus-1-3 \
+     libxkbcommon0 \
+     libxcomposite1 \
+     libxdamage1 \
+     libxfixes3 \
+     libxrandr2 \
+     libgbm1 \
+     libasound2 \
+     libpango-1.0-0 \
+     libpangocairo-1.0-0 \
+     libcairo2 \
+     libatspi2.0-0 \
+     libxshmfence1 \
+     libglib2.0-0 \
+     libx11-6 \
+     libx11-xcb1 \
+     libxcb1 \
+     libxext6 \
+     libxrender1 \
+     libxtst6 \
+     fonts-liberation \
+     libnss3-tools \
+     xdg-utils \
+     libgtk-3-0 \
+     libxss1 \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements first for caching
+ COPY requirements.txt .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Set Playwright browser path and install browsers
+ ENV PLAYWRIGHT_BROWSERS_PATH=/app/.playwright-browsers
+ RUN python -m playwright install chromium
+
+ # Copy application files
+ COPY . .
+
+ # Create data directories
+ RUN mkdir -p /app/data/figma /app/data/website /app/data/comparisons /app/reports
+
+ # Expose port
+ EXPOSE 7860
+
+ # Run the app
+ CMD ["python", "app.py"]
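Reviewer note: the Dockerfile above can be exercised locally before pushing to the Space. A sketch of the build/run cycle — the image name `ui-regression` and the inline `FIGMA_API_KEY` value are assumptions, not part of this commit:

```shell
# Build the image from the repo root (assumed tag: ui-regression)
docker build -t ui-regression .

# Run it, exposing the Gradio port declared by EXPOSE 7860;
# FIGMA_API_KEY here is a placeholder, supply your own token
docker run --rm -p 7860:7860 -e FIGMA_API_KEY=figd_placeholder ui-regression
```

The app should then be reachable at http://localhost:7860, matching the `app_port` in the Space config.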
PROJECT_STATUS.md ADDED
@@ -0,0 +1,125 @@
+ # 🚀 UI Regression Testing Tool - Project Status
+
+ ## 📋 Project Overview
+ **Goal**: Compare Figma designs with live websites to detect ALL visual differences (114 types across 15 categories)
+
+ **Approach**: Phased development with element-by-element comparison
+
+ ---
+
+ ## 🎯 Phase Status
+
+ | Phase | Description | Status |
+ |-------|-------------|--------|
+ | **Phase 1** | Core infrastructure + Visual overlay | 🟢 IN PROGRESS |
+ | **Phase 2** | Figma element extraction | ⚪ Not started |
+ | **Phase 3** | Website DOM element extraction | ⚪ Not started |
+ | **Phase 4** | Element-by-element comparison | ⚪ Not started |
+ | **Phase 5** | Full 114-difference detection | ⚪ Not started |
+
+ ---
+
+ ## 📦 Phase 1: Core Infrastructure + Visual Overlay
+
+ ### Goals:
+ 1. ✅ Clean project structure
+ 2. ✅ Screenshot capture (Figma + Website)
+ 3. ✅ Fix 2x scaling issue (Figma exports at retina)
+ 4. ✅ Visual diff overlay (side-by-side with red highlights)
+ 5. ✅ Improved UI with clear status/results
+ 6. ✅ Proper logging system
+ 7. ✅ Similarity scoring (normalized)
+
+ ### Files Created:
+ ```
+ ui-regression-v2/
+ ├── app.py                            # Main Gradio interface
+ ├── workflow.py                       # LangGraph workflow orchestration
+ ├── state_schema.py                   # State definitions
+ ├── requirements.txt                  # Dependencies
+ ├── Dockerfile                        # HF Spaces deployment
+ ├── README.md                         # HF Spaces config
+ ├── PROJECT_STATUS.md                 # This file (for context)
+ ├── agents/
+ │   ├── __init__.py
+ │   ├── agent_0_super_agent.py        # Test plan generation
+ │   ├── agent_1_design_inspector.py   # Figma screenshot capture
+ │   ├── agent_2_website_inspector.py  # Website screenshot capture
+ │   └── agent_3_visual_comparator.py  # Visual diff analysis
+ ├── utils/
+ │   ├── __init__.py
+ │   ├── figma_client.py               # Figma API integration
+ │   ├── website_capturer.py           # Playwright website capture
+ │   └── image_differ.py               # Image comparison & overlay
+ └── data/                             # Output directories
+     ├── figma/
+     ├── website/
+     └── comparisons/
+ ```
+
+ ### Key Decisions:
+ - Using Playwright for website screenshots
+ - Figma API for design screenshots
+ - PIL/OpenCV for image comparison
+ - Gradio for UI
+ - LangGraph for workflow orchestration
+
+ ---
+
+ ## 🔮 Upcoming Phases
+
+ ### Phase 2: Figma Element Extraction
+ - Use Figma API to get all nodes (frames, components, text, etc.)
+ - Extract properties: position, size, color, text content
+ - Build element tree with hierarchy
+
+ ### Phase 3: Website DOM Extraction
+ - Use Playwright to extract DOM elements
+ - Get computed CSS for each element
+ - Extract: buttons, inputs, text, images, etc.
+ - Match elements by type/position/content
+
+ ### Phase 4: Element-by-Element Comparison
+ - Match Figma elements to Website elements
+ - Compare each property
+ - Detect: missing, extra, different elements
+ - Calculate per-element similarity
+
+ ### Phase 5: Full 114-Difference Detection
+ - Implement all 15 categories from framework
+ - Layout & Structure (8 checks)
+ - Typography (10 checks)
+ - Colors & Contrast (10 checks)
+ - And 12 more categories...
+
+ ---
+
+ ## 🔑 Important Information
+
+ ### HF Space Configuration:
+ - SDK: Docker
+ - Port: 7860
+ - Python: 3.10
+
+ ### Environment Variables Needed:
+ - `FIGMA_API_KEY` - Your Figma API token
+ - `HF_TOKEN` - (Optional) For enhanced AI analysis
+
+ ### Figma File Requirements:
+ - File must have frames named with viewport (e.g., "Checkout-Desktop", "Checkout-Mobile")
+ - Frame widths: Desktop=1440px, Mobile=375px
+
+ ---
+
+ ## 📝 How to Continue Development
+
+ When starting a new chat with Claude:
+ 1. Upload this `PROJECT_STATUS.md` file
+ 2. Upload any files you want to modify
+ 3. Say: "Continue UI Regression Tool from Phase X"
+
+ ---
+
+ ## 📅 Last Updated
+ Date: 2026-01-04
+ Phase: 1 (In Progress)
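Reviewer note: the frame-naming rule in PROJECT_STATUS.md (frames ending in "-Desktop" or "-Mobile") can be sketched as a small matcher. The helper below is hypothetical, not part of this commit:

```python
# Hypothetical sketch of the "-Desktop" / "-Mobile" frame-naming rule:
# map each matching Figma frame name to its viewport key.
def match_frames(frame_names):
    viewports = {}
    for name in frame_names:
        # Take the text after the last hyphen and compare case-insensitively
        suffix = name.rsplit("-", 1)[-1].lower()
        if suffix in ("desktop", "mobile"):
            viewports[suffix] = name
    return viewports

print(match_frames(["Checkout-Desktop", "Checkout-Mobile", "Notes"]))
# → {'desktop': 'Checkout-Desktop', 'mobile': 'Checkout-Mobile'}
```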
README.md CHANGED
@@ -1,10 +1,43 @@
  ---
- title: Ui Regression Testing 3
- emoji: 📈
- colorFrom: pink
- colorTo: red
+ title: UI Regression Testing v2
+ emoji: 🎨
+ colorFrom: blue
+ colorTo: purple
  sdk: docker
+ app_port: 7860
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🎨 UI Regression Testing Tool v2
+
+ Compare Figma designs with live websites to detect visual differences.
+
+ ## Features
+
+ ### Phase 1 (Current)
+ - ✅ Screenshot capture from Figma designs
+ - ✅ Screenshot capture from live websites
+ - ✅ Visual diff overlay with highlighted differences
+ - ✅ Normalized image comparison (handles Figma 2x export)
+ - ✅ Similarity scoring
+ - ✅ Side-by-side comparison view
+
+ ### Coming Soon
+ - 🔜 Element-by-element comparison
+ - 🔜 Typography detection
+ - 🔜 Color palette analysis
+ - 🔜 Component detection
+ - 🔜 114 visual difference checks
+
+ ## Usage
+
+ 1. Enter your **Figma API Key**
+ 2. Enter the **Figma File ID** (from the URL)
+ 3. Enter the **Website URL** to compare
+ 4. Click **"Start Capture"** to take screenshots
+ 5. Click **"Run Analysis"** to compare and see differences
+
+ ## Requirements
+
+ - Figma file with frames named: `*-Desktop` (1440px) and `*-Mobile` (375px)
+ - Website must be publicly accessible
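Reviewer note: usage step 2 asks for the Figma File ID "from the URL". A sketch of extracting it — the helper is hypothetical, and the assumed URL shapes are Figma's `/file/<key>/` and `/design/<key>/` paths:

```python
import re

# Hypothetical helper: pull the file key out of a Figma URL.
# Assumes the key is the alphanumeric segment after /file/ or /design/.
def figma_file_key(url):
    m = re.search(r"figma\.com/(?:file|design)/([A-Za-z0-9]+)", url)
    return m.group(1) if m else None

print(figma_file_key("https://www.figma.com/design/ENieX2p3Gy3TAtxaB36cZA/My-File"))
# → ENieX2p3Gy3TAtxaB36cZA
```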
agents/__init__.py ADDED
@@ -0,0 +1,16 @@
+ """
+ Agents for UI Regression Testing
+ Each agent handles a specific part of the comparison workflow.
+ """
+
+ from .agent_0_super_agent import agent_0_node
+ from .agent_1_design_inspector import agent_1_node
+ from .agent_2_website_inspector import agent_2_node
+ from .agent_3_visual_comparator import agent_3_node
+
+ __all__ = [
+     'agent_0_node',
+     'agent_1_node',
+     'agent_2_node',
+     'agent_3_node'
+ ]
agents/agent_0_super_agent.py ADDED
@@ -0,0 +1,78 @@
+ """
+ Agent 0: Super Agent
+ Initializes the test plan and validates inputs.
+ """
+
+ from typing import Dict, Any
+
+
+ def agent_0_node(state: Dict[str, Any]) -> Dict[str, Any]:
+     """
+     Initialize test plan and validate configuration.
+
+     This agent:
+     1. Validates that all required inputs are present
+     2. Sets up the test plan
+     3. Logs the configuration
+     """
+     print("\n" + "="*60)
+     print("🤖 Agent 0: Super Agent - Initializing Test Plan")
+     print("="*60)
+
+     # Validate required fields
+     figma_key = state.get("figma_access_token", "")
+     figma_file = state.get("figma_file_key", "")
+     website_url = state.get("website_url", "")
+     execution_id = state.get("execution_id", "")
+     hf_token = state.get("hf_token", "")
+
+     errors = []
+     if not figma_key:
+         errors.append("Missing Figma API Key")
+     if not figma_file:
+         errors.append("Missing Figma File ID")
+     if not website_url:
+         errors.append("Missing Website URL")
+
+     if errors:
+         error_msg = "Validation failed: " + ", ".join(errors)
+         print(f"   ❌ {error_msg}")
+         return {
+             "status": "validation_failed",
+             "error_message": error_msg,
+             "logs": state.get("logs", []) + [f"❌ {error_msg}"]
+         }
+
+     # Log configuration
+     print(f"   ✅ Figma API Key: {'*' * 20}...{figma_key[-4:] if len(figma_key) > 4 else '****'}")
+     print(f"   ✅ Figma File ID: {figma_file}")
+     print(f"   ✅ Website URL: {website_url}")
+     print(f"   ✅ Execution ID: {execution_id}")
+     print(f"   ✅ HF Token: {'Provided' if hf_token else 'Not provided (basic mode)'}")
+
+     # Define test plan
+     test_plan = {
+         "viewports": ["desktop", "mobile"],
+         "desktop_width": 1440,
+         "mobile_width": 375,
+         "checks": [
+             "layout_comparison",
+             "color_comparison",
+             "visual_diff_overlay"
+         ]
+     }
+
+     print("\n   📋 Test Plan:")
+     print(f"      • Viewports: {test_plan['viewports']}")
+     print(f"      • Desktop width: {test_plan['desktop_width']}px")
+     print(f"      • Mobile width: {test_plan['mobile_width']}px")
+     print(f"      • Checks: {len(test_plan['checks'])} types")
+
+     logs = state.get("logs", [])
+     logs.append("✅ Test plan initialized")
+     logs.append(f"   Viewports: desktop ({test_plan['desktop_width']}px), mobile ({test_plan['mobile_width']}px)")
+
+     return {
+         "status": "plan_ready",
+         "logs": logs
+     }
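Reviewer note: each agent node returns only a *partial* state update (e.g. `{"status": ..., "logs": ...}`), which the workflow merges into the full state. A minimal sketch of that merge — a plain dict merge standing in for LangGraph's channel-based state handling, which is an assumption here:

```python
# Sketch (assumption: plain dict merge approximates the workflow's
# state handling): apply an agent's partial update without mutating
# the original state.
def merge_state(state, update):
    merged = dict(state)
    merged.update(update)
    return merged

# An agent that fails validation returns an error-shaped partial update
state = {"figma_file_key": "abc", "logs": []}
update = {"status": "validation_failed", "error_message": "Missing Figma API Key"}
new_state = merge_state(state, update)
print(new_state["status"])
# → validation_failed
```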
agents/agent_1_design_inspector.py ADDED
@@ -0,0 +1,69 @@
+ """
+ Agent 1: Design Inspector
+ Captures screenshots from Figma design file.
+ """
+
+ from typing import Dict, Any
+ import sys
+ import os
+
+ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+ from utils.figma_client import FigmaClient
+
+
+ def agent_1_node(state: Dict[str, Any]) -> Dict[str, Any]:
+     """
+     Capture screenshots from Figma design.
+
+     This agent:
+     1. Connects to Figma API
+     2. Finds Desktop and Mobile frames
+     3. Exports screenshots at proper scale
+     4. Stores paths and dimensions in state
+     """
+     print("\n" + "="*60)
+     print("🎨 Agent 1: Design Inspector - Capturing Figma Screenshots")
+     print("="*60)
+
+     figma_key = state.get("figma_access_token", "")
+     figma_file = state.get("figma_file_key", "")
+     execution_id = state.get("execution_id", "")
+     logs = state.get("logs", [])
+
+     try:
+         # Initialize Figma client
+         client = FigmaClient(figma_key)
+
+         # Export frames
+         screenshots, dimensions = client.export_frames_for_comparison(
+             file_key=figma_file,
+             output_dir="data/figma",
+             execution_id=execution_id
+         )
+
+         if not screenshots:
+             raise ValueError("No frames found in Figma file. Ensure frames are named with 'Desktop' or 'Mobile'.")
+
+         print(f"\n   ✅ Captured {len(screenshots)} Figma screenshots")
+         for viewport, path in screenshots.items():
+             dims = dimensions.get(viewport, {})
+             print(f"      • {viewport}: {dims.get('width', '?')}x{dims.get('height', '?')}px")
+             logs.append(f"📸 Figma {viewport}: {dims.get('width', '?')}x{dims.get('height', '?')}px")
+
+         return {
+             "figma_screenshots": screenshots,
+             "figma_dimensions": dimensions,
+             "status": "figma_captured",
+             "logs": logs
+         }
+
+     except Exception as e:
+         error_msg = f"Failed to capture Figma screenshots: {str(e)}"
+         print(f"\n   ❌ {error_msg}")
+         logs.append(f"❌ {error_msg}")
+         return {
+             "status": "figma_capture_failed",
+             "error_message": error_msg,
+             "logs": logs
+         }
agents/agent_2_website_inspector.py ADDED
@@ -0,0 +1,66 @@
+ """
+ Agent 2: Website Inspector
+ Captures screenshots from live website.
+ """
+
+ from typing import Dict, Any
+ import sys
+ import os
+
+ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+ from utils.website_capturer import capture_website_screenshots
+
+
+ def agent_2_node(state: Dict[str, Any]) -> Dict[str, Any]:
+     """
+     Capture screenshots from live website.
+
+     This agent:
+     1. Opens website in headless browser
+     2. Captures full-page screenshots at desktop and mobile viewports
+     3. Stores paths and dimensions in state
+     """
+     print("\n" + "="*60)
+     print("🌐 Agent 2: Website Inspector - Capturing Website Screenshots")
+     print("="*60)
+
+     website_url = state.get("website_url", "")
+     execution_id = state.get("execution_id", "")
+     logs = state.get("logs", [])
+
+     try:
+         # Capture screenshots
+         screenshots, dimensions = capture_website_screenshots(
+             website_url=website_url,
+             output_dir="data/website",
+             execution_id=execution_id,
+             desktop_width=1440,
+             mobile_width=375
+         )
+
+         if not screenshots:
+             raise ValueError("Failed to capture any website screenshots")
+
+         print(f"\n   ✅ Captured {len(screenshots)} website screenshots")
+         for viewport, path in screenshots.items():
+             dims = dimensions.get(viewport, {})
+             print(f"      • {viewport}: {dims.get('width', '?')}x{dims.get('height', '?')}px")
+             logs.append(f"📸 Website {viewport}: {dims.get('width', '?')}x{dims.get('height', '?')}px")
+
+         return {
+             "website_screenshots": screenshots,
+             "website_dimensions": dimensions,
+             "status": "website_captured",
+             "logs": logs
+         }
+
+     except Exception as e:
+         error_msg = f"Failed to capture website screenshots: {str(e)}"
+         print(f"\n   ❌ {error_msg}")
+         logs.append(f"❌ {error_msg}")
+         return {
+             "status": "website_capture_failed",
+             "error_message": error_msg,
+             "logs": logs
+         }
agents/agent_3_visual_comparator.py ADDED
@@ -0,0 +1,123 @@
+ """
+ Agent 3: Visual Comparator
+ Compares Figma and Website screenshots, generates diff overlays.
+ """
+
+ from typing import Dict, Any
+ import sys
+ import os
+
+ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+ from utils.image_differ import ImageDiffer
+
+
+ def agent_3_node(state: Dict[str, Any]) -> Dict[str, Any]:
+     """
+     Compare screenshots and generate visual diff report.
+
+     This agent:
+     1. Normalizes image sizes (handles Figma 2x)
+     2. Calculates similarity scores
+     3. Creates side-by-side comparison with diff overlay
+     4. Detects specific differences (color, layout, structure)
+     """
+     print("\n" + "="*60)
+     print("🔍 Agent 3: Visual Comparator - Analyzing Differences")
+     print("="*60)
+
+     figma_screenshots = state.get("figma_screenshots", {})
+     website_screenshots = state.get("website_screenshots", {})
+     figma_dims = state.get("figma_dimensions", {})
+     website_dims = state.get("website_dimensions", {})
+     execution_id = state.get("execution_id", "")
+     logs = state.get("logs", [])
+
+     if not figma_screenshots or not website_screenshots:
+         error_msg = "Missing screenshots for comparison"
+         print(f"\n   ❌ {error_msg}")
+         return {
+             "status": "comparison_failed",
+             "error_message": error_msg,
+             "logs": logs + [f"❌ {error_msg}"]
+         }
+
+     try:
+         # Initialize differ
+         differ = ImageDiffer(output_dir="data/comparisons")
+
+         # Run comparison
+         results = differ.compare_all_viewports(
+             figma_screenshots=figma_screenshots,
+             website_screenshots=website_screenshots,
+             figma_dims=figma_dims,
+             website_dims=website_dims,
+             execution_id=execution_id
+         )
+
+         # Build comparison images dict
+         comparison_images = {}
+         for viewport, comparison in results["comparisons"].items():
+             comparison_images[viewport] = comparison["comparison_image"]
+
+         # Log results
+         print("\n" + "="*60)
+         print("📊 COMPARISON RESULTS")
+         print("="*60)
+         print(f"   Overall Similarity: {results['overall_score']:.1f}%")
+
+         for viewport, score in results["viewport_scores"].items():
+             status_emoji = "✅" if score >= 90 else "⚠️" if score >= 70 else "❌"
+             print(f"   {status_emoji} {viewport.capitalize()}: {score:.1f}%")
+             logs.append(f"{status_emoji} {viewport} similarity: {score:.1f}%")
+
+         if results["all_differences"]:
+             print(f"\n   🔍 Found {len(results['all_differences'])} differences:")
+             for diff in results["all_differences"]:
+                 severity_emoji = "🔴" if diff["severity"] == "High" else "🟡" if diff["severity"] == "Medium" else "🟢"
+                 print(f"      {severity_emoji} [{diff['category']}] {diff['title']}")
+                 logs.append(f"{severity_emoji} {diff['title']}")
+         else:
+             print("\n   ✅ No significant differences detected!")
+             logs.append("✅ No significant differences detected")
+
+         # Convert numpy types to Python native types for serialization
+         def convert_to_native(obj):
+             """Convert numpy types to Python native types."""
+             import numpy as np
+             if isinstance(obj, np.floating):
+                 return float(obj)
+             elif isinstance(obj, np.integer):
+                 return int(obj)
+             elif isinstance(obj, np.ndarray):
+                 return obj.tolist()
+             elif isinstance(obj, dict):
+                 return {k: convert_to_native(v) for k, v in obj.items()}
+             elif isinstance(obj, list):
+                 return [convert_to_native(i) for i in obj]
+             return obj
+
+         # Convert all results to native Python types
+         similarity_scores_native = {k: float(v) for k, v in results["viewport_scores"].items()}
+         overall_score_native = float(results["overall_score"])
+         differences_native = convert_to_native(results["all_differences"])
+
+         return {
+             "comparison_images": comparison_images,
+             "visual_differences": differences_native,
+             "similarity_scores": similarity_scores_native,
+             "overall_score": overall_score_native,
+             "status": "analysis_complete",
+             "logs": logs
+         }
+
+     except Exception as e:
+         import traceback
+         error_msg = f"Comparison failed: {str(e)}"
+         print(f"\n   ❌ {error_msg}")
+         traceback.print_exc()
+         return {
+             "status": "comparison_failed",
+             "error_message": error_msg,
+             "logs": logs + [f"❌ {error_msg}"]
+         }
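Reviewer note: the `convert_to_native` step above exists because numpy scalars are not JSON-serializable. The same idea can be sketched without importing numpy — any scalar exposing `.item()` (as numpy scalars do) is unwrapped, containers recurse; this generic variant is a sketch, not the repo's implementation:

```python
# Sketch of the numpy-to-native conversion pattern, numpy-free:
# objects with an .item() method (e.g. numpy scalars) are unwrapped,
# dicts and lists are converted recursively, everything else passes through.
def to_native(obj):
    if isinstance(obj, dict):
        return {k: to_native(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_native(i) for i in obj]
    if hasattr(obj, "item") and not isinstance(obj, str):
        return obj.item()
    return obj
```

Applied to a workflow result dict, the output contains only plain Python types and round-trips through `json.dumps` cleanly.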
app.py ADDED
@@ -0,0 +1,402 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ UI Regression Testing Tool v2 - Phase 1
3
+ Main Gradio interface with single action button and blue theme.
4
+ """
5
+
6
+ import gradio as gr
7
+ import os
8
+ import sys
9
+ from datetime import datetime
10
+ from pathlib import Path
11
+
12
+ # Add current directory to path
13
+ sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
14
+
15
+ from workflow import run_workflow_step_1, resume_workflow
16
+
17
+ # Log file
18
+ LOG_FILE = "app.log"
19
+
20
+
21
+ # Custom CSS for blue theme
22
+ CUSTOM_CSS = """
23
+ /* Main background */
24
+ .gradio-container {
25
+ background: linear-gradient(135deg, #1a1a2e 0%, #16213e 50%, #0f3460 100%) !important;
26
+ }
27
+
28
+ /* Headers */
29
+ h1, h2, h3, .markdown-text h1, .markdown-text h2, .markdown-text h3 {
30
+ color: #00d4ff !important;
31
+ }
32
+
33
+ /* Labels */
34
+ label, .label-wrap {
35
+ color: #e0e0e0 !important;
36
+ background: #1e3a5f !important;
37
+ padding: 4px 8px !important;
38
+ border-radius: 4px !important;
39
+ }
40
+
41
+ /* Input fields */
42
+ input, textarea, .textbox {
43
+ background: #0d1b2a !important;
44
+ border: 1px solid #1e3a5f !important;
45
+ color: #ffffff !important;
46
+ }
47
+
48
+ /* Primary button - Blue */
49
+ .primary {
50
+ background: linear-gradient(135deg, #0066ff 0%, #0052cc 100%) !important;
51
+ border: none !important;
52
+ color: white !important;
53
+ }
54
+
55
+ .primary:hover {
56
+ background: linear-gradient(135deg, #0052cc 0%, #003d99 100%) !important;
57
+ }
58
+
59
+ /* Secondary elements */
60
+ .secondary {
61
+ background: #1e3a5f !important;
62
+ border: 1px solid #2d5a87 !important;
63
+ color: #e0e0e0 !important;
64
+ }
65
+
66
+ /* Code blocks / Logs */
67
+ .code-wrap, pre, code {
68
+ background: #0d1b2a !important;
69
+ border: 1px solid #1e3a5f !important;
70
+ color: #00ff88 !important;
71
+ }
72
+
73
+ /* Status boxes */
74
+ .output-class, .input-class {
75
+ background: #0d1b2a !important;
76
+ border: 1px solid #1e3a5f !important;
77
+ }
78
+
79
+ /* Panels */
80
+ .panel {
81
+ background: rgba(30, 58, 95, 0.5) !important;
82
+ border: 1px solid #2d5a87 !important;
83
+ border-radius: 8px !important;
84
+ }
85
+
86
+ /* Success text */
87
+ .success {
88
+ color: #00ff88 !important;
89
+ }
90
+
91
+ /* Error text */
92
+ .error {
93
+ color: #ff4757 !important;
94
+ }
95
+
96
+ /* Warning text */
97
+ .warning {
98
+ color: #ffa502 !important;
99
+ }
100
+ """
101
+
102
+
103
+ def clear_logs():
104
+ """Clear log file for fresh start."""
105
+ with open(LOG_FILE, "w") as f:
106
+ f.write(f"===== New Session: {datetime.now()} =====\n\n")
107
+
108
+
109
+ def get_logs():
110
+ """Get current log contents."""
111
+ try:
112
+ with open(LOG_FILE, "r") as f:
113
+ return f.read()
114
+ except:
115
+ return "Logs will appear here..."
116
+
117
+
118
+ def run_full_analysis(figma_key, figma_id, url, hf_token, progress=gr.Progress()):
119
+ """
120
+ Combined action: Capture screenshots AND run analysis in one step.
121
+ """
122
+ clear_logs()
123
+
124
+ print(f"\n{'='*60}")
125
+ print(f"🚀 STARTING UI REGRESSION TEST")
126
+ print(f"{'='*60}")
127
+ print(f"⏰ Time: {datetime.now()}")
128
+ print(f"🌐 Website: {url}")
129
+ print(f"📁 Figma File: {figma_id}")
130
+ print(f"🔑 Figma Key: {'✅ Provided' if figma_key else '❌ Missing'}")
131
+ print(f"🤗 HF Token: {'✅ Provided' if hf_token else '⚪ Not provided'}")
132
+ print(f"{'='*60}\n")
133
+
134
+ # Validation
135
+ if not figma_key or not figma_id or not url:
136
+ return (
137
+ "❌ Please fill in all required fields:\n• Figma API Key\n• Figma File ID\n• Website URL",
138
+ None, None, None, None, get_logs()
139
+ )
140
+
141
+ execution_id = f"exec_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
142
+ thread_id = f"thread_{execution_id}"
143
+
144
+ try:
145
+ # ===== STEP 1: Capture Screenshots =====
146
+ progress(0.1, desc="📸 Capturing Figma screenshots...")
147
+ print(f"🧵 Thread ID: {thread_id}")
148
+ print(f"\n📋 STEP 1: Capturing Screenshots...")
149
+ print("-" * 40)
150
+
151
+ state = run_workflow_step_1(figma_id, figma_key, url, execution_id, thread_id, hf_token)
152
+
153
+ if not state or not state.values:
154
+ return "❌ Workflow failed to start", None, None, None, None, get_logs()
155
+
156
+ values = state.values
157
+ status = values.get("status", "unknown")
158
+
159
+ if "failed" in status:
160
+ error = values.get("error_message", "Unknown error")
161
+ return f"❌ Capture Failed: {error}", None, None, None, None, get_logs()
162
+
163
+ # Get captured screenshots
164
+ figma_shots = values.get("figma_screenshots", {})
165
+ website_shots = values.get("website_screenshots", {})
166
+
167
+ print(f"\n✅ Screenshots captured successfully!")
168
+ progress(0.5, desc="🔍 Running visual analysis...")
169
+
170
+ # ===== STEP 2: Run Analysis =====
171
+ print(f"\n{'='*60}")
172
+ print(f"🔍 STEP 2: RUNNING VISUAL ANALYSIS")
173
+ print(f"{'='*60}")
174
+
175
+ state = resume_workflow(thread_id, user_approval=True)
176
+
177
+ if not state or not state.values:
178
+ return "❌ Analysis failed - no state returned", None, None, None, None, get_logs()
179
+
180
+ values = state.values
181
+ status = values.get("status", "unknown")
182
+
183
+ if "failed" in status:
184
+ error = values.get("error_message", "Unknown error")
185
+ return f"❌ Analysis Failed: {error}", None, None, None, None, get_logs()
186
+
187
+ progress(0.9, desc="📊 Generating results...")
188
+
189
+ # Get results
190
+ overall_score = values.get("overall_score", 0)
191
+ viewport_scores = values.get("similarity_scores", {})
192
+ differences = values.get("visual_differences", [])
193
+ comparison_images = values.get("comparison_images", {})
194
+ figma_dims = values.get("figma_dimensions", {})
195
+ website_dims = values.get("website_dimensions", {})
196
+
197
+ print(f"\n{'='*60}")
198
+ print(f"✅ ANALYSIS COMPLETE!")
199
+ print(f"{'='*60}")
200
+
201
+ # ===== Build Results =====
202
+
203
+ # Status message
204
+ status_msg = f"✅ Analysis Complete!\n\n"
205
+
206
+ score_emoji = "🟢" if overall_score >= 90 else "🟡" if overall_score >= 70 else "🔴"
207
+ status_msg += f"{score_emoji} Overall Similarity: {overall_score:.1f}%\n\n"
208
+
209
+ status_msg += "📱 Viewport Scores:\n"
210
+ for vp, score in viewport_scores.items():
211
+ vp_emoji = "✅" if score >= 90 else "⚠️" if score >= 70 else "❌"
212
+ status_msg += f" {vp_emoji} {vp.capitalize()}: {score:.1f}%\n"
213
+
214
+ status_msg += f"\n🔍 Differences: {len(differences)} found\n"
215
+
216
+ # Detailed results
217
+ result_msg = f"{'='*50}\n"
218
+ result_msg += f"📊 DETAILED ANALYSIS REPORT\n"
219
+ result_msg += f"{'='*50}\n\n"
220
+
221
+ result_msg += f"📸 Screenshots Captured:\n"
222
+ result_msg += f" Figma: {len(figma_shots)} viewports\n"
223
+ result_msg += f" Website: {len(website_shots)} viewports\n\n"
224
+
225
+ if differences:
226
+ result_msg += f"🔍 DIFFERENCES FOUND: {len(differences)}\n"
227
+ result_msg += "-" * 40 + "\n"
228
+
229
+ for i, diff in enumerate(differences, 1):
230
+ sev = diff.get("severity", "Medium")
231
+ sev_emoji = "🔴 HIGH" if sev == "High" else "🟡 MEDIUM" if sev == "Medium" else "🟢 LOW"
232
+ result_msg += f"\n[{i}] {diff.get('title', 'Unknown')}\n"
233
+ result_msg += f" Severity: {sev_emoji}\n"
234
+ result_msg += f" Category: {diff.get('category', 'N/A')}\n"
235
+ result_msg += f" Viewport: {diff.get('viewport', 'N/A')}\n"
236
+ if diff.get("description"):
237
+ desc = diff["description"][:150]
238
+ result_msg += f" Details: {desc}\n"
239
+ else:
240
+ result_msg += "\n✅ No significant differences detected!\n"
241
+ result_msg += "The website matches the Figma design closely.\n"
242
+
243
+ # Get images
244
+ figma_preview = figma_shots.get("desktop") or figma_shots.get("mobile")
245
+ website_preview = website_shots.get("desktop") or website_shots.get("mobile")
246
+ comparison_img = comparison_images.get("desktop") or comparison_images.get("mobile")
247
+ gallery_images = list(comparison_images.values()) if comparison_images else None
248
+
249
+ progress(1.0, desc="✅ Complete!")
250
+
251
+ return status_msg, result_msg, figma_preview, website_preview, comparison_img, get_logs()
252
+
253
+ except Exception as e:
254
+ import traceback
255
+ error_msg = f"❌ Error: {str(e)}"
256
+ print(error_msg)
257
+ traceback.print_exc()
258
+ return error_msg, None, None, None, None, get_logs()
259
+
260
+
261
+ def create_interface():
262
+ """Create the Gradio interface with blue theme."""
263
+
264
+ with gr.Blocks(
265
+ title="UI Regression Testing v2",
266
+ theme=gr.themes.Base(
267
+ primary_hue="blue",
268
+ secondary_hue="slate",
269
+ neutral_hue="slate",
270
+ ),
271
+ css=CUSTOM_CSS
272
+ ) as demo:
273
+
274
+ gr.Markdown("# 🎨 UI Regression Testing Tool")
275
+ gr.Markdown("*Compare Figma designs with live websites to detect visual differences*")
276
+
277
+ with gr.Row():
278
+ # LEFT: Configuration
279
+ with gr.Column(scale=1):
280
+ gr.Markdown("### 📝 Configuration")
281
+
282
+ figma_key = gr.Textbox(
283
+ label="Figma API Key *",
284
+ type="password",
285
+                    placeholder="figd_xxxxx..."
+                )
+                figma_id = gr.Textbox(
+                    label="Figma File ID *",
+                    placeholder="e.g., ENieX2p3Gy3TAtxaB36cZA"
+                )
+                website_url = gr.Textbox(
+                    label="Website URL *",
+                    placeholder="https://your-site.com"
+                )
+                hf_token = gr.Textbox(
+                    label="HF Token (Optional)",
+                    type="password",
+                    placeholder="For future AI enhancements"
+                )
+
+                gr.Markdown("### 🎮 Action")
+                btn_run = gr.Button(
+                    "🚀 Run Full Analysis",
+                    variant="primary",
+                    size="lg"
+                )
+
+                gr.Markdown("""
+                *This will:*
+                1. Capture Figma design screenshots
+                2. Capture website screenshots
+                3. Compare and highlight differences
+                4. Generate similarity scores
+                """)
+
+            # RIGHT: Status & Results
+            with gr.Column(scale=2):
+                gr.Markdown("### 📊 Results Summary")
+                status_box = gr.Textbox(
+                    label="Status",
+                    lines=8,
+                    interactive=False
+                )
+
+                gr.Markdown("### 📋 Detailed Report")
+                results_box = gr.Textbox(
+                    label="Analysis Details",
+                    lines=12,
+                    interactive=False
+                )
+
+        # Preview Section
+        gr.Markdown("---")
+        gr.Markdown("### 🖼️ Screenshot Preview")
+        with gr.Row():
+            figma_preview = gr.Image(label="📐 Figma Design", height=250)
+            website_preview = gr.Image(label="🌐 Live Website", height=250)
+
+        # Comparison Section
+        gr.Markdown("---")
+        gr.Markdown("### 🔍 Visual Comparison")
+        gr.Markdown("*Red areas indicate differences between design and website*")
+        comparison_image = gr.Image(
+            label="Figma | Website | Differences",
+            height=500
+        )
+
+        # Logs Section
+        gr.Markdown("---")
+        with gr.Accordion("📜 Execution Logs", open=False):
+            log_box = gr.Code(
+                label="Live Logs",
+                language="python",
+                lines=20,
+                interactive=False
+            )
+            log_timer = gr.Timer(value=2)
+            log_timer.tick(get_logs, outputs=[log_box])
+
+        # Button handler
+        btn_run.click(
+            run_full_analysis,
+            inputs=[figma_key, figma_id, website_url, hf_token],
+            outputs=[status_box, results_box, figma_preview, website_preview, comparison_image, log_box]
+        )
+
+    return demo
+
+
+ # Main entry point
+ if __name__ == "__main__":
+     # Setup logging
+     class Logger:
+         def __init__(self):
+             self.terminal = sys.stdout
+             Path(LOG_FILE).parent.mkdir(parents=True, exist_ok=True)
+             with open(LOG_FILE, "w") as f:
+                 f.write(f"===== Application Started: {datetime.now()} =====\n\n")
+             self.log = open(LOG_FILE, "a")
+
+         def write(self, message):
+             self.terminal.write(message)
+             self.log.write(message)
+             self.log.flush()
+
+         def flush(self):
+             self.terminal.flush()
+             self.log.flush()
+
+         def isatty(self):
+             return hasattr(self.terminal, 'isatty') and self.terminal.isatty()
+
+     # Share a single Logger between stdout and stderr: constructing a second
+     # instance would reopen LOG_FILE with mode "w" (truncating the header just
+     # written) and wrap the first Logger as its "terminal", duplicating writes.
+     logger = Logger()
+     sys.stdout = logger
+     sys.stderr = logger
+
+     print("🚀 Starting UI Regression Testing Tool v2")
+     print(f"📅 Date: {datetime.now()}")
+     print("📦 Phase: 1 (Visual Comparison)")
+
+     # Create and launch
+     demo = create_interface()
+     demo.launch(server_name="0.0.0.0", server_port=7860)
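The `Logger` above tees every write to both the terminal and a log file. The same tee idea as a minimal, standalone sketch — the `Tee` name and the in-memory `StringIO` targets are illustrative, not part of the app:

```python
import io

class Tee(io.TextIOBase):
    """Write-through text stream that duplicates writes to several targets."""

    def __init__(self, *streams):
        self.streams = streams

    def write(self, s):
        for st in self.streams:
            st.write(s)
        return len(s)

    def flush(self):
        for st in self.streams:
            st.flush()

# Stand-ins for the real terminal and log file:
terminal, logfile = io.StringIO(), io.StringIO()
print("hello", file=Tee(terminal, logfile))
```

Assigning one such instance to both `sys.stdout` and `sys.stderr` captures print output and tracebacks alike.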
data/.DS_Store ADDED
Binary file (6.15 kB)
requirements.txt ADDED
@@ -0,0 +1,28 @@
+ # Core Dependencies
+ python-dotenv>=1.0.0
+ requests>=2.31.0
+
+ # LangGraph & LangChain
+ langgraph>=0.0.50
+ langchain>=0.1.0
+ langchain-core>=0.1.0
+
+ # Web Automation
+ playwright>=1.40.0
+
+ # Data Processing
+ numpy>=1.24.3
+ pillow>=11.0.0
+
+ # Image Processing & Comparison
+ opencv-python-headless>=4.8.0
+ scikit-image>=0.21.0
+
+ # Async Support
+ aiohttp>=3.9.1
+
+ # Gradio (for dashboard)
+ gradio>=4.0.0
+
+ # Hugging Face (optional - for enhanced AI analysis)
+ huggingface-hub>=0.19.0
state_schema.py ADDED
@@ -0,0 +1,68 @@
+ """
+ State Schema for UI Regression Testing Workflow
+ Defines the structure of data passed between agents.
+ """
+
+ from typing import TypedDict, Dict, List, Any, Optional
+
+
+ class WorkflowState(TypedDict, total=False):
+     """Complete state for the UI regression testing workflow."""
+
+     # Configuration
+     figma_file_key: str
+     figma_access_token: str
+     website_url: str
+     execution_id: str
+     hf_token: Optional[str]
+
+     # Screenshot paths
+     figma_screenshots: Dict[str, str]  # {"desktop": "path/to/file.png", "mobile": "..."}
+     website_screenshots: Dict[str, str]
+
+     # Screenshot metadata (for proper comparison)
+     figma_dimensions: Dict[str, Dict[str, int]]  # {"desktop": {"width": 1440, "height": 1649}}
+     website_dimensions: Dict[str, Dict[str, int]]
+
+     # Comparison results
+     comparison_images: Dict[str, str]  # {"desktop": "path/to/diff.png"}
+     visual_differences: List[Dict[str, Any]]
+     similarity_scores: Dict[str, float]  # {"desktop": 85.5, "mobile": 78.2}
+     overall_score: float
+
+     # Workflow control
+     user_approval: bool
+     status: str
+     error_message: Optional[str]
+
+     # Logs
+     logs: List[str]
+
+
+ def create_initial_state(
+     figma_file_key: str,
+     figma_access_token: str,
+     website_url: str,
+     execution_id: str,
+     hf_token: str = ""
+ ) -> WorkflowState:
+     """Create initial state for a new workflow run."""
+     return WorkflowState(
+         figma_file_key=figma_file_key,
+         figma_access_token=figma_access_token,
+         website_url=website_url,
+         execution_id=execution_id,
+         hf_token=hf_token,
+         figma_screenshots={},
+         website_screenshots={},
+         figma_dimensions={},
+         website_dimensions={},
+         comparison_images={},
+         visual_differences=[],
+         similarity_scores={},
+         overall_score=0.0,
+         user_approval=False,
+         status="initialized",
+         error_message=None,
+         logs=[]
+     )
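Because `WorkflowState` is declared with `total=False`, every key is optional: agents can start from a sparse state and fill fields in as the run progresses. A tiny self-contained sketch of the same pattern (the `MiniState` type and its keys are hypothetical, chosen to echo the real schema):

```python
from typing import Optional, TypedDict


class MiniState(TypedDict, total=False):
    """Sparse workflow state: keys may be absent until an agent sets them."""
    website_url: str
    status: str
    overall_score: float
    error_message: Optional[str]


def init_state(url: str) -> MiniState:
    # Only some keys are populated up front; the rest appear later.
    return MiniState(website_url=url, status="initialized", error_message=None)


state = init_state("https://example.com")
state["overall_score"] = 85.5  # a downstream agent adds its result key in place
```

The trade-off of `total=False` is that readers must use `state.get(...)` or check membership, since any key may still be missing.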
utils/__init__.py ADDED
@@ -0,0 +1,13 @@
+ """
+ Utility modules for UI Regression Testing
+ """
+
+ from .figma_client import FigmaClient
+ from .website_capturer import capture_website_screenshots
+ from .image_differ import ImageDiffer
+
+ __all__ = [
+     'FigmaClient',
+     'capture_website_screenshots',
+     'ImageDiffer'
+ ]
utils/figma_client.py ADDED
@@ -0,0 +1,168 @@
+ """
+ Figma API Client
+ Handles communication with Figma API to extract design screenshots.
+ """
+
+ import requests
+ from typing import Dict, List, Tuple, Optional
+ from pathlib import Path
+
+
+ class FigmaClient:
+     """Client for interacting with Figma API."""
+
+     BASE_URL = "https://api.figma.com/v1"
+
+     def __init__(self, access_token: str):
+         self.access_token = access_token
+         self.headers = {
+             "X-Figma-Token": access_token
+         }
+
+     def get_file(self, file_key: str) -> Dict:
+         """Get file metadata from Figma."""
+         url = f"{self.BASE_URL}/files/{file_key}"
+         response = requests.get(url, headers=self.headers)
+         response.raise_for_status()
+         return response.json()
+
+     def get_frame_nodes(self, file_key: str) -> List[Dict]:
+         """
+         Get all top-level frames from the Figma file.
+         Returns frames that match viewport patterns (Desktop/Mobile).
+         """
+         file_data = self.get_file(file_key)
+         frames = []
+
+         # Navigate through document -> pages -> frames
+         document = file_data.get("document", {})
+         for page in document.get("children", []):
+             if page.get("type") == "CANVAS":
+                 for child in page.get("children", []):
+                     if child.get("type") == "FRAME":
+                         frame_name = child.get("name", "")
+                         frame_id = child.get("id", "")
+                         bounds = child.get("absoluteBoundingBox", {})
+
+                         # Determine viewport type from name
+                         viewport = None
+                         if "desktop" in frame_name.lower() or bounds.get("width", 0) >= 1000:
+                             viewport = "desktop"
+                         elif "mobile" in frame_name.lower() or bounds.get("width", 0) <= 500:
+                             viewport = "mobile"
+
+                         if viewport:
+                             frames.append({
+                                 "id": frame_id,
+                                 "name": frame_name,
+                                 "viewport": viewport,
+                                 "width": bounds.get("width", 0),
+                                 "height": bounds.get("height", 0)
+                             })
+
+         return frames
+
+     def export_frame(
+         self,
+         file_key: str,
+         frame_id: str,
+         output_path: str,
+         scale: float = 1.0,
+         format: str = "png"
+     ) -> Tuple[str, Dict[str, int]]:
+         """
+         Export a frame as an image.
+
+         Args:
+             file_key: Figma file key
+             frame_id: Node ID of the frame to export
+             output_path: Where to save the image
+             scale: Export scale (1.0 = actual size, 0.5 = half size)
+             format: Image format (png, jpg, svg, pdf)
+
+         Returns:
+             Tuple of (saved_path, dimensions_dict)
+         """
+         # Get export URL
+         url = f"{self.BASE_URL}/images/{file_key}"
+         params = {
+             "ids": frame_id,
+             "scale": scale,
+             "format": format
+         }
+
+         response = requests.get(url, headers=self.headers, params=params)
+         response.raise_for_status()
+
+         data = response.json()
+         image_url = data.get("images", {}).get(frame_id)
+
+         if not image_url:
+             raise ValueError(f"Could not get export URL for frame {frame_id}")
+
+         # Download the image
+         img_response = requests.get(image_url)
+         img_response.raise_for_status()
+
+         # Save to file
+         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
+         with open(output_path, "wb") as f:
+             f.write(img_response.content)
+
+         # Get image dimensions
+         from PIL import Image
+         with Image.open(output_path) as img:
+             width, height = img.size
+
+         return output_path, {"width": width, "height": height}
+
+     def export_frames_for_comparison(
+         self,
+         file_key: str,
+         output_dir: str,
+         execution_id: str
+     ) -> Tuple[Dict[str, str], Dict[str, Dict[str, int]]]:
+         """
+         Export all relevant frames for UI comparison.
+
+         Automatically finds Desktop and Mobile frames and exports them
+         at 1x scale (not 2x) for proper comparison with website screenshots.
+
+         Returns:
+             Tuple of (screenshot_paths, dimensions)
+         """
+         frames = self.get_frame_nodes(file_key)
+         screenshots = {}
+         dimensions = {}
+
+         # Group by viewport, prefer larger frames
+         viewport_frames = {}
+         for frame in frames:
+             viewport = frame["viewport"]
+             if viewport not in viewport_frames:
+                 viewport_frames[viewport] = frame
+             elif frame["width"] * frame["height"] > viewport_frames[viewport]["width"] * viewport_frames[viewport]["height"]:
+                 viewport_frames[viewport] = frame
+
+         # Export each viewport
+         for viewport, frame in viewport_frames.items():
+             print(f"  📥 Exporting frame: {frame['name']} ({frame['width']}px width)")
+             print(f"     Frame ID: {frame['id']}")
+             print(f"     Dimensions: {frame['width']}x{frame['height']}")
+
+             output_path = f"{output_dir}/{viewport}_{execution_id}.png"
+
+             # Export at scale 1.0 (actual design size)
+             # Note: Figma often has designs at 2x, we'll handle normalization in comparison
+             saved_path, dims = self.export_frame(
+                 file_key,
+                 frame["id"],
+                 output_path,
+                 scale=1.0  # Export at 1x to get actual design dimensions
+             )
+
+             screenshots[viewport] = saved_path
+             dimensions[viewport] = dims
+             print(f"  ✓ Exported: {saved_path}")
+
+         return screenshots, dimensions
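`get_frame_nodes` classifies each frame by name first, then by width (>= 1000 px counts as desktop, <= 500 px as mobile; anything in between is skipped). That heuristic, isolated as a standalone sketch for clarity — the function name is ours, not part of the client:

```python
from typing import Optional


def classify_viewport(frame_name: str, width: float) -> Optional[str]:
    """Mirror of the frame-classification heuristic used by get_frame_nodes."""
    if "desktop" in frame_name.lower() or width >= 1000:
        return "desktop"
    if "mobile" in frame_name.lower() or width <= 500:
        return "mobile"
    return None  # e.g. tablet-width frames are ignored


classify_viewport("Desktop / Home", 1440)  # "desktop"
classify_viewport("Checkout", 375)         # "mobile"
classify_viewport("Tablet", 768)           # None
```

Note the ordering: because the desktop branch runs first, a narrow frame *named* "Desktop" still classifies as desktop, which matches the source's behavior.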
utils/image_differ.py ADDED
@@ -0,0 +1,347 @@
+ """
+ Image Differ Utility
+ Compares Figma and Website screenshots, generates visual diff overlays.
+
+ Key Features:
+ - Normalizes image sizes (handles Figma 2x export)
+ - Creates side-by-side comparison
+ - Highlights differences in red
+ - Calculates similarity score
+ """
+
+ import numpy as np
+ from PIL import Image, ImageDraw, ImageFont
+ from pathlib import Path
+ from typing import Dict, List, Tuple, Any
+ import cv2
+
+
+ class ImageDiffer:
+     """
+     Compares two images and generates visual diff output.
+     """
+
+     def __init__(self, output_dir: str = "data/comparisons"):
+         self.output_dir = output_dir
+         Path(output_dir).mkdir(parents=True, exist_ok=True)
+
+     def normalize_images(
+         self,
+         figma_path: str,
+         website_path: str,
+         figma_dims: Dict[str, int],
+         website_dims: Dict[str, int]
+     ) -> Tuple[np.ndarray, np.ndarray, float]:
+         """
+         Normalize images to the same size for comparison.
+
+         Handles Figma 2x export by detecting and adjusting.
+
+         Returns:
+             Tuple of (figma_array, website_array, scale_factor)
+         """
+         figma_img = Image.open(figma_path).convert("RGB")
+         website_img = Image.open(website_path).convert("RGB")
+
+         figma_w, figma_h = figma_img.size
+         website_w, website_h = website_img.size
+
+         # Detect if Figma is at 2x (common for retina exports)
+         # If Figma is roughly 2x the website width, scale it down
+         width_ratio = figma_w / website_w
+         scale_factor = 1.0
+
+         if 1.8 <= width_ratio <= 2.2:
+             # Figma is at 2x, scale down to match website
+             scale_factor = 0.5
+             new_figma_w = int(figma_w * scale_factor)
+             new_figma_h = int(figma_h * scale_factor)
+             figma_img = figma_img.resize((new_figma_w, new_figma_h), Image.Resampling.LANCZOS)
+             print(f"  📐 Detected Figma 2x export, scaled to {new_figma_w}x{new_figma_h}")
+
+         # Now resize both to match (use the smaller dimensions)
+         target_w = min(figma_img.size[0], website_img.size[0])
+         target_h = min(figma_img.size[1], website_img.size[1])
+
+         figma_img = figma_img.resize((target_w, target_h), Image.Resampling.LANCZOS)
+         website_img = website_img.resize((target_w, target_h), Image.Resampling.LANCZOS)
+
+         return np.array(figma_img), np.array(website_img), scale_factor
+
+     def calculate_similarity(
+         self,
+         img1: np.ndarray,
+         img2: np.ndarray
+     ) -> Tuple[float, np.ndarray]:
+         """
+         Calculate similarity score between two images.
+
+         Uses structural similarity (SSIM) for perceptual comparison.
+
+         Returns:
+             Tuple of (similarity_score_0_to_100, diff_mask)
+         """
+         from skimage.metrics import structural_similarity as ssim
+
+         # Convert to grayscale for SSIM
+         gray1 = cv2.cvtColor(img1, cv2.COLOR_RGB2GRAY)
+         gray2 = cv2.cvtColor(img2, cv2.COLOR_RGB2GRAY)
+
+         # Calculate SSIM
+         score, diff = ssim(gray1, gray2, full=True)
+
+         # Convert to 0-100 scale
+         similarity = score * 100
+
+         # Create diff mask (areas with low similarity)
+         diff_mask = ((1 - diff) * 255).astype(np.uint8)
+
+         return similarity, diff_mask
+
+     def create_diff_overlay(
+         self,
+         figma_img: np.ndarray,
+         website_img: np.ndarray,
+         diff_mask: np.ndarray,
+         threshold: int = 30
+     ) -> np.ndarray:
+         """
+         Create an overlay image highlighting differences.
+
+         Args:
+             figma_img: Figma screenshot as numpy array
+             website_img: Website screenshot as numpy array
+             diff_mask: Difference mask from SSIM
+             threshold: Minimum difference to highlight (0-255)
+
+         Returns:
+             Overlay image with differences highlighted in red
+         """
+         # Create output image (copy of website)
+         overlay = website_img.copy()
+
+         # Find areas with significant differences
+         significant_diff = diff_mask > threshold
+
+         # Highlight differences in semi-transparent red
+         red_overlay = overlay.copy()
+         red_overlay[significant_diff] = [255, 0, 0]  # Red
+
+         # Blend with original (50% opacity for red areas)
+         alpha = 0.5
+         overlay[significant_diff] = (
+             alpha * red_overlay[significant_diff] +
+             (1 - alpha) * overlay[significant_diff]
+         ).astype(np.uint8)
+
+         return overlay
+
+     def create_comparison_image(
+         self,
+         figma_path: str,
+         website_path: str,
+         output_path: str,
+         figma_dims: Dict[str, int],
+         website_dims: Dict[str, int],
+         viewport: str
+     ) -> Dict[str, Any]:
+         """
+         Create a comprehensive comparison image.
+
+         Generates a side-by-side view:
+         [Figma Design] | [Website] | [Diff Overlay]
+
+         Returns:
+             Dict with comparison results
+         """
+         print(f"\n  🔍 Comparing {viewport} screenshots...")
+
+         # Normalize images
+         figma_arr, website_arr, scale = self.normalize_images(
+             figma_path, website_path, figma_dims, website_dims
+         )
+
+         # Calculate similarity
+         similarity, diff_mask = self.calculate_similarity(figma_arr, website_arr)
+         print(f"  📊 Similarity Score: {similarity:.1f}%")
+
+         # Create diff overlay
+         overlay = self.create_diff_overlay(figma_arr, website_arr, diff_mask)
+
+         # Count different pixels
+         significant_diff = diff_mask > 30
+         diff_percentage = (np.sum(significant_diff) / significant_diff.size) * 100
+         print(f"  📍 Pixels with differences: {diff_percentage:.1f}%")
+
+         # Create side-by-side comparison
+         h, w = figma_arr.shape[:2]
+         padding = 20
+         label_height = 40
+
+         # Create canvas
+         canvas_w = (w * 3) + (padding * 4)
+         canvas_h = h + label_height + (padding * 2)
+         canvas = np.ones((canvas_h, canvas_w, 3), dtype=np.uint8) * 240  # Light gray bg
+
+         # Place images
+         y_offset = label_height + padding
+
+         # Figma (left)
+         x1 = padding
+         canvas[y_offset:y_offset+h, x1:x1+w] = figma_arr
+
+         # Website (center)
+         x2 = padding * 2 + w
+         canvas[y_offset:y_offset+h, x2:x2+w] = website_arr
+
+         # Diff overlay (right)
+         x3 = padding * 3 + w * 2
+         canvas[y_offset:y_offset+h, x3:x3+w] = overlay
+
+         # Convert to PIL for text
+         canvas_pil = Image.fromarray(canvas)
+         draw = ImageDraw.Draw(canvas_pil)
+
+         # Try to use a font, fall back to default
+         try:
+             font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 20)
+             small_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 14)
+         except OSError:  # font not installed; avoid a bare except
+             font = ImageFont.load_default()
+             small_font = font
+
+         # Add labels
+         draw.text((x1 + w//2 - 60, 10), "FIGMA DESIGN", fill=(0, 0, 0), font=font)
+         draw.text((x2 + w//2 - 40, 10), "WEBSITE", fill=(0, 0, 0), font=font)
+         draw.text((x3 + w//2 - 80, 10), "DIFFERENCES", fill=(255, 0, 0), font=font)
+
+         # Add similarity score
+         score_text = f"Similarity: {similarity:.1f}%"
+         draw.text((canvas_w - 150, canvas_h - 30), score_text, fill=(0, 100, 0), font=small_font)
+
+         # Save
+         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
+         canvas_pil.save(output_path)
+         print(f"  ✓ Saved comparison: {output_path}")
+
+         # Detect specific differences
+         differences = self._detect_differences(figma_arr, website_arr, diff_mask, viewport)
+
+         return {
+             "viewport": viewport,
+             "similarity_score": similarity,
+             "diff_percentage": diff_percentage,
+             "comparison_image": output_path,
+             "differences": differences,
+             "scale_applied": scale
+         }
+
+     def _detect_differences(
+         self,
+         figma_arr: np.ndarray,
+         website_arr: np.ndarray,
+         diff_mask: np.ndarray,
+         viewport: str
+     ) -> List[Dict[str, Any]]:
+         """
+         Detect and categorize specific differences.
+
+         Returns:
+             List of detected differences with details
+         """
+         differences = []
+
+         # 1. Check overall color difference
+         figma_mean = np.mean(figma_arr, axis=(0, 1))
+         website_mean = np.mean(website_arr, axis=(0, 1))
+         color_diff = np.linalg.norm(figma_mean - website_mean)
+
+         if color_diff > 10:
+             differences.append({
+                 "category": "colors",
+                 "severity": "Medium" if color_diff < 30 else "High",
+                 "title": "Color scheme differs",
+                 "description": f"Average color difference detected (delta: {color_diff:.1f})",
+                 "viewport": viewport
+             })
+
+         # 2. Check for significant regions of difference
+         # Find contours in diff mask
+         _, binary = cv2.threshold(diff_mask, 50, 255, cv2.THRESH_BINARY)
+         contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+         # Filter significant contours (larger than 1% of image)
+         min_area = (figma_arr.shape[0] * figma_arr.shape[1]) * 0.01
+         significant_regions = [c for c in contours if cv2.contourArea(c) > min_area]
+
+         if len(significant_regions) > 0:
+             differences.append({
+                 "category": "layout",
+                 "severity": "High" if len(significant_regions) > 5 else "Medium",
+                 "title": f"Layout differences in {len(significant_regions)} regions",
+                 "description": f"Found {len(significant_regions)} areas with significant visual differences",
+                 "viewport": viewport,
+                 "regions_count": len(significant_regions)
+             })
+
+         # 3. Check edges/borders
+         figma_edges = cv2.Canny(cv2.cvtColor(figma_arr, cv2.COLOR_RGB2GRAY), 50, 150)
+         website_edges = cv2.Canny(cv2.cvtColor(website_arr, cv2.COLOR_RGB2GRAY), 50, 150)
+         edge_diff = np.abs(figma_edges.astype(float) - website_edges.astype(float))
+         edge_diff_percentage = np.mean(edge_diff) / 255 * 100
+
+         if edge_diff_percentage > 5:
+             differences.append({
+                 "category": "structure",
+                 "severity": "Medium",
+                 "title": "Element borders/edges differ",
+                 "description": f"Edge structure differs by {edge_diff_percentage:.1f}%",
+                 "viewport": viewport
+             })
+
+         return differences
+
+     def compare_all_viewports(
+         self,
+         figma_screenshots: Dict[str, str],
+         website_screenshots: Dict[str, str],
+         figma_dims: Dict[str, Dict[str, int]],
+         website_dims: Dict[str, Dict[str, int]],
+         execution_id: str
+     ) -> Dict[str, Any]:
+         """
+         Compare all viewports and generate comprehensive results.
+
+         Returns:
+             Complete comparison results
+         """
+         results = {
+             "comparisons": {},
+             "all_differences": [],
+             "viewport_scores": {},
+             "overall_score": 0.0
+         }
+
+         viewports = set(figma_screenshots.keys()) & set(website_screenshots.keys())
+
+         for viewport in viewports:
+             output_path = f"{self.output_dir}/comparison_{viewport}_{execution_id}.png"
+
+             comparison = self.create_comparison_image(
+                 figma_screenshots[viewport],
+                 website_screenshots[viewport],
+                 output_path,
+                 figma_dims.get(viewport, {}),
+                 website_dims.get(viewport, {}),
+                 viewport
+             )
+
+             results["comparisons"][viewport] = comparison
+             results["all_differences"].extend(comparison["differences"])
+             results["viewport_scores"][viewport] = comparison["similarity_score"]
+
+         # Calculate overall score (average of viewports)
+         if results["viewport_scores"]:
+             results["overall_score"] = sum(results["viewport_scores"].values()) / len(results["viewport_scores"])
+
+         return results
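`normalize_images` treats any Figma-to-website width ratio between 1.8 and 2.2 as a retina (2x) export and halves the Figma image before comparing. That detection, pulled out as a minimal sketch (the function name is ours):

```python
def retina_scale_factor(figma_width: int, website_width: int) -> float:
    """Return 0.5 when a Figma frame looks like a 2x (retina) export, else 1.0.

    Mirrors the 1.8 <= ratio <= 2.2 heuristic used in normalize_images.
    """
    ratio = figma_width / website_width
    return 0.5 if 1.8 <= ratio <= 2.2 else 1.0


retina_scale_factor(2880, 1440)  # 0.5 (2x export detected)
retina_scale_factor(1440, 1440)  # 1.0 (already at design size)
```

The tolerance band exists because exported frames rarely match the viewport width exactly; anything outside the band falls through to the generic resize-to-smaller-dimensions step.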
utils/website_capturer.py ADDED
@@ -0,0 +1,117 @@
+ """
+ Website Screenshot Capturer
+ Uses Playwright to capture full-page screenshots of websites.
+ """
+
+ import asyncio
+ from pathlib import Path
+ from typing import Dict, Tuple
+
+
+ async def _capture_screenshots_async(
+     website_url: str,
+     output_dir: str,
+     execution_id: str,
+     desktop_width: int = 1440,
+     mobile_width: int = 375
+ ) -> Tuple[Dict[str, str], Dict[str, Dict[str, int]]]:
+     """
+     Async function to capture website screenshots.
+
+     Captures full-page screenshots at desktop and mobile viewports.
+     """
+     from playwright.async_api import async_playwright
+
+     Path(output_dir).mkdir(parents=True, exist_ok=True)
+     screenshots = {}
+     dimensions = {}
+
+     async with async_playwright() as p:
+         browser = await p.chromium.launch(headless=True)
+
+         try:
+             # Desktop capture
+             print(f"  📱 Capturing desktop ({desktop_width}px width, full height)...")
+             page = await browser.new_page(viewport={"width": desktop_width, "height": 1080})
+             await page.goto(website_url, wait_until="networkidle", timeout=60000)
+
+             # Wait for any lazy-loaded content
+             await page.wait_for_timeout(2000)
+
+             # Get full page height
+             desktop_height = await page.evaluate("() => document.documentElement.scrollHeight")
+             print(f"  ℹ️ Desktop full height: {desktop_height}px")
+
+             # Set viewport to full height and capture
+             await page.set_viewport_size({"width": desktop_width, "height": desktop_height})
+             desktop_path = f"{output_dir}/desktop_{execution_id}.png"
+             await page.screenshot(path=desktop_path, full_page=True)
+
+             screenshots["desktop"] = desktop_path
+             dimensions["desktop"] = {"width": desktop_width, "height": desktop_height}
+             print(f"  ✓ Saved: {desktop_path}")
+
+             await page.close()
+
+             # Mobile capture
+             print(f"  📱 Capturing mobile ({mobile_width}px width, full height)...")
+             page = await browser.new_page(viewport={"width": mobile_width, "height": 812})
+
+             # Set mobile user agent
+             await page.set_extra_http_headers({
+                 "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) AppleWebKit/605.1.15"
+             })
+
+             await page.goto(website_url, wait_until="networkidle", timeout=60000)
+             await page.wait_for_timeout(2000)
+
+             # Get full page height
+             mobile_height = await page.evaluate("() => document.documentElement.scrollHeight")
+             print(f"  ℹ️ Mobile full height: {mobile_height}px")
+
+             # Set viewport to full height and capture
+             await page.set_viewport_size({"width": mobile_width, "height": mobile_height})
+             mobile_path = f"{output_dir}/mobile_{execution_id}.png"
+             await page.screenshot(path=mobile_path, full_page=True)
+
+             screenshots["mobile"] = mobile_path
+             dimensions["mobile"] = {"width": mobile_width, "height": mobile_height}
+             print(f"  ✓ Saved: {mobile_path}")
+
+             await page.close()
+
+         finally:
+             await browser.close()
+
+     return screenshots, dimensions
+
+
+ def capture_website_screenshots(
+     website_url: str,
+     output_dir: str,
+     execution_id: str,
+     desktop_width: int = 1440,
+     mobile_width: int = 375
+ ) -> Tuple[Dict[str, str], Dict[str, Dict[str, int]]]:
+     """
+     Synchronous wrapper for capturing website screenshots.
+
+     Args:
+         website_url: URL of the website to capture
+         output_dir: Directory to save screenshots
+         execution_id: Unique ID for this execution
+         desktop_width: Desktop viewport width (default 1440)
+         mobile_width: Mobile viewport width (default 375)
+
+     Returns:
+         Tuple of (screenshot_paths_dict, dimensions_dict)
+     """
+     return asyncio.run(
+         _capture_screenshots_async(
+             website_url,
+             output_dir,
+             execution_id,
+             desktop_width,
+             mobile_width
+         )
+     )
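`capture_website_screenshots` hides the async Playwright work behind `asyncio.run`, so callers stay synchronous. A minimal, dependency-free sketch of that sync-facade pattern (the `measure_page` function and its returned dimensions are made up for illustration):

```python
import asyncio
from typing import Dict


async def _measure_page_async(url: str) -> Dict[str, int]:
    # Stand-in for the real Playwright capture; a real implementation
    # would await page.goto() / page.screenshot() here.
    await asyncio.sleep(0)  # yield control, as real network I/O would
    return {"width": 1440, "height": 1649}


def measure_page(url: str) -> Dict[str, int]:
    # Synchronous facade, mirroring capture_website_screenshots.
    return asyncio.run(_measure_page_async(url))


dims = measure_page("https://example.com")
```

One caveat with this pattern: `asyncio.run` raises `RuntimeError` if an event loop is already running in the calling thread, so the wrapper cannot be invoked from inside an async handler without extra care.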
workflow.py ADDED
@@ -0,0 +1,107 @@
+ """
+ LangGraph Workflow Orchestration
+ Manages the flow between agents with persistence and breakpoints.
+ """
+
+ from typing import Dict, Any
+ from langgraph.graph import StateGraph, END
+ from langgraph.checkpoint.memory import MemorySaver
+ from state_schema import WorkflowState, create_initial_state
+ from agents import (
+     agent_0_node,
+     agent_1_node,
+     agent_2_node,
+     agent_3_node
+ )
+
+ # Global checkpointer to persist state between function calls
+ _global_checkpointer = MemorySaver()
+
+
+ def create_workflow():
+     """Create the LangGraph workflow with persistence."""
+     global _global_checkpointer
+
+     workflow = StateGraph(WorkflowState)
+
+     # Add nodes (agents)
+     workflow.add_node("initialize", agent_0_node)        # Validate & plan
+     workflow.add_node("capture_figma", agent_1_node)     # Figma screenshots
+     workflow.add_node("capture_website", agent_2_node)   # Website screenshots
+     workflow.add_node("compare", agent_3_node)           # Visual comparison
+
+     # Define flow
+     workflow.set_entry_point("initialize")
+     workflow.add_edge("initialize", "capture_figma")
+     workflow.add_edge("capture_figma", "capture_website")
+     workflow.add_edge("capture_website", "compare")  # Breakpoint before compare
+     workflow.add_edge("compare", END)
+
+     # Compile with persistence and breakpoint
+     return workflow.compile(
+         checkpointer=_global_checkpointer,
+         interrupt_before=["compare"]  # Pause before analysis for human review
+     )
+
+
+ def run_workflow_step_1(
+     figma_id: str,
+     figma_key: str,
+     url: str,
+     execution_id: str,
+     thread_id: str,
+     hf_token: str = ""
+ ):
+     """
+     Run the first part of the workflow (capture screenshots).
+     Stops at the breakpoint before analysis.
+     """
+     print(f"  ⚙️ Initializing workflow for thread: {thread_id}")
+
+     app = create_workflow()
+     config = {"configurable": {"thread_id": thread_id}}
+
+     initial_state = create_initial_state(
+         figma_file_key=figma_id,
+         figma_access_token=figma_key,
+         website_url=url,
+         execution_id=execution_id,
+         hf_token=hf_token
+     )
+
+     print("  🏃 Running workflow (Step 1: Capture)...")
+
+     try:
+         for event in app.stream(initial_state, config, stream_mode="values"):
+             if event:
+                 status = event.get("status", "")
+                 if status:
+                     print(f"  📍 Status: {status}")
+     except Exception as e:
+         print(f"  ❌ Workflow error: {str(e)}")
+         raise
+
+     return app.get_state(config)
+
+
+ def resume_workflow(thread_id: str, user_approval: bool = True):
+     """
+     Resume the workflow after human approval.
+     Continues from the breakpoint to run analysis.
+     """
+     print(f"  🔄 Resuming workflow for thread: {thread_id}")
+
+     app = create_workflow()
+     config = {"configurable": {"thread_id": thread_id}}
+
+     # Update state with approval
+     app.update_state(config, {"user_approval": user_approval})
+
+     print("  🏃 Running workflow (Step 2: Analysis)...")
+
+     # Resume execution (stream with None input continues from the checkpoint)
+     for event in app.stream(None, config, stream_mode="values"):
+         pass
+
+     return app.get_state(config)
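The `interrupt_before=["compare"]` breakpoint lets a human approve the captured screenshots before analysis runs, with the checkpointer preserving state between the two calls. The same two-phase idea as a dependency-free sketch — this illustrates the pattern only, it is not LangGraph's API, and the node names and state keys are borrowed for flavor:

```python
def run_until(nodes, state, stop_before=None):
    """Run (name, fn) nodes in order; pause just before `stop_before`."""
    for i, (name, fn) in enumerate(nodes):
        if name == stop_before:
            return state, nodes[i:]  # paused: hand back the remaining nodes
        state = fn(state)
    return state, []


nodes = [
    ("capture_website", lambda s: {**s, "website_screenshots": {"desktop": "d.png"}}),
    ("compare", lambda s: {**s, "overall_score": 85.5}),
]

# Step 1: run up to the breakpoint (capture happens, comparison does not)
state, pending = run_until(nodes, {"status": "initialized"}, stop_before="compare")

# Step 2: after human approval, resume the remaining nodes
state["user_approval"] = True
for _name, fn in pending:
    state = fn(state)
```

In the real workflow the "remaining nodes" live inside the checkpointed graph rather than being passed around, which is why `resume_workflow` can continue with only a `thread_id`.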