  data_files:
  - split: train
    path: data/train-*
license: mit
language:
- en
tags:
- figma-to-code
- autonomous-agents
pretty_name: CREW:Figma-to-Code
size_categories:
- n<1K
---

# CREW: Figma to Code

A benchmark for evaluating AI coding agents on **Figma-to-code generation** — converting real-world Figma community designs into production-ready React + Tailwind CSS applications.

Each task gives an agent full access to a Figma file via MCP tools. The agent must extract the design system, generate components, build successfully, and deploy a live preview. Outputs are evaluated through human preference (ELO) and automated verifiers.

## Benchmark at a Glance

| | |
|---|---|
| **Tasks** | 37 real Figma community designs |
| **Agents tested** | Claude Code (Opus 4.6), Codex (GPT-5.2), Gemini CLI (3.1 Pro) |
| **Total runs** | 96 autonomous agent executions |
| **Human evaluations** | 135 pairwise preference votes |
| **Primary metric** | Human Preference ELO (Bradley-Terry) |
| **Task duration** | 6–40 expert-hours equivalent per task |

## Leaderboard (Human Preference ELO)

| Rank | Agent | Model | ELO | 95% CI | Win % |
|------|-------|-------|-----|--------|-------|
| 1 | Codex | GPT-5.2 | 1054 | [1005, 1114] | 56.8% |
| 2 | Claude Code | Opus 4.6 | 1039 | [987, 1093] | 55.1% |
| 3 | Gemini CLI | 3.1 Pro | 907 | [842, 965] | 31.1% |

The top two agents are statistically indistinguishable (p=0.67, Cohen's h=0.08); both significantly outperform Gemini CLI (p<0.05).

Live leaderboard: [evals.metaphi.ai](https://evals.metaphi.ai)
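
The ELO column comes from fitting a Bradley-Terry model to the pairwise votes. A minimal sketch of the standard minorization-maximization fit, run here on made-up vote counts (not the benchmark's actual tallies):

```python
import math

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from a pairwise win matrix.

    wins[i][j] = number of times agent i beat agent j.
    Returns ELO-style ratings centred on 1000 (400-point log10 scale).
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins for agent i
            # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalise each sweep
    logs = [math.log10(x) for x in p]
    mean = sum(logs) / n
    return [1000 + 400 * (l - mean) for l in logs]

# Hypothetical vote counts: rows/cols = [Codex, Claude Code, Gemini CLI]
wins = [[0, 12, 16],
        [11, 0, 15],
        [7, 6, 0]]
ratings = bradley_terry(wins)
```

The 1000-centred, 400-point log10 scale mirrors conventional ELO; only differences between ratings are meaningful, not the absolute values.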

## Dataset Schema

Each row represents one Figma design task:

| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique task identifier |
| `figma_data.figma_file_key` | string | Figma file ID for API access |
| `figma_data.figma_file_url` | string | Full Figma URL |
| `figma_data.figma_file_name` | string | Design name/title |
| `figma_data.is_site` | bool | Whether the design is a Figma Site |
| `description` | string | Design context and task description |

## Task Scenarios

Tasks span 7 complexity levels, from single-component extraction to full multi-page applications:

1. **E-commerce Product Page** (12 hrs) — PDP with image gallery, variant selectors, inventory states, cart integration
2. **Mobile Onboarding Flow** (16 hrs) — Multi-step flow with transitions, conditional branching, state management
3. **Component Set with States** (6 hrs) — Variant matrix extraction, typed props, conditional rendering
4. **Design Tokens to Theme** (8 hrs) — Variables, typography, effects → Tailwind config + CSS custom properties
5. **Multi-Page Webapp** (40 hrs) — 5+ pages with routing, shared components, consistent theming
6. **Animation-Heavy Interface** (20 hrs) — Smart Animate → Framer Motion with precise timing choreography
7. **Messy Enterprise File** (24 hrs) — Real-world chaos: inconsistent naming, duplicates, orphaned components

## Data Curation

Tasks are sourced through licensing partnerships with enterprises, from domain experts, and via community dataset curation.

## Evaluation Framework

| Verifier | Type | Method |
|----------|------|--------|
| **Human Preference** | Subjective | Pairwise comparison → Bradley-Terry ELO |
| **Visual Judge** | Automated | VLM screenshot comparison (design vs. output) |
| **Skill Verifier** | Automated | Task-specific rubrics (build, tokens, components, fidelity) |
| **Behavior Verifier** | Automated | Agent trajectory analysis (error recovery, tool usage) |

## Agent Error Recovery

Across 96 runs, agents encountered 590 errors with a 70.3% autonomous recovery rate:

| Error Type | Count | Recovery |
|------------|-------|----------|
| Tool call failure | 419 | 66.3% |
| Git error | 64 | 75.0% |
| Syntax error | 33 | 90.9% |
| Build error | 11 | 100.0% |
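
The headline rate is a count-weighted average of the per-category recovery rates. A quick check using only the table's rows (which cover the most common of the 590 errors, so it lands close to, not exactly at, the reported 70.3%):

```python
# Counts and recovery rates copied from the table above.
errors = {
    "tool_call_failure": (419, 0.663),
    "git_error":         (64, 0.750),
    "syntax_error":      (33, 0.909),
    "build_error":       (11, 1.000),
}

recovered = sum(count * rate for count, rate in errors.values())
total = sum(count for count, _ in errors.values())
print(f"{recovered / total:.1%} over {total} tabulated errors")
```

The remaining errors fall outside these four categories, which accounts for the small gap to the overall 70.3%.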

## Usage

```python
from datasets import load_dataset

ds = load_dataset("metaphilabs/figma", split="train")

# Each row contains a Figma file key for API access
for task in ds:
    print(task["id"], task["figma_data"]["figma_file_name"])
```

## Citation

```bibtex
@misc{metaphi2026crew,
  title={CREW: Enterprise Agent Benchmarks},
  author={Metaphi Labs},
  year={2026},
  url={https://evals.metaphi.ai}
}
```

## Links

- Leaderboard: https://evals.metaphi.ai
- Website: https://metaphi.ai
- Collection: https://huggingface.co/collections/metaphilabs/crew