---
task_type: figma_to_code
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: tags
    list: string
  - name: description
    dtype: string
  - name: figma_data
    dtype: string
  - name: access_level
    dtype: string
  - name: created_at
    dtype: string
  splits:
  - name: train
    num_bytes: 64990
    num_examples: 37
  download_size: 35843
  dataset_size: 64990
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
language:
- en
tags:
- figma-to-code
- autonomous-agents
pretty_name: CREW:Figma-to-Code
size_categories:
- n<1K
---
# CREW: Figma to Code
A benchmark for evaluating AI coding agents on **Figma-to-code generation** — converting real-world Figma community designs
into production-ready React + Tailwind CSS applications.
Each task gives an agent full access to a Figma file via MCP tools. The agent must extract the design system, generate
components, build successfully, and deploy a live preview. Outputs are evaluated through human preference (ELO) and
automated verifiers.
## Benchmark at a Glance
| | |
|---|---|
| **Tasks** | 37 real Figma community designs |
| **Agents tested** | Claude Code (Opus 4.6), Codex (GPT-5.2), Gemini CLI (3.1 Pro) |
| **Total runs** | 96 autonomous agent executions |
| **Human evaluations** | 135 pairwise preference votes |
| **Primary metric** | Human Preference ELO (Bradley-Terry) |
| **Task duration** | 6–40 expert-hours equivalent per task |
## Leaderboard (Human Preference ELO)
| Rank | Agent | Model | ELO | 95% CI | Win % |
|------|-------|-------|-----|--------|-------|
| 1 | Codex | GPT-5.2 | 1054 | [1005, 1114] | 56.8% |
| 2 | Claude Code | Opus 4.6 | 1039 | [987, 1093] | 55.1% |
| 3 | Gemini CLI | 3.1 Pro | 907 | [842, 965] | 31.1% |
The top two agents are statistically indistinguishable (p=0.67, Cohen's h=0.08); both significantly outperform Gemini CLI (p<0.05).
Live leaderboard: [evals.metaphi.ai](https://evals.metaphi.ai)
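As a sketch of how an ELO-style score can be derived from pairwise preference votes, the snippet below fits Bradley-Terry strengths by the standard iterative MLE update and maps them to a 1000-centered scale. This is a generic illustration, not the benchmark's exact pipeline; the function name, iteration count, and scaling are assumptions.

```python
import math

def bradley_terry(wins, n_iters=200):
    """Fit Bradley-Terry strengths from a pairwise win matrix and map
    them to an ELO-like scale centered at 1000.

    wins[i][j] = number of times agent i was preferred over agent j.
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iters):
        new_p = []
        for i in range(n):
            total_wins = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(total_wins / denom if denom else p[i])
        scale_sum = sum(new_p)
        p = [x * n / scale_sum for x in new_p]  # keep strengths normalized
    elo_scale = 400 / math.log(10)  # standard ELO logistic scale
    mean_log = sum(math.log(x) for x in p) / n
    return [1000 + elo_scale * (math.log(x) - mean_log) for x in p]

# Two agents: agent 0 preferred 7 times, agent 1 preferred 3 times
print(bradley_terry([[0, 7], [3, 0]]))
```

With evenly split votes the scores converge to 1000 each; a lopsided win matrix pushes the winner above 1000 and the loser symmetrically below it.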
## Dataset Schema
Each row represents one Figma design task. The `figma_data` column is a JSON-encoded string containing the Figma file metadata:
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique task identifier |
| `figma_data.figma_file_key` | string | Figma file ID for API access |
| `figma_data.figma_file_url` | string | Full Figma URL |
| `figma_data.figma_file_name` | string | Design name/title |
| `figma_data.is_site` | bool | Whether design is a Figma Site |
| `description` | string | Design context and task description |
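Since `figma_data` is stored as a string in the dataset schema, its nested fields need to be decoded before use. A minimal sketch, using an illustrative row (the field values below are made up, not from the dataset):

```python
import json

# A representative row: figma_data is a JSON string, not a nested dict.
# All values here are illustrative only.
row = {
    "id": "task-001",
    "figma_data": json.dumps({
        "figma_file_key": "abc123",
        "figma_file_url": "https://www.figma.com/file/abc123",
        "figma_file_name": "E-commerce Product Page",
        "is_site": False,
    }),
}

# Decode the JSON string to reach the nested Figma metadata
meta = json.loads(row["figma_data"])
print(meta["figma_file_key"], meta["figma_file_name"])
```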
## Task Scenarios
Tasks span 7 complexity levels, from single-component extraction to full multi-page applications:
1. **E-commerce Product Page** (12 hrs) — PDP with image gallery, variant selectors, inventory states, cart integration
2. **Mobile Onboarding Flow** (16 hrs) — Multi-step flow with transitions, conditional branching, state management
3. **Component Set with States** (6 hrs) — Variant matrix extraction, typed props, conditional rendering
4. **Design Tokens to Theme** (8 hrs) — Variables, typography, effects → Tailwind config + CSS custom properties
5. **Multi-Page Webapp** (40 hrs) — 5+ pages with routing, shared components, consistent theming
6. **Animation-Heavy Interface** (20 hrs) — Smart Animate → Framer Motion with precise timing choreography
7. **Messy Enterprise File** (24 hrs) — Real-world chaos: inconsistent naming, duplicates, orphaned components
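As an illustration of what the Design Tokens scenario asks an agent to produce, the sketch below flattens a token dictionary into CSS custom properties. The token names and rendering convention are hypothetical, not taken from the benchmark tasks:

```python
# Hypothetical Figma variables exported as a flat dict (illustrative only)
tokens = {
    "color/brand/primary": "#1a73e8",
    "color/surface/default": "#ffffff",
    "font/size/body": "16px",
}

def to_css_custom_properties(tokens: dict) -> str:
    """Render design tokens as CSS custom properties on :root,
    turning slash-delimited token paths into dashed variable names."""
    lines = [":root {"]
    for name, value in sorted(tokens.items()):
        lines.append(f"  --{name.replace('/', '-')}: {value};")
    lines.append("}")
    return "\n".join(lines)

print(to_css_custom_properties(tokens))
```

A real solution would also emit a matching Tailwind config; this only shows the CSS-custom-property half of the target output.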
## Data Curation
Designs are sourced through licensing partnerships with enterprises and domain experts, and through community dataset curation.
## Evaluation Framework
| Verifier | Type | Method |
|----------|------|--------|
| **Human Preference** | Subjective | Pairwise comparison → Bradley-Terry ELO |
| **Visual Judge** | Automated | VLM screenshot comparison (design vs. output) |
| **Skill Verifier** | Automated | Task-specific rubrics (build, tokens, components, fidelity) |
| **Behavior Verifier** | Automated | Agent trajectory analysis (error recovery, tool usage) |
## Agent Error Recovery
Across 96 runs, agents encountered 590 errors with a 70.3% autonomous recovery rate. The four most common categories:
| Error Type | Count | Recovery |
|------------|-------|----------|
| Tool call failure | 419 | 66.3% |
| Git error | 64 | 75.0% |
| Syntax error | 33 | 90.9% |
| Build error | 11 | 100.0% |
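The counts above can be aggregated as a quick sanity check. Note the four listed categories sum to 527 of the 590 reported errors, so their weighted recovery rate differs slightly from the 70.3% headline figure, which is computed over all errors:

```python
# Error counts and per-category recovery rates, as listed in the table
errors = {
    "tool call failure": (419, 0.663),
    "git error": (64, 0.750),
    "syntax error": (33, 0.909),
    "build error": (11, 1.000),
}

total = sum(count for count, _ in errors.values())
recovered = sum(count * rate for count, rate in errors.values())
# 527 of 590 errors fall into these four categories; the remainder
# belong to error types not broken out in the table.
print(f"{total} errors in listed categories, {recovered / total:.1%} recovered")
```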
## Usage
```python
from datasets import load_dataset
import json

ds = load_dataset("metaphilabs/figma", split="train")

# figma_data is stored as a JSON string; parse it to reach the Figma file key
for task in ds:
    meta = json.loads(task["figma_data"])
    print(task["id"], meta["figma_file_name"])
```
## Citation

```bibtex
@misc{metaphi2026crew,
  title={CREW: Enterprise Agent Benchmarks},
  author={Metaphi Labs},
  year={2026},
  url={https://evals.metaphi.ai}
}
```
## Links

- Leaderboard: https://evals.metaphi.ai
- Website: https://metaphi.ai
- Collection: https://huggingface.co/collections/metaphilabs/crew