---
task_type: figma_to_code
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: tags
    list: string
  - name: description
    dtype: string
  - name: figma_data
    dtype: string
  - name: access_level
    dtype: string
  - name: created_at
    dtype: string
  splits:
  - name: train
    num_bytes: 64990
    num_examples: 37
  download_size: 35843
  dataset_size: 64990
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
language:
- en
tags:
- figma-to-code
- autonomous-agents
pretty_name: CREW:Figma-to-Code
size_categories:
- n<1K
---


# CREW: Figma to Code


A benchmark for evaluating AI coding agents on **Figma-to-code generation** — converting real-world Figma community designs into production-ready React + Tailwind CSS applications.
|
|
Each task gives an agent full access to a Figma file via MCP tools. The agent must extract the design system, generate components, build successfully, and deploy a live preview. Outputs are evaluated through human preference (ELO) and automated verifiers.
|
|
## Benchmark at a Glance
|
|
| | |
|---|---|
| **Tasks** | 37 real Figma community designs |
| **Agents tested** | Claude Code (Opus 4.6), Codex (GPT-5.2), Gemini CLI (3.1 Pro) |
| **Total runs** | 96 autonomous agent executions |
| **Human evaluations** | 135 pairwise preference votes |
| **Primary metric** | Human Preference ELO (Bradley-Terry) |
| **Task duration** | 6–40 expert-hours equivalent per task |
|
|
## Leaderboard (Human Preference ELO)
|
|
| Rank | Agent | Model | ELO | 95% CI | Win % |
|------|-------|-------|-----|--------|-------|
| 1 | Codex | GPT-5.2 | 1054 | [1005, 1114] | 56.8% |
| 2 | Claude Code | Opus 4.6 | 1039 | [987, 1093] | 55.1% |
| 3 | Gemini CLI | 3.1 Pro | 907 | [842, 965] | 31.1% |
|
|
The top two agents are statistically indistinguishable (p=0.67, Cohen's h=0.08); both significantly outperform Gemini CLI (p<0.05).
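
The ELO column can be reproduced from pairwise votes with a standard Bradley-Terry fit. The sketch below uses the classic iterative MLE (Zermelo's algorithm) and then maps strengths onto an ELO-like scale centred at 1000. The vote counts are invented for illustration; they are not the benchmark's raw data.

```python
import math

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths; wins[(a, b)] = times a was preferred over b."""
    players = {p for pair in wins for p in pair}
    strength = {p: 1.0 for p in players}
    for _ in range(n_iter):
        for p in players:
            total_wins = sum(n for (a, _), n in wins.items() if a == p)
            denom = 0.0
            for q in players:
                if q == p:
                    continue
                games = wins.get((p, q), 0) + wins.get((q, p), 0)
                if games:
                    denom += games / (strength[p] + strength[q])
            if denom:
                strength[p] = total_wins / denom
        total = sum(strength.values())
        strength = {p: s / total for p, s in strength.items()}  # fix the scale
    # Map strengths onto an ELO-like scale centred at 1000.
    mean_log = sum(math.log(s) for s in strength.values()) / len(strength)
    return {p: 1000 + (400 / math.log(10)) * (math.log(s) - mean_log)
            for p, s in strength.items()}

# Invented vote counts, shaped like the benchmark's pairwise preferences.
votes = {
    ("codex", "claude"): 20, ("claude", "codex"): 18,
    ("codex", "gemini"): 25, ("gemini", "codex"): 9,
    ("claude", "gemini"): 24, ("gemini", "claude"): 10,
}
ratings = bradley_terry(votes)
```

With real data, an interval like the 95% CI column is typically obtained by bootstrapping the votes and refitting.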
|
|
Live leaderboard: [evals.metaphi.ai](https://evals.metaphi.ai)
|
|
## Dataset Schema


Each row represents one Figma design task. The `figma_data` column is stored as a JSON string; the dotted names below refer to fields inside it:


| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique task identifier |
| `figma_data.figma_file_key` | string | Figma file ID for API access |
| `figma_data.figma_file_url` | string | Full Figma URL |
| `figma_data.figma_file_name` | string | Design name/title |
| `figma_data.is_site` | bool | Whether the design is a Figma Site |
| `description` | string | Design context and task description |
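
Because `figma_data` is typed as a plain string in the dataset config, the nested fields presumably live in a serialized JSON payload. A minimal sketch of unpacking one row, using an invented row in the documented shape:

```python
import json

# Illustrative row in the documented shape (all values are invented).
row = {
    "id": "task-001",
    "description": "Rebuild the product page in React + Tailwind.",
    "figma_data": json.dumps({
        "figma_file_key": "AbC123",
        "figma_file_url": "https://www.figma.com/file/AbC123",
        "figma_file_name": "Product Page",
        "is_site": False,
    }),
}

figma = json.loads(row["figma_data"])  # one parse per row
file_key = figma["figma_file_key"]     # feed to the Figma API / MCP tools
```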
|
|
## Task Scenarios


Tasks span 7 complexity levels, from single-component extraction to full multi-page applications:


1. **E-commerce Product Page** (12 hrs) — PDP with image gallery, variant selectors, inventory states, cart integration
2. **Mobile Onboarding Flow** (16 hrs) — Multi-step flow with transitions, conditional branching, state management
3. **Component Set with States** (6 hrs) — Variant matrix extraction, typed props, conditional rendering
4. **Design Tokens to Theme** (8 hrs) — Variables, typography, effects → Tailwind config + CSS custom properties
5. **Multi-Page Webapp** (40 hrs) — 5+ pages with routing, shared components, consistent theming
6. **Animation-Heavy Interface** (20 hrs) — Smart Animate → Framer Motion with precise timing choreography
7. **Messy Enterprise File** (24 hrs) — Real-world chaos: inconsistent naming, duplicates, orphaned components
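
As one concrete example, the token-to-theme conversion in scenario 4 boils down to flattening nested design variables into CSS custom properties. A hypothetical sketch (token names and values are invented, not taken from any benchmark file):

```python
def tokens_to_css(tokens):
    """Flatten a nested design-token dict into :root custom properties."""
    lines = []

    def walk(node, path):
        for name, value in node.items():
            if isinstance(value, dict):
                walk(value, path + [name])
            else:
                lines.append(f"  --{'-'.join(path + [name])}: {value};")

    walk(tokens, [])
    return ":root {\n" + "\n".join(lines) + "\n}"

# Invented tokens, shaped like variables extracted from a Figma file.
tokens = {
    "color": {"brand": {"primary": "#3b82f6", "muted": "#64748b"}},
    "radius": {"md": "8px"},
}
css = tokens_to_css(tokens)  # contains "--color-brand-primary: #3b82f6;"
```

The same flattened names can also seed a Tailwind `theme.extend` block, which is the other half of the scenario's target output.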
|
|
## Data Curation


Tasks are sourced through licensing partnerships with enterprises, collaboration with domain experts, and community dataset curation.
|
|
## Evaluation Framework


| Verifier | Type | Method |
|----------|------|--------|
| **Human Preference** | Subjective | Pairwise comparison → Bradley-Terry ELO |
| **Visual Judge** | Subjective | VLM screenshot comparison (design vs. output) |
| **Skill Verifier** | Subjective | Task-specific rubrics (build, tokens, components, fidelity) |
| **Behavior Verifier** | Subjective | Agent trajectory analysis (error recovery, tool usage) |
|
|
## Agent Error Recovery
|
|
Across 96 runs, agents encountered 590 errors with a 70.3% autonomous recovery rate:
|
|
| Error Type | Count | Recovery |
|------------|-------|----------|
| Tool call failure | 419 | 66.3% |
| Git error | 64 | 75.0% |
| Syntax error | 33 | 90.9% |
| Build error | 11 | 100.0% |
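
The per-type recovery columns can be recomputed from raw run logs, assuming each error event records its type and whether the agent recovered on its own. A small sketch with toy events (not the benchmark's actual logs):

```python
from collections import defaultdict

def recovery_rates(events):
    """events: iterable of (error_type, recovered) pairs."""
    totals = defaultdict(int)
    recovered = defaultdict(int)
    for error_type, ok in events:
        totals[error_type] += 1
        recovered[error_type] += bool(ok)
    return {t: recovered[t] / totals[t] for t in totals}

# Toy events; real logs would hold one entry per encountered error.
events = [
    ("build", True), ("build", True),
    ("syntax", True), ("syntax", False),
]
rates = recovery_rates(events)  # {"build": 1.0, "syntax": 0.5}
```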
|
|
## Usage


```python
import json

from datasets import load_dataset

ds = load_dataset("metaphilabs/figma", split="train")

# figma_data is stored as a JSON string, so parse it before
# reading nested fields like the file name or file key
for task in ds:
    figma = json.loads(task["figma_data"])
    print(task["id"], figma["figma_file_name"])
```


## Citation


```bibtex
@misc{metaphi2026crew,
  title={CREW: Enterprise Agent Benchmarks},
  author={Metaphi Labs},
  year={2026},
  url={https://evals.metaphi.ai}
}
```


## Links


- Leaderboard: https://evals.metaphi.ai
- Website: https://metaphi.ai
- Collection: https://huggingface.co/collections/metaphilabs/crew