---

# APEX–Agents

APEX–Agents is a benchmark from [Mercor](https://www.mercor.com/apex/) for evaluating whether AI agents can execute long-horizon, cross-application professional services tasks. Tasks were created by **investment banking analysts**, **management consultants**, and **corporate lawyers**, and require agents to navigate realistic work environments with files and tools (e.g., docs, spreadsheets, PDFs, email, chat, calendar).

- **Tasks:** 480 total (160 per job category)
- **Worlds:** 33 total (10 banking, 11 consulting, 12 law)
- **World assets:** included (files + metadata)
- **License:** CC-BY 4.0

## Archipelago

✨ [View the code](https://github.com/Mercor-Intelligence/archipelago)

Our service for executing and evaluating agents is available open source on GitHub.

## Why we made APEX–Agents

Many agent evaluations don’t reflect day-to-day professional work. APEX–Agents is designed around realistic “project worlds” and tasks that require planning, tool use, and working with complex in-world artifacts—closer to how real professionals operate.

## Dataset contents

Each example corresponds to a **task** inside a **world**. A task includes:

- **Prompt**: single-turn instruction given to the agent
- **World context**: pointers/IDs for the world plus associated files/artifacts
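
To make the record layout concrete, here is a hypothetical example entry. All field names below are assumptions for illustration, not the published schema; inspect `ds["train"].features` after loading to see the real columns.

```python
# Hypothetical shape of one example. The keys "prompt", "world_id", and
# "files" are illustrative assumptions -- confirm against the real schema.
example = {
    "prompt": "Single-turn instruction given to the agent ...",
    "world_id": "world-001",               # pointer to the task's world
    "files": ["model.xlsx", "memo.docx"],  # associated in-world artifacts
}
```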

### World design

A “world” is a realistic project scenario created by experts. Worlds contain the files and tools required to complete tasks. Web search is disabled to keep evaluations reproducible.

Worlds expose applications such as docs, spreadsheets, PDFs, email, chat, and calendar. (Some worlds include additional finance data applications.)

## Key dataset statistics

| Split / Job | # Worlds | Avg files / world | # Tasks | Avg criteria / task | Avg est. hours | Tasks w/ file outputs |
|---|---:|---:|---:|---:|---:|---:|
| Investment banking | 10 | 172 | 160 | 2.93 | 1.36 | 27 (16.9%) |
| Management consulting | 11 | 165 | 160 | 4.68 | 1.69 | 11 (6.9%) |
| **Benchmark total** | **33** | **166** | **480** | **4.06** | **1.82** | **58 (12.1%)** |

Most tasks (422/480) require returning a message; the remainder require creating or editing a file (docs/sheets/slides).

## Workflows / task types

Tasks are tagged with workflow categories (examples):
- Investment banking: DCF, sensitivity analysis, comps, merger model, LBO, etc.
- Consulting: benchmarking/competitive analysis, market sizing, ops analysis, scenario analysis, survey analysis, etc.

(See the paper appendix for the full workflow breakdown.)

## Evaluation (how tasks are graded)

APEX–Agents uses **rubric-based grading**:
- Each rubric contains multiple **criteria** (binary: Met / Not met).
- Rubrics contain between 1 and 10 criteria, with a mean of 4.06 per task.
- A judge model grades each criterion independently, using the prompt, the agent output, and relevant artifacts/changes.
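
As a sketch of how this rubric scheme composes: a run passes a task only if every criterion is judged Met, while a softer per-task score is the fraction of criteria met. The `CriterionResult` type and helper names below are illustrative, not part of the dataset or the official grader.

```python
# Illustrative sketch (not the official grader): each criterion gets a
# binary Met / Not met decision plus a short free-text explanation.
from dataclasses import dataclass

@dataclass
class CriterionResult:
    criterion: str
    met: bool          # binary judge decision: Met / Not met
    explanation: str   # short free-text rationale from the judge

def task_passes(results: list[CriterionResult]) -> bool:
    """A run passes a task iff ALL rubric criteria are Met."""
    return all(r.met for r in results)

def mean_score(results: list[CriterionResult]) -> float:
    """Fraction of criteria met: a softer, partial-credit per-task score."""
    return sum(r.met for r in results) / len(results)
```

Under this all-or-nothing pass definition, a task with many criteria is strictly harder to pass than one with few, which is worth keeping in mind when comparing job categories with different average criteria counts.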

### Judge model

The benchmark uses an LLM judge to produce:
- **Binary decision per criterion** (Met / Not met)
- **Short explanation** (free text)

An auxiliary judge is used to identify the correct artifact to grade from each criterion’s “grading target” (e.g., a console message vs. an edited spreadsheet).

## Leaderboard baselines (paper)

Our paper reports results for eight agents across multiple metrics (Pass@1, Pass@8, mean score). The leaderboard uses **Pass@1**: the probability that a uniformly sampled task passes all criteria in a single run.

> Note: The dataset is released for open research; leaderboard results may differ depending on agent harness, tool APIs, or evaluation settings.

**Performance of agents on the APEX–Agents benchmark.**
Where available, models have thinking / reasoning effort set to **high**.

| Model | Pass@1 (95% CI) | Pass@8 (95% CI) | Pass^8 | Mean score | IB analyst Pass@1 | Consultant Pass@1 | Lawyer Pass@1 |
|---|---:|---:|---:|---:|---:|---:|---:|
| Claude Opus 4.5 | 18.4% \[15.5–21.3] | 34.0% \[29.8–38.3] | 8.8% | 34.8% | 21.6% | 13.2% | 20.2% |
| Gemini 3 Flash | 24.0% \[20.7–27.3] | 36.7% \[32.3–41.0] | 13.4% | 39.5% | 26.7% | 19.3% | 25.9% |
| Gemini 3 Pro | 18.4% \[15.7–21.1] | 37.3% \[32.9–41.7] | 6.5% | 34.1% | 18.8% | 12.4% | 23.9% |
| GPT-5 | 18.3% \[15.4–21.3] | 31.0% \[26.9–35.4] | 7.7% | 32.9% | 27.3% | 12.3% | 15.3% |
| GPT-5.2 | 23.0% \[19.8–26.2] | 40.0% \[35.6–44.4] | 11.0% | 38.7% | 27.3% | 22.7% | 18.9% |
| GPT-OSS-120B | 4.7% \[3.3–6.1] | 11.5% \[8.8–14.4] | 1.2% | 14.5% | 2.7% | 3.5% | 7.8% |
| Grok 4 | 15.2% \[12.8–17.7] | 32.9% \[28.7–37.3] | 4.7% | 30.3% | 17.0% | 12.0% | 16.5% |
| Kimi K2 Thinking | 4.0% \[2.9–5.2] | 14.4% \[11.5–17.5] | 0.3% | 11.5% | 1.2% | 2.9% | 8.0% |

**Metric notes**
- **Pass@1 / Pass@8:** task-level success under rubric-based evaluation, with 95% confidence intervals where shown.
- **IB analyst / Consultant / Lawyer Pass@1:** Pass@1 broken out by job category.
- **Pass^8** and **Mean score:** reported as defined in the accompanying paper.

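For context on the metric family, here is a minimal sketch of standard pass@k-style estimators computed from repeated runs per task. The function names and the exact estimator choice are assumptions on my part; the paper's definitions are authoritative.

```python
# Illustrative pass@k-style estimators, given c passing runs out of n total.
# pass_at_k uses the standard unbiased combinatorial estimator; pass_hat_k is
# a naive estimate of "all k runs pass" (an assumption about Pass^8's intent).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k sampled runs passes)."""
    if n - c < k:
        return 1.0  # every size-k subset of runs contains a passing run
    return 1.0 - comb(n - c, k) / comb(n, k)

def pass_hat_k(n: int, c: int, k: int) -> float:
    """Naive estimate of P(k independent runs all pass)."""
    return (c / n) ** k
```

With n = 8 runs per task, pass@8 rewards agents that succeed at least once, while a Pass^8-style metric rewards consistent success, which is why the two columns diverge so sharply in the table above.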
## Intended uses

- Agent benchmarking (tool-use, planning, long-horizon workflows)
- Training/evaluating agentic systems on professional services tasks
- Analysis of failure modes, tool usage, and rubric/criteria difficulty

**Not recommended (without care):**
- Training models to imitate proprietary tools or private workflows outside the dataset’s contained worlds
- Claims about real-world deployment readiness without additional validation

## How to load

```python
from datasets import load_dataset

ds = load_dataset("mercor/apex-agents")  # replace if your org/name differs
print(ds)
print(ds["train"][0].keys())
```
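
A hypothetical next step once the dataset is loaded is tallying tasks per job category. The helper below works on any list of dict-like rows; the column name you pass in is an assumption, so check `ds["train"].column_names` first.

```python
# Count examples by the value of one column (e.g., a job-category field).
# The column name is caller-supplied; missing keys are bucketed explicitly.
from collections import Counter

def count_by_column(rows, column: str) -> Counter:
    """Tally rows by one column's value; absent keys count as '<missing>'."""
    return Counter(row.get(column, "<missing>") for row in rows)
```

For example, `count_by_column(ds["train"], "job_category")` would report the 160/160/160 split if the dataset exposes a column with that (assumed) name.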
## Citation

If you use APEX–Agents in academic work, please cite our paper:

```bibtex
@misc{vidgen2026apexagents,
  title        = {APEX--Agents},
  author       = {Vidgen, Bertie and Mann, Austin and Fennelly, Abby and Wright Stanly, John and Rothman, Lucas and Burstein, Marco and Benchek, Julien and Ostrofsky, David and Ravichandran, Anirudh and Sur, Debnil and Venugopal, Neel and Hsia, Alannah and Robinson, Isaac and Huang, Calix and Varones, Olivia and Khan, Daniyal and Haines, Michael and Richards, Zach and Mahapatra, Chirag and Foody, Brendan and Nitski, Osvald},
  year         = {2026},
  howpublished = {arXiv},
}
```

## Contact

[apex@mercor.com](mailto:apex@mercor.com)