testing #4
by madhavan113 - opened

README.md CHANGED
```diff
@@ -13,11 +13,6 @@ tags:
 pretty_name: apex-agents
 size_categories:
 - n<1K
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: tasks_and_rubrics.json
 ---
 
 # APEX–Agents
@@ -41,7 +36,7 @@ APEX–Agents is a benchmark from [Mercor](https://www.mercor.com/apex/) for eva
 | **Benchmark total** | **33** | **166** | **480** | **4.06** | **1.82** | **58 (12.1%)** |
 
 
-Each case
+Each case corresponds to a **task** inside a **world**. A “world” is a realistic project scenario created by experts. Worlds contain files and tools required to complete tasks. Web search is disabled to keep evaluations reproducible.
 Worlds contain applications such as: Calendar, Chat, Code Execution, Documents, File system, Mail, PDFs, Spreadsheets, Presentations. Some worlds include additional finance data applications.
 
 A task includes:
@@ -55,13 +50,14 @@ A task includes:
 
 ## Evaluation
 APEX–Agents uses **rubric-based grading**:
-- Each rubric contains multiple criteria (binary: Met / Not met).
+- Each rubric contains multiple **criteria** (binary: Met / Not met).
 - There are between 1 and 10 criteria, with a mean of 4.06.
 - A judge model grades each criterion independently, using the prompt, the agent output, and relevant artifacts/changes.
 
 ## Leaderboard baselines
-
-
+Our paper reports results for eight agents against multiple metrics (Pass@1, Pass@8, mean score). The leaderboard uses **Pass@1**: the probability that a uniformly sampled task passes all criteria in a single run.
+
+Where available, models have thinking / reasoning effort set to **high**.
 
 | Model | Pass@1 (95% CI) | Pass@8 (95% CI) | Pass^8 | Mean score | IB analyst Pass@1 | Consultant Pass@1 | Lawyer Pass@1 |
 |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
```
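As an aside, the metrics in this card compose simply. A minimal sketch of how they relate, assuming (per the card) that a run passes a task only when every rubric criterion is Met; function names and the empirical aggregation are illustrative, not code from the benchmark:

```python
# Illustrative sketch only — not code from the APEX–Agents repo.
# Assumes a run passes a task only when every rubric criterion is Met.
from statistics import mean

def task_passes(criteria_met: list[bool]) -> bool:
    # Binary rubric criteria: the run passes only if all are Met.
    return all(criteria_met)

def mean_criteria_score(criteria_met: list[bool]) -> float:
    # Fraction of criteria met in a single run.
    return sum(criteria_met) / len(criteria_met)

def pass_at_1(runs_per_task: list[list[bool]]) -> float:
    # Pass@1: probability a uniformly sampled task passes in one run,
    # estimated as the per-task pass rate averaged over tasks.
    return float(mean(mean(runs) for runs in runs_per_task))

def pass_at_k(runs_per_task: list[list[bool]]) -> float:
    # Empirical Pass@k (k = number of recorded runs):
    # the task passes in at least one run.
    return float(mean(any(runs) for runs in runs_per_task))

def pass_hat_k(runs_per_task: list[list[bool]]) -> float:
    # Pass^k: the task passes in every recorded run.
    return float(mean(all(runs) for runs in runs_per_task))

# Two hypothetical tasks x 8 runs (True = run passed all criteria).
runs = [
    [True, False, True, True, False, True, True, False],    # task A: 5/8
    [False, False, False, True, False, False, False, False],  # task B: 1/8
]
pass_at_1(runs)   # (5/8 + 1/8) / 2 = 0.375
pass_at_k(runs)   # both tasks pass at least once -> 1.0
pass_hat_k(runs)  # neither task passes all 8 runs -> 0.0
```

This makes the gap between the columns concrete: Pass@8 rewards a single lucky success among eight runs, while Pass^8 demands consistency across all of them, so Pass^8 ≤ Pass@1 ≤ Pass@8 for any set of runs.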