Would Mercor release per-task or per-verifier breakdowns for leaderboard models?
Hi,
I’m using the APEX-Agents benchmark and the Archipelago harness to evaluate agents (e.g. Gemini 3 Flash). When I run a task locally I get detailed results in grades.json: a final_score and, for each criterion, verifier_results with pass/fail and a grade_rationale. That’s very useful for understanding where a model fails (e.g. which rubric items it misses or which task types it struggles with).
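For context, this is roughly the error analysis I’d like to run over leaderboard models too. A minimal sketch of what I do locally, assuming a grades.json schema with a top-level final_score and a list of verifier_results entries containing criterion, pass, and grade_rationale (the exact nesting may differ from the real harness output, and the directory path is hypothetical):

```python
import json
from pathlib import Path

def summarize_failed_criteria(results_dir: str) -> None:
    """Print the failed criteria from every grades.json under results_dir."""
    for path in sorted(Path(results_dir).rglob("grades.json")):
        data = json.loads(path.read_text())
        # Assumed schema: {"final_score": ..., "verifier_results":
        #   [{"criterion": ..., "pass": ..., "grade_rationale": ...}, ...]}
        failed = [v for v in data.get("verifier_results", []) if not v.get("pass")]
        if failed:
            print(f"{path.parent.name}: final_score={data.get('final_score')}")
            for v in failed:
                print(f"  FAIL {v.get('criterion')}: {v.get('grade_rationale')}")

# Hypothetical output directory; adjust to wherever your runs are written.
summarize_failed_criteria("runs/gemini-3-flash")
```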
The public leaderboard shows only aggregate metrics (e.g. Pass@1 per domain). Do you have any plans to release per-task or per-verifier results (e.g. which tasks passed or failed, or which criteria failed) for the models on the leaderboard, either as a downloadable dataset (e.g. on Hugging Face) or via an API? That would make error analysis and model improvement possible without having to re-run all 480 tasks per model.
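To be concrete about what such a release would enable: with per-task pass/fail records one could recompute the leaderboard aggregates locally and slice them further. A toy sketch, where the record fields are my own invention rather than the leaderboard’s actual schema:

```python
from collections import defaultdict

# Hypothetical per-task records of the kind a released breakdown might contain.
records = [
    {"task_id": "t001", "domain": "finance", "passed": True},
    {"task_id": "t002", "domain": "finance", "passed": False},
    {"task_id": "t003", "domain": "legal", "passed": True},
]

by_domain = defaultdict(list)
for r in records:
    by_domain[r["domain"]].append(r["passed"])

# Pass@1 per domain = fraction of tasks passed on the first (single) attempt.
for domain, outcomes in sorted(by_domain.items()):
    print(f"{domain}: Pass@1 = {sum(outcomes) / len(outcomes):.0%} ({len(outcomes)} tasks)")
```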
If nothing is planned, would you consider sharing such breakdowns for a subset of models (e.g. Gemini 3 Flash) for research use?
Thanks for maintaining APEX-Agents and Archipelago.
Hey, thanks for your comment and thoughtful engagement with APEX-Agents! We are not planning on releasing the trajectories publicly just yet, but we are open to releasing a subset to researchers for specific projects.
Please drop me an email at bertie@mercor.com and we can discuss further. Thanks!