---
tags:
- benchmark
- tool-use
- openenv
- rl-environment
- adversarial
- grpo
language:
- en
---
<p align="center">
<img src="banner.png" width="100%" alt="ComtradeBench – An OpenEnv Benchmark for Reliable LLM Tool-Use"/>
</p>
<p align="center">
<a href="https://github.com/yonghongzhang-io/comtrade-openenv">
<img src="https://img.shields.io/badge/GitHub-Repository-181717?logo=github" alt="GitHub"/>
</a>
<a href="https://huggingface.co/spaces/yonghongzhang/comtrade-env">
<img src="https://img.shields.io/badge/HF%20Space-Live%20Demo-FFD21E?logo=huggingface&logoColor=black" alt="HF Space"/>
</a>
<img src="https://img.shields.io/badge/OpenEnv-Native-4B8BBE" alt="OpenEnv"/>
<img src="https://img.shields.io/badge/Tasks-10-brightgreen" alt="10 Tasks"/>
<img src="https://img.shields.io/badge/Training-GRPO-orange" alt="GRPO"/>
</p>
<p align="center"><em>AgentBeats Phase 2 – OpenEnv Challenge Submission | Author: MateFin</em></p>
---
## Agents should be judged by whether they finish the job
Large language models are often evaluated on what they can **say**.
Real agents, however, are judged by whether they can **finish the job** when tools fail.
In practical API workflows, failure rarely comes from language alone. Pages drift. Duplicate rows appear across requests. Rate limits interrupt execution. Transient server errors force retries. Summary rows contaminate aggregates. Budgets make brute-force strategies impossible.
These are not unusual edge cases. **They are normal operating conditions for production systems.**
ComtradeBench is an OpenEnv benchmark designed to measure exactly this problem: can an LLM agent execute a multi-step API workflow reliably under realistic failure modes?
---
## Why this benchmark matters
Many current evaluations still focus on final answers, clean tool calls, or static environments. But deployed agents fail for more operational reasons:
| Failure | What goes wrong |
|---------|----------------|
| Miss pages | Incomplete data submitted as complete |
| Retry incorrectly | Page skipped after error → silent data gap |
| Double-count duplicates | Overcounted rows, inflated aggregates |
| Leak summary rows | Contaminated totals corrupt downstream analysis |
| Waste budget | Redundant fetches exhaust request limit |
| Recover silently | No auditable trace → failure invisible in production |
These are **execution failures**, not just reasoning failures.
If we want useful agents, we need benchmarks that measure reliable task completion under imperfect conditions – not only answer quality in idealized settings.
---
## What ComtradeBench is
> ComtradeBench is an OpenEnv-native benchmark and training environment for reliable tool-use. The domain is trade-data retrieval; the problem is broader: robust multi-step API execution under shifting, imperfect, and partially adversarial conditions.
The environment asks an agent to retrieve, clean, and submit records from a paginated API while handling:
- **Pagination drift** – page ordering randomized between calls
- **Duplicate records** – within-page (8%) and cross-page (3%) overlap
- **Transient errors** – HTTP 429 rate-limits and HTTP 500 server faults
- **Totals trap** – synthetic summary rows mixed into real data
- **Mixed faults** – rate-limit retry + dedup simultaneously
- **Constrained budget** – halved request limit, no room for waste
The goal is not to test whether the agent can *describe* the workflow.
The goal is to test whether it can *execute* it – correctly, completely, efficiently, and robustly.
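In practice, these hazards all funnel through one defensive fetch loop. Below is a minimal sketch of such a loop; the `fetch_page` callable mirrors the shapes documented for the environment's tool, but `fetch_clean` itself and the `id`/`is_total` field names are illustrative assumptions, not part of the benchmark's API:

```python
import time

def fetch_clean(fetch_page, page_size=100, max_retries=3, backoff=0.5):
    """Defensive fetch loop: retry transient errors on the SAME page,
    dedup by primary key across pages, and drop summary rows.
    fetch_page(page, size) is assumed to return either
    {"rows": [...], "has_more": bool} or {"status": 429|500, "retry": True}."""
    seen, clean = set(), []
    page = 0
    while True:
        resp = None
        for attempt in range(max_retries + 1):
            resp = fetch_page(page=page, size=page_size)
            if resp.get("retry"):                 # 429/500: back off, re-request same page
                time.sleep(backoff * (2 ** attempt))
                continue
            break
        if resp.get("retry"):
            raise RuntimeError(f"page {page} failed after {max_retries} retries")
        for row in resp["rows"]:
            if row.get("is_total"):               # totals trap: exclude summary rows
                continue
            if row["id"] in seen:                 # within- or cross-page duplicate
                continue
            seen.add(row["id"])
            clean.append(row)
        if not resp.get("has_more"):
            return clean
        page += 1
```

Note that the retry re-requests the *same* page index – skipping ahead after an error is exactly the "silent data gap" failure the benchmark penalizes.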
---
## Environment design
Each episode gives the agent a parameterized retrieval task and a limited request budget. The agent interacts through **three MCP tools only**:
```
get_task_info()        → task parameters + request budget
fetch_page(page, size) → {rows, has_more} or {status: 429|500, retry: true}
submit_results(...)    → {reward, score, breakdown}
```
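A minimal episode driver over this three-tool interface might look as follows. The method names mirror the MCP tools above, but the exact response fields (`budget`, `page_size`) are assumptions for illustration:

```python
def run_episode(env):
    """Drive one episode: read the task, page through data within budget,
    submit. `env` is any object exposing the three tools as methods."""
    info = env.get_task_info()                  # task parameters + budget (assumed fields)
    budget = info["budget"]
    rows, page = [], 0
    while budget > 0:
        resp = env.fetch_page(page=page, size=info.get("page_size", 100))
        budget -= 1                             # every call, including retries, costs budget
        if resp.get("retry"):                   # transient 429/500: retry the same page
            continue
        rows.extend(resp["rows"])
        if not resp.get("has_more"):
            break
        page += 1
    return env.submit_results(rows=rows)        # → {reward, score, breakdown}
```

The key discipline is that retries are paid for out of the same budget as regular fetches, which is what makes T10's halved budget bite.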
The benchmark is structured as a **curriculum of ten tasks**:
| # | Task | Core challenge |
|---|------|----------------|
| T1 | Single page | Baseline correctness |
| T2 | Multi-page pagination | Merge 2,345+ rows across pages |
| T3 | Duplicates | Primary-key deduplication |
| T4 | HTTP 429 | Backoff + retry without data loss |
| T5 | HTTP 500 | Transient error recovery |
| T6 | Page drift | Canonicalize under non-deterministic ordering |
| T7 | Totals trap | Filter `is_total=true` rows |
| T8 | Mixed faults | Retry AND dedup simultaneously |
| **T9** | **Adaptive adversary** | **Fault intensity escalates mid-episode** |
| **T10** | **Constrained budget** | **50 requests instead of 100** |
T9 is, to our knowledge, among the earliest OpenEnv-style tasks to model **within-episode fault escalation** – where the environment becomes harder as the agent makes progress.
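The escalation in T9 can be pictured as a fault-rate schedule that rises with the agent's progress through the data. The linear form and the endpoint values below are illustrative assumptions, not the environment's actual parameters:

```python
def fault_probability(progress, base=0.05, peak=0.40):
    """Illustrative T9-style schedule: the probability of a 429/500 on the
    next request grows linearly with the fraction of rows already retrieved.
    `base` and `peak` are made-up endpoints for the sketch."""
    assert 0.0 <= progress <= 1.0
    return base + (peak - base) * progress
```

The consequence for agents: a retry policy tuned on the easy early pages must keep working when fault density is several times higher near the end of the episode.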
---
## Why OpenEnv
We built ComtradeBench on OpenEnv because this benchmark is meant to be more than a one-off simulator.
OpenEnv gives us a standard environment interface, reproducible execution, and clean integration with evaluation and post-training workflows. The same environment code runs both in-process during GRPO training and as a deployed Docker service during evaluation – with no divergence.
Our goal is not only to score agents, but to provide a **reusable environment where robustness can be studied and trained systematically**.
---
## Scoring what actually matters
ComtradeBench uses structured evaluation across **six dimensions** – not a binary pass/fail:
| Dimension | Weight | What it measures |
|-----------|:------:|-----------------|
| Correctness | **30%** | All expected rows present with correct field values |
| Completeness | 15% | Zero missing records |
| Robustness | 15% | Correct fault handling with logged evidence |
| Efficiency | 15% | Request count vs. task-optimal minimum |
| Data Quality | 15% | No duplicates or leaked totals rows |
| Observability | 10% | Structured execution trace in the run log |
**Why multi-dimensional scoring matters:**
An agent that retrieves correct data but skips retry logging loses 15 points on Robustness. An agent that skips pages to save budget loses Completeness and all Efficiency credit. These behaviors are not equivalent – the benchmark does not treat them as equivalent.
The **Observability** dimension deserves special note: requiring structured log entries incentivizes the agent to maintain explicit execution state. This is not artificial – structured logs are how production ETL pipelines are monitored and debugged.
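The published weights combine into a single score as a plain weighted sum. A sketch (the dimension keys are illustrative names, not the benchmark's internal identifiers):

```python
# Weights from the scoring table; they sum to 1.0.
WEIGHTS = {
    "correctness": 0.30, "completeness": 0.15, "robustness": 0.15,
    "efficiency": 0.15, "data_quality": 0.15, "observability": 0.10,
}

def total_score(dim_scores):
    """Combine per-dimension scores (each in [0, 100]) into the 0-100 total."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[d] * dim_scores[d] for d in WEIGHTS)
```

This makes the trade-offs in the paragraph above concrete: zeroing out Robustness alone costs exactly 15 points, while a perfect run scores 100.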
---
## Baselines and results
### Rule-based baseline (no LLM)
A deterministic rule-based agent achieves **96.8 / 100** average across all ten tasks, confirming the environment is well-calibrated and solvable.
| Task | Score | Reward |
|------|------:|-------:|
| T1 Single page | 98.0 | 0.980 |
| T2 Multi-page | 98.0 | 0.980 |
| T3 Duplicates | 98.0 | 0.980 |
| T4 Rate limit (429) | 95.0 | 0.950 |
| T5 Server error (500) | 95.7 | 0.957 |
| T6 Page drift | 94.0 | 0.940 |
| T7 Totals trap | 98.0 | 0.980 |
| T8 Mixed faults | 96.4 | 0.964 |
| T9 Adaptive adversary | 96.9 | 0.969 |
| T10 Constrained budget | 98.0 | 0.980 |
| **Average** | **96.8** | **0.968** |
### LLM agent β Moonshot V1-8K (Kimi API)
| Task | Score | Reward |
|------|------:|-------:|
| T1 Single page | 98.7 | 0.987 |
| T2 Multi-page | 98.7 | 0.987 |
| T3 Duplicates | 98.7 | 0.987 |
| T4 Rate limit | 83.7 | 0.837 |
| T5 Server error | 84.3 | 0.843 |
| T6 Page drift | 94.7 | 0.947 |
| T7 Totals trap | 98.7 | 0.987 |
| T8 Mixed faults | 97.3 | 0.973 |
| **Average** | **94.4** | **0.944** |
The LLM outperforms the rule-based baseline on Observability – natural language models generate more informative execution traces. The gap on T4/T5 reflects that the Robustness dimension requires **explicit logged evidence** of retry behavior, not just correct output.
### GRPO training curve
We ran 8 iterations of GRPO-style rollouts with group-relative advantage normalization. Training signal is reward-only – no human labels, no reward model. Mean reward exceeded the rule-based baseline in **6 of 8 iterations**.
<p align="center">
<img src="training_curve.png" width="80%" alt="GRPO Training Curve"/>
</p>
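Group-relative advantage normalization itself is simple: each rollout's reward is standardized against the mean and standard deviation of its own sampling group, which is why no learned value function or reward model is needed. A minimal sketch:

```python
def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages for one group of rollouts sampled from the
    same prompt: standardize rewards within the group."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

With `--group-size 4`, each environment reward is compared only against its three sibling rollouts, so the scalar episode reward is a sufficient training signal.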
---
## What this benchmark reveals
ComtradeBench is designed to expose a gap that clean evaluations often miss: agents can appear capable in idealized settings while remaining brittle under operational noise.
The hardest problems are not "knowing what the API is." They are:
- continuing correctly **after an interruption**
- maintaining data integrity **across many pages**
- adapting when **conditions shift mid-episode**
- balancing **coverage against cost**
This is where reliable agents differ from merely fluent ones.
---
## Benchmark and training substrate
ComtradeBench is not just an evaluation harness – it is built to support agent improvement.
The environment ships with a full **GRPO training pipeline**: reproducible rollouts, group-relative advantage normalization, and reward-only optimization. No human labels needed. No separate reward model.
This is an intentional design choice: if robust tool-use is a real bottleneck for agentic AI, we need environments that can **both measure and train** that capability – with identical conditions in evaluation and training.
---
## Quick start
```bash
# No LLM, no GPU, no API key required
git clone https://github.com/yonghongzhang-io/comtrade-openenv
cd comtrade-openenv
pip install "openenv-core[core]"
python agent/smoke_test.py --task T1_single_page
python agent/smoke_test.py --task T9_adaptive_adversary
# GRPO training via local Ollama (CPU-capable)
python agent/train_grpo.py \
--api-url http://localhost:11434/v1 \
--api-model qwen2.5:7b \
--num-iterations 200 --group-size 4
```
All benchmark data is generated procedurally from a seeded PRNG – no external fixtures, no live API dependency. Every result is fully reproducible from a task ID and a random seed.
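Seeded procedural generation can be sketched as follows; the field names and the string-seed scheme are illustrative, not the benchmark's actual generator:

```python
import random

def generate_rows(task_id, seed, n=5):
    """Deterministic row generation: the same (task_id, seed) pair
    always reproduces identical rows across runs and machines."""
    rng = random.Random(f"{task_id}:{seed}")   # string seeds are run-stable
    return [{"id": i, "value": rng.randint(0, 10_000), "is_total": False}
            for i in range(n)]
```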
---
## Conclusion
<p align="center"><strong><em>Can an agent still finish the job when the API fights back?</em></strong></p>
That question matters far beyond trade data. It applies to any agent expected to operate against real interfaces with pagination, retries, noisy outputs, and resource limits.
If we want more reliable agents, we need environments that reward reliability directly.
That is the role ComtradeBench is designed to play.
---
<p align="center">
<a href="https://github.com/yonghongzhang-io/comtrade-openenv">GitHub</a>
Β·
<a href="https://huggingface.co/spaces/yonghongzhang/comtrade-env">HF Space</a>
Β·
<a href="https://github.com/meta-pytorch/OpenEnv">OpenEnv Framework</a>
</p>