Variant Summary (preview of variant_summary.csv)

| harness_way | trials | correct | failed | pass_rate | avg_reward | turn_limit_values | avg_turn_limit |
|---|---|---|---|---|---|---|---|
| goose-base | 40 | 18 | 22 | 0.450 | 0.461538 | 100 | 100 |
| goose-base-60 | 40 | 21 | 19 | 0.525 | 0.538462 | 60 | 60 |
| goose-tweaked | 40 | 20 | 20 | 0.500 | 0.512821 | 60 | 60 |
| openhands-sdk-base | 40 | 23 | 17 | 0.575 | 0.605263 | 100 | 100 |
| openhands-sdk-base-60 | 40 | 20 | 20 | 0.500 | 0.526316 | 60 | 60 |
| openhands-sdk-tweaked | 40 | 23 | 17 | 0.575 | 0.605263 | 60 | 60 |

| harness_way | sum_input_tokens | sum_output_tokens | sum_total_tokens | rows_with_total_tokens | avg_input_tokens | avg_output_tokens | avg_total_tokens | estimated_token_rows | avg_trial_seconds | avg_agent_execution_seconds |
|---|---|---|---|---|---|---|---|---|---|---|
| goose-base | 452,163 | 193,783 | 645,946 | 35 | 12,918.94 | 5,536.66 | 18,455.60 | 35 | 498.87 | 461.35 |
| goose-base-60 | 367,987 | 157,710 | 525,697 | 37 | 9,945.59 | 4,262.43 | 14,208.03 | 37 | 362.40 | 299.77 |
| goose-tweaked | 360,667 | 154,574 | 515,241 | 37 | 9,747.76 | 4,177.68 | 13,925.43 | 37 | 349.21 | 279.95 |
| openhands-sdk-base | 16,620,443 | 270,059 | 16,890,502 | 33 | 503,649.79 | 8,183.61 | 511,833.39 | 0 | 501.03 | 439.20 |
| openhands-sdk-base-60 | 17,756,494 | 287,932 | 18,044,426 | 36 | 493,235.94 | 7,998.11 | 501,234.06 | 0 | 389.76 | 325.05 |
| openhands-sdk-tweaked | 20,845,914 | 358,187 | 21,204,101 | 37 | 563,403.08 | 9,680.73 | 573,083.81 | 0 | 336.42 | 278.16 |

Same Model, Opposite Results: Goose vs OpenHands Turn Budget Study on Harbor Terminal-Bench-Pro

Trial-level results from a small controlled study comparing two agent harnesses — Goose and OpenHands-SDK — on a frozen 40-task Harbor Terminal-Bench-Pro slice.

All runs used minimax/minimax-m2.5 via OpenRouter with Daytona as the sandbox backend.

Key Findings

  • Reducing the turn budget from 100 to 60 pushed the two harnesses in opposite directions under the base setup: Goose improved (0.450 → 0.525), while OpenHands dropped (0.575 → 0.500).
  • At 60 turns, the tweaked setup restored OpenHands to its 100-turn baseline (0.575), while Goose slipped slightly below its base-60 score (0.525 → 0.500).
  • At their best, both harnesses reached the same 0.575 pass rate on this 40-task slice.
  • Reported token usage was far higher for OpenHands than for Goose in this setup (~500K vs ~14K average total tokens per trial), though cross-harness token accounting may not be directly comparable.

Files

| File | Description |
|---|---|
| trials.csv | One row per trial (240 rows across 6 variants) |
| trials.jsonl | Same data in JSONL format |
| variant_summary.csv | Aggregated pass rate, tokens, and timing per variant |

Variant Design

| Suffix | Meaning |
|---|---|
| base | original prompt and config at 100 turns |
| base-60 | same base setup, turn budget reduced to 60 |
| tweaked | 60-turn budget with added skill/guidance changes |

Each task was run once per variant (n=1 attempt per task).
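The per-variant summary is a plain aggregation of the trial rows. A minimal sketch in pure Python of how such an aggregation works (the field names `harness_way` and `resolved` are illustrative assumptions, not the actual trials.csv schema):

```python
from collections import defaultdict

def summarize(trials):
    """Aggregate trial-level rows into per-variant counts and pass rates.

    Each trial is a dict with a variant name and a boolean outcome;
    these field names are illustrative, not the real trials.csv columns.
    """
    buckets = defaultdict(list)
    for t in trials:
        buckets[t["harness_way"]].append(t["resolved"])
    return {
        variant: {
            "trials": len(outcomes),
            "correct": sum(outcomes),
            "pass_rate": sum(outcomes) / len(outcomes),
        }
        for variant, outcomes in buckets.items()
    }

# Toy rows, not real data:
rows = [
    {"harness_way": "goose-base", "resolved": True},
    {"harness_way": "goose-base", "resolved": False},
    {"harness_way": "openhands-sdk-base", "resolved": True},
]
print(summarize(rows)["goose-base"]["pass_rate"])  # 0.5
```

With the real trials.csv, the same grouping should reproduce the trials/correct/pass_rate columns of variant_summary.csv.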

Variants

| Variant | Harness | Turns | Prompt | Pass Rate |
|---|---|---|---|---|
| goose-base | Goose | 100 | base | 0.450 |
| goose-base-60 | Goose | 60 | base | 0.525 |
| goose-tweaked | Goose | 60 | tweaked | 0.500 |
| openhands-sdk-base | OpenHands-SDK | 100 | base | 0.575 |
| openhands-sdk-base-60 | OpenHands-SDK | 60 | base | 0.500 |
| openhands-sdk-tweaked | OpenHands-SDK | 60 | tweaked | 0.575 |
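These pass rates can be re-derived from the `trials` and `correct` columns of the variant summary; a quick consistency check using the numbers reported in this card:

```python
# (variant, trials, correct, reported pass_rate), copied from variant_summary
summary = [
    ("goose-base", 40, 18, 0.450),
    ("goose-base-60", 40, 21, 0.525),
    ("goose-tweaked", 40, 20, 0.500),
    ("openhands-sdk-base", 40, 23, 0.575),
    ("openhands-sdk-base-60", 40, 20, 0.500),
    ("openhands-sdk-tweaked", 40, 23, 0.575),
]
for variant, trials, correct, reported in summary:
    # pass_rate should equal correct / trials for every variant
    assert abs(correct / trials - reported) < 1e-9, variant
print("all pass rates consistent")  # prints if every check passes
```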

Caveats

n=40 tasks, one attempt each. A 2–3 task swing (±0.050–0.075 in pass rate) is within run-to-run variance. No statistical significance testing was performed.
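To put that caveat in rough numbers: for a pass rate near 0.5 over 40 tasks, the binomial standard error is about 0.08 in pass-rate units, i.e. roughly ±3 tasks, so differences of that size are hard to distinguish from noise. A quick sketch, assuming independent tasks (an approximation):

```python
import math

n = 40   # tasks per variant
p = 0.5  # pass rate near the middle of the observed 0.450-0.575 range

# Standard error of a binomial proportion: sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)
print(f"SE = {se:.3f} in pass-rate units, ~{se * n:.1f} tasks")
# SE = 0.079 in pass-rate units, ~3.2 tasks
```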

Full Code & Configs

GitHub: https://github.com/namanvats/harbor-agent-ablation

Citation

If you use this dataset, please cite:

@dataset{vats2026harness,
  author       = {Naman Vats},
  title        = {Same Model, Opposite Results: Goose vs OpenHands Turn Budget Study on Harbor Terminal-Bench-Pro},
  year         = {2026},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/namanvats/harbor-goose-openhands-benchmark}
}
