zRzRzRzRzRzRzR committed on
Commit 146e9f2 · 1 Parent(s): 278f446

update bench

.eval_results/terminal_bench_2_claudecode.yaml CHANGED
@@ -1,7 +1,7 @@
 - dataset:
     id: harborframework/terminal-bench-2.0
     task_id: terminalbench_2
-  value: 66.5
+  value: 69.0
   date: '2026-04-07'
   source:
     url: https://huggingface.co/zai-org/GLM-5.1
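The updated record can be sanity-checked programmatically. A minimal sketch, assuming the PyYAML package is available and using the record layout shown in the diff above (the inline string stands in for reading `.eval_results/terminal_bench_2_claudecode.yaml` from disk):

```python
import yaml  # assumes PyYAML is installed (pip install pyyaml)

# Eval record in the layout shown in the diff, with the post-commit value.
RECORD = """
- dataset:
    id: harborframework/terminal-bench-2.0
    task_id: terminalbench_2
  value: 69.0
  date: '2026-04-07'
  source:
    url: https://huggingface.co/zai-org/GLM-5.1
"""

results = yaml.safe_load(RECORD)  # parses to a list of result entries
entry = results[0]
print(entry["dataset"]["id"])  # harborframework/terminal-bench-2.0
print(entry["value"])          # 69.0
```

In practice the same `yaml.safe_load` call would be pointed at the file contents, and the parsed `value` compared against the README table to keep the two in sync.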
README.md CHANGED
@@ -51,14 +51,14 @@ GLM-5.1, by contrast, is built to stay effective on agentic tasks over much long
 | SWE-Bench Pro | **58.4** | 55.1 | 56.6 | 56.2 | - | 53.8 | 57.3 | 54.2 | 57.7 |
 | NL2Repo | 42.7 | 35.9 | 37.9 | 39.8 | - | 32.0 | **49.8** | 33.4 | 41.3 |
 | Terminal-Bench 2.0 (Terminus-2) | 63.5 | 56.2 | 61.6 | - | 39.3 | 50.8 | 65.4 | **68.5** | - |
-| Terminal-Bench 2.0 (Best self-reported) | 66.5 (Claude Code) | 56.2 (Claude Code) | - | 57.0 (Claude Code) | 46.4 (Claude Code) | - | - | - | **75.1** (Codex) |
+| Terminal-Bench 2.0 (Best self-reported) | 69.0 (Claude Code) | 56.2 (Claude Code) | - | 57.0 (Claude Code) | 46.4 (Claude Code) | - | - | - | **75.1** (Codex) |
 | CyberGym | **68.7** | 48.3 | - | - | 17.3 | 41.3 | 66.6 | - | - |
 | BrowseComp | **68.0** | 62.0 | - | - | 51.4 | 60.6 | - | - | - |
 | BrowseComp (w/ Context Manage) | 79.3 | 75.9 | - | - | 67.6 | 74.9 | 84.0 | **85.9** | 82.7 |
 | τ³-Bench | 70.6 | 69.2 | 70.7 | 67.6 | 69.2 | 66.0 | 72.4 | 67.1 | **72.9** |
 | MCP-Atlas (Public Set) | 71.8 | 69.2 | **74.1** | 48.8 | 62.2 | 63.8 | 73.8 | 69.2 | 67.2 |
 | Tool-Decathlon | 40.7 | 38.0 | 39.8 | 46.3 | 35.2 | 27.8 | 47.2 | 48.8 | **54.6** |
-| Vending Bench 2 | $5,634.00 | $4,432.12 | $5,114.87 | - | $1,034.00 | $1,198.46 | **$8,017.59** | $911.21 | $6,144.18 |
+| Vending Bench 2 | $5,634.41 | $4,432.12 | $5,114.87 | - | $1,034.00 | $1,198.46 | **$8,017.59** | $911.21 | $6,144.18 |
 
 ## Serve GLM-5.1 Locally
 
 The following open-source frameworks support local deployment of GLM-5.1: