mohit-raghavendra committed on
Commit fca6aac · verified · 1 Parent(s): 25f7dbb

Update README.md

Files changed (1):
  1. README.md +17 -3
README.md CHANGED
@@ -46,8 +46,9 @@ Each task's `rubric` field is a JSON array:
 ]
 ```
 
-- `positive hli verifier` — a factual claim the answer must contain. If the answer includes this claim, the rubric item is a PASS.
-- `negative hli verifier` — something the answer must *not* claim. If the answer includes this claim, the rubric item is a FAIL.
+- `positive hli verifier` — a factual claim the answer must contain. If the claim is met by the agent's answer, the rubric item result is a PASS.
+- `negative hli verifier` — something the answer must *not* claim. If the claim is met by the agent's answer, the rubric item result is a FAIL.
+
 
 ## Environments
 
@@ -57,4 +58,17 @@ Each task includes a `docker_image` field pointing to a pre-built Docker Hub ima
 docker pull andrewparkscaleai/coding-agent:kovidgoyal__kitty__815df1e210e0a9ab4622f5c7f2d6891d7dbeddf1
 docker run -it <image> bash
 # repo is at /app
-```
+```
+
+## Inference and Eval
+
+We follow the standard SWE-Agent scaffold, and we provide a sample config (with the prompts) in [default_code_qa.yaml](default_code_qa.yaml).
+
+
+Evaluation is performed by an LLM judge (Claude Opus 4.5) that scores the agent's answer against each rubric criterion independently. Each criterion receives a binary score (met or not met), and the scores are then aggregated.
+
+The primary metric is the Task Resolve Rate: the percentage of tasks for which the agent's answer is comprehensive (i.e. it passes all rubric items and scores 1.0), as graded by a set of task-specific rubrics.
+
+The agents are also instructed not to modify source-code files and to clean up any temporary scripts they create, so we add a programmatic check that fails a task if any code changes remain after submission.
+
+Our rubric evaluation prompt and other relevant details are in [rubric_evaluation_config.yaml](rubric_evaluation_config.yaml).
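As a rough illustration of the aggregation described in the diff above, the Task Resolve Rate could be computed as the fraction of tasks whose rubric scores are all 1.0. This is a sketch with hypothetical names and data shapes, not the repo's actual evaluation code:

```python
def task_resolve_rate(results):
    """Fraction of tasks where every rubric item passed.

    results: list of per-task lists of binary rubric scores
             (1 = criterion met, 0 = not met), as produced by
             the LLM judge. Names here are illustrative only.
    """
    # A task is "resolved" only if it has rubric items and all of them passed.
    resolved = sum(1 for scores in results if scores and all(s == 1 for s in scores))
    return resolved / len(results)
```

For example, three tasks scoring `[1, 1]`, `[1, 0]`, and `[1, 1, 1]` would give a resolve rate of 2/3, since the second task misses one rubric item.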
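The programmatic clean-repository check could be sketched with `git status --porcelain`, which prints nothing when there are no modified or untracked files. The helper name and the use of this particular git command are our assumptions, not the repo's documented implementation:

```python
import subprocess

def repo_is_clean(repo_dir="/app"):
    """Return True if the repo has no modified or untracked files.

    Hypothetical helper: a task would be failed when this returns False,
    i.e. when the agent left code changes or temporary scripts behind.
    """
    out = subprocess.run(
        ["git", "-C", repo_dir, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Empty porcelain output means no changes of any kind.
    return out.strip() == ""
```

A check like this runs after the agent submits its answer; any leftover edit to the repo at `/app` flips the task to a fail regardless of the rubric score.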