zhber committed on
Commit 5cefa64 · verified · 1 Parent(s): 27e3d6d

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -99,7 +99,7 @@ For Python and Java evaluation, we use a double time limit.
 
 The benchmark kits follow the [testlib](https://github.com/MikeMirzayanov/testlib) pipeline in validation and evaluation. There is a validator for each problem to check test case integrity, and a specific checker to verify output correctness.
 
-The rating evaluation follows [CodeELO](https://arxiv.org/abs/2501.01257) methodology. For pass@8 metrics, we calculate the expected score with a fail-penalty but no time-penalty. While this approach deviates from empirical competitive scenarios and may result in ratings that are not directly comparable to human participants, it provides a standardized benchmark for consistent cross-model comparison.
+The rating evaluation follows [CodeELO](https://arxiv.org/abs/2501.01257) methodology. For pass@8 metrics, we calculate the expected score with a fail-penalty but no submission-time-penalty. While this approach deviates from empirical competitive scenarios and may result in ratings that are not directly comparable to human participants, it provides a standardized benchmark for consistent cross-model comparison.
 
 ## License
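The edited paragraph describes a pass@8 expected score with a fail-penalty but no submission-time-penalty. A minimal sketch of how such an expectation could be computed is below; the per-failure penalty of 50 points, the 30% score floor, and all function names are assumptions for illustration, not taken from the CodeELO paper or this repository's evaluation code.

```python
def expected_score(full_score: float, p_success: float, attempts: int = 8,
                   fail_penalty: float = 50.0, floor_ratio: float = 0.3) -> float:
    """Expected score over up to `attempts` independent submissions.

    Assumed model (hypothetical): each submission succeeds with probability
    p_success; if the first accepted submission is attempt k, the score is
    full_score minus fail_penalty for each prior failed attempt, clamped at
    floor_ratio * full_score. No time-based decay is applied.
    """
    total = 0.0
    for k in range(1, attempts + 1):
        # probability that the first accepted submission is attempt k
        p_first_success_at_k = (1 - p_success) ** (k - 1) * p_success
        score_k = max(floor_ratio * full_score, full_score - fail_penalty * (k - 1))
        total += p_first_success_at_k * score_k
    return total  # runs with no accepted submission contribute 0


# e.g. a 1000-point problem solved with 50% per-attempt success probability
print(round(expected_score(1000.0, 0.5), 1))  # → 947.9
```

With a certain solver (`p_success = 1.0`) this reduces to the full score, and as `p_success` falls the fail-penalty terms dominate, which is the behavior the paragraph's "fail-penalty but no submission-time-penalty" wording implies.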