Update README.md
README.md CHANGED
@@ -99,7 +99,7 @@ For Python and Java evaluation, we use a double time limit.
 
 The benchmark kits follow the [testlib](https://github.com/MikeMirzayanov/testlib) pipeline in validation and evaluation. There is a validator for each problem to check test case integrity, and a specific checker to verify output correctness.
 
-The rating evaluation follows [CodeELO](https://arxiv.org/abs/2501.01257) methodology. For pass@8 metrics, we calculate the expected score with a fail-penalty but no time-penalty.
+The rating evaluation follows [CodeELO](https://arxiv.org/abs/2501.01257) methodology. For pass@8 metrics, we calculate the expected score with a fail-penalty but no time-penalty. While this approach deviates from empirical competitive scenarios and may result in ratings that are not directly comparable to human participants, it provides a standardized benchmark for consistent cross-model comparison.
 
 ## License
 
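For context on the testlib pipeline referenced in the diff above, the two sketches below show the general shape of a validator and a checker. Both are hypothetical minimal examples, not the benchmark's actual per-problem code: the single-integer input format, the bounds, and the answer type are assumptions made for illustration.

```cpp
// Minimal testlib validator sketch: verifies a test case consisting of a
// single integer n, with strict whitespace and EOF handling.
// The bounds (1..1000000) and the format are illustrative assumptions.
#include "testlib.h"

int main(int argc, char* argv[]) {
    registerValidation(argc, argv);
    inf.readInt(1, 1000000, "n");  // reject out-of-range or malformed input
    inf.readEoln();
    inf.readEof();
}
```

A checker compares the submitted output (`ouf`) against the reference answer (`ans`) and reports a testlib verdict:

```cpp
// Minimal testlib checker sketch for an (assumed) single-integer answer.
// Real checkers are problem-specific; this only illustrates the API.
#include "testlib.h"

int main(int argc, char* argv[]) {
    registerTestlibCmd(argc, argv);       // sets up the inf / ouf / ans streams
    long long expected = ans.readLong();  // reference answer
    long long found = ouf.readLong();     // submitted output
    if (expected != found)
        quitf(_wa, "expected %lld, found %lld", expected, found);
    quitf(_ok, "answer is %lld", found);
}
```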
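The "expected score with a fail-penalty but no time-penalty" rule in the changed paragraph can be read as follows: the sampled submissions are judged in some order, each failure before the first accepted submission costs a fixed penalty, and no score decays with submission time. The sketch below implements one plausible reading under loudly labeled assumptions: Codeforces-style scoring with a 50-point fail-penalty, a floor at 30% of the full problem score, and the 8 samples judged in a uniformly random order. None of these constants is confirmed by the README excerpt above.

```cpp
// Hypothetical sketch: expected score of a problem under a fail-penalty
// but no time-penalty. Assumptions (not stated in the README): each failed
// submission before the first accepted one costs 50 points, the score is
// floored at 30% of the full value, and the `total` sampled submissions
// are judged in a uniformly random order without replacement.
#include <algorithm>
#include <cstdio>

double expected_score(double full, int correct, int total = 8) {
    if (correct == 0) return 0.0;  // no sample passes: the problem scores zero
    int wrong = total - correct;
    double expected = 0.0;
    double p_prefix = 1.0;         // P(first k judged samples all fail)
    for (int k = 0; k <= wrong; ++k) {
        // P(first accepted sample appears at position k + 1)
        double p_first = p_prefix * correct / (total - k);
        expected += p_first * std::max(0.3 * full, full - 50.0 * k);
        p_prefix *= static_cast<double>(wrong - k) / (total - k);
    }
    return expected;
}

int main() {
    // e.g. a 1000-point problem where 3 of the 8 samples are accepted
    std::printf("%.1f\n", expected_score(1000.0, 3));
}
```

Under this reading, failed attempts only reduce the score of a problem that is eventually solved; a problem with no accepted sample simply scores zero, which is why the fail-penalty never drives a score negative.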