For Python and Java evaluation, we use a double time limit.

The benchmark kits follow the [testlib](https://github.com/MikeMirzayanov/testlib) pipeline in validation and evaluation. There is a validator for each problem to check test case integrity, and a specific checker to verify output correctness.
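As a rough illustration only (the benchmark ships real testlib checkers, not this function), a whitespace-insensitive token comparison in the spirit of testlib's standard `wcmp` comparator might look like:

```python
def token_check(participant_output: str, jury_answer: str) -> str:
    """Hypothetical sketch of a tolerant checker: split both the
    participant's output and the jury answer on whitespace and compare
    the resulting token sequences, so formatting differences such as
    extra spaces or newlines do not affect the verdict."""
    if participant_output.split() == jury_answer.split():
        return "OK"
    return "WRONG_ANSWER"
```

Problem-specific checkers go further than this (e.g. accepting any one of several valid answers, or comparing floats within a tolerance), which is why each problem carries its own checker.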

The rating evaluation follows the [CodeELO](https://github.com/QwenLM/CodeElo) methodology. For pass@8 metrics, we calculate the expected score over 8 tries per problem, applying a fail penalty but no submission-time penalty. Although this deviates from real contest conditions and may yield ratings that are not directly comparable to human participants, it provides a standardized basis for consistent cross-model comparison.
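One way to make the expected-score calculation concrete (a sketch under assumptions: the per-problem full score and the penalty constant below are hypothetical placeholders, not values taken from the benchmark) is to treat the 8 tries as if submitted in uniformly random order and charge a fixed penalty for each failed attempt that precedes the first passing one:

```python
def expected_score(full_score: float, n_tries: int, n_passed: int,
                   fail_penalty: float) -> float:
    """Hypothetical expected score over n_tries attempts, n_passed of
    which pass. Assumes the attempts are submitted in uniformly random
    order and each failed attempt before the first pass costs
    fail_penalty points; there is no time penalty."""
    if n_passed == 0:
        return 0.0
    n_failed = n_tries - n_passed
    # With n_passed passes and n_failed fails in random order, the
    # expected number of fails before the first pass is
    # n_failed / (n_passed + 1).
    expected_failures = n_failed / (n_passed + 1)
    return max(0.0, full_score - fail_penalty * expected_failures)
```

For example, with a hypothetical 1000-point problem and a 50-point fail penalty, a model that passes 2 of its 8 tries would score `1000 - 50 * 6/3 = 900` in expectation.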

## License

We are releasing the benchmark under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.

## Citation

If you use this benchmark, please cite [Step-3.5-Flash](https://github.com/stepfun-ai/Step-3.5-Flash).