Add HLE evaluation result
## Evaluation Results
This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).
### What This Enables
- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

### Format Details
Results are stored as YAML files in the `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
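As a minimal sketch of consuming this format, the snippet below parses an entry shaped like the `hle.yaml` file added in this PR and reads out the score. It assumes PyYAML is installed; the field names shown are taken from this PR's file, not from the full `.eval_results/` specification.

```python
# Illustrative only: parse a single .eval_results/ entry and read its score.
# The schema below mirrors the hle.yaml added in this PR; see the Hub docs
# for the authoritative specification.
import yaml

HLE_ENTRY = """
- dataset:
    id: cais/hle
  value: 24.8
  date: '2026-01-14'
  source:
    url: https://huggingface.co/zai-org/GLM-4.7
    name: Model Card
"""

results = yaml.safe_load(HLE_ENTRY)
for entry in results:
    # Each entry pairs a benchmark dataset id with a score and its provenance.
    print(f"{entry['dataset']['id']}: {entry['value']} ({entry['date']})")
```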

---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*
**Files changed:**

`.eval_results/hle.yaml` (+7 −0)

```diff
@@ -0,0 +1,7 @@
+- dataset:
+    id: cais/hle
+  value: 24.8
+  date: '2026-01-14'
+  source:
+    url: https://huggingface.co/zai-org/GLM-4.7
+    name: Model Card
```