burtenshaw HF Staff committed on
Commit 4f70a9b · verified · 1 Parent(s): 475d85c

Add HLE evaluation result

## Evaluation Results

This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).

### What This Enables

- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

![Model Evaluation Results](https://huggingface.co/huggingface/documentation-images/resolve/main/evaluation-results/eval-results-previw.png)

### Format Details

Results are stored as YAML files in the `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
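As a minimal sketch, an entry like the one in this PR could be generated programmatically. The field names below mirror the `hle.yaml` file in this diff; the helper function is hypothetical, and the full schema is defined in the linked documentation. Plain string formatting is used here so the sketch needs no third-party YAML dependency:

```python
# Sketch: render a single eval-result entry in the .eval_results/ format.
# Field names follow the hle.yaml added in this PR; see the Eval Results
# docs for the authoritative schema. format_eval_result is a hypothetical
# helper, not part of any Hugging Face library.

def format_eval_result(dataset_id, value, date, source_url, source_name):
    """Render one eval-result list entry as YAML text."""
    return (
        f"- dataset:\n"
        f"    id: {dataset_id}\n"
        f"  value: {value}\n"
        f"  date: '{date}'\n"
        f"  source:\n"
        f"    url: {source_url}\n"
        f"    name: {source_name}\n"
    )

entry = format_eval_result(
    dataset_id="cais/hle",
    value=24.8,
    date="2026-01-14",
    source_url="https://huggingface.co/zai-org/GLM-4.7",
    source_name="Model Card",
)
print(entry)
```

The resulting text would be written to a file such as `.eval_results/hle.yaml` in the model repository, matching the diff in this PR.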

---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*

Files changed (1)
  1. .eval_results/hle.yaml +7 -0
.eval_results/hle.yaml ADDED
@@ -0,0 +1,7 @@
+ - dataset:
+     id: cais/hle
+   value: 24.8
+   date: '2026-01-14'
+   source:
+     url: https://huggingface.co/zai-org/GLM-4.7
+     name: Model Card