Create .eval_results/gpqa.yaml
## Evaluation Results
This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).
### What This Enables
- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

### Format Details
Results are stored as YAML in `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
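As a rough illustration, an entry like the one added in this PR can be read back with PyYAML. This is a minimal sketch, not the official tooling: the field layout (`dataset.id`, `value`, `source`, etc.) is assumed from this PR's diff, and the `load_results` helper is hypothetical.

```python
# Minimal sketch of reading an .eval_results entry.
# The schema below is assumed from the fields visible in this PR's diff;
# see the Eval Results Documentation link above for the real specification.
import yaml

RAW = """\
- dataset:
    id: Idavidrein/gpqa
    task_id: diamond
  value: 85.7
  date: '2026-01-21'
  source:
    url: https://huggingface.co/zai-org/GLM-4.7
    name: Model Card
"""

def load_results(text: str) -> list[dict]:
    """Parse an eval-results YAML document into a list of result dicts."""
    results = yaml.safe_load(text)
    for entry in results:
        # Each entry is expected to carry a dataset reference and a numeric score.
        assert "dataset" in entry and "value" in entry
    return results

results = load_results(RAW)
print(results[0]["dataset"]["id"], results[0]["value"])
```

A consumer (leaderboard aggregator, model-page renderer) would iterate over every file in `.eval_results/` the same way.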
---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*
- .eval_results/gpqa.yaml +8 -0

```diff
@@ -0,0 +1,8 @@
+- dataset:
+    id: Idavidrein/gpqa
+    task_id: diamond
+  value: 85.7
+  date: '2026-01-21'
+  source:
+    url: https://huggingface.co/zai-org/GLM-4.7
+    name: Model Card
```