Add dataset card with metadata and links
This PR adds metadata to the dataset card, including task categories, language, and relevant tags. It also includes links to the paper, GitHub repository, and project page for more information.
README.md
ADDED
@@ -0,0 +1,17 @@
+---
+task_categories:
+- text-generation
+- question-answering
+language:
+- en
+tags:
+- benchmark
+- evaluation
+license: apache-2.0
+---
+
+# BenchHub: A Unified Benchmark Suite for Holistic and Customizable LLM Evaluation
+
+[Paper](https://huggingface.co/papers/2506.00482) | [GitHub](https://github.com/rladmstn1714/BenchHub) | [Project Page](https://huggingface.co/BenchHub)
+
+BenchHub is a unified benchmark suite designed to help researchers and developers easily load, filter, and process various LLM benchmark datasets. It enables efficient dataset handling for training and evaluation, providing flexible filtering capabilities by subject, skill, and target. This allows users to build custom benchmarks tailored to specific needs and conduct holistic evaluations of language models.
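As a rough sketch of the subject/skill/target filtering the card describes, the snippet below filters a list of benchmark records on whichever fields are provided. The field names (`subject`, `skill`, `target`) are taken from the card's wording and are illustrative assumptions, not BenchHub's actual dataset schema.

```python
from typing import Dict, List, Optional

def filter_benchmark(records: List[Dict],
                     subject: Optional[str] = None,
                     skill: Optional[str] = None,
                     target: Optional[str] = None) -> List[Dict]:
    """Keep only records matching every filter that was provided.

    Field names are hypothetical; the real BenchHub schema may differ.
    """
    filters = {"subject": subject, "skill": skill, "target": target}
    # Drop filters the caller left unset, then require all remaining ones to match.
    active = {k: v for k, v in filters.items() if v is not None}
    return [r for r in records
            if all(r.get(k) == v for k, v in active.items())]

# Tiny illustrative sample, not real BenchHub data.
sample = [
    {"question": "2 + 2 = ?", "subject": "math",
     "skill": "reasoning", "target": "general"},
    {"question": "Capital of France?", "subject": "geography",
     "skill": "knowledge", "target": "general"},
]
math_only = filter_benchmark(sample, subject="math")
```

In the real suite, the same kind of selection would be applied to loaded benchmark datasets to assemble a custom evaluation set.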