---
task_categories:
- text-generation
license: mit
language:
- en
tags:
- benchmark
- llm
- evaluation
---
# BenchHub: A Unified Benchmark Suite for Holistic and Customizable LLM Evaluation

[Paper](https://huggingface.co/papers/2506.00482) | [GitHub](https://github.com/rladmstn1714/BenchHub) | [Project Page](https://huggingface.co/BenchHub)
BenchHub is a unified benchmark suite designed to help researchers and developers easily load, filter, and process various LLM benchmark datasets. It enables efficient dataset handling for training and evaluation, providing flexible filtering by:

* Subject
* Skill
* Target

This allows users to build custom benchmarks tailored to specific needs and conduct holistic evaluations of language models.