---
pretty_name: Alloy-Bench
language:
- ru
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
tags:
- metallurgy
- mining
size_categories:
- 1K<n<10K
---
## Alloy-Bench
**Alloy-Bench** is a Russian-language **multiple-choice benchmark** for evaluating large language models in the domain of metallurgy and mining engineering.
It is designed in the spirit of MMLU-style exams and focuses on professional knowledge rather than general trivia.
This is the validation split, publicly available for open evaluation and comparison.
### Version
v1.1 (fixed)
### 📊 Dataset Statistics
- **Total questions:** 1,120
- **Language:** Russian
- **Format:** Parquet

---
### 📋 Data Format
The dataset is stored in a tabular format (Parquet). Each row corresponds to a **single multiple-choice question** and includes:
- **question** – the question text in Russian;
- **options** – a list of possible answers (3–5 choices);
- **correct_answer** – the correct option;
- **knowledge_area** – subdomain (e.g. *Металлургия редких металлов* "rare metals metallurgy", *Металлургия тяжелых цветных металлов* "heavy non-ferrous metals metallurgy", *Химическая инженерия* "chemical engineering").
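The schema above can be consumed for simple exact-match scoring. A minimal sketch follows; the example rows, the English question texts (real questions are in Russian), and the `accuracy` helper are all hypothetical — only the column names come from the format described above.

```python
# To load the real data (requires the `datasets` library):
# from datasets import load_dataset
# ds = load_dataset("nn-tech/Alloy-Bench")

# Hypothetical rows mimicking the Alloy-Bench schema
# (the actual data lives in the Parquet files, in Russian).
rows = [
    {
        "question": "Which metal has the highest melting point?",
        "options": ["Copper", "Tungsten", "Aluminium"],
        "correct_answer": "Tungsten",
        "knowledge_area": "Rare metals metallurgy",
    },
    {
        "question": "Which process reduces iron ore in a blast furnace?",
        "options": ["Electrolysis", "Carbothermic reduction", "Distillation"],
        "correct_answer": "Carbothermic reduction",
        "knowledge_area": "Ferrous metallurgy",
    },
]

def accuracy(predictions, rows):
    """Exact-match accuracy of model answers against correct_answer."""
    hits = sum(pred == row["correct_answer"] for pred, row in zip(predictions, rows))
    return hits / len(rows)

print(accuracy(["Tungsten", "Electrolysis"], rows))  # 0.5
```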
---
### 📚 Citation
If you use Alloy-Bench in academic work or reports, please cite it as:
```bibtex
@misc{alloybench2025,
  title        = {Alloy-Bench: Russian Benchmark for Metallurgy and Mining Question Answering},
  author       = {nn-tech},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/nn-tech/Alloy-Bench}},
  note         = {Multiple-choice evaluation benchmark for domain LLMs}
}
```
---