---
title: README
emoji: 🔥
colorFrom: indigo
colorTo: blue
sdk: static
pinned: false
---

# Welcome to HAERAE

We are a non-profit research lab focused on understanding and building better Korean language models. See below for an overview of our projects.

**Benchmarks**

We have built _the_ most widely used Korean benchmarks, including HAE-RAE Bench (cultural knowledge, [dataset](https://huggingface.co/datasets/HAERAE-HUB/HAE_RAE_BENCH_1.0), [paper](https://arxiv.org/abs/2309.02706)), KMMLU (general knowledge, [dataset](https://huggingface.co/datasets/HAERAE-HUB/KMMLU), [paper](https://arxiv.org/abs/2402.11548)), HRM8K (math, [dataset](https://huggingface.co/datasets/HAERAE-HUB/HRM8K), [paper](https://www.arxiv.org/abs/2501.02448)), and KMMLU-Redux/Pro (general knowledge, [dataset](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Pro), [paper](https://arxiv.org/abs/2507.08924)).

**Evaluation**

We developed the [haerae-evaluation-toolkit](https://github.com/HAE-RAE/haerae-evaluation-toolkit), a unified LLM evaluation framework designed to provide consistent and reproducible benchmarking for Korean and multilingual models.

**Reasoning Language Models**

In cooperation with [KISTI-KONI](https://huggingface.co/KISTI-KONI), we released the [KO-REAson](https://huggingface.co/KOREAson) series, sub-10B reasoning language models trained for Korean.

# News

2025.08.31: We released six [KO-REAson-0831 models](https://huggingface.co/collections/KoReason/koreason-0831-68b1363e1b3726b041a0a638) 🔥🔥🔥

2025.07.11: We collaborated with LG AI Research to build [KMMLU-Pro](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Pro), a major update to our KMMLU franchise.

2025.01.05: We released the first public Korean math (e = ∑ₙ₌₀^∞ 1/n! 🤓) benchmark, [HRM8K](https://huggingface.co/datasets/HAERAE-HUB/HRM8K).