---
license: mit
task_categories:
- text-classification
language:
- he
pretty_name: LCHAIM
size_categories:
- 1K<n<10K
---
# LCHAIM: Investigating Long Context Reasoning in Hebrew

## Overview
LCHAIM is a dataset designed to evaluate Natural Language Inference (NLI) models in Hebrew. Unlike English, Hebrew is a Morphologically Rich Language (MRL), requiring more research to develop robust NLI models. LCHAIM provides a benchmark for models that need to handle long premises and complex reasoning in Hebrew.
## Dataset Description
LCHAIM was created by translating the English ConTRoL dataset into Hebrew and validating the translations. It consists of 8,325 context-hypothesis pairs that require various types of reasoning, including:

- Coreferential reasoning
- Temporal reasoning
- Logical reasoning
- Analytical reasoning
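As a rough illustration of how such context-hypothesis pairs can be handled, the sketch below shows an assumed record layout and the mean-accuracy metric used in the benchmarks. The field names (`premise`, `hypothesis`, `label`) and the three-way label set are assumptions for illustration, not confirmed by this card.

```python
# Minimal sketch of working with LCHAIM-style NLI pairs.
# NOTE: field names and the label set below are illustrative
# assumptions, not the dataset's documented schema.

from typing import Dict, List

LABELS = ("entailment", "neutral", "contradiction")

# A hypothetical pair (English stand-in for a Hebrew premise/hypothesis).
example: Dict[str, str] = {
    "premise": "A long, multi-paragraph Hebrew passage would appear here.",
    "hypothesis": "A short Hebrew statement to verify against the premise.",
    "label": "entailment",
}

def accuracy(gold: List[str], predicted: List[str]) -> float:
    """Fraction of predictions matching the gold labels (mean accuracy)."""
    assert len(gold) == len(predicted)
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)
```

A model's predictions over the whole evaluation split would be scored with `accuracy`, matching the mean-accuracy numbers reported in the benchmarks section.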
## Performance Benchmarks
Experiments with LCHAIM highlight the challenges of contextual reasoning in Hebrew. Key results include:
- Fine-tuning the LongHero model on both Hebrew NLI datasets and LCHAIM yielded a mean accuracy of 52%, which is 35% (absolute) lower than human performance.
- Large Language Models (LLMs) evaluated in a few-shot setting (Gemma-9B, Dicta-LM-2.0-7B, and GPT-4o) achieved a top mean accuracy of 60.12%.
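The few-shot evaluation above relies on prompting an LLM with labeled demonstrations before the test pair. A minimal sketch of how such a prompt might be assembled is shown below; the instruction wording and label names are illustrative assumptions, not the paper's exact prompt.

```python
# Hypothetical few-shot NLI prompt builder. The template and label
# vocabulary are assumptions for illustration only.

from typing import List, Tuple

def build_few_shot_prompt(
    demos: List[Tuple[str, str, str]],  # (premise, hypothesis, label) demonstrations
    premise: str,
    hypothesis: str,
) -> str:
    """Concatenate an instruction, labeled demos, and the unlabeled test pair."""
    parts = [
        "Decide whether the hypothesis is entailed by, neutral to, "
        "or contradicted by the premise."
    ]
    for p, h, label in demos:
        parts.append(f"Premise: {p}\nHypothesis: {h}\nLabel: {label}")
    # The test pair ends with an empty label slot for the model to fill.
    parts.append(f"Premise: {premise}\nHypothesis: {hypothesis}\nLabel:")
    return "\n\n".join(parts)
```

The model's completion after the final `Label:` would then be mapped back to one of the NLI labels and scored against the gold annotation.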
## Citation
If you use LCHAIM in your research, please cite our work:
@inproceedings{malul2025lchaim,
  title={{LCHAIM}: Investigating Long Context Reasoning in {H}ebrew},
  author={Malul, Ehud and Perets, Oriel and Mor, Ziv and Kassel, Yigal and Sulem, Elior},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2025},
  pages={7928--7939},
  year={2025}
}
## License

LCHAIM is released under the MIT license.
## Contact

For questions or feedback, please contact orielpe@post.bgu.ac.il.