---
license: mit
task_categories:
- text-classification
language:
- he
pretty_name: LCHAIM
size_categories:
- 1K<n<10K
---
## LCHAIM: Investigating Long Context Reasoning in Hebrew 

### Overview

LCHAIM is a dataset designed to evaluate Natural Language Inference (NLI) models in Hebrew. Unlike English, Hebrew is a Morphologically Rich Language (MRL), requiring more research to develop robust NLI models. LCHAIM provides a benchmark for models that need to handle long premises and complex reasoning in Hebrew.

### Dataset Description

LCHAIM was created by translating and validating the English ConTRoL dataset into Hebrew. It consists of 8,325 context-hypothesis pairs that require various types of reasoning, including:

* Coreferential reasoning

* Temporal reasoning

* Logical reasoning

* Analytical reasoning
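To illustrate the task format, the sketch below builds a few toy Hebrew context–hypothesis pairs with pandas. The field names (`context`, `hypothesis`, `label`) and the examples are hypothetical and not necessarily LCHAIM's actual schema; real LCHAIM premises are long contexts, unlike these single sentences.

```python
import pandas as pd

# Toy NLI pairs in Hebrew (illustrative only; column names are
# assumptions, not LCHAIM's documented schema).
pairs = pd.DataFrame([
    {"context": "דני נסע לתל אביב אתמול בבוקר.",
     "hypothesis": "דני היה בתל אביב אתמול.",
     "label": "entailment"},
    {"context": "דני נסע לתל אביב אתמול בבוקר.",
     "hypothesis": "דני נשאר בירושלים כל היום.",
     "label": "contradiction"},
    {"context": "דני נסע לתל אביב אתמול בבוקר.",
     "hypothesis": "דני נסע לתל אביב ברכבת.",
     "label": "neutral"},
])

print(pairs[["hypothesis", "label"]])
```

Each row pairs one premise (context) with one hypothesis; a model must decide whether the context entails, contradicts, or is neutral toward the hypothesis.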

### Performance Benchmarks

Experiments with LCHAIM highlight the challenges of contextual reasoning in Hebrew. Key results include:

Fine-tuning the LongHero model on both Hebrew NLI datasets and LCHAIM yielded a mean accuracy of 52%, about 35 percentage points below human performance.

Large Language Models (LLMs) evaluated in a few-shot setting included:

* Gemma-9B

* Dicta-LM-2.0-7B

* GPT-4o

The best of these reached a top mean accuracy of 60.12%.
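Mean accuracy here is simply the fraction of context–hypothesis pairs whose predicted label matches the gold label. A minimal sketch (the label lists are invented for illustration):

```python
def mean_accuracy(gold, predicted):
    """Fraction of examples where the predicted NLI label matches the gold label."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must have the same length")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Hypothetical gold labels and model predictions for four pairs.
gold = ["entailment", "neutral", "contradiction", "entailment"]
pred = ["entailment", "contradiction", "contradiction", "entailment"]
print(mean_accuracy(gold, pred))  # 0.75
```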

### Citation

If you use LCHAIM in your research, please cite our work:

```bibtex
@inproceedings{malul2025lchaim,
  title={LCHAIM: Investigating Long Context Reasoning in Hebrew},
  author={Malul, Ehud and Perets, Oriel and Mor, Ziv and Kassel, Yigal and Sulem, Elior},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2025},
  pages={7928--7939},
  year={2025}
}
```
### License

LCHAIM is released under the MIT License.

### Contact

For questions or feedback, please contact orielpe@post.bgu.ac.il.