---
dataset_info:
- config_name: obfus_blood_relation
  features:
  - name: Base_Question
    dtype: string
  - name: Obfuscation_Level1_Question
    dtype: string
  - name: Obfuscation_Level2_Question
    dtype: string
  - name: Answer
    dtype: string
  splits:
  - name: test
    num_bytes: 51222
    num_examples: 106
  download_size: 22911
  dataset_size: 51222
- config_name: obfus_direction_sense
  features:
  - name: Base_Question
    dtype: string
  - name: Obfuscated_Question
    dtype: string
  - name: Answer
    dtype: string
  splits:
  - name: test
    num_bytes: 80828
    num_examples: 126
  download_size: 33473
  dataset_size: 80828
- config_name: obfus_fol
  features:
  - name: Base_Premise_NL
    dtype: string
  - name: Base_Premise_FOL
    dtype: string
  - name: Base_Conclusion_NL
    dtype: string
  - name: Base_Conclusion_FOL
    dtype: string
  - name: Obfuscated_Premise_NL
    dtype: string
  - name: Obfuscated_Premise_FOL
    dtype: string
  - name: Obfuscated_Conclusion_NL
    dtype: string
  - name: Obfuscated_Conclusion_FOL
    dtype: string
  - name: Answer
    dtype: bool
  splits:
  - name: test
    num_bytes: 229459
    num_examples: 119
  download_size: 103036
  dataset_size: 229459
- config_name: obfus_number_series
  features:
  - name: Base_Series
    dtype: string
  - name: Obfuscated_Series
    dtype: string
  - name: Answer
    dtype: float64
  splits:
  - name: test
    num_bytes: 50960
    num_examples: 300
  download_size: 13940
  dataset_size: 50960
configs:
- config_name: obfus_blood_relation
  data_files:
  - split: test
    path: obfus_blood_relation/test-*
- config_name: obfus_direction_sense
  data_files:
  - split: test
    path: obfus_direction_sense/test-*
- config_name: obfus_fol
  data_files:
  - split: test
    path: obfus_fol/test-*
- config_name: obfus_number_series
  data_files:
  - split: test
    path: obfus_number_series/test-*
---
# LogiQAte

This benchmark is introduced in the paper:

**“Don’t Judge a Book by its Cover: Testing LLMs’ Robustness Under Logical Obfuscation”**
*Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2026)*

Paper link: https://arxiv.org/abs/2602.01132
LogiQAte is a diagnostic benchmark designed to evaluate whether large language models truly reason or merely rely on surface pattern matching.
The benchmark tests models under logical obfuscation: rewriting problems into logically equivalent but structurally different forms.
If a model genuinely understands the underlying logic, its performance should remain stable across such rewrites.
In practice, we find that performance drops sharply, exposing brittleness in state-of-the-art LLMs.
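The stability check described above boils down to comparing accuracy on base questions with accuracy on their obfuscated rewrites. A minimal sketch, where `robustness_drop` is a hypothetical helper (not part of the benchmark's released code) and the correctness flags are illustrative:

```python
def robustness_drop(base_correct, obfus_correct):
    """Accuracy on base questions minus accuracy on their obfuscated
    rewrites, given aligned lists of per-example booleans.

    A model that genuinely follows the logic should show a drop near
    zero; a surface pattern matcher shows a large positive drop.
    """
    base_acc = sum(base_correct) / len(base_correct)
    obfus_acc = sum(obfus_correct) / len(obfus_correct)
    return base_acc - obfus_acc

# Hypothetical run: 9/10 base questions answered correctly,
# but only 6/10 of their obfuscated counterparts.
drop = robustness_drop([True] * 9 + [False], [True] * 6 + [False] * 4)
print(f"robustness drop: {drop:.2f}")
```

The same comparison applies per config; `obfus_blood_relation` additionally allows comparing the two obfuscation levels against the base question.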
## Dataset Organization

Each reasoning task type (`obfus_blood_relation`, `obfus_direction_sense`, `obfus_fol`, `obfus_number_series`) is released as a separate configuration in this Hugging Face repository.
All data is provided in the `test` split, as the benchmark is intended for evaluation only.
## Loading the Dataset

```python
from datasets import load_dataset

# Replace "obfus_fol" with any of the config names listed above.
ds = load_dataset("abhilekhborah/LogiQAte", "obfus_fol", split="test")
print(ds[0])
```
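Each config exposes the flat string fields listed in the metadata above. As a sketch, a prompt for the `obfus_fol` config could be assembled from those fields like this; the `format_fol_prompt` helper and the example row are illustrative, not part of the dataset tooling:

```python
def format_fol_prompt(example):
    # Field names match the obfus_fol config's features.
    return (
        "Premises:\n"
        f"{example['Obfuscated_Premise_NL']}\n\n"
        "Conclusion:\n"
        f"{example['Obfuscated_Conclusion_NL']}\n\n"
        "Does the conclusion follow from the premises? Answer True or False."
    )

# Illustrative row in the shape of an obfus_fol example (the real rows
# come from load_dataset as shown above).
row = {
    "Obfuscated_Premise_NL": "Every zib is a quux. Blip is a zib.",
    "Obfuscated_Conclusion_NL": "Blip is a quux.",
    "Answer": True,
}
print(format_fol_prompt(row))
```

The gold label lives in the `Answer` field (`bool` for `obfus_fol`, `string` for the relation and direction tasks, `float64` for `obfus_number_series`), so model outputs should be parsed accordingly.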
## Citation

If you use LogiQAte, please cite:
```bibtex
@inproceedings{borah2026logiqate,
  title     = {Don't Judge a Book by its Cover: Testing LLMs' Robustness Under Logical Obfuscation},
  author    = {Borah, Abhilekh and Ghosh, Shubhra and Joshi, Kedar and Guru, Aditya Kumar and Ghosh, Kripabandhu},
  booktitle = {Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
  year      = {2026}
}
```