cff-version: 1.2.0
message: "If you use this benchmark in your research, please cite it as below."
type: software
title: "ChaosBench-Logic: A Benchmark for Evaluating Large Language Models on Complex Reasoning about Dynamical Systems"
version: 2.0.0
date-released: 2026-02-18
url: "https://github.com/11NOel11/ChaosBench-Logic"
repository-code: "https://github.com/11NOel11/ChaosBench-Logic"
license: MIT
authors:
  - family-names: "Thomas"
    given-names: "Noel"
    affiliation: "Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)"
keywords:
- "large language models"
- "benchmark"
- "dynamical systems"
- "chaos theory"
- "reasoning"
- "evaluation"
- "LLM"
- "GPT-4"
- "Claude"
- "Gemini"
- "LLaMA"
abstract: >
  ChaosBench-Logic is a comprehensive benchmark for evaluating Large Language
  Models (LLMs) on complex reasoning tasks involving chaotic and non-chaotic
  dynamical systems. The benchmark consists of 40,886 v2 questions plus 621
  archived v1 questions (41,507 total) spanning 165 dynamical systems (30 core
  + 135 from the dysts library) across 10 task families: atomic, multi-hop,
  indicator diagnostics, regime transition, FOL inference, cross-indicator,
  extended systems, consistency paraphrase, adversarial, and perturbation
  robustness. The ontology covers 27 predicates with first-order logic axioms
  enabling up to 4-hop reasoning chains. The codebase supports multiple
  state-of-the-art LLMs (GPT-4, Claude-3.5, Gemini-2.5, LLaMA-3 70B) in
  both zero-shot and chain-of-thought modes, providing comprehensive metrics
  including overall accuracy, task-specific breakdowns, FOL violation
  detection, and bias analysis.
references:
  - type: article
    authors:
      - family-names: "Thomas"
        given-names: "Noel"
    title: "ChaosBench-Logic: Benchmarking Large Language Models on Complex Reasoning about Dynamical Systems"
    year: 2025
    journal: "arXiv preprint"