
LogicBench: Logical Chain-of-Thought Training Dataset

Dataset Description

LogicBench is a large-scale dataset for training models on logical and causal reasoning with chain-of-thought (CoT) explanations. The dataset contains 40,000 training examples, each consisting of:

  • Facts: A set of 4-10 atomic facts about entities
  • Rules: A set of 3-8 Horn-style inference rules (if-then statements with conjunctions)
  • Query: A single query statement to prove or refute
  • Valid CoT: A step-by-step proof chain that is logically valid
  • Near-miss CoT: An intentionally incorrect proof chain that differs by exactly one corruption
  • Response: Binary label (1 if query is provable, 0 otherwise)

Dataset Structure

Data Fields

  • question (string): Contains the facts, rules, and query formatted as:

    Facts:
    [Entity] is [property] [PERIOD]
    ...
    
    Rules:
    [premise1] [AND] [premise2] [IMPLY] [conclusion] [PERIOD]
    ...
    
    Query:
    [Entity] is [property] [PERIOD]
    
  • valid_cot (string): A logically sound chain-of-thought reasoning that correctly determines whether the query follows from the facts and rules.

  • near_miss_cot (string): A reasoning chain that is corrupted in exactly one way, using one of the following corruption types:

    • Reverse rule direction
    • Omit one necessary premise
    • Apply rule with mismatched predicate/variable
    • Substitute entity incorrectly
    • Use a rule that is not applicable
  • response (int64): Binary label where:

    • 1 = Query is logically entailed (provable)
    • 0 = Query is not entailed (not provable)
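
The `question` format above is regular enough to split mechanically. The following parser is an illustrative sketch (not part of the dataset's tooling): it relies only on the `Facts:` / `Rules:` / `Query:` section headers documented above.

```python
def parse_question(question: str) -> dict:
    """Split a LogicBench `question` string into its three sections.

    Returns a dict mapping each section name ("Facts", "Rules", "Query")
    to a list of its lines, with the [PERIOD] markers left intact.
    """
    sections = {"Facts": [], "Rules": [], "Query": []}
    current = None
    for line in question.splitlines():
        line = line.strip()
        if not line:
            continue  # blank separator between sections
        header = line.rstrip(":")
        if header in sections:
            current = header  # entering a new section
        elif current is not None:
            sections[current].append(line)
    return sections
```

For the example record shown later in this card, `parse_question(...)["Query"]` yields `["Alice is helpful [PERIOD]"]`.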

Data Splits

  • Train: 40,000 examples

Distribution

  • Provable queries (response=1): ~65%
  • Non-provable queries (response=0): ~35%

Intended Use

This dataset is designed for:

  1. Training reasoning models to perform logical inference with step-by-step explanations
  2. Evaluating chain-of-thought reasoning capabilities
  3. Contrastive learning using valid vs. near-miss reasoning chains
  4. Testing robustness to subtle logical errors
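
For use case 3, each record already pairs a valid chain with a near-miss chain, so preference pairs (e.g. for DPO-style training) fall out directly. The field names below match this card; `make_pair` itself is a hypothetical helper, not part of the dataset.

```python
def make_pair(example: dict) -> dict:
    """Build a (prompt, chosen, rejected) preference triple from one record.

    The valid CoT is the preferred completion; the near-miss CoT, which
    contains exactly one corruption, is the rejected one.
    """
    answer = "TRUE" if example["response"] == 1 else "FALSE"
    return {
        "prompt": example["question"] + "\n\nReasoning:",
        "chosen": example["valid_cot"] + "\nAnswer: " + answer,
        "rejected": example["near_miss_cot"] + "\nAnswer: " + answer,
    }
```

Note that both completions end with the same final answer: the contrastive signal comes from the reasoning chain, not the label.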

Example

{
  "question": "Facts:\nAlice is brave [PERIOD]\nAlice is kind [PERIOD]\n\nRules:\nbrave [AND] kind [IMPLY] helpful [PERIOD]\n\nQuery:\nAlice is helpful [PERIOD]",
  
  "valid_cot": "Step 1: From the facts, we know Alice has the following properties: brave, kind.\nStep 2: Apply rule 'brave AND kind [IMPLY] helpful'.\nStep 3: The premises (brave, kind) are satisfied.\nStep 4: By modus ponens, we derive that Alice is helpful.\nTherefore, QUERY is TRUE.",
  
  "near_miss_cot": "Step 1: From the facts, we know Alice has the following properties: brave, kind.\nStep 2: Apply rule 'brave AND kind [IMPLY] helpful'.\nStep 3: The premises (brave, kind) are partially satisfied (one premise missing).\nStep 4: By modus ponens, we derive that Alice is helpful.\nTherefore, QUERY is TRUE.",
  
  "response": 1
}

Dataset Creation

The dataset was programmatically generated using:

  • Forward chaining inference with Horn clauses
  • Systematic corruption of valid reasoning chains
  • A controlled mix of provable (~65%) and non-provable (~35%) queries
  • Validation to ensure consistency between facts, rules, and labels
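
The forward-chaining step can be sketched as follows. The actual generator for LogicBench is not published, so the names here (`forward_chain`, the `(entity, property)` fact encoding) are illustrative; the rule shape mirrors the card's Horn-style rules.

```python
def forward_chain(facts: set, rules: list) -> set:
    """Compute the closure of atomic facts under Horn-style rules.

    facts: set of (entity, property) pairs
    rules: list of (premises, conclusion), where premises is a tuple of
           properties that must all hold for the same entity.
    Iterates to a fixed point: stops when no rule derives anything new.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        entities = {e for e, _ in derived}
        for premises, conclusion in rules:
            for e in entities:
                if all((e, p) in derived for p in premises) and (e, conclusion) not in derived:
                    derived.add((e, conclusion))
                    changed = True
    return derived

# The example record from this card: the query is provable, so response = 1.
facts = {("Alice", "brave"), ("Alice", "kind")}
rules = [(("brave", "kind"), "helpful")]
closure = forward_chain(facts, rules)
```

A query is labeled provable (response = 1) exactly when it appears in this closure.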

Citation

If you use this dataset, please cite:

@dataset{logicbench2026,
  title={LogicBench: A Logical Chain-of-Thought Training Dataset},
  author={Amartya77},
  year={2026},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/Amartya77/LogicBench}}
}

License

MIT License
