---
license: cc-by-nc-sa-4.0
---
# Mystery Zebra Dataset

This is the Mystery Zebra dataset, created as part of the paper "Lexical Recall or Logical Reasoning: Probing the Limits of Reasoning Abilities in Large Language Models". We make the dataset available in .csv format for convenience. The code used to generate the puzzles in this dataset can be found at: https://github.com/arg-tech/MysteryZebra

The structure of the dataset is straightforward to parse. In the following, we detail the content of each column in the .csv file:
- ID: This field contains the ID of the datapoint.
  - For puzzles in the first part of the benchmark, it is composed as follows: `Pt1_Einstein_1domain_replacements-0`, indicating:
    `<part_of_benchmark>_<classic_puzzle_name>_<obfuscation_type>-<running_number>`
  - For puzzles in the second part of the benchmark, it is composed as follows: `Pt2_1x6_level1-0`, indicating:
    `<part_of_benchmark>_<columns x rows>_<level>-<running_number>`
- Clues: This field contains the set of clues given to solve the puzzle, written in natural language.
  - For any puzzle that is not a `Zebra_original` or an `Einstein_original`, the clues follow the uniform phrasing detailed in our paper.
- SolutionGrid: This field contains the correct solution grid for the puzzle in the form of a Python dictionary, where each key corresponds to a domain name (e.g. pets) and each value is an ordered list whose indices correspond to positions in the grid.
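As a minimal sketch of how a row might be parsed (assuming the column names `ID`, `Clues`, and `SolutionGrid` as described above; the file name and helper name are hypothetical, not part of the dataset):

```python
import ast
import csv

def parse_row(row):
    """Parse one row of the Mystery Zebra .csv into structured fields.

    Assumes the columns "ID", "Clues", and "SolutionGrid"; the ID is split
    on the conventions sketched above, e.g. "Pt1_Einstein_1domain_replacements-0".
    """
    # Separate the trailing running number from the rest of the ID.
    id_body, running_number = row["ID"].rsplit("-", 1)
    # The first underscore-separated token names the part of the benchmark.
    part, _, descriptor = id_body.partition("_")
    # The solution grid is stored as a Python dict literal: keys are domain
    # names, values are position-ordered lists.
    grid = ast.literal_eval(row["SolutionGrid"])
    return {
        "part": part,
        "descriptor": descriptor,
        "running_number": int(running_number),
        "clues": row["Clues"],
        "solution_grid": grid,
    }

# Hypothetical usage with a local copy of the dataset:
# with open("mystery_zebra.csv", newline="", encoding="utf-8") as f:
#     for row in csv.DictReader(f):
#         puzzle = parse_row(row)
```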
## Referencing
Please reference the corpus as follows:
[INSERT BIB INFO ONCE THERE]