---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- game-theory
- reasoning
- connect4
- synthetic-dataset
- logic
pretty_name: Connect4 Reasoning Task
size_categories:
- n<1K
---
| | |

# Dataset Card for Connect4 Reasoning Task

## 1. Dataset Summary
This dataset is dynamically constructed with the **GAMEBoT** framework using the **Connect4** game. It is designed to evaluate the ability of Large Language Models (LLMs) to perform symbolic reasoning, parse board states, and plan with lookahead.

By serializing 6 × 7 grid states into a text-based format, the dataset challenges models to identify winning configurations in a deterministic environment with a state-space complexity of approximately 4.5 × 10<sup>12</sup>. Because states are generated dynamically, the evaluation remains robust against data contamination.

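As an illustration of the text-based serialization the card describes, here is a minimal sketch of rendering a 6 × 7 Connect4 grid as text. The cell symbols (`.`, `X`, `O`), the column header, and the `serialize_board` helper are all assumptions for this example; the exact format used by GAMEBoT may differ.

```python
# Hypothetical serialization of a 6x7 Connect4 board into text.
# Cell symbols and layout are illustrative, not the official GAMEBoT format.
EMPTY, P1, P2 = ".", "X", "O"

def serialize_board(board):
    """Render a 6x7 grid (list of 6 rows, top row first) as text.

    Columns are labeled 0-6 so a model can refer to moves by column index.
    """
    header = " ".join(str(c) for c in range(7))
    rows = [" ".join(row) for row in board]
    return "\n".join([header] + rows)

# A small example state: 'X' has three in a row on the bottom row,
# so columns 0 and 4 would complete 4-in-a-row.
board = [[EMPTY] * 7 for _ in range(6)]
board[5][1] = board[5][2] = board[5][3] = P1
board[4][2] = P2

print(serialize_board(board))
```

A serialization like this keeps each state within a few dozen tokens while preserving the full grid, which is what lets a text-only model reason about vertical, horizontal, and diagonal lines.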
## 2. Construction Methodology
- **State Generation**: Two randomized agents simulated gameplay to generate a diverse distribution of board states, covering both balanced and critical tactical scenarios.
- **Filtering and Balancing**:
  - Duplicate states were removed.
  - To prevent label imbalance (where LLMs might score high by simply predicting "no winning moves"), states with empty answers were downsampled to a **maximum ratio of 20%**.
- **Ground Truth**: All labels were verified using a perfect solver integrated within the GAMEBoT engine.

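The balancing step above can be sketched as follows. This is a minimal illustration of downsampling empty-answer states to at most 20% of the dataset; the `states` schema (dicts with an `answer` list of winning columns) and the function name are assumptions, and the real GAMEBoT pipeline may differ in detail.

```python
import random

def downsample_empty(states, max_empty_ratio=0.2, seed=0):
    """Keep all states with non-empty answers; randomly downsample
    empty-answer states so they form at most `max_empty_ratio` of the
    result. Illustrative only; not the actual GAMEBoT code.
    """
    rng = random.Random(seed)
    empty = [s for s in states if not s["answer"]]
    nonempty = [s for s in states if s["answer"]]
    # Largest n_empty satisfying n_empty / (n_empty + n_nonempty) <= ratio.
    max_empty = int(max_empty_ratio * len(nonempty) / (1 - max_empty_ratio))
    kept_empty = rng.sample(empty, min(len(empty), max_empty))
    return nonempty + kept_empty
```

Without this step, random simulation produces a large majority of states with no immediate winning move, and a model could inflate its score by always answering "none".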
## 3. Evaluation Protocol
For each instance, the model is prompted to solve two sub-problems:
1. **Self-Winning**: "Are there any potential winning moves to form 4-in-a-row for you? Output all winning moves."
2. **Opponent-Winning**: "Are there any potential winning moves to form 4-in-a-row for your opponent? Output all winning moves."

The evaluation extracts the final answer from the model's **Chain-of-Thought (CoT)** and compares it against the engine-validated ground truth.

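The extraction-and-comparison step could look like the following sketch. It assumes the model ends its CoT with a line of the form `Answer: 2, 5` or `Answer: none`; that answer-line convention, and the helper names `extract_answer` and `score`, are assumptions for illustration, since the card does not specify the actual GAMEBoT extraction rules.

```python
import re

def extract_answer(cot_text):
    """Pull the final answer from a chain-of-thought transcript.

    Takes the last line matching 'Answer: ...' and parses it into a set
    of winning columns; 'none' maps to the empty set.
    """
    matches = re.findall(r"Answer:\s*(.*)", cot_text)
    if not matches:
        return None  # no parsable answer; counts as incorrect
    last = matches[-1].strip().lower()
    if last in ("none", "no winning moves", ""):
        return set()
    return {int(c) for c in re.findall(r"\d+", last)}

def score(cot_text, ground_truth):
    """Exact-match scoring: the predicted set of winning columns must
    equal the engine-validated ground truth."""
    return extract_answer(cot_text) == set(ground_truth)
```

Set-based exact matching makes the prompt's "output all winning moves" requirement strict: missing a winning column or adding a spurious one both count as failures.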
## 4. Citation
If you find this dataset helpful in your research, please cite the following work:

```bibtex
@inproceedings{lin2025gamebot,
  title={GAMEBoT: Transparent assessment of LLM reasoning in games},
  author={Lin, Wenye and Roberts, Jonathan and Yang, Yunhan and Albanie, Samuel and Lu, Zongqing and Han, Kai},
  booktitle={ACL},
  year={2025}
}
```