---
license: apache-2.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/full_dataset/train.parquet
      - split: test
        path: data/full_dataset/test.parquet
      - split: validation
        path: data/full_dataset/validation.parquet
language:
  - en
tags:
  - uttt
  - u3t
  - ultimate-tic-tac-toe
  - game-ai
  - game-data
pretty_name: U3T Monte Carlo Tree Search Position Evaluations
size_categories:
  - 1M<n<10M
---

# Dataset Card for U3T

This dataset stores refactored Monte Carlo Tree Search (MCTS) evaluations, derived from arnowaczynski's utttai, for the game Ultimate Tic-Tac-Toe (U3T). MCTS evaluations provide probabilistic estimates of how advantageous a given game position or move is for a player.

## About Ultimate Tic-Tac-Toe

From Wikipedia:

> Ultimate tic-tac-toe is a board game composed of nine tic-tac-toe boards arranged in a 3 × 3 grid. Players take turns playing on the smaller tic-tac-toe boards until one of them wins on the larger board. Compared to traditional tic-tac-toe, strategy in this game is conceptually more difficult and has proven more challenging for computers.

Ultimate Tic-Tac-Toe remains a largely unexplored domain in AI research. While there have been some academic papers and small-scale attempts to develop AI for this game (some quite successful), not much effort has been put into making advanced bots. This dataset provides a structured and scalable resource for researchers and developers looking to build AI models for U3T. By leveraging MCTS evaluations, it offers a foundation for training AI to better understand strategic play in a complex yet structured environment. Given the lack of extensive research in this field, this dataset represents a unique opportunity to contribute to an emerging area of AI-driven gameplay.

## Dataset Details

This dataset contains over 8 million evaluated positions at varying depths of play (via utttai):

*Depth graph from utttai*

Each evaluated position counts the number of wins, draws, and losses, giving a probabilistic estimate of how good the position is. Each position also stores an array of legal moves and their respective MCTS evaluations.

## Uses

The MCTS evaluations can be used for many types of game evaluation. The per-position counts can be used to build a static evaluator, or the evaluations of every legal move from a position can be used to train a model to predict the best moves. There are many ways to use this set, but it is specifically tuned for building deep learning models that can play U3T or evaluate a static position. It is further recommended that any bot trained on this data as an initial step be fine-tuned with reinforcement learning.
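As an illustration, the per-move evaluations can serve as a simple policy target by ranking each legal move by its MCTS score. The sketch below uses made-up action records; real records come from the dataset's `actions` field described under Dataset Structure.

```python
# Rank a position's legal moves by (wins - losses) / visits.
# The action records below are made up for illustration; real ones come
# from the dataset's `actions` field (index, num_wins, num_draws, num_losses).

def move_score(action: dict) -> float:
    visits = action["num_wins"] + action["num_draws"] + action["num_losses"]
    if visits == 0:
        return 0.0
    return (action["num_wins"] - action["num_losses"]) / visits

def best_move(actions: list) -> int:
    """Return the board index of the highest-scoring legal move."""
    return max(actions, key=move_score)["index"]

actions = [
    {"index": 4, "num_wins": 60, "num_draws": 10, "num_losses": 30},
    {"index": 8, "num_wins": 20, "num_draws": 5, "num_losses": 75},
]
chosen = best_move(actions)  # index 4 scores 0.3, index 8 scores -0.55
```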

## Dataset Structure

### Full Dataset

This dataset stores evaluations for game positions in Ultimate Tic-Tac-Toe. Each data point represents a specific game state and its associated evaluation metrics. The full dataset stores all of the refactored data from the original dataset.

Features:

  • state (string):
    • Represents the current game state of the Ultimate Tic-Tac-Toe board.
    • Although the data is stored as a string, it should be interpreted as a byteboard, with each character representing one byte.
    • The format of the byte array is as follows. Greater detail can be found here.
      • byte_array = subgames (small boards, 81 bytes) / supergames (big board, 9 bytes) / current player (1 byte) / constraint (1 byte) / result (1 byte)
        • subgames represents the byteboard of the small-board tiles. The least significant bit refers to the lowest index, and the most significant bit to the largest.
          • Subgames Index Representation
        • supergames represents the byteboard of the big board. Likewise, the LSB refers to the lowest index and the MSB to the highest.
          • Supergames Index Representation
        • current player is the player to move: 1 for X, 2 for O.
        • constraint is the board index (0-8) the next player is required to play in, or 9 if any board is allowed.
        • result indicates the winner: 0 for none, 1 for X, 2 for O, or 3 for a draw.
  • num_visits (int64):
    • Indicates how many games were simulated from this position: num_visits = num_wins + num_draws + num_losses.
  • num_wins (int64):
    • The number of times a player has won from this particular game state. The "player" refers to which player's turn it is.
  • num_draws (int64):
    • The number of times the game has resulted in a draw from this game state.
  • num_losses (int64):
    • The number of times a player has lost from this game state. The "player" refers to which player's turn it is.
  • actions (list of objects):
    • A list of evaluations for every legal move from the given position.
    • Each action is represented as an object with the following fields:
      • index (int64):
        • The index of the move made on the board.
      • num_wins (int64):
        • The number of wins that resulted from taking this action.
      • num_draws (int64):
        • The number of draws that resulted from taking this action.
      • num_losses (int64):
        • The number of losses that resulted from taking this action.
      • symbol (int64):
        • Indicates the symbol placed on the board (X = 1, O = 2).
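Putting the field descriptions above together, a minimal decoder for the 93-byte `state` string might look like the sketch below. It assumes each character's code point is the raw byte value; the function and field names are illustrative, not from the dataset's own tooling.

```python
# Sketch: slice the 93-byte state string (81 + 9 + 1 + 1 + 1 bytes) into
# its fields. Assumes each character's code point is the byte value.

def decode_state(state: str) -> dict:
    data = [ord(c) for c in state]
    assert len(data) == 93, "expected 81 + 9 + 1 + 1 + 1 = 93 bytes"
    return {
        "subgames": data[0:81],      # small-board tiles
        "supergames": data[81:90],   # big-board cells
        "current_player": data[90],  # 1 = X, 2 = O
        "constraint": data[91],      # 0-8 = forced board, 9 = any
        "result": data[92],          # 0 none, 1 X, 2 O, 3 draw
    }

# An empty board with X to move and no constraint:
empty = "".join(chr(b) for b in [0] * 81 + [0] * 9 + [1, 9, 0])
decoded = decode_state(empty)
```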

### State Evaluation

This subdirectory contains the MCTS evaluation of a given U3T position, given a tensor game state. Important details on what the "score" is are described below. This streamlines the process of building deep learning networks for game evaluation, as the tensor state and output score are provided directly. Refer to the previous explanations to understand visually how the board looks. Note, however, that if the tensor is printed out, it will "look" reflected across the y-axis relative to the actual game state it represents (each row holds the correct contents, but appears reversed from its visual representation).

Features:

  • tensor_state (tensor int32):
    • Represents the current game state of the Ultimate Tic-Tac-Toe board.
    • The tensor has shape (4, 9, 9). Each of the 4 layers encodes a different feature, and the (9, 9) dimensions represent the 9 × 9 U3T subgame board.
      • First layer: X's occupied tiles (1 for X, 0 otherwise)
      • Second layer: O's occupied tiles (1 for O, 0 otherwise)
      • Third layer: Legal move positions (1 for valid moves, 0 otherwise)
      • Fourth layer: Current player's turn (9 by 9 board filled with 1s if X, 0s if O)
  • score (float32):
    • Indicates the 'score' of the position (the static evaluation), defined as (num_wins - num_losses) / num_visits. This bounds scores to the range [-1, 1].
    • It is important to note that, unlike a regular evaluation, -1 is not assumed to be better for O, nor 1 better for X. Rather, a positive score means that the position is better for the current player.
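To make the two features concrete, here is a stdlib-only sketch of the score formula and of an empty-board tensor laid out as described above (the example counts are made up):

```python
# Score as defined above: (wins - losses) / num_visits, bounded to [-1, 1],
# from the perspective of the player to move. Example counts are made up.
def score(num_wins: int, num_draws: int, num_losses: int) -> float:
    num_visits = num_wins + num_draws + num_losses
    return (num_wins - num_losses) / num_visits

# A (4, 9, 9) tensor for the empty board with X to move, as nested lists:
# layer 0 = X tiles, layer 1 = O tiles, layer 2 = legal moves (all legal),
# layer 3 = turn plane (all 1s because it is X's turn).
def empty_board_tensor() -> list:
    def plane(value):
        return [[value] * 9 for _ in range(9)]
    return [plane(0), plane(0), plane(1), plane(1)]

tensor_state = empty_board_tensor()
example_score = score(70, 10, 20)  # (70 - 20) / 100 = 0.5
```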

## Dataset Creation

Documentation on the generation of this dataset can be found here. Roughly, the pipeline was:

1. Convert the utttai game states to the standardized format documented above.
2. Process the txt file into JSONL files, one per depth.
3. Convert each JSON line in the JSONL to a dictionary with a new depth key.
4. Append all of the dictionaries to a single list.
5. Store the list as one large Parquet file.
6. Split the data accordingly.
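The JSONL-to-records step could be sketched as follows. The in-memory JSONL text stands in for the real per-depth files, and the function and field names are illustrative:

```python
import json

# Sketch of one conversion step: parse a depth's JSONL file, attach a
# `depth` key to each record, and collect the records into a list.
def load_depth_records(jsonl_text: str, depth: int) -> list:
    records = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        record["depth"] = depth
        records.append(record)
    return records

# Stand-in for a real per-depth JSONL file:
jsonl = '{"num_visits": 100}\n{"num_visits": 50}'
records = load_depth_records(jsonl, depth=0)
```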

### Dataset Splitting

The dataset was split into train, test, and validation sets, with the randomization stratified by depth so that each set receives the correct proportion of every depth (i.e., the train set has 70% of the depth 0, 1, 2, ... game states, the test set has 20%, etc.). This applies to all datasets.

  • Training Set: The training set consists of 70% of the data.
  • Testing Set: The test set contains 20% of the data.
  • Validation Set: The validation set contains 10% of the data.
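The depth-stratified 70/20/10 split described above could be sketched like this (the seed and record shape are illustrative, not the actual values used):

```python
import random
from collections import defaultdict

# Shuffle within each depth, then cut 70% / 20% / 10% into
# train / test / validation. Integer arithmetic keeps the cuts exact.
def split_by_depth(records, seed=0):
    by_depth = defaultdict(list)
    for record in records:
        by_depth[record["depth"]].append(record)
    rng = random.Random(seed)
    train, test, validation = [], [], []
    for group in by_depth.values():
        rng.shuffle(group)
        n = len(group)
        n_train, n_test = n * 7 // 10, n * 2 // 10
        train += group[:n_train]
        test += group[n_train:n_train + n_test]
        validation += group[n_train + n_test:]
    return train, test, validation

# Toy records: 3 depths with 10 game states each.
records = [{"depth": d, "id": i} for d in range(3) for i in range(10)]
train, test, validation = split_by_depth(records)  # sizes 21, 6, 3
```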

### Curation Rationale

The original dataset by arnowaczynski was not structured for large-scale machine learning and used an unconventional indexing method (detailed here). To optimize for neural network training, we refactored it and uploaded the improved version to Hugging Face.

### Source Data

arnowaczynski generated these datasets in their GitHub Repository. Further documentation on how the dataset was generated is available at that link.

## Contribution

Contributions are highly encouraged and appreciated. They need not expand this dataset directly; building different models or other datasets is also welcome. Opening PRs here or reaching out to me individually is encouraged.