Sumobot AI Assistant Models
This repository contains three Jupyter notebooks for training and exporting AI models for Sumobot gameplay:
- `train_ml.ipynb`: a multi-class classification model (ML) using TensorFlow
- `train_slm.ipynb`: a sequence-level model (SLM) using a GPT-style transformer
- `train_llm.ipynb`: a large language model (LLM) using LoRA fine-tuning
Get Started
Clone including the datasets

Note: this will fetch all `dataset/*.csv` Sumobot logs.

- Run `git lfs install`
- Run `git clone https://huggingface.co/datasets/arbyazra123/sumobot_ml`
Clone without the datasets

Note: when you run the .ipynb files, they scan the local `dataset` folder; if it is empty, the data will be fetched from the online repository.

- Run `GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/arbyazra123/sumobot_ml`
Running Online Notebooks
- Upload the .ipynb file that you want to run, for example `train_llm.ipynb`
- Upload `dataset_helper.py` to the notebook; in Google Colab, for example, drag and drop `dataset_helper.py` onto Colab's Files panel
- `train_llm.ipynb` requires `trl`; install it by running `!pip install trl`
- `train_ml.ipynb` requires `tensorflow`; install it by running `!pip install tensorflow`
- If the online notebook still raises `ModuleNotFoundError`, upload `requirements.txt` to the notebook, then run `!pip install -r requirements.txt`
- Done; you do not need to follow the remaining (offline) steps.
Running the Offline Notebooks
Installing Requirements
Install all dependencies using pip from Command Prompt / PowerShell (Windows) or a terminal (macOS/Linux):

```
pip install -r requirements.txt
```
This installs all required packages (PyTorch, TensorFlow, Hugging Face, ONNX, etc.) and works on Windows, macOS, and Linux.
Prerequisites
- Jupyter Notebook installed
- Python 3.10+
- The dataset will be fetched from Hugging Face if it is not present locally
1. Run `train_ml.ipynb` (Multi-Class Classification)
- Open Jupyter: run `jupyter notebook`
- Navigate to `train_ml.ipynb`
- Run all cells

This trains a Keras model and exports it as `ml.onnx`.

Output: `ml.onnx` and `action_labels.json`
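As a quick sanity check of the exported classifier, the sketch below maps a prediction back to an action name. The label list and per-class scores here are hypothetical placeholders; in practice you would load the labels from `action_labels.json` and obtain the scores by running `ml.onnx` (e.g. with `onnxruntime`).

```python
# Hypothetical label set; the real one is written to action_labels.json
# by train_ml.ipynb (stand-in for json.load(open("action_labels.json"))).
action_labels = ["forward", "turn_left", "turn_right", "reverse"]

# Hypothetical per-class scores, as the ONNX model might return them.
probs = [0.10, 0.70, 0.15, 0.05]

# Pick the highest-scoring class and map it back to an action name.
best = max(range(len(probs)), key=lambda i: probs[i])
print(action_labels[best])  # turn_left
```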
2. Run `train_slm.ipynb` (Sequence Model)
- Open Jupyter
- Navigate to `train_slm.ipynb`
- Run all cells

This trains a GPT-style SLM and exports it as `slm.onnx`.

Output: `slm.onnx` and `slm_tokenizer.json`
⚠️ Note: this model works best with short, structured text inputs (e.g., bot positions, angles). You can adjust `max_dataset` to limit the training data.
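If the notebook applies `max_dataset` as a simple truncation of the loaded rows (an assumption; check the actual cell), the effect amounts to:

```python
# Hypothetical list of training rows loaded from the dataset CSVs.
rows = [f"log line {i}" for i in range(5000)]

max_dataset = 1000      # cap the number of training examples
rows = rows[:max_dataset]

print(len(rows))  # 1000
```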
3. Run `train_llm.ipynb` (LLM with LoRA)
- Open Jupyter
- Navigate to `train_llm.ipynb`
- Run all cells

This fine-tunes a Qwen-0.5B model using LoRA and exports the adapter.

Output: `adapters/qwen2.5_0.5b_lora/` (contains model + tokenizer)
Requires ~4-8GB of VRAM for full training. For low-end systems, reduce `max_dataset` or `num_train_epoch`.
Adjusting Configurations
Modify key settings in each notebook:
In `train_ml.ipynb`:
- Change `features` to include/exclude bot or enemy stats
- Adjust `batch_size`, `epochs`, or `learning_rate` for performance
- Set `max_dataset` to limit training data (e.g., `max_dataset=1000`)
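For orientation, the tunable settings above might look roughly like this inside the notebook. The variable names come from the list above; the feature names shown are illustrative examples drawn from the log format, not a definitive list:

```python
# Illustrative feature subset; edit to include/exclude bot or enemy stats.
features = [
    "AngleToEnemyScore",
    "DistanceToEnemyScore",
    "NearBorderArenaScore",
    "FacingToArena",
]

batch_size = 32        # larger = faster epochs, more memory
epochs = 10            # more epochs = longer training
learning_rate = 1e-3   # lower = slower but more stable convergence
max_dataset = 1000     # cap training examples for quick experiments

print(len(features), batch_size)
```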
In `train_slm.ipynb`:
- Adjust `block_size`, `batch_size`, `n_layers`, `lr`
- Set `max_dataset` to control training data size
- Change `prompt` in `generate()` to test different inputs
In `train_llm.ipynb`:
- Set `max_dataset` to limit training (e.g., `max_dataset=10000`)
- Adjust `batches_per_device`, `gradient_accumulation`, `learning_rate`, or `num_train_epoch`
- Modify `prompt` in the inference section to test different behaviors
Notes for All Platforms
| Platform | Notes |
|---|---|
| Windows | Use `pip install` in a terminal; Jupyter works via `jupyter notebook` or `jupyter lab` |
| macOS | Use `pip` or `pip3`; Jupyter works via `jupyter notebook` or `jupyter lab` |
| Linux (Ubuntu/Debian) | Use `pip install` and `jupyter notebook` |
To rerun: Restart Jupyter kernel and re-run from the top.
Known Issues & Tips
- `train_llm.ipynb` may fail without a GPU with sufficient VRAM; use `device_map="auto"` or limit `max_dataset`
- `train_ml.ipynb`: ensure `cleaned_log.csv` has the columns `Name`, `Duration`, and the listed features
- Test outputs by changing the prompt or input data
Example Prompt Test (for all models)
For `train_slm.ipynb` and `train_llm.ipynb`, you can test with:
BotPos=[2.23,2.25], BotRot=228, EnemyPos=[2.87,0.39], EnemyRot=87, AngleToEnemy=-29.68, AngleToEnemyScore=0.87, DistanceToEnemyScore=0.79, NearBorderArenaScore=0.42, FacingToArena=0.65. Suggested Action:
The models will generate a response based on the provided game state.
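To build such prompts from live game state rather than typing them by hand, a small formatter along these lines works. The field names and value formatting follow the example prompt above; the helper itself is illustrative and not part of the notebooks:

```python
def fmt(v):
    """Format a value like the example prompt: lists as [x,y], no spaces."""
    if isinstance(v, list):
        return "[" + ",".join(str(x) for x in v) + "]"
    return str(v)

state = {
    "BotPos": [2.23, 2.25],
    "BotRot": 228,
    "EnemyPos": [2.87, 0.39],
    "EnemyRot": 87,
    "AngleToEnemy": -29.68,
    "AngleToEnemyScore": 0.87,
    "DistanceToEnemyScore": 0.79,
    "NearBorderArenaScore": 0.42,
    "FacingToArena": 0.65,
}

# Join fields in insertion order and append the completion cue.
prompt = ", ".join(f"{k}={fmt(v)}" for k, v in state.items()) + ". Suggested Action:"
print(prompt)
```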
Summary
| Model | Use Case | Exported To |
|---|---|---|
| ML | Action classification | ml.onnx |
| SLM | Sequence prediction | slm.onnx |
| LLM | Natural language reasoning | adapters/qwen2.5_0.5b_lora/ |