# Sumobot AI Assistant Models
This repository contains three Jupyter notebooks for training and exporting AI models for Sumobot gameplay:
- `train_ml.ipynb`: a multi-class classification model (ML) using TensorFlow
- `train_slm.ipynb`: a sequence-level model (SLM) using a GPT-style transformer
- `train_llm.ipynb`: a large language model (LLM) using LoRA fine-tuning
---
## Get Started
### Clone including the datasets
Note: This fetches all `dataset/*.csv` sumobot logs.
1. Run `git lfs install`
2. Run `git clone https://huggingface.co/datasets/arbyazra123/sumobot_ml`
### Clone without datasets
Note: When you run the .ipynb files, they scan the local `dataset` folder. If it is empty, the data is fetched online.
1. Run `GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/arbyazra123/sumobot_ml`
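The local-first behaviour looks roughly like the sketch below. This is only illustrative: the real logic lives in `dataset_helper.py`, and the download call and variable names here are assumptions, not the helper's actual API.
```python
# Minimal sketch of the local-first dataset loading described above.
# Assumption: the real implementation in dataset_helper.py may differ.
import glob
import pandas as pd

csv_paths = glob.glob("dataset/*.csv")
if not csv_paths:
    # No local logs: pull the CSVs from the Hugging Face dataset repo instead
    from huggingface_hub import snapshot_download
    repo_dir = snapshot_download(
        "arbyazra123/sumobot_ml", repo_type="dataset", allow_patterns=["dataset/*.csv"]
    )
    csv_paths = glob.glob(f"{repo_dir}/dataset/*.csv")

# Concatenate all sumobot logs into a single frame
logs = pd.concat((pd.read_csv(p) for p in csv_paths), ignore_index=True)
```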
### Running Online Notebooks
1. Upload the notebook you want to run, for example `train_llm.ipynb`
2. Upload `dataset_helper.py` to the notebook environment; in Google Colab, for example, drag and drop `dataset_helper.py` onto Colab's Files panel
3. `train_llm.ipynb` requires `trl`; install it by running `!pip install trl`
4. `train_ml.ipynb` requires `tensorflow`; install it by running `!pip install tensorflow`
5. If the online notebook still raises `ModuleNotFoundError`, upload `requirements.txt` to the notebook and run `!pip install -r requirements.txt`
6. That's it; you do not need to follow the offline steps below.
### Running the Offline Notebooks
#### Installing Requirements
Install all dependencies using `pip` from Command Prompt / PowerShell (Windows) or a terminal (macOS/Linux):
```bash
pip install -r requirements.txt
```
> This installs all required packages (PyTorch, TensorFlow, Hugging Face, ONNX, etc.) and works on **Windows, macOS, and Linux**.
---
#### Prerequisites
- [Jupyter Notebook](https://jupyter.org/install) installed
- Python 3.10+
- The dataset is fetched from Hugging Face if it is not available locally
---
#### 1. Run `train_ml.ipynb` (Multi-Label Classification)
1. Open Jupyter:
   ```bash
   jupyter notebook
   ```
2. Navigate to `train_ml.ipynb`
3. Run all cells
> This trains a Keras model and exports it as `ml.onnx`
Output: `ml.onnx` and `action_labels.json`
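For reference, the export step has roughly this shape. It is a sketch that assumes a trained Keras classifier `model`, the `features` list, and a label list `action_labels`; the notebook's exact variable names may differ.
```python
# Hedged sketch of the Keras -> ONNX export and label dump.
import json
import tensorflow as tf
import tf2onnx

# One float32 row per game state, with one column per selected feature
spec = (tf.TensorSpec((None, len(features)), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path="ml.onnx")

# Keep the class-index -> action-name mapping next to the model
with open("action_labels.json", "w") as f:
    json.dump(action_labels, f)
```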
---
#### 2. Run `train_slm.ipynb` (Sequence Model)
1. Open Jupyter
2. Navigate to `train_slm.ipynb`
3. Run all cells
> This trains a GPT-style SLM and exports it as `slm.onnx`
Output: `slm.onnx` and `slm_tokenizer.json`
> ⚠️ **Note**: This model works best with short, structured text inputs (e.g., bot positions, angles). You can adjust `max_dataset` to limit the training data.
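The ONNX export for the PyTorch transformer looks roughly as follows. This is a sketch assuming a trained `model` and the `block_size` context length, not the notebook's literal code.
```python
# Hedged sketch of exporting the GPT-style SLM to ONNX.
import torch

model.eval()
dummy_ids = torch.zeros((1, block_size), dtype=torch.long)  # one dummy context window
torch.onnx.export(
    model, dummy_ids, "slm.onnx",
    input_names=["input_ids"], output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "logits": {0: "batch", 1: "seq"}},
)
# The matching vocabulary is written to slm_tokenizer.json alongside the model
```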
---
#### 3. Run `train_llm.ipynb` (LLM with LoRA)
1. Open Jupyter
2. Navigate to `train_llm.ipynb`
3. Run all cells
> This fine-tunes a Qwen-0.5B model using LoRA and exports the adapter
Output: `adapters/qwen2.5_0.5b_lora/` (contains model + tokenizer)
> Requires ~4-8 GB VRAM for full training. On low-end systems, reduce `max_dataset` or `num_train_epoch`.
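The fine-tuning loop has roughly this shape. The hyperparameters, the base checkpoint name, and the pre-built `train_dataset` are illustrative assumptions; the notebook's actual settings and `trl` version may differ.
```python
# Hedged sketch of the LoRA fine-tune exported by train_llm.ipynb.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",   # assumption: the exact base checkpoint may differ
    train_dataset=train_dataset,          # prompts built from the sumobot logs
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="adapters/qwen2.5_0.5b_lora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
trainer.save_model("adapters/qwen2.5_0.5b_lora")  # adapter + tokenizer
```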
---
## Adjusting Configurations
### Modify key settings in each notebook
#### In `train_ml.ipynb`:
- Change `features` to include/exclude bot or enemy stats
- Adjust `batch_size`, `epochs`, or `learning_rate` for performance
- Set `max_dataset` to limit training data (e.g., `max_dataset=1000`)
#### In `train_slm.ipynb`:
- Adjust `block_size`, `batch_size`, `n_layers`, `lr`
- Set `max_dataset` to control the training data size
- Change `prompt` in `generate()` to test different inputs
#### In `train_llm.ipynb`:
- Set `max_dataset` to limit training data (e.g., `max_dataset=10000`)
- Adjust `batches_per_device`, `gradient_accumulation`, `learning_rate`, or `num_train_epoch`
- Modify `prompt` in the inference section to test different behaviors
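As a concrete example, a quick low-resource run might override a few of the values above. The numbers below are placeholders, not the notebooks' defaults.
```python
# Illustrative low-resource overrides; variable names follow the lists above.
max_dataset = 1000        # cap the number of log rows used for training
batch_size = 32           # train_ml.ipynb / train_slm.ipynb
num_train_epoch = 1       # train_llm.ipynb
learning_rate = 1e-4
```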
---
## Notes for All Platforms
| Platform | Notes |
|----------|-------|
| **Windows** | Use `pip install` in a terminal; Jupyter runs via `jupyter notebook` or `jupyter lab` |
| **macOS** | Use `pip` or `pip3`; Jupyter runs via `jupyter notebook` or `jupyter lab` |
| **Linux (Ubuntu/Debian)** | Use `pip install` and `jupyter notebook` |
> To rerun: restart the Jupyter kernel and re-run from the top.
---
## Known Issues & Tips
- `train_llm.ipynb`: may fail or run very slowly without a GPU (insufficient VRAM); use `device_map="auto"` or limit `max_dataset`
- `train_ml.ipynb`: ensure `cleaned_log.csv` has the columns `Name`, `Duration`, and the listed features (see the sanity check after this list)
- Test outputs by changing the prompt or input data
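A quick sanity check for the column requirement above might look like this. The CSV path and the `features` list are assumptions; adjust them to your setup.
```python
# Hedged check that cleaned_log.csv contains the expected columns.
import pandas as pd

df = pd.read_csv("dataset/cleaned_log.csv")    # path is an assumption
required = {"Name", "Duration", *features}     # `features` as set in train_ml.ipynb
missing = required - set(df.columns)
assert not missing, f"cleaned_log.csv is missing columns: {sorted(missing)}"
```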
---
## Example Prompt Test (SLM and LLM)
For `train_slm.ipynb` and `train_llm.ipynb`, you can test with:
```text
BotPos=[2.23,2.25], BotRot=228, EnemyPos=[2.87,0.39], EnemyRot=87, AngleToEnemy=-29.68, AngleToEnemyScore=0.87, DistanceToEnemyScore=0.79, NearBorderArenaScore=0.42, FacingToArena=0.65. Suggested Action:
```
> The models will generate a response based on the provided game state.
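For the LLM, one hedged way to run this prompt against the exported adapter is sketched below. The base checkpoint name is an assumption, and the notebook's own inference cell may differ.
```python
# Hedged example of querying the LoRA adapter with the game-state prompt above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

adapter_dir = "adapters/qwen2.5_0.5b_lora"
tokenizer = AutoTokenizer.from_pretrained(adapter_dir)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir)

prompt = (
    "BotPos=[2.23,2.25], BotRot=228, EnemyPos=[2.87,0.39], EnemyRot=87, "
    "AngleToEnemy=-29.68, AngleToEnemyScore=0.87, DistanceToEnemyScore=0.79, "
    "NearBorderArenaScore=0.42, FacingToArena=0.65. Suggested Action:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=32)
# Decode only the newly generated tokens (the suggested action)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```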
---
## Summary
| Model | Use Case | Exported To |
|-------|----------|-------------|
| ML | Action classification | `ml.onnx` |
| SLM | Sequence prediction | `slm.onnx` |
| LLM | Natural language reasoning | `adapters/qwen2.5_0.5b_lora/` |
| ``` |