---
license: mit
tags:
- reinforcement-learning
- behavioral-cloning
- racing-game
- tensorflow
- neural-network
language:
- en
datasets:
- synthetic
metrics:
- accuracy
- f1
library_name: tensorflow
pipeline_tag: other
---
# AI Racing Game Model
This is a neural network trained to play a racing game using behavioral cloning. The model learns from expert demonstrations to make driving decisions (left, stay, right) based on the current game state.
## Model Details
- **Model Type**: Feed-forward Neural Network
- **Framework**: TensorFlow/Keras
- **Training Method**: Behavioral Cloning (Supervised Learning)
- **Input**: 9-dimensional state vector
- **Output**: 3-dimensional action probabilities
## Training Data
- **Total Samples**: 75,000
- **Data Source**: Synthetic expert demonstrations
- **Difficulty Levels**: Progressive (0.5x to 1.5x)
- **Training Method**: Supervised learning on expert actions
## Model Architecture
```
Input Layer: 9 features
Hidden Layer 1: 64 neurons (ReLU + BatchNorm + Dropout 0.3)
Hidden Layer 2: 32 neurons (ReLU + BatchNorm + Dropout 0.2)
Hidden Layer 3: 16 neurons (ReLU + Dropout 0.1)
Output Layer: 3 neurons (Softmax)
```
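The layer summary above can be reconstructed in Keras roughly as follows. This is a sketch based only on the listed layer sizes; layer ordering details (e.g. BatchNorm before Dropout) and any initializers are assumptions, not the exact training script:

```python
import tensorflow as tf

# Reconstruction of the architecture listed above; anything beyond the
# stated layer sizes, activations, and dropout rates is an assumption.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(9,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(3, activation="softmax"),
])
```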
## Performance
- **Test Accuracy**: 0.9879
- **Test Loss**: 0.0471
- **Weighted F1-Score**: 0.9861
### Per-Class Metrics
- **Left Action**: Precision: 0.901, Recall: 0.471, F1: 0.618
- **Stay Action**: Precision: 0.989, Recall: 1.000, F1: 0.994
- **Right Action**: Precision: 0.973, Recall: 0.545, F1: 0.699

Note that the weighted scores are dominated by the frequent Stay class; recall on Left and Right actions is substantially lower, so the model occasionally stays in lane where the expert would have moved.
## Usage
### TensorFlow.js (Web)
```javascript
// Load the model
const model = await tf.loadLayersModel('model.json');

// Prepare input (9-dimensional array)
const gameState = tf.tensor2d([[
  lane0_near, lane0_far,   // Lane 0 sensors
  lane1_near, lane1_far,   // Lane 1 sensors
  lane2_near, lane2_far,   // Lane 2 sensors
  current_lane_norm,       // Current lane (0-1)
  progress_norm,           // Game progress (0-1)
  speed_factor             // Speed factor
]], [1, 9]);

// Get prediction
const prediction = model.predict(gameState);
const actionProbs = await prediction.data();

// Choose action (0=Left, 1=Stay, 2=Right)
const action = actionProbs.indexOf(Math.max(...actionProbs));

// Release tensor memory (important when predicting every frame)
gameState.dispose();
prediction.dispose();
```
### Python (TensorFlow)
```python
import tensorflow as tf
import numpy as np
# Load the model
model = tf.keras.models.load_model('racing_model.keras')

# Prepare input (sensor values supplied by the game loop)
game_state = np.array([[
    lane0_near, lane0_far,
    lane1_near, lane1_far,
    lane2_near, lane2_far,
    current_lane_norm,
    progress_norm,
    speed_factor
]], dtype=np.float32)

# Get prediction
action_probs = model.predict(game_state, verbose=0)[0]
action = np.argmax(action_probs)  # 0=Left, 1=Stay, 2=Right
```
## Input Format
The model expects a 9-dimensional input vector:
1. **lane0_near** (0-1): Near obstacle sensor for left lane
2. **lane0_far** (0-1): Far obstacle sensor for left lane
3. **lane1_near** (0-1): Near obstacle sensor for middle lane
4. **lane1_far** (0-1): Far obstacle sensor for middle lane
5. **lane2_near** (0-1): Near obstacle sensor for right lane
6. **lane2_far** (0-1): Far obstacle sensor for right lane
7. **current_lane_norm** (0-1): Current lane position normalized
8. **progress_norm** (0-1): Game progress/score normalized
9. **speed_factor** (0-1): Current game speed factor
## Output Format
The model outputs 3 probability values:
- **Index 0**: Probability of moving left
- **Index 1**: Probability of staying in current lane
- **Index 2**: Probability of moving right
## Files Included
- `model.json` + `*.bin`: TensorFlow.js model files
- `racing_model.keras`: Native Keras model
- `metadata.json`: Model metadata and training info
- `training_history.png`: Training progress visualization
## Training Details
- **Epochs**: 30
- **Batch Size**: 64
- **Optimizer**: Adam (lr=0.001)
- **Loss Function**: Categorical Crossentropy
- **Early Stopping**: Patience 8
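The hyperparameters above translate into a training setup along these lines. This is a minimal self-contained sketch using random stand-in data and a simplified model (the real run used 75,000 expert demonstrations and the full architecture); it is shortened to 2 epochs so it finishes quickly:

```python
import numpy as np
import tensorflow as tf

# Simplified stand-in model; the real architecture is described above.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(9,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Optimizer, loss, and early stopping as listed in Training Details
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
early_stop = tf.keras.callbacks.EarlyStopping(
    patience=8, restore_best_weights=True)

# Random stand-in data in place of the expert demonstrations
X = np.random.rand(256, 9).astype("float32")
y = tf.keras.utils.to_categorical(
    np.random.randint(0, 3, size=256), num_classes=3)

# Real training used epochs=30; shortened here for the demo
history = model.fit(X, y, epochs=2, batch_size=64,
                    validation_split=0.1, verbose=0,
                    callbacks=[early_stop])
```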
## Citation
If you use this model, please cite:
```bibtex
@misc{ai_racing_model,
  title={AI Racing Game Neural Network},
  author={Your Name},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/Relacosm/theline-v1}
}
```