FrozenLake-v1 Q-Learning Agent
This repository contains a trained Q-learning agent for the FrozenLake-v1 environment from Gymnasium.
The agent is trained to navigate the frozen lake, avoid holes, and reach the goal efficiently. It uses a Q-table stored in model.zip.
Environment
- Name: FrozenLake-v1
- Observation Space: Discrete (depends on lake size, e.g., 6x6 → 36 states)
- Action Space: Discrete 4 (LEFT, DOWN, RIGHT, UP)
- Map: Randomized frozen lake with holes (H) and goal (G)
- Slippery: True (agent may slide on ice)
Algorithm
- Type: Q-learning (tabular)
- Hyperparameters:
  - Learning rate: 0.7
  - Discount factor (gamma): 0.95
  - Exploration (epsilon): decaying from 1.0 → 0.01
- Episodes: 8000+
- Max Steps per Episode: 200
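The update rule these hyperparameters drive can be sketched on a toy two-state chain (a stand-in for FrozenLake, so the dynamics and the decay constant here are illustrative, not the trained environment's):

```python
import numpy as np

ALPHA, GAMMA = 0.7, 0.95                # learning rate and discount factor
EPS_START, EPS_MIN, DECAY = 1.0, 0.01, 0.0005  # epsilon schedule (DECAY is assumed)

n_states, n_actions = 2, 2
q_table = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    # Toy dynamics: action 1 in state 0 reaches the goal state with reward 1.0.
    if state == 0 and action == 1:
        return 1, 1.0, True             # next_state, reward, done
    return 0, 0.0, False

for episode in range(8000):
    # Epsilon decays from 1.0 toward 0.01 over training.
    eps = EPS_MIN + (EPS_START - EPS_MIN) * np.exp(-DECAY * episode)
    state, done = 0, False
    for _ in range(200):                # max steps per episode
        if rng.random() < eps:          # epsilon-greedy exploration
            action = rng.integers(n_actions)
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, done = step(state, action)
        # Tabular Q-learning update
        q_table[state, action] += ALPHA * (
            reward + GAMMA * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state
        if done:
            break
```

After training, the greedy action in state 0 is the one that reaches the goal, and its Q-value converges toward the reward of 1.0.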
Model
The trained Q-table is saved as model.zip. Inside the archive you will find q_table.npy.
How to Load the Model
import zipfile
import numpy as np

# Extract the Q-table from the model archive
with zipfile.ZipFile("model.zip") as zf:
    zf.extract("q_table.npy")

q_table = np.load("q_table.npy")

# Example usage: greedy action for a state
state = 0  # starting state
action = np.argmax(q_table[state, :])