Q-Learning in Reinforcement Learning Environments

Project Overview

This project implements the Q-Learning algorithm in two different reinforcement learning environments: a custom-built Gridworld and the MountainCar-v0 environment from the OpenAI Gym library.

Part 1: Custom Gridworld Environment

A simple 5x5 Gridworld where the goal is to navigate from the start state to the goal state, avoiding obstacles and minimizing the number of steps:

  • Grid Layout:
    • Start at (0, 0)
    • Goal at (4, 4)
    • Obstacles at (2, 2) and (3, 3)
  • Rewards:
    • Goal: +100
    • Obstacle: -10
    • Each step: -1
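
The layout and rewards above can be sketched as a minimal environment class. The class name, action encoding, and `step`/`reset` interface are assumptions for illustration, not necessarily the names used in `gridworld.py`; only the grid size, start, goal, obstacles, and reward values come from the specification above.

```python
class Gridworld:
    """Minimal 5x5 Gridworld sketch (interface assumed, layout from the README)."""

    def __init__(self):
        self.size = 5
        self.start = (0, 0)
        self.goal = (4, 4)
        self.obstacles = {(2, 2), (3, 3)}
        self.state = self.start

    def reset(self):
        self.state = self.start
        return self.state

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right (encoding assumed)
        moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
        dr, dc = moves[action]
        # Clamp moves to the grid so the agent cannot leave the 5x5 board
        r = min(max(self.state[0] + dr, 0), self.size - 1)
        c = min(max(self.state[1] + dc, 0), self.size - 1)
        self.state = (r, c)
        if self.state == self.goal:
            return self.state, 100, True   # goal reward, episode ends
        if self.state in self.obstacles:
            return self.state, -10, False  # obstacle penalty
        return self.state, -1, False       # per-step cost
```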

Part 2: MountainCar-v0 Environment

Utilizing the standard MountainCar-v0 environment from the gym library, where the agent must drive a car up a steep hill:

  • Environment Features:
    • The agent learns to balance momentum and gravity to reach the peak.
    • Visualize the environment using env.render().
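
Because MountainCar-v0 has a continuous observation space (position and velocity) while tabular Q-Learning needs discrete states, the observations must be binned before indexing the Q-table. The sketch below shows one common way to do this; the bin counts are an assumption, and only the observation bounds come from the MountainCar-v0 specification.

```python
import numpy as np

# MountainCar-v0 observation bounds: position in [-1.2, 0.6], velocity in [-0.07, 0.07]
OBS_LOW = np.array([-1.2, -0.07])
OBS_HIGH = np.array([0.6, 0.07])
N_BINS = np.array([20, 20])  # 20 bins per dimension is an assumed choice

def discretize(obs):
    """Map a continuous (position, velocity) observation to discrete bin indices."""
    ratio = (np.asarray(obs) - OBS_LOW) / (OBS_HIGH - OBS_LOW)
    idx = (ratio * N_BINS).astype(int)
    # Clip so the upper bound falls into the last bin rather than out of range
    return tuple(np.clip(idx, 0, N_BINS - 1))
```

The resulting tuple can then be used as a key into the Q-table.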

Video Preview

Watch the demonstration video on YouTube.

Implementation Details

  • Q-Learning:
    • Initialize and update the Q-table using the Bellman equation.
    • Select actions with an epsilon-greedy policy to balance exploration and exploitation.
  • Training and Testing:
    • Train the agent over multiple episodes.
    • Visualize and analyze the learned policy to evaluate performance.
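
The two core pieces named above, the Bellman update and epsilon-greedy action selection, can be sketched as follows. The hyperparameter values (`alpha`, `gamma`, `epsilon`) and the dict-of-arrays Q-table layout are assumptions for illustration, not necessarily what the project's scripts use.

```python
import numpy as np
from collections import defaultdict

# Q-table: maps each state to an array of action values (4 actions assumed)
Q = defaultdict(lambda: np.zeros(4))

def epsilon_greedy(Q, state, n_actions, epsilon, rng):
    """Explore with probability epsilon, otherwise pick the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-Learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state][action] += alpha * (td_target - Q[state][action])
```

In training, each episode loops over `epsilon_greedy` to pick an action, steps the environment, and calls `q_update` with the observed transition.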

How to Use

  1. Requirements: Ensure you have Python and the necessary libraries installed, including gym for the MountainCar environment.
  2. Setup:
    • Clone the repository: git clone https://huggingface.co/Karim2211/ReinforcementLearningModels
    • Install dependencies: pip install -r requirements.txt
  3. Running the Code:
    • Navigate to the project directory.
    • For Gridworld: python gridworld.py
    • For MountainCar: python mountain_car.py
  4. Loading Models:
    • Models are saved in the models/ directory.
    • To load and test models:
      import pickle
      # For Gridworld model
      with open('models/gridworld_model.pkl', 'rb') as f:
          gridworld_model = pickle.load(f)
      # For MountainCar model
      with open('models/mountain_car_model.pkl', 'rb') as f:
          mountain_car_model = pickle.load(f)
      
  5. Testing the Models:
    • Use the loaded models to make predictions or to evaluate the policies.
    • Example testing code:
      # Test Gridworld model
      gridworld.test_policy(gridworld_model)
      # Test MountainCar model
      mountain_car.test_policy(mountain_car_model)
      

Dependencies

  • Python 3.x
  • gym
  • numpy
  • matplotlib (optional for visualization)

Authors

  • Karim Ashraf
  • Shiref Elhalawany
  • Mahitab Waleed

License

This project is licensed under the MIT License.
