
🚀 Project Setup Guide

This project supports both Hugging Face models and Ollama models.
Follow the instructions below to set up and run the project.


⚡ Quick Start + 🐍 Miniconda & Conda Environment

# Update packages and install system dependencies
sudo apt update && sudo apt upgrade -y

sudo apt install -y iproute2 libgl1 nano wget unzip nvtop git git-lfs build-essential cmake \
libopenblas-dev liblapack-dev libx11-dev libgtk-3-dev libglib2.0-0


# Install Miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3

# Setup conda
export PATH="$HOME/miniconda3/bin:$PATH"
conda init
source ~/.bashrc
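If `conda` still isn't found after the steps above, the check below can pinpoint which step failed. This is a minimal sketch; the path assumes the `-p $HOME/miniconda3` install prefix used earlier.

```shell
# Diagnose conda availability (path assumes the $HOME/miniconda3 prefix above)
if command -v conda >/dev/null 2>&1; then
  echo "conda OK: $(conda --version)"
elif [ -x "$HOME/miniconda3/bin/conda" ]; then
  echo "conda installed but not on PATH; run: source ~/.bashrc"
else
  echo "conda missing; re-run the Miniconda installer"
fi
```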

# Create environment
conda create --name YT python=3.12 -y
conda activate YT

# Clone project
git config --global credential.helper store
git clone https://huggingface.co/WalidAlHassan/Chat-with-YouTube-Video
cd Chat-with-YouTube-Video
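Hugging Face repositories often track large files (e.g. model weights) with Git LFS. The apt step above installs `git-lfs`, but its Git hooks need a one-time activation; without it, a clone may contain small pointer files instead of the real files. A hedged sketch:

```shell
# One-time Git LFS hook setup (git-lfs was installed via apt earlier).
# Only needed if the repository tracks large files with LFS.
if command -v git-lfs >/dev/null 2>&1; then
  git lfs install --skip-repo
else
  echo "git-lfs not found; install with: sudo apt install -y git-lfs"
fi
```

If you already cloned before running this, `git lfs pull` inside the repository fetches the real files.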

# Install dependencies
pip install -r requirements.txt
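A quick sanity check that the install worked. Streamlit is the only dependency this README names, so the snippet below only probes that one; checking any other packages from `requirements.txt` is left to you.

```shell
# Verify that streamlit imports cleanly inside the active environment
python -c "import streamlit; print('streamlit', streamlit.__version__)" \
  || echo "streamlit missing; re-run: pip install -r requirements.txt"
```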

🤖 Model Setup

🔹 If using Ollama models

curl -fsSL https://ollama.com/install.sh | sh

ollama pull nomic-embed-text-v2-moe
ollama pull gemma3n:e4b
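The install script normally starts the Ollama server as a background service, but on some systems (e.g. containers) it may not be running yet, and `ollama pull` will fail. The hypothetical helper below polls Ollama's `/api/version` endpoint on its default port 11434 until the server answers:

```shell
# Wait until the Ollama server responds before pulling models.
# $1 = max seconds to wait (default 30); $2 = base URL (default local server).
wait_for_ollama() {
  url="${2:-http://localhost:11434}"
  for _ in $(seq 1 "${1:-30}"); do
    curl -fsS "$url/api/version" >/dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}
```

Call it before each pull, e.g. `wait_for_ollama 30 && ollama pull <model>`.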

🔹 If using Hugging Face models

Create a .env file in the root directory and add:

HF_TOKEN=your_huggingface_api_key_here
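The step above can be sketched as a one-liner that writes the file and confirms the key is present. The token value here is the README's placeholder; replace it with your real key.

```shell
# Create .env with the placeholder token, then confirm the variable exists
cat > .env <<'EOF'
HF_TOKEN=your_huggingface_api_key_here
EOF
grep -q '^HF_TOKEN=' .env && echo ".env ready"
```

Keep `.env` out of version control (e.g. `echo .env >> .gitignore`) so the token is never pushed.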

✅ Run!

streamlit run streamlit_app.py