# 🎬 LectureLens AI – Complete Setup Guide

## 📁 Project Structure

```
lecturelens/
├── app.py                     # Main Flask app + all API routes
├── requirements.txt           # All Python dependencies
├── Dockerfile                 # HuggingFace deployment config
├── .env                       # API keys (NEVER upload this!)
├── .gitignore                 # Tells git to ignore .env
├── README.md                  # Project documentation
├── SETUP_GUIDE.md             # This file
├── templates/
│   └── index.html             # Complete frontend UI
└── utils/
    ├── __init__.py            # Package init
    ├── transcript_handler.py  # YouTube transcript + title extraction
    ├── embedder.py            # TF-IDF search engine
    └── llm_handler.py         # GPT-4o-mini – all AI features
```
---

## 🔑 API Keys Required

### OpenAI API Key (Only One Required!)

1. Go to: https://platform.openai.com
2. Sign up / log in
3. Click "API Keys" → "Create new secret key"
4. Copy your key (starts with `sk-`)

---
## 🖥️ Local Setup

### Step 1: Install Python
Download from: https://python.org (Python 3.10+)

### Step 2: Download Project
Download and extract the project folder.

### Step 3: Create Virtual Environment

```bash
# Windows
python -m venv venv
venv\Scripts\activate

# Mac/Linux
python3 -m venv venv
source venv/bin/activate
```

### Step 4: Install Dependencies

```bash
pip install -r requirements.txt
```
### Step 5: Create `.env` File

Create a file named `.env` in the project root:

```
OPENAI_API_KEY=sk-your-key-here
SECRET_KEY=lecturelens-secret-2024
```
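As a sketch of how `python-dotenv` makes these values available to the app (the actual loading code in app.py may differ):

```python
# Hedged sketch: how app.py might read keys from .env via python-dotenv.
import os

try:
    from dotenv import load_dotenv  # provided by the python-dotenv package
    load_dotenv()                   # copies .env entries into os.environ
except ImportError:
    pass  # python-dotenv missing: fall back to real environment variables

api_key = os.getenv("OPENAI_API_KEY", "")
if not api_key:
    print("Warning: OPENAI_API_KEY not set - check your .env file")
```

The `ImportError` fallback also matches how HuggingFace Spaces injects secrets directly as environment variables.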
### Step 6: Create `.gitignore` File

```
.env
__pycache__/
*.pyc
venv/
```
### Step 7: Run the App

```bash
python app.py
```

### Step 8: Open Browser
http://localhost:7860
---

## 🎯 How to Use

### 1. Process a Video
- Paste any YouTube lecture URL
- Click "Analyze Video"
- Wait for transcript extraction

### 2. Generate Study Material
- Click **Summary** → Generate ✨ → Get a comprehensive summary
- Click **Flashcards** → Generate ✨ → Get 10 Q&A cards
- Click **Sticky Notes** → Generate ✨ → Get 8 key-point notes
- Click **Flowchart** → Generate ✨ → Get a concept map
- Click **Quiz** → Generate ✨ → Get 5 MCQ questions

### 3. Chat with AI
- Type any question about the lecture
- The AI answers strictly based on the video content
- Previous questions are remembered in the conversation
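The grounded, memory-aware chat described above might assemble its prompt like this (a hypothetical `build_chat_messages` helper; the real llm_handler.py may differ):

```python
# Hypothetical sketch of prompt assembly for the chat feature: the system
# message pins the model to the retrieved transcript chunks, and prior
# question/answer turns are replayed so the conversation has memory.
def build_chat_messages(chunks, history, question):
    context = "\n\n".join(chunks)
    system = (
        "Answer ONLY from the lecture transcript excerpts below. "
        "If the answer is not in them, say so.\n\n" + context
    )
    messages = [{"role": "system", "content": system}]
    for past_q, past_a in history:  # replay earlier turns each request
        messages.append({"role": "user", "content": past_q})
        messages.append({"role": "assistant", "content": past_a})
    messages.append({"role": "user", "content": question})
    return messages
```

The resulting list is the standard chat-completions message format the `openai` client accepts.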
### 4. Export Content
- Click the 📄 PDF button on any panel to download
- Click the 📋 Copy button to copy to clipboard

### 5. Compare Videos
- Go to the Compare tab
- Enter a second YouTube URL
- Click Compare ✨

### 6. Change Language
- Click the English / اردو / Roman Urdu buttons at the top
- All generated content will be in the selected language

---
## 🔄 How It Works – Technical Flow

```
User enters YouTube URL
        ↓
youtube-transcript-api → extracts captions (free)
        ↓
yt-dlp → fetches video title (free)
        ↓
TF-IDF (scikit-learn) → splits into chunks, builds search index (free, local)
        ↓
User asks a question / clicks Generate
        ↓
TF-IDF → finds most relevant transcript chunks
        ↓
GPT-4o-mini → generates answer from chunks (paid, ~$0.01/session)
        ↓
Response shown in UI
```
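The TF-IDF retrieval steps in the flow above can be sketched with scikit-learn as follows. This is an assumed implementation for illustration (chunk size, function names, and ranking details in the real embedder.py may differ):

```python
# Sketch of the retrieval step: chunk the transcript, build a TF-IDF index,
# and rank chunks by cosine similarity to the user's question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk_text(text, size=80):
    """Split a transcript into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(transcript, question, k=3):
    """Return the k transcript chunks most similar to the question."""
    chunks = chunk_text(transcript)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(chunks + [question])
    # Last row is the question; compare it against every chunk row.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [chunks[i] for i in ranked]
```

Only the returned chunks are sent to GPT-4o-mini, which is what keeps per-session cost around a cent instead of sending the full transcript each time.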
---

## 📚 Libraries Explained

| Library | Purpose | Cost |
|---------|---------|------|
| `flask` | Web server + API endpoints | FREE |
| `youtube-transcript-api` | Extract YouTube captions | FREE |
| `yt-dlp` | Fetch video title | FREE |
| `scikit-learn` | TF-IDF text search | FREE |
| `openai` | GPT-4o-mini AI responses | ~$0.01/session |
| `reportlab` | PDF generation | FREE |
| `python-dotenv` | Load `.env` API keys | FREE |
| `gunicorn` | Production server | FREE |
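Based on the table above, `requirements.txt` likely lists these packages (an assumption from the libraries named here; versions are omitted, pin them as needed):

```
flask
youtube-transcript-api
yt-dlp
scikit-learn
openai
reportlab
python-dotenv
gunicorn
```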
---

## 🚀 Deploy to HuggingFace Spaces (FREE Hosting)

### Step 1: Create a HuggingFace Account
Go to: https://huggingface.co → sign up for free

### Step 2: Create a New Space
1. Click your profile → "New Space"
2. Name: `lecturelens-ai`
3. SDK: **Docker**
4. Visibility: Public (free)
5. Click "Create Space"

### Step 3: Upload Files
Upload all project files EXCEPT the `.env` file.

### Step 4: Add Secret Key
1. Go to Space → Settings → Repository Secrets
2. Add:
   - Name: `OPENAI_API_KEY`
   - Value: `sk-your-key-here`

### Step 5: Deploy
HuggingFace builds and deploys automatically!
Your app: `https://huggingface.co/spaces/YOUR_USERNAME/lecturelens-ai`
| ## β Common Errors & Fixes | |
| | Error | Fix | | |
| |-------|-----| | |
| | `No transcript found` | Video does not have captions enabled | | |
| | `OPENAI_API_KEY not set` | Check .env file | | |
| | `Module not found` | Run `pip install -r requirements.txt` | | |
| | `Port already in use` | Change port in app.py or kill existing process | | |
| | `Invalid YouTube URL` | Make sure URL has `watch?v=` or `youtu.be/` | | |
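The `Invalid YouTube URL` check can be sketched as follows (a hypothetical helper for illustration; the real transcript_handler.py may validate URLs differently):

```python
import re

# Hypothetical helper mirroring the URL check behind "Invalid YouTube URL".
# Accepts the two formats named above: watch?v=<id> and youtu.be/<id>,
# where video IDs are 11 characters of letters, digits, '-' and '_'.
def extract_video_id(url):
    match = re.search(r"(?:watch\?v=|youtu\.be/)([A-Za-z0-9_-]{11})", url)
    return match.group(1) if match else None
```

A `None` return would trigger the error above before any transcript request is made.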
---

## ⚠️ Important Notes

- Never upload the `.env` file to GitHub or HuggingFace
- Videos must have captions/subtitles enabled
- Session data resets when the server restarts
- Supports English, Hindi, and Urdu transcripts
- The AI answers strictly based on the video content
---

## 💰 Cost

| Component | Cost |
|-----------|------|
| Everything except OpenAI | FREE |
| OpenAI GPT-4o-mini | ~$0.01 per lecture session |
| HuggingFace hosting | FREE |
| **Total per session** | **~$0.01** |

---