---
title: AI Queue Management - Time in Zone Tracking
emoji: 🎯
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 6.4.0
app_file: app.py
pinned: false
license: mit
---

# AI Queue Management System - Time in Zone Tracking

An end-to-end AI-powered queue management solution that combines computer vision for real-time tracking with Large Language Models for business intelligence.

## 🚀 Features

- **Real-time Object Tracking**: YOLOv8 detection with ByteTrack tracking
- **Time-in-Zone Analytics**: Precise measurement of dwell time in defined zones using Roboflow Supervision
- **AI-Powered Insights**: LLM analysis of performance logs using Qwen2.5-1.5B-Instruct
- **Comprehensive Error Handling**: Robust error handling throughout the application with graceful degradation
- **Multiple Input Formats**: Support for video, image, and YouTube URL processing
- **YouTube Integration**: Optional support for processing YouTube videos with real-time streaming
- **Import Error Handling**: Graceful handling of missing dependencies with informative error messages

## 📊 Use Cases

- **Retail Analytics**: Track customer movement and dwell time in product sections
- **Bank Branch Efficiency**: Monitor counter service times and optimize staffing
- **Airport Security**: Predict wait times and manage security lane staffing
- **Hospital ER**: Ensure patients are seen within target wait times
- **Smart Parking**: Monitor parking bay occupancy and turnover rates
- **Safety Monitoring**: Alert security if someone enters or lingers in restricted areas

## 🛠️ Technical Stack

- **Detection Model**: YOLOv8 (Ultralytics)
- **Tracking**: ByteTrack (Supervision)
- **Time Tracking**: Supervision time-in-zone utilities
- **LLM**: Qwen2.5-1.5B-Instruct
- **Framework**: Gradio

## 📦 Installation

### Local Installation

```bash
pip install -r requirements.txt
```

### Running Locally

```bash
python app.py
```

The application will be available at `http://localhost:7860`.

## 🚀 Deployment on Hugging Face Spaces

### Step 1: Create a New Space

1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Choose:
   - **SDK**: Gradio
   - **Hardware**: CPU Basic (free) or upgrade to GPU if needed
   - **Visibility**: Public or Private

### Step 2: Upload Files

Upload the following files to your Space:

- `app.py` - Main application file
- `queue_monitor.py` - Core tracking logic
- `llm_analyzer.py` - LLM analysis component
- `requirements.txt` - Python dependencies
- `README.md` - This file

### Step 3: Configure Environment (Optional)

The application uses a Hugging Face token for model access. You can configure it in one of two ways:

**Option 1: Environment Variable (Recommended for Spaces)**

1. Go to Space Settings
2. Add a **Secret** named `HF_TOKEN`
3. Paste your Hugging Face token (create one under [Settings](https://huggingface.co/settings/tokens))

**Option 2: Default Token**

The application includes a default token for testing only. For production, use Option 1.

### Step 4: Deploy

The Space will automatically build and deploy. You can monitor the build logs in the Space interface.

## 📖 Usage

### Video Processing

1. Upload a video file (MP4, AVI, MOV)
2. Adjust the confidence threshold (0.1-1.0)
3. Set the maximum number of frames to process
4. Click "Process Video"
5. View the processed frame and zone statistics

### YouTube Processing (Optional)

1. Enter a YouTube URL in the YouTube Processing tab
2. Choose between "Download & Process" (full video) and "Real-time Stream" (single frame)
3. Adjust the confidence threshold
4. View the processed results with zone tracking

**Note**: YouTube processing requires the `pytube` library. Install it with `pip install pytube`.

### Image Processing

1. Upload an image (JPG, PNG)
2. Adjust the confidence threshold
3. Click "Process Image"
4. View the annotated image with zone tracking

### AI Log Analysis

1. Enter queue log data in JSON format (or use the sample)
2. Click "Generate AI Insights"
3. Review the AI-generated recommendations

## 📋 Log Data Format

The LLM expects logs in the following JSON format:

```json
{
  "date": "2026-01-24",
  "branch": "SBI Jabalpur",
  "avg_wait_time_sec": 420,
  "max_wait_time_sec": 980,
  "customers_served": 134,
  "counter_1_avg_service": 180,
  "counter_2_avg_service": 310,
  "peak_hour": "12:00-13:00",
  "queue_overflow_events": 5
}
```
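
Before handing a log to the model, it is worth parsing it and flattening it into a prompt. The sketch below assumes the field names from the example above; the prompt wording is illustrative, not the exact prompt `llm_analyzer.py` uses:

```python
import json

SAMPLE_LOG = """
{
  "date": "2026-01-24",
  "branch": "SBI Jabalpur",
  "avg_wait_time_sec": 420,
  "max_wait_time_sec": 980,
  "customers_served": 134,
  "peak_hour": "12:00-13:00",
  "queue_overflow_events": 5
}
"""

def build_analysis_prompt(raw_log: str) -> str:
    """Validate the JSON log and wrap it in an analysis prompt."""
    log = json.loads(raw_log)  # raises json.JSONDecodeError on malformed input
    metrics = "\n".join(f"- {key}: {value}" for key, value in log.items())
    return (
        "You are a queue-management analyst. Given today's metrics:\n"
        f"{metrics}\n"
        "Suggest staffing and process improvements."
    )

print(build_analysis_prompt(SAMPLE_LOG))
```

Validating the JSON up front is what lets the app surface a clear parsing error to the user instead of sending garbage to the LLM.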

## 🔧 Configuration

### Default Zone

The application uses a default rectangular zone. You can modify it in `app.py`:

```python
DEFAULT_ZONE = np.array([[100, 100], [1100, 100], [1100, 600], [100, 600]])
```
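
For an axis-aligned rectangle like the default zone, time-in-zone accumulation reduces to a per-frame bounds check on each track's anchor point. The following is a simplified sketch of that idea only; the app itself uses Supervision's polygon-zone machinery, and the 30 FPS figure is an assumption:

```python
import numpy as np

DEFAULT_ZONE = np.array([[100, 100], [1100, 100], [1100, 600], [100, 600]])
FPS = 30  # assumed frame rate for converting frame counts to seconds

def in_rect_zone(points: np.ndarray, zone: np.ndarray) -> np.ndarray:
    """Boolean mask: which (x, y) anchor points fall inside the rectangle."""
    (x_min, y_min), (x_max, y_max) = zone.min(axis=0), zone.max(axis=0)
    return (
        (points[:, 0] >= x_min) & (points[:, 0] <= x_max)
        & (points[:, 1] >= y_min) & (points[:, 1] <= y_max)
    )

dwell = {}  # tracker_id -> accumulated seconds inside the zone

def update(tracker_ids, anchors: np.ndarray) -> None:
    """Advance dwell-time bookkeeping by one frame."""
    for tid, inside in zip(tracker_ids, in_rect_zone(anchors, DEFAULT_ZONE)):
        if inside:
            dwell[tid] = dwell.get(tid, 0.0) + 1.0 / FPS
```

Calling `update` once per frame with the tracker IDs and bottom-center anchors from detection gives per-person dwell times that survive as long as ByteTrack keeps the ID stable.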

### Model Configuration

- **YOLO Model**: Defaults to `yolov8s.pt` (can be changed in `QueueMonitor.__init__`)
- **LLM Model**: Defaults to `Qwen/Qwen2.5-1.5B-Instruct` (can be changed in `LogAnalyzer.__init__`)

## ⚠️ Error Handling

The application includes comprehensive error handling for:

- Invalid video/image formats
- Model loading failures
- Zone configuration errors
- JSON parsing errors
- Processing exceptions
- Memory management
- Frame processing errors
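
The graceful handling of missing optional dependencies (e.g. `pytube` for the YouTube tab) typically follows the guarded-import pattern. This is a sketch of the pattern, not the literal code in `app.py`; the `process_youtube` helper is illustrative:

```python
# Guarded import: a missing optional dependency disables one feature
# instead of crashing the whole app at startup.
try:
    import pytube  # optional: only needed for the YouTube tab
    PYTUBE_AVAILABLE = True
except ImportError:
    pytube = None
    PYTUBE_AVAILABLE = False

def process_youtube(url: str) -> str:
    """Process a YouTube URL, or explain how to enable the feature."""
    if not PYTUBE_AVAILABLE:
        return "YouTube support is unavailable: install it with `pip install pytube`."
    # ... download and process the video here ...
    return f"Processing {url}"
```

The boolean flag lets the UI show an informative message in the YouTube tab while the video, image, and log-analysis tabs keep working normally.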

## 📄 License

MIT License

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## 📧 Support

For issues and questions, please open an issue on the repository.