Update README.md
README.md

---
title: Sign Language Detector Pro
emoji: π
colorFrom: red
colorTo: yellow
sdk: streamlit
app_port: 8501
tags:
- streamlit
short_description: Streamlit template space
license: mit
---

# Sign Language Detector Pro

An advanced Python application for detecting and interpreting sign language gestures in images and videos. It combines MediaPipe hand-landmark detection, AI-powered gesture classification through the OpenAI API, and a modern web interface for professional analysis and reporting.

## ✨ Enhanced Features

### 🎯 Core Functionality
- **Advanced Hand Detection**: MediaPipe-powered 21-point hand landmark detection
- **AI Gesture Classification**: OpenAI API integration for accurate sign language interpretation
- **Batch File Processing**: Support for processing multiple images and videos simultaneously
- **Professional Analytics**: Interactive charts, confidence metrics, and detailed analysis

### 🎨 Modern Web Interface
- **Professional Design**: Modern, responsive UI with gradient themes and animations
- **Interactive Visualizations**: 3D hand landmark plots, confidence charts, and timeline analysis
- **Multiple Export Formats**: JSON, CSV, and PDF report generation
- **Real-time Progress Tracking**: Enhanced progress indicators and status updates

### 📊 Advanced Analytics
- **Confidence Scoring**: Detailed confidence metrics for all detections
- **3D Visualization**: Interactive 3D plots of hand landmarks
- **Timeline Analysis**: Frame-by-frame video processing with visual timelines
- **Comparison Views**: Side-by-side before/after image comparisons
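The 21-point landmark output lends itself to simple geometric features for classification. As an illustration only (the actual `gesture_extractor.py` may work differently), here is a minimal sketch that normalizes fingertip positions against the hand's own scale; the landmark indices follow MediaPipe's hand model (0 = wrist, fingertips at 4, 8, 12, 16, 20):

```python
import math

# MediaPipe hand-landmark indices: 0 = wrist, fingertips at 4, 8, 12, 16, 20.
WRIST = 0
FINGERTIPS = [4, 8, 12, 16, 20]

def extract_features(landmarks):
    """Turn 21 (x, y) landmark pairs into scale-invariant fingertip distances.

    `landmarks` is a list of 21 (x, y) tuples in image coordinates.
    Distances are normalized by the wrist-to-middle-fingertip span so the
    features do not depend on how close the hand is to the camera.
    """
    wx, wy = landmarks[WRIST]
    # Reference length: wrist to middle fingertip (index 12).
    ref = math.hypot(landmarks[12][0] - wx, landmarks[12][1] - wy) or 1.0
    return [
        math.hypot(x - wx, y - wy) / ref
        for i, (x, y) in enumerate(landmarks)
        if i in FINGERTIPS
    ]
```

Because the distances are ratios, the same gesture produces similar feature vectors regardless of hand size or distance from the camera.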

## Setup

1. Clone the repository
2. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Set up environment variables:
```bash
cp .env.example .env
# Edit .env and add your OpenAI API key
```
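The `.env` step simply makes `OPENAI_API_KEY` available to the application. Projects usually load it with the python-dotenv package; the dependency-free sketch below shows the equivalent behavior (the function name is ours, not the app's):

```python
import os

def load_dotenv_minimal(path=".env"):
    """Minimal .env reader: copy KEY=VALUE lines into os.environ.

    Real projects typically use the python-dotenv package; this sketch only
    illustrates what the `.env` step above provides to the application.
    """
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, and malformed lines
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # No .env file: fall back to the ambient environment.

load_dotenv_minimal()
api_key = os.environ.get("OPENAI_API_KEY")  # None if not configured
```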

## Usage

### Enhanced Command Line Interface
```bash
# File processing mode (camera functionality removed)
python3 main.py --input path/to/video.mp4

# Batch processing with output directory
python3 main.py --input path/to/directory --output results/

# Disable speech output
python3 main.py --input path/to/image.jpg --no-speech
```
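The flags above suggest a small argparse front end. This is a hypothetical sketch of how `main.py` might declare them, with names taken from the commands shown; the real parser may differ:

```python
import argparse

def build_parser():
    """Argument parser mirroring the CLI flags shown above (illustrative)."""
    parser = argparse.ArgumentParser(
        description="Sign Language Detector Pro - file and batch processing"
    )
    parser.add_argument("--input", required=True,
                        help="Image, video, or directory to process")
    parser.add_argument("--output",
                        help="Directory for results (optional)")
    parser.add_argument("--no-speech", action="store_true",
                        help="Disable spoken output of detected signs")
    return parser
```

With `argparse`, a directory passed to `--input` and a file passed to `--input` arrive identically as strings; dispatching on `Path(args.input).is_dir()` is one simple way to support both single-file and batch modes.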

### Professional Web Interface
```bash
streamlit run app.py
```
**Features:**
- Drag-and-drop file upload
- Batch processing with progress tracking
- Interactive 3D visualizations
- Multiple export formats (JSON, CSV, PDF)
- Real-time analytics dashboard
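The JSON and CSV export options amount to serializing the same per-file results two ways. A stdlib-only sketch, assuming a hypothetical result schema of `file`/`gesture`/`confidence` (the app's actual fields may differ):

```python
import csv
import io
import json

def export_results(results, fmt="json"):
    """Serialize detection results as a JSON or CSV string.

    `results` is a list of dicts such as
    {"file": ..., "gesture": ..., "confidence": ...}; the field names here
    are illustrative, not the app's actual schema.
    """
    if fmt == "json":
        return json.dumps(results, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["file", "gesture", "confidence"])
        writer.writeheader()
        writer.writerows(results)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")
```

PDF output needs a third-party library (e.g. reportlab), which is why it is not sketched here.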

### Demo Mode (No API Key Required)
```bash
python3 demo.py
```

## Project Structure

- `main.py` - Main application entry point
- `app.py` - Streamlit GUI application
- `src/` - Source code modules
  - `hand_detector.py` - Hand landmark detection
  - `gesture_extractor.py` - Gesture feature extraction
  - `openai_classifier.py` - OpenAI API integration
  - `camera_handler.py` - Real-time camera processing
  - `file_handler.py` - File input processing
  - `output_handler.py` - Text and speech output
- `tests/` - Unit tests
- `examples/` - Example videos and images

## Requirements

- Python 3.8+
- OpenAI API key (for gesture classification)

## Quick Start

1. **Test without an API key (demo mode):**
```bash
python3 demo.py
```
This shows hand detection and gesture analysis without requiring an OpenAI API key.

2. **Set up your OpenAI API key:**
```bash
cp .env.example .env
# Edit .env and add: OPENAI_API_KEY=your_key_here
```

3. **Process a single file:**
```bash
python3 main.py --input examples/sample_video.mp4
```

4. **Batch-process a directory:**
```bash
python3 main.py --input examples/ --output results/
```

5. **Launch the web interface:**
```bash
streamlit run app.py
```

## 🚀 Enhanced Features Delivered

### ✅ Core Processing
- **Advanced Hand Detection** - MediaPipe 21-point landmark detection with enhanced visualization
- **AI-Powered Classification** - OpenAI API integration with confidence scoring
- **Batch File Processing** - Simultaneous processing of multiple images and videos
- **Professional Analytics** - Comprehensive metrics and statistical analysis

### ✅ Modern Web Interface
- **Responsive Design** - Professional UI with gradient themes and animations
- **Interactive Visualizations** - 3D hand plots, confidence charts, timeline analysis
- **Multiple Export Formats** - JSON, CSV, and PDF report generation
- **Real-time Progress** - Enhanced progress tracking with detailed status updates

### ✅ Advanced Analytics
- **3D Visualization** - Interactive 3D hand landmark plots
- **Timeline Analysis** - Frame-by-frame video processing visualization
- **Confidence Metrics** - Detailed confidence scoring and analysis
- **Comparison Views** - Side-by-side before/after image comparisons
- **Summary Reports** - Comprehensive processing statistics and insights

### ✅ User Experience
- **Drag-and-Drop Upload** - Intuitive file upload with visual feedback
- **Settings Panel** - Configurable detection parameters
- **Error Handling** - User-friendly error messages and recovery
- **Export Functionality** - Multiple format options for results
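The summary reports mentioned above reduce to aggregating per-detection confidences. A minimal sketch, with report fields that are illustrative rather than the app's actual schema:

```python
from statistics import mean

def summarize(detections):
    """Aggregate (gesture, confidence) pairs into a simple summary report.

    `detections` is a list of (gesture, confidence) tuples; the real report
    generator likely tracks more fields per frame.
    """
    if not detections:
        return {"total": 0, "mean_confidence": None, "gestures": {}}
    counts = {}
    for gesture, _conf in detections:
        counts[gesture] = counts.get(gesture, 0) + 1
    return {
        "total": len(detections),
        "mean_confidence": round(mean(c for _, c in detections), 3),
        "gestures": counts,
    }
```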