midlajvalappil committed on
Commit b4c8e0e · verified · 1 Parent(s): ce69c1b

Update src/streamlit_app.py

Files changed (1)
  1. src/streamlit_app.py +1756 -34
src/streamlit_app.py CHANGED
@@ -1,40 +1,1762 @@
1
- import altair as alt
2
  import numpy as np
3
  import pandas as pd
4
- import streamlit as st
5
 
6
- """
7
- # Welcome to Streamlit!
8
 
9
- Edit `/streamlit_app.py` to customize this app to your heart's desire :heart:.
10
- If you have any questions, checkout our [documentation](https://docs.streamlit.io) and [community
11
- forums](https://discuss.streamlit.io).
12
 
13
- In the meantime, below is an example of what you can do with just a few lines of code:
14
- """
15
 
16
- num_points = st.slider("Number of points in spiral", 1, 10000, 1100)
17
- num_turns = st.slider("Number of turns in spiral", 1, 300, 31)
18
-
19
- indices = np.linspace(0, 1, num_points)
20
- theta = 2 * np.pi * num_turns * indices
21
- radius = indices
22
-
23
- x = radius * np.cos(theta)
24
- y = radius * np.sin(theta)
25
-
26
- df = pd.DataFrame({
27
- "x": x,
28
- "y": y,
29
- "idx": indices,
30
- "rand": np.random.randn(num_points),
31
- })
32
-
33
- st.altair_chart(alt.Chart(df, height=700, width=700)
34
- .mark_point(filled=True)
35
- .encode(
36
- x=alt.X("x", axis=None),
37
- y=alt.Y("y", axis=None),
38
- color=alt.Color("idx", legend=None, scale=alt.Scale()),
39
- size=alt.Size("rand", legend=None, scale=alt.Scale(range=[1, 150])),
40
- ))
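For reference, the removed demo's spiral construction can be reproduced standalone (NumPy only, plotting omitted; the parameter values are the slider defaults from the code above):

```python
import numpy as np

# Slider defaults from the removed demo
num_points = 1100
num_turns = 31

# Evenly spaced parameter values in [0, 1]
indices = np.linspace(0, 1, num_points)
# Angle grows with the turn count; radius grows linearly -> Archimedean spiral
theta = 2 * np.pi * num_turns * indices
radius = indices

x = radius * np.cos(theta)
y = radius * np.sin(theta)
```

The final point lies on the unit circle, since `radius[-1] == 1`.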
 
1
+ """
2
+ Enhanced Streamlit GUI for Sign Language Detector
3
+ Modern, Professional File Processing Interface
4
+ """
5
+
6
+ import streamlit as st
7
+ import cv2
8
  import numpy as np
9
+ import os
10
+ import sys
11
+ import time
12
+ import threading
13
+ from PIL import Image
14
+ import tempfile
15
+ from typing import Optional, List, Dict, Any
16
+ import plotly.express as px
17
+ import plotly.graph_objects as go
18
+ from plotly.subplots import make_subplots
19
  import pandas as pd
20
+ import base64
21
+ from io import BytesIO
22
+ import json
23
 
24
+ # Add the project root to the path so the `from src.*` imports below resolve
25
+ sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
26
 
27
+ from src.file_handler import FileHandler
28
+ from src.output_handler import OutputHandler
29
+ from src.hand_detector import HandDetector
30
+ from src.gesture_extractor import GestureExtractor
31
+ from src.openai_classifier import SignLanguageClassifier
32
+ from src.visualization_utils import HandLandmarkVisualizer, create_processing_timeline
33
+ from src.export_utils import ResultExporter
34
+
35
+
36
+ # Page configuration
37
+ st.set_page_config(
38
+ page_title="Sign Language Detector Pro",
39
+ page_icon="🤟",
40
+ layout="wide",
41
+ initial_sidebar_state="expanded"
42
+ )
43
+
44
+ # Comprehensive CSS for optimal text visibility and professional design
45
+ st.markdown("""
46
+ <style>
47
+ /* Enhanced theme colors with WCAG AA compliant contrast ratios */
48
+ :root {
49
+ --primary-color: #2E86AB;
50
+ --secondary-color: #A23B72;
51
+ --accent-color: #F18F01;
52
+ --background-color: #F8F9FA;
53
+ --text-color: #2C3E50;
54
+ --text-light: #FFFFFF;
55
+ --text-dark: #1A1A1A;
56
+ --text-medium: #495057;
57
+ --text-muted: #6C757D;
58
+ --success-color: #27AE60;
59
+ --warning-color: #F39C12;
60
+ --error-color: #E74C3C;
61
+ --info-color: #17A2B8;
62
+ --border-color: #E1E5E9;
63
+ --card-background: #FFFFFF;
64
+ --sidebar-background: #F8F9FA;
65
+ --hover-background: #E9ECEF;
66
+ }
67
+
68
+ /* Hide Streamlit branding */
69
+ #MainMenu {visibility: hidden;}
70
+ footer {visibility: hidden;}
71
+ header {visibility: hidden;}
72
+
73
+ /* Global text color improvements - Foundation */
74
+ .stApp {
75
+ color: var(--text-dark) !important;
76
+ background-color: var(--background-color) !important;
77
+ }
78
+
79
+ /* All headings - Comprehensive coverage */
80
+ h1, h2, h3, h4, h5, h6 {
81
+ color: var(--text-dark) !important;
82
+ font-weight: 600 !important;
83
+ }
84
+
85
+ /* All paragraph text */
86
+ p {
87
+ color: var(--text-color) !important;
88
+ }
89
+
90
+ /* All span elements */
91
+ span {
92
+ color: var(--text-dark) !important;
93
+ }
94
+
95
+ /* All div text content */
96
+ div {
97
+ color: var(--text-dark) !important;
98
+ }
99
+
100
+ /* Custom header */
101
+ .main-header {
102
+ background: linear-gradient(135deg, var(--primary-color), var(--secondary-color));
103
+ padding: 2rem;
104
+ border-radius: 15px;
105
+ margin-bottom: 2rem;
106
+ color: var(--text-light);
107
+ text-align: center;
108
+ box-shadow: 0 8px 32px rgba(0,0,0,0.1);
109
+ }
110
+
111
+ .main-header h1 {
112
+ font-size: 3rem;
113
+ font-weight: 700;
114
+ margin-bottom: 0.5rem;
115
+ color: var(--text-light) !important;
116
+ text-shadow: 2px 2px 4px rgba(0,0,0,0.3);
117
+ }
118
+
119
+ .main-header p {
120
+ font-size: 1.2rem;
121
+ opacity: 0.9;
122
+ margin: 0;
123
+ color: var(--text-light) !important;
124
+ }
125
+
126
+ /* File upload area with improved text visibility */
127
+ .upload-area {
128
+ border: 3px dashed var(--primary-color);
129
+ border-radius: 15px;
130
+ padding: 3rem;
131
+ text-align: center;
132
+ background: var(--card-background);
133
+ margin: 2rem 0;
134
+ transition: all 0.3s ease;
135
+ box-shadow: 0 4px 15px rgba(0,0,0,0.1);
136
+ color: var(--text-dark) !important;
137
+ }
138
+
139
+ .upload-area h3 {
140
+ color: var(--text-dark) !important;
141
+ font-weight: 600;
142
+ margin-bottom: 1rem;
143
+ }
144
+
145
+ .upload-area p {
146
+ color: var(--text-color) !important;
147
+ margin: 0.5rem 0;
148
+ }
149
+
150
+ .upload-area:hover {
151
+ border-color: var(--accent-color);
152
+ transform: translateY(-2px);
153
+ box-shadow: 0 8px 25px rgba(0,0,0,0.15);
154
+ }
155
+
156
+ /* Result cards with improved text contrast */
157
+ .result-card {
158
+ background: var(--card-background);
159
+ border-radius: 15px;
160
+ padding: 1.5rem;
161
+ margin: 1rem 0;
162
+ box-shadow: 0 4px 20px rgba(0,0,0,0.1);
163
+ border-left: 5px solid var(--primary-color);
164
+ transition: all 0.3s ease;
165
+ color: var(--text-dark) !important;
166
+ }
167
+
168
+ .result-card h3 {
169
+ color: var(--text-dark) !important;
170
+ font-weight: 600;
171
+ margin-bottom: 1rem;
172
+ }
173
+
174
+ .result-card p {
175
+ color: var(--text-color) !important;
176
+ }
177
+
178
+ .result-card:hover {
179
+ transform: translateY(-3px);
180
+ box-shadow: 0 8px 30px rgba(0,0,0,0.15);
181
+ }
182
+
183
+ /* Metrics styling with improved text visibility */
184
+ .metric-card {
185
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
186
+ color: var(--text-light) !important;
187
+ padding: 1.5rem;
188
+ border-radius: 15px;
189
+ text-align: center;
190
+ margin: 0.5rem;
191
+ box-shadow: 0 4px 15px rgba(0,0,0,0.1);
192
+ }
193
+
194
+ .metric-value {
195
+ font-size: 2.5rem;
196
+ font-weight: bold;
197
+ margin-bottom: 0.5rem;
198
+ color: var(--text-light) !important;
199
+ }
200
+
201
+ .metric-label {
202
+ font-size: 1rem;
203
+ opacity: 0.9;
204
+ color: var(--text-light) !important;
205
+ }
206
+
207
+ /* Progress bar styling */
208
+ .stProgress > div > div > div > div {
209
+ background: linear-gradient(90deg, var(--primary-color), var(--accent-color));
210
+ border-radius: 10px;
211
+ }
212
+
213
+ /* Comprehensive Button styling - All states covered */
214
+ .stButton > button {
215
+ background: linear-gradient(135deg, var(--primary-color), var(--secondary-color)) !important;
216
+ color: var(--text-light) !important;
217
+ border: none !important;
218
+ border-radius: 10px !important;
219
+ padding: 0.75rem 2rem !important;
220
+ font-weight: 600 !important;
221
+ font-size: 1rem !important;
222
+ transition: all 0.3s ease !important;
223
+ box-shadow: 0 4px 15px rgba(0,0,0,0.2) !important;
224
+ text-shadow: 1px 1px 2px rgba(0,0,0,0.3) !important;
225
+ }
226
+
227
+ .stButton > button:hover {
228
+ transform: translateY(-2px) !important;
229
+ box-shadow: 0 6px 20px rgba(0,0,0,0.3) !important;
230
+ color: var(--text-light) !important;
231
+ background: linear-gradient(135deg, #3A9BC1, #B8457A) !important;
232
+ }
233
+
234
+ .stButton > button:focus {
235
+ color: var(--text-light) !important;
236
+ box-shadow: 0 6px 20px rgba(0,0,0,0.3) !important;
237
+ outline: 2px solid var(--accent-color) !important;
238
+ outline-offset: 2px !important;
239
+ }
240
+
241
+ .stButton > button:active {
242
+ color: var(--text-light) !important;
243
+ transform: translateY(0px) !important;
244
+ box-shadow: 0 2px 10px rgba(0,0,0,0.2) !important;
245
+ }
246
+
247
+ /* Download button specific styling */
248
+ .stDownloadButton > button {
249
+ background: linear-gradient(135deg, var(--success-color), #2ECC71) !important;
250
+ color: var(--text-light) !important;
251
+ border: none !important;
252
+ border-radius: 10px !important;
253
+ padding: 0.75rem 2rem !important;
254
+ font-weight: 600 !important;
255
+ font-size: 1rem !important;
256
+ text-shadow: 1px 1px 2px rgba(0,0,0,0.3) !important;
257
+ }
258
+
259
+ .stDownloadButton > button:hover {
260
+ color: var(--text-light) !important;
261
+ background: linear-gradient(135deg, #2ECC71, #27AE60) !important;
262
+ }
263
+
264
+ /* Comprehensive Sidebar styling - All elements covered */
265
+ .css-1d391kg, .css-1lcbmhc, .css-17eq0hr, .css-1y4p8pa {
266
+ background: var(--sidebar-background) !important;
267
+ color: var(--text-dark) !important;
268
+ }
269
+
270
+ /* Sidebar text - All variations */
271
+ .css-1d391kg .stMarkdown, .css-1lcbmhc .stMarkdown, .css-17eq0hr .stMarkdown {
272
+ color: var(--text-dark) !important;
273
+ }
274
+
275
+ .css-1d391kg h1, .css-1d391kg h2, .css-1d391kg h3, .css-1d391kg h4, .css-1d391kg h5, .css-1d391kg h6 {
276
+ color: var(--text-dark) !important;
277
+ font-weight: 600 !important;
278
+ }
279
+
280
+ .css-1lcbmhc h1, .css-1lcbmhc h2, .css-1lcbmhc h3, .css-1lcbmhc h4, .css-1lcbmhc h5, .css-1lcbmhc h6 {
281
+ color: var(--text-dark) !important;
282
+ font-weight: 600 !important;
283
+ }
284
+
285
+ /* Sidebar labels and text */
286
+ .css-1d391kg label, .css-1lcbmhc label {
287
+ color: var(--text-dark) !important;
288
+ font-weight: 500 !important;
289
+ }
290
+
291
+ .css-1d391kg p, .css-1lcbmhc p {
292
+ color: var(--text-color) !important;
293
+ }
294
+
295
+ .css-1d391kg span, .css-1lcbmhc span {
296
+ color: var(--text-dark) !important;
297
+ }
298
+
299
+ /* Sidebar widget labels */
300
+ .css-1d391kg .stSelectbox label, .css-1d391kg .stSlider label, .css-1d391kg .stCheckbox label {
301
+ color: var(--text-dark) !important;
302
+ font-weight: 500 !important;
303
+ }
304
+
305
+ /* Success/Error messages with proper contrast */
306
+ .success-message {
307
+ background: var(--success-color) !important;
308
+ color: var(--text-light) !important;
309
+ padding: 1rem !important;
310
+ border-radius: 10px !important;
311
+ margin: 1rem 0 !important;
312
+ }
313
+
314
+ .error-message {
315
+ background: var(--error-color) !important;
316
+ color: var(--text-light) !important;
317
+ padding: 1rem !important;
318
+ border-radius: 10px !important;
319
+ margin: 1rem 0 !important;
320
+ }
321
+
322
+ /* Streamlit native message styling improvements */
323
+ .stAlert {
324
+ color: var(--text-dark) !important;
325
+ }
326
+
327
+ .stSuccess {
328
+ background-color: rgba(39, 174, 96, 0.1) !important;
329
+ color: var(--text-dark) !important;
330
+ border: 1px solid var(--success-color) !important;
331
+ }
332
+
333
+ .stError {
334
+ background-color: rgba(231, 76, 60, 0.1) !important;
335
+ color: var(--text-dark) !important;
336
+ border: 1px solid var(--error-color) !important;
337
+ }
338
+
339
+ .stWarning {
340
+ background-color: rgba(243, 156, 18, 0.1) !important;
341
+ color: var(--text-dark) !important;
342
+ border: 1px solid var(--warning-color) !important;
343
+ }
344
+
345
+ .stInfo {
346
+ background-color: rgba(46, 134, 171, 0.1) !important;
347
+ color: var(--text-dark) !important;
348
+ border: 1px solid var(--primary-color) !important;
349
+ }
350
+
351
+ /* Loading animation */
352
+ .loading-spinner {
353
+ display: inline-block;
354
+ width: 40px;
355
+ height: 40px;
356
+ border: 4px solid #f3f3f3;
357
+ border-top: 4px solid var(--primary-color);
358
+ border-radius: 50%;
359
+ animation: spin 1s linear infinite;
360
+ }
361
+
362
+ @keyframes spin {
363
+ 0% { transform: rotate(0deg); }
364
+ 100% { transform: rotate(360deg); }
365
+ }
366
+
367
+ /* Comprehensive Form and Input styling - All form elements */
368
+ .stTextInput > div > div > input {
369
+ color: var(--text-dark) !important;
370
+ background-color: var(--card-background) !important;
371
+ border: 1px solid var(--border-color) !important;
372
+ border-radius: 8px !important;
373
+ padding: 0.75rem !important;
374
+ font-size: 1rem !important;
375
+ }
376
+
377
+ .stTextInput > div > div > input::placeholder {
378
+ color: var(--text-muted) !important;
379
+ opacity: 0.7 !important;
380
+ }
381
+
382
+ .stTextInput > div > div > input:focus {
383
+ border-color: var(--primary-color) !important;
384
+ box-shadow: 0 0 0 2px rgba(46, 134, 171, 0.2) !important;
385
+ color: var(--text-dark) !important;
386
+ }
387
+
388
+ /* Text area styling */
389
+ .stTextArea > div > div > textarea {
390
+ color: var(--text-dark) !important;
391
+ background-color: var(--card-background) !important;
392
+ border: 1px solid var(--border-color) !important;
393
+ border-radius: 8px !important;
394
+ }
395
+
396
+ .stTextArea > div > div > textarea::placeholder {
397
+ color: var(--text-muted) !important;
398
+ opacity: 0.7 !important;
399
+ }
400
+
401
+ /* Select box styling */
402
+ .stSelectbox > div > div > div {
403
+ color: var(--text-dark) !important;
404
+ background-color: var(--card-background) !important;
405
+ border: 1px solid var(--border-color) !important;
406
+ border-radius: 8px !important;
407
+ }
408
+
409
+ .stSelectbox > div > div > div > div {
410
+ color: var(--text-dark) !important;
411
+ }
412
+
413
+ /* Multi-select styling */
414
+ .stMultiSelect > div > div > div {
415
+ color: var(--text-dark) !important;
416
+ background-color: var(--card-background) !important;
417
+ }
418
+
419
+ /* Number input styling */
420
+ .stNumberInput > div > div > input {
421
+ color: var(--text-dark) !important;
422
+ background-color: var(--card-background) !important;
423
+ border: 1px solid var(--border-color) !important;
424
+ }
425
+
426
+ /* Slider styling */
427
+ .stSlider > div > div > div {
428
+ color: var(--text-dark) !important;
429
+ }
430
+
431
+ .stSlider > div > div > div > div {
432
+ color: var(--text-dark) !important;
433
+ }
434
+
435
+ /* Checkbox and radio styling */
436
+ .stCheckbox > label {
437
+ color: var(--text-dark) !important;
438
+ font-weight: 500 !important;
439
+ }
440
+
441
+ .stRadio > label {
442
+ color: var(--text-dark) !important;
443
+ font-weight: 500 !important;
444
+ }
445
+
446
+ /* Form labels - comprehensive coverage */
447
+ label {
448
+ color: var(--text-dark) !important;
449
+ font-weight: 500 !important;
450
+ font-size: 1rem !important;
451
+ }
452
+
453
+ /* Comprehensive Tab styling - All states and variations */
454
+ .stTabs [data-baseweb="tab-list"] {
455
+ gap: 8px !important;
456
+ border-bottom: 2px solid var(--border-color) !important;
457
+ }
458
+
459
+ .stTabs [data-baseweb="tab"] {
460
+ color: var(--text-dark) !important;
461
+ background-color: var(--card-background) !important;
462
+ border: 1px solid var(--border-color) !important;
463
+ border-radius: 8px 8px 0 0 !important;
464
+ padding: 12px 20px !important;
465
+ font-weight: 500 !important;
466
+ font-size: 1rem !important;
467
+ transition: all 0.3s ease !important;
468
+ margin-bottom: -2px !important;
469
+ }
470
+
471
+ .stTabs [data-baseweb="tab"]:hover {
472
+ background-color: var(--hover-background) !important;
473
+ color: var(--text-dark) !important;
474
+ border-color: var(--primary-color) !important;
475
+ }
476
+
477
+ .stTabs [aria-selected="true"] {
478
+ background-color: var(--primary-color) !important;
479
+ color: var(--text-light) !important;
480
+ border-color: var(--primary-color) !important;
481
+ font-weight: 600 !important;
482
+ text-shadow: 1px 1px 2px rgba(0,0,0,0.2) !important;
483
+ }
484
+
485
+ /* Tab content styling */
486
+ .stTabs [data-baseweb="tab-panel"] {
487
+ color: var(--text-dark) !important;
488
+ background-color: var(--card-background) !important;
489
+ padding: 1.5rem !important;
490
+ border-radius: 0 8px 8px 8px !important;
491
+ border: 1px solid var(--border-color) !important;
492
+ border-top: none !important;
493
+ }
494
+
495
+ /* Comprehensive Expander styling */
496
+ .streamlit-expanderHeader {
497
+ color: var(--text-dark) !important;
498
+ background-color: var(--card-background) !important;
499
+ border: 1px solid var(--border-color) !important;
500
+ border-radius: 8px !important;
501
+ padding: 1rem !important;
502
+ font-weight: 600 !important;
503
+ font-size: 1.1rem !important;
504
+ }
505
+
506
+ .streamlit-expanderHeader:hover {
507
+ background-color: var(--hover-background) !important;
508
+ color: var(--text-dark) !important;
509
+ }
510
+
511
+ .streamlit-expanderContent {
512
+ color: var(--text-dark) !important;
513
+ background-color: var(--card-background) !important;
514
+ border: 1px solid var(--border-color) !important;
515
+ border-top: none !important;
516
+ border-radius: 0 0 8px 8px !important;
517
+ padding: 1.5rem !important;
518
+ }
519
+
520
+ /* Comprehensive Metric styling - All metric components */
521
+ .metric-container {
522
+ background-color: var(--card-background) !important;
523
+ color: var(--text-dark) !important;
524
+ padding: 1rem !important;
525
+ border-radius: 8px !important;
526
+ border: 1px solid var(--border-color) !important;
527
+ }
528
+
529
+ /* Streamlit native metrics */
530
+ .css-1xarl3l {
531
+ color: var(--text-dark) !important;
532
+ }
533
+
534
+ .css-1xarl3l > div {
535
+ color: var(--text-dark) !important;
536
+ }
537
+
538
+ /* Metric values and labels */
539
+ [data-testid="metric-container"] {
540
+ background-color: var(--card-background) !important;
541
+ border: 1px solid var(--border-color) !important;
542
+ border-radius: 8px !important;
543
+ padding: 1rem !important;
544
+ }
545
+
546
+ [data-testid="metric-container"] > div {
547
+ color: var(--text-dark) !important;
548
+ }
549
+
550
+ [data-testid="metric-container"] label {
551
+ color: var(--text-medium) !important;
552
+ font-weight: 500 !important;
553
+ }
554
+
555
+ /* Progress indicators and loading text */
556
+ .stProgress > div > div > div {
557
+ color: var(--text-dark) !important;
558
+ }
559
+
560
+ .stSpinner > div {
561
+ color: var(--text-dark) !important;
562
+ }
563
+
564
+ /* File uploader styling */
565
+ .stFileUploader > div > div > div {
566
+ color: var(--text-dark) !important;
567
+ background-color: var(--card-background) !important;
568
+ border: 2px dashed var(--border-color) !important;
569
+ border-radius: 8px !important;
570
+ }
571
+
572
+ .stFileUploader > div > div > div:hover {
573
+ border-color: var(--primary-color) !important;
574
+ }
575
+
576
+ .stFileUploader label {
577
+ color: var(--text-dark) !important;
578
+ font-weight: 500 !important;
579
+ }
580
+
581
+ /* Data frame and table styling */
582
+ .stDataFrame {
583
+ color: var(--text-dark) !important;
584
+ }
585
+
586
+ .stDataFrame table {
587
+ color: var(--text-dark) !important;
588
+ background-color: var(--card-background) !important;
589
+ }
590
+
591
+ .stDataFrame th {
592
+ color: var(--text-dark) !important;
593
+ background-color: var(--hover-background) !important;
594
+ font-weight: 600 !important;
595
+ }
596
+
597
+ .stDataFrame td {
598
+ color: var(--text-dark) !important;
599
+ }
600
+
601
+ /* Code blocks and preformatted text */
602
+ .stCode {
603
+ color: var(--text-dark) !important;
604
+ background-color: var(--hover-background) !important;
605
+ }
606
+
607
+ code {
608
+ color: var(--text-dark) !important;
609
+ background-color: var(--hover-background) !important;
610
+ padding: 0.2rem 0.4rem !important;
611
+ border-radius: 4px !important;
612
+ }
613
+
614
+ pre {
615
+ color: var(--text-dark) !important;
616
+ background-color: var(--hover-background) !important;
617
+ }
618
+
619
+ /* JSON and data display */
620
+ .stJson {
621
+ color: var(--text-dark) !important;
622
+ background-color: var(--card-background) !important;
623
+ }
624
+
625
+ /* Caption and help text */
626
+ .caption {
627
+ color: var(--text-muted) !important;
628
+ font-size: 0.9rem !important;
629
+ }
630
+
631
+ .help {
632
+ color: var(--text-muted) !important;
633
+ font-size: 0.85rem !important;
634
+ }
635
+
636
+ /* Tooltip styling */
637
+ .stTooltipIcon {
638
+ color: var(--text-medium) !important;
639
+ }
640
+
641
+ /* Link styling */
642
+ a {
643
+ color: var(--primary-color) !important;
644
+ text-decoration: none !important;
645
+ }
646
+
647
+ a:hover {
648
+ color: var(--secondary-color) !important;
649
+ text-decoration: underline !important;
650
+ }
651
+
652
+ /* Status indicators */
653
+ .status-success {
654
+ color: var(--success-color) !important;
655
+ font-weight: 600 !important;
656
+ }
657
+
658
+ .status-error {
659
+ color: var(--error-color) !important;
660
+ font-weight: 600 !important;
661
+ }
662
+
663
+ .status-warning {
664
+ color: var(--warning-color) !important;
665
+ font-weight: 600 !important;
666
+ }
667
+
668
+ .status-info {
669
+ color: var(--info-color) !important;
670
+ font-weight: 600 !important;
671
+ }
672
+
673
+ /* Responsive design */
674
+ @media (max-width: 768px) {
675
+ .main-header h1 {
676
+ font-size: 2rem !important;
677
+ color: var(--text-light) !important;
678
+ }
679
+ .main-header p {
680
+ font-size: 1rem !important;
681
+ color: var(--text-light) !important;
682
+ }
683
+ .upload-area {
684
+ padding: 2rem !important;
685
+ }
686
+
687
+ /* Mobile text adjustments */
688
+ h1, h2, h3, h4, h5, h6 {
689
+ font-size: calc(1rem + 0.5vw) !important;
690
+ }
691
+
692
+ p, span, div {
693
+ font-size: 0.9rem !important;
694
+ }
695
+
696
+ label {
697
+ font-size: 0.9rem !important;
698
+ }
699
+ }
700
+
701
+ /* High contrast mode support */
702
+ @media (prefers-contrast: high) {
703
+ :root {
704
+ --text-dark: #000000;
705
+ --text-light: #FFFFFF;
706
+ --border-color: #000000;
707
+ }
708
+ }
709
+
710
+ /* Dark mode support (if needed) */
711
+ @media (prefers-color-scheme: dark) {
712
+ .stApp {
713
+ background-color: #1E1E1E !important;
714
+ }
715
+
716
+ :root {
717
+ --background-color: #1E1E1E;
718
+ --card-background: #2D2D2D;
719
+ --text-dark: #FFFFFF;
720
+ --text-color: #E0E0E0;
721
+ --border-color: #404040;
722
+ --hover-background: #404040;
723
+ }
724
+ }
725
+ </style>
726
+ """, unsafe_allow_html=True)
727
+
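The stylesheet above claims WCAG AA compliant contrast ratios. That claim can be spot-checked for `--text-color` (#2C3E50) on `--background-color` (#F8F9FA) with the WCAG 2.x relative-luminance formula — a standalone sketch, not part of the app:

```python
# WCAG 2.x relative luminance and contrast ratio for two hex colours.
def srgb_channel(c8):
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * srgb_channel(r) + 0.7152 * srgb_channel(g) + 0.0722 * srgb_channel(b)

def contrast(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# --text-color on --background-color from the stylesheet
ratio = contrast("#2C3E50", "#F8F9FA")
print(round(ratio, 1))  # comfortably above the 4.5:1 AA threshold for body text
```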
728
+ # Initialize session state
729
+ if 'file_handler' not in st.session_state:
730
+ st.session_state.file_handler = None
731
+ if 'output_handler' not in st.session_state:
732
+ st.session_state.output_handler = None
733
+ if 'detections' not in st.session_state:
734
+ st.session_state.detections = []
735
+ if 'transcript' not in st.session_state:
736
+ st.session_state.transcript = []
737
+ if 'processing_results' not in st.session_state:
738
+ st.session_state.processing_results = []
739
+ if 'current_file' not in st.session_state:
740
+ st.session_state.current_file = None
741
+ if 'visualizer' not in st.session_state:
742
+ st.session_state.visualizer = HandLandmarkVisualizer()
743
+ if 'exporter' not in st.session_state:
744
+ st.session_state.exporter = ResultExporter()
745
+
746
+
747
+ def initialize_components():
748
+ """Initialize the application components."""
749
+ if st.session_state.file_handler is None:
750
+ st.session_state.file_handler = FileHandler()
751
+
752
+ if st.session_state.output_handler is None:
753
+ st.session_state.output_handler = OutputHandler(
754
+ enable_speech=False, # Disable speech in web interface
755
+ save_transcript=False # Handle transcript in session state
756
+ )
757
+
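The lazy initialization in `initialize_components` can be sketched without Streamlit; here a plain dict stands in for `st.session_state`, and `DummyHandler` is a hypothetical stand-in for `FileHandler`/`OutputHandler`, not a project API:

```python
# Plain-dict stand-in for st.session_state
session_state = {"file_handler": None, "output_handler": None}

class DummyHandler:
    """Hypothetical stand-in for FileHandler / OutputHandler."""
    def __init__(self, **options):
        self.options = options

def initialize_components(state):
    # Construct each component only once; later calls keep the cached instance,
    # mirroring how Streamlit reruns the script on every interaction.
    if state["file_handler"] is None:
        state["file_handler"] = DummyHandler()
    if state["output_handler"] is None:
        state["output_handler"] = DummyHandler(enable_speech=False, save_transcript=False)

initialize_components(session_state)
first = session_state["file_handler"]
initialize_components(session_state)  # second call leaves the instances alone
print(first is session_state["file_handler"])  # True
```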
758
+ def create_header():
759
+ """Create the main header with modern styling."""
760
+ st.markdown("""
761
+ <div class="main-header">
762
+ <h1>🤟 Sign Language Detector Pro</h1>
763
+ <p>Advanced AI-Powered Gesture Recognition & Analysis</p>
764
+ </div>
765
+ """, unsafe_allow_html=True)
766
+
767
+ def create_file_upload_area():
768
+ """Create an enhanced file upload area with drag-and-drop styling."""
769
+ st.markdown("""
770
+ <div class="upload-area">
771
+ <h3 style="color: #2C3E50 !important; font-weight: 600; margin-bottom: 1rem;">πŸ“ Upload Your Files</h3>
772
+ <p style="color: #2C3E50 !important; font-size: 1.1rem; margin-bottom: 0.5rem;">Drag and drop your images or videos here, or click to browse</p>
773
+ <p style="color: #666666 !important; font-size: 0.9rem; margin: 0;"><small>Supported formats: JPG, PNG, BMP, MP4, AVI, MOV, MKV</small></p>
774
+ </div>
775
+ """, unsafe_allow_html=True)
776
+
777
+ def create_metrics_dashboard(results: List[Dict[str, Any]]):
778
+ """Create a metrics dashboard showing processing statistics."""
779
+ if not results:
780
+ return
781
+
782
+ # Calculate metrics
783
+ total_files = len(results)
784
+ successful_files = sum(1 for r in results if r.get('success', False))
785
+ total_hands = sum(r.get('hands_detected', 0) for r in results if r.get('success', False))
786
+ avg_confidence = 0
787
+
788
+ if successful_files > 0:
789
+ confidences = []
790
+ for result in results:
791
+ if result.get('success') and result.get('detections'):
792
+ for detection in result['detections']:
793
+ if 'confidence' in detection:
794
+ confidences.append(detection['confidence'])
795
+ avg_confidence = np.mean(confidences) if confidences else 0
796
+
797
+ # Display metrics in columns
798
+ col1, col2, col3, col4 = st.columns(4)
799
+
800
+ with col1:
801
+ st.markdown(f"""
802
+ <div class="metric-card">
803
+ <div class="metric-value" style="color: #FFFFFF !important; font-size: 2.5rem; font-weight: bold; margin-bottom: 0.5rem;">{total_files}</div>
804
+ <div class="metric-label" style="color: #FFFFFF !important; font-size: 1rem; opacity: 0.9;">Files Processed</div>
805
+ </div>
806
+ """, unsafe_allow_html=True)
807
+
808
+ with col2:
809
+ st.markdown(f"""
810
+ <div class="metric-card">
811
+ <div class="metric-value" style="color: #FFFFFF !important; font-size: 2.5rem; font-weight: bold; margin-bottom: 0.5rem;">{successful_files}</div>
812
+ <div class="metric-label" style="color: #FFFFFF !important; font-size: 1rem; opacity: 0.9;">Successful</div>
813
+ </div>
814
+ """, unsafe_allow_html=True)
815
+
816
+ with col3:
817
+ st.markdown(f"""
818
+ <div class="metric-card">
819
+ <div class="metric-value" style="color: #FFFFFF !important; font-size: 2.5rem; font-weight: bold; margin-bottom: 0.5rem;">{total_hands}</div>
820
+ <div class="metric-label" style="color: #FFFFFF !important; font-size: 1rem; opacity: 0.9;">Hands Detected</div>
821
+ </div>
822
+ """, unsafe_allow_html=True)
823
+
824
+ with col4:
825
+ st.markdown(f"""
826
+ <div class="metric-card">
827
+ <div class="metric-value" style="color: #FFFFFF !important; font-size: 2.5rem; font-weight: bold; margin-bottom: 0.5rem;">{avg_confidence:.1%}</div>
828
+ <div class="metric-label" style="color: #FFFFFF !important; font-size: 1rem; opacity: 0.9;">Avg Confidence</div>
829
+ </div>
830
+ """, unsafe_allow_html=True)
831
+
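The statistics computed in `create_metrics_dashboard` can be exercised without Streamlit; the result dicts below are made up to match the shapes the function expects:

```python
import numpy as np

# Hypothetical results matching the structure consumed above
results = [
    {"success": True, "hands_detected": 2,
     "detections": [{"confidence": 0.9}, {"confidence": 0.7}]},
    {"success": False},
]

total_files = len(results)
successful_files = sum(1 for r in results if r.get("success", False))
total_hands = sum(r.get("hands_detected", 0) for r in results if r.get("success", False))

# Gather confidences only from successful results that carry detections
confidences = [d["confidence"]
               for r in results if r.get("success") and r.get("detections")
               for d in r["detections"] if "confidence" in d]
avg_confidence = float(np.mean(confidences)) if confidences else 0.0

print(total_files, successful_files, total_hands, avg_confidence)
```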
832
+ def create_confidence_chart(results: List[Dict[str, Any]], chart_key: str = "confidence_chart"):
833
+ """Create a confidence score visualization."""
834
+ confidences = []
835
+ file_names = []
836
+
837
+ for result in results:
838
+ if result.get('success') and result.get('detections'):
839
+ for i, detection in enumerate(result['detections']):
840
+ if 'confidence' in detection:
841
+ confidences.append(detection['confidence'])
842
+ file_name = os.path.basename(result.get('file_path', 'Unknown'))
843
+ file_names.append(f"{file_name} - Hand {i+1}")
844
+
845
+ if confidences:
846
+ df = pd.DataFrame({
847
+ 'File': file_names,
848
+ 'Confidence': confidences
849
+ })
850
+
851
+ fig = px.bar(df, x='File', y='Confidence',
852
+ title='Hand Detection Confidence Scores',
853
+ color='Confidence',
854
+ color_continuous_scale='Viridis')
855
+ fig.update_layout(
856
+ xaxis_tickangle=-45,
857
+ height=400,
858
+ showlegend=False
859
+ )
860
+ st.plotly_chart(fig, use_container_width=True, key=chart_key)
861
+
862
+ def create_gesture_analysis_chart(results: List[Dict[str, Any]], chart_key: str = "gesture_analysis_chart"):
+     """Create gesture analysis visualization."""
+     gesture_data = []
+
+     for result in results:
+         if result.get('success') and result.get('detections'):
+             for detection in result['detections']:
+                 if 'classification' in detection and detection['classification'].get('success'):
+                     classification = detection['classification']
+                     gesture_data.append({
+                         'File': os.path.basename(result.get('file_path', 'Unknown')),
+                         'Hand': detection.get('hand_label', 'Unknown'),
+                         'Letter': classification.get('letter', 'N/A'),
+                         'Word': classification.get('word', 'N/A'),
+                         'Confidence': classification.get('confidence', 0)
+                     })
+
+     if gesture_data:
+         df = pd.DataFrame(gesture_data)
+
+         # Create subplots
+         fig = make_subplots(
+             rows=1, cols=2,
+             subplot_titles=('Letters Detected', 'Classification Confidence'),
+             specs=[[{"type": "pie"}, {"type": "histogram"}]]
+         )
+
+         # Letter distribution pie chart
+         letter_counts = df['Letter'].value_counts()
+         fig.add_trace(
+             go.Pie(labels=letter_counts.index, values=letter_counts.values, name="Letters"),
+             row=1, col=1
+         )
+
+         # Confidence histogram
+         fig.add_trace(
+             go.Histogram(x=df['Confidence'], name="Confidence", nbinsx=10),
+             row=1, col=2
+         )
+
+         fig.update_layout(height=400, showlegend=False)
+         st.plotly_chart(fig, use_container_width=True, key=chart_key)
+
+
+ def setup_ai_api():
+     """Setup AI API key with automatic Gemini configuration."""
+     st.sidebar.markdown("### πŸ”‘ AI API Configuration")
+
+     # Read the default Gemini key from the environment -- never hardcode API keys in source
+     default_gemini_key = os.environ.get('GEMINI_API_KEY', '')
+
+     # AI provider selection
+     ai_provider = st.sidebar.selectbox(
+         "AI Provider",
+         ["Google Gemini (Recommended)", "OpenAI GPT"],
+         index=0,
+         help="Choose your AI provider for sign language classification"
+     )
+
+     use_gemini = "Gemini" in ai_provider
+
+     # Check if user wants to use a custom API key
+     use_custom_key = st.sidebar.checkbox("Use Custom API Key", value=False)
+
+     if use_custom_key:
+         if use_gemini:
+             api_key = st.sidebar.text_input(
+                 "Custom Gemini API Key",
+                 type="password",
+                 help="Enter your custom Google Gemini API key",
+                 placeholder="AIza..."
+             )
+             env_key = 'GEMINI_API_KEY'
+         else:
+             api_key = st.sidebar.text_input(
+                 "Custom OpenAI API Key",
+                 type="password",
+                 help="Enter your custom OpenAI API key",
+                 placeholder="sk-..."
+             )
+             env_key = 'OPENAI_API_KEY'
+
+         if api_key:
+             os.environ[env_key] = api_key
+             st.sidebar.success(f"βœ… Custom {ai_provider.split()[0]} API key configured")
+             return api_key, use_gemini
+         else:
+             st.sidebar.warning("⚠️ Please enter your custom API key")
+             return None, use_gemini
+     else:
+         # Use default keys
+         if use_gemini:
+             os.environ['GEMINI_API_KEY'] = default_gemini_key
+             os.environ['USE_GEMINI'] = 'True'
+             st.sidebar.success("βœ… Gemini API configured automatically")
+             st.sidebar.info("πŸš€ Using Google Gemini for fast, accurate predictions")
+             return default_gemini_key, True
+         else:
+             # OpenAI fallback (will likely fail due to quota)
+             st.sidebar.warning("⚠️ OpenAI quota may be exceeded")
+             st.sidebar.info("πŸ’‘ Recommend using Gemini for better reliability")
+             return None, False
+
+ def create_settings_panel():
+     """Create an advanced settings panel."""
+     st.sidebar.markdown("### βš™οΈ Processing Settings")
+
+     # Detection confidence threshold
+     confidence_threshold = st.sidebar.slider(
+         "Detection Confidence Threshold",
+         min_value=0.1,
+         max_value=1.0,
+         value=0.7,
+         step=0.1,
+         help="Minimum confidence for hand detection"
+     )
+
+     # Maximum hands to detect
+     max_hands = st.sidebar.selectbox(
+         "Maximum Hands to Detect",
+         options=[1, 2, 3, 4],
+         index=1,
+         help="Maximum number of hands to detect per image"
+     )
+
+     # Video frame sampling
+     frame_skip = st.sidebar.slider(
+         "Video Frame Sampling",
+         min_value=1,
+         max_value=30,
+         value=5,
+         help="Process every Nth frame in videos (higher = faster processing)"
+     )
+
+     # Export options
+     st.sidebar.markdown("### πŸ“Š Export Options")
+     export_format = st.sidebar.selectbox(
+         "Export Format",
+         options=["JSON", "CSV", "PDF Report"],
+         help="Choose format for exporting results"
+     )
+
+     return {
+         'confidence_threshold': confidence_threshold,
+         'max_hands': max_hands,
+         'frame_skip': frame_skip,
+         'export_format': export_format
+     }
+
+ def process_uploaded_files(uploaded_files: List, api_key: str, settings: Dict[str, Any], use_gemini: bool = True):
+     """Process multiple uploaded files with progress tracking."""
+     if not uploaded_files:
+         return []
+
+     results = []
+     progress_bar = st.progress(0)
+     status_text = st.empty()
+
+     # Initialize file handler with settings
+     file_handler = FileHandler(
+         frame_skip=settings['frame_skip'],
+         max_frames=100
+     )
+
+     if api_key:
+         file_handler.initialize_classifier(api_key, use_gemini=use_gemini)
+
+     for i, uploaded_file in enumerate(uploaded_files):
+         # Update progress
+         progress = (i + 1) / len(uploaded_files)
+         progress_bar.progress(progress)
+         status_text.text(f"Processing {uploaded_file.name}... ({i+1}/{len(uploaded_files)})")
+
+         # Save uploaded file to a temporary location
+         with tempfile.NamedTemporaryFile(delete=False, suffix=os.path.splitext(uploaded_file.name)[1]) as tmp_file:
+             tmp_file.write(uploaded_file.getvalue())
+             tmp_path = tmp_file.name
+
+         try:
+             # Determine file type and process
+             file_type = file_handler.get_file_type(tmp_path)
+
+             if file_type == 'image':
+                 result = file_handler.process_image(tmp_path)
+             elif file_type == 'video':
+                 result = file_handler.process_video(tmp_path)
+             else:
+                 result = {'success': False, 'error': 'Unsupported file format'}
+
+             # Add filename to result
+             result['filename'] = uploaded_file.name
+             result['file_size'] = len(uploaded_file.getvalue())
+             results.append(result)
+
+         except Exception as e:
+             results.append({
+                 'success': False,
+                 'error': str(e),
+                 'filename': uploaded_file.name,
+                 'file_size': len(uploaded_file.getvalue())
+             })
+
+         finally:
+             # Clean up the temporary file; ignore errors if it is already gone
+             try:
+                 os.unlink(tmp_path)
+             except OSError:
+                 pass
+
+     progress_bar.empty()
+     status_text.empty()
+
+     return results
+
+
+ def create_image_with_landmarks(image_array: np.ndarray, hand_landmarks: List[Dict[str, Any]]) -> Image.Image:
+     """Create an image with hand landmarks overlaid."""
+     # Convert to PIL Image for display
+     if len(image_array.shape) == 3 and image_array.shape[2] == 3:
+         # BGR to RGB conversion
+         image_rgb = cv2.cvtColor(image_array, cv2.COLOR_BGR2RGB)
+     else:
+         image_rgb = image_array
+
+     return Image.fromarray(image_rgb)
+
+ def display_image_results(result: Dict[str, Any]):
+     """Display results for image processing with enhanced UI."""
+     if not result['success']:
+         st.error(f"❌ Error processing {result.get('filename', 'file')}: {result.get('error', 'Unknown error')}")
+         return
+
+     filename = result.get('filename', 'Unknown')
+     file_size = result.get('file_size', 0)
+
+     # Create result card
+     st.markdown(f"""
+     <div class="result-card">
+         <h3 style="color: #2C3E50 !important; font-weight: 600; margin-bottom: 1rem;">πŸ“Έ {filename}</h3>
+         <p style="color: #2C3E50 !important;"><strong>File Size:</strong> {file_size / 1024:.1f} KB | <strong>Hands Detected:</strong> {result['hands_detected']}</p>
+     </div>
+     """, unsafe_allow_html=True)
+
+     if result['hands_detected'] > 0:
+         col1, col2 = st.columns([1, 1])
+
+         with col1:
+             st.subheader("πŸ–ΌοΈ Processed Images")
+
+             # Create tabs for different views
+             img_tab1, img_tab2, img_tab3 = st.tabs(["πŸ” Enhanced", "πŸ“Š Comparison", "🎯 3D View"])
+
+             with img_tab1:
+                 if 'enhanced_image' in result:
+                     enhanced_img = create_image_with_landmarks(result['enhanced_image'], [])
+                     st.image(enhanced_img, caption="Enhanced Hand Landmarks", use_container_width=True)
+                 elif 'annotated_image' in result:
+                     annotated_img = create_image_with_landmarks(result['annotated_image'], [])
+                     st.image(annotated_img, caption="Hand Landmarks Detected", use_container_width=True)
+
+             with img_tab2:
+                 if 'comparison_image' in result:
+                     comparison_img = create_image_with_landmarks(result['comparison_image'], [])
+                     st.image(comparison_img, caption="Before vs After Comparison", use_container_width=True)
+
+             with img_tab3:
+                 # 3D visualization for first detected hand
+                 if result['detections'] and 'landmarks_3d' in result['detections'][0]:
+                     hand_data = {
+                         'label': result['detections'][0]['hand_label'],
+                         'landmarks': result['detections'][0]['landmarks_3d']
+                     }
+
+                     visualizer = st.session_state.visualizer
+                     fig_3d = visualizer.create_3d_hand_plot(hand_data)
+                     st.plotly_chart(fig_3d, use_container_width=True, key="3d_hand_plot")
+                 else:
+                     st.info("3D visualization requires hand landmark data")
+
+         with col2:
+             st.subheader("πŸ” Detection Details")
+
+             for i, detection in enumerate(result['detections']):
+                 with st.expander(f"βœ‹ Hand {i+1}: {detection['hand_label']}", expanded=True):
+                     # Confidence meter
+                     confidence = detection['confidence']
+                     st.metric("Detection Confidence", f"{confidence:.1%}")
+
+                     # Progress bar for confidence
+                     st.progress(confidence)
+
+                     # Gesture description
+                     st.text_area(
+                         "Gesture Description",
+                         detection['gesture_description'],
+                         height=100,
+                         disabled=True
+                     )
+
+                     # Classification results
+                     if 'classification' in detection and detection['classification']['success']:
+                         classification = detection['classification']
+
+                         col_a, col_b = st.columns(2)
+                         with col_a:
+                             if classification.get('letter'):
+                                 st.success(f"πŸ”€ **Letter:** {classification['letter']}")
+                         with col_b:
+                             if classification.get('word'):
+                                 st.success(f"πŸ“ **Word:** {classification['word']}")
+
+                         if classification.get('confidence'):
+                             st.info(f"🎯 **AI Confidence:** {classification['confidence']:.1%}")
+
+ def display_video_results(result: Dict[str, Any]):
+     """Display results for video processing with enhanced UI."""
+     if not result['success']:
+         st.error(f"❌ Error processing {result.get('filename', 'file')}: {result.get('error', 'Unknown error')}")
+         return
+
+     filename = result.get('filename', 'Unknown')
+     file_size = result.get('file_size', 0)
+     video_props = result['video_properties']
+
+     # Create result card
+     st.markdown(f"""
+     <div class="result-card">
+         <h3 style="color: #2C3E50 !important; font-weight: 600; margin-bottom: 1rem;">πŸŽ₯ {filename}</h3>
+         <p style="color: #2C3E50 !important;"><strong>File Size:</strong> {file_size / (1024*1024):.1f} MB |
+         <strong>Duration:</strong> {video_props['duration']:.1f}s |
+         <strong>Total Hands:</strong> {result['total_hands_detected']}</p>
+     </div>
+     """, unsafe_allow_html=True)
+
+     # Video metrics
+     col1, col2, col3, col4 = st.columns(4)
+     with col1:
+         st.metric("Total Frames", video_props['total_frames'])
+     with col2:
+         st.metric("Processed Frames", video_props['processed_frames'])
+     with col3:
+         st.metric("FPS", f"{video_props['fps']:.1f}")
+     with col4:
+         st.metric("Hands Found", result['total_hands_detected'])
+
+     # Frame-by-frame analysis
+     if result['frame_detections']:
+         st.subheader("πŸ“Š Frame-by-Frame Analysis")
+
+         # Enhanced timeline visualization
+         timeline_fig = create_processing_timeline(result['frame_detections'])
+         st.plotly_chart(timeline_fig, use_container_width=True, key="video_timeline")
+
+         # Additional analysis charts
+         col_chart1, col_chart2 = st.columns(2)
+
+         with col_chart1:
+             # Confidence over time
+             confidence_data = []
+             for frame in result['frame_detections']:
+                 for detection in frame['detections']:
+                     if 'confidence' in detection:
+                         confidence_data.append({
+                             'Timestamp': frame['timestamp'],
+                             'Confidence': detection['confidence'],
+                             'Hand': detection['hand_label']
+                         })
+
+             if confidence_data:
+                 conf_df = pd.DataFrame(confidence_data)
+                 fig_conf = px.scatter(conf_df, x='Timestamp', y='Confidence',
+                                       color='Hand', title='Detection Confidence Over Time')
+                 st.plotly_chart(fig_conf, use_container_width=True, key="confidence_over_time")
+
+         with col_chart2:
+             # Hand distribution
+             hand_counts = {}
+             for frame in result['frame_detections']:
+                 for detection in frame['detections']:
+                     hand_label = detection.get('hand_label', 'Unknown')
+                     hand_counts[hand_label] = hand_counts.get(hand_label, 0) + 1
+
+             if hand_counts:
+                 fig_pie = px.pie(values=list(hand_counts.values()),
+                                  names=list(hand_counts.keys()),
+                                  title='Hand Distribution')
+                 st.plotly_chart(fig_pie, use_container_width=True, key="hand_distribution")
+
+         # Detailed frame results
+         st.subheader("πŸ” Detailed Frame Results")
+
+         # Show the first 10 frames with detections
+         frames_to_show = [f for f in result['frame_detections'] if f['hands_detected'] > 0][:10]
+
+         for frame_data in frames_to_show:
+             with st.expander(f"⏱️ Frame {frame_data['frame_number']} (t={frame_data['timestamp']:.1f}s)"):
+                 for i, detection in enumerate(frame_data['detections']):
+                     st.write(f"**βœ‹ {detection['hand_label']} Hand {i+1}**")
+
+                     if 'classification' in detection and detection['classification']['success']:
+                         classification = detection['classification']
+
+                         col_a, col_b, col_c = st.columns(3)
+                         with col_a:
+                             if classification.get('letter'):
+                                 st.info(f"Letter: **{classification['letter']}**")
+                         with col_b:
+                             if classification.get('word'):
+                                 st.info(f"Word: **{classification['word']}**")
+                         with col_c:
+                             if classification.get('confidence'):
+                                 st.info(f"Confidence: **{classification['confidence']:.1%}**")
+
+     # Sequence analysis
+     if result.get('sequence_analysis') and result['sequence_analysis'].get('success'):
+         st.subheader("πŸ”— Sequence Analysis")
+         sequence = result['sequence_analysis']
+
+         col1, col2 = st.columns(2)
+         with col1:
+             if sequence.get('word'):
+                 st.success(f"🎯 **Detected Word:** {sequence['word']}")
+             if sequence.get('sentence'):
+                 st.success(f"πŸ“ **Detected Sentence:** {sequence['sentence']}")
+
+         with col2:
+             if sequence.get('individual_letters'):
+                 letters_str = ' β†’ '.join(sequence['individual_letters'])
+                 st.info(f"πŸ”€ **Letter Sequence:** {letters_str}")
+
+             if sequence.get('confidence'):
+                 st.metric("Sequence Confidence", f"{sequence['confidence']:.1%}")
+
+ def export_results(results: List[Dict[str, Any]], format_type: str):
+     """Enhanced export functionality with multiple formats."""
+     if not results:
+         st.warning("No results to export")
+         return
+
+     exporter = st.session_state.exporter
+     timestamp = int(time.time())
+
+     col1, col2, col3 = st.columns(3)
+
+     with col1:
+         if st.button("πŸ“„ Export JSON", use_container_width=True):
+             with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as tmp_file:
+                 if exporter.export_to_json(results, tmp_file.name, include_metadata=True):
+                     with open(tmp_file.name, 'r') as f:
+                         json_data = f.read()
+
+                     st.download_button(
+                         label="πŸ“₯ Download JSON",
+                         data=json_data,
+                         file_name=f"sign_language_results_{timestamp}.json",
+                         mime="application/json",
+                         use_container_width=True
+                     )
+                     os.unlink(tmp_file.name)
+                 else:
+                     st.error("Failed to export JSON")
+
+     with col2:
+         if st.button("πŸ“Š Export CSV", use_container_width=True):
+             with tempfile.NamedTemporaryFile(mode='w', suffix='.csv', delete=False) as tmp_file:
+                 if exporter.export_to_csv(results, tmp_file.name):
+                     with open(tmp_file.name, 'r') as f:
+                         csv_data = f.read()
+
+                     st.download_button(
+                         label="πŸ“₯ Download CSV",
+                         data=csv_data,
+                         file_name=f"sign_language_results_{timestamp}.csv",
+                         mime="text/csv",
+                         use_container_width=True
+                     )
+                     os.unlink(tmp_file.name)
+                 else:
+                     st.error("Failed to export CSV")
+
+     with col3:
+         if st.button("πŸ“‹ Export PDF Report", use_container_width=True):
+             with tempfile.NamedTemporaryFile(suffix='.pdf', delete=False) as tmp_file:
+                 if exporter.export_to_pdf(results, tmp_file.name, include_images=False):
+                     with open(tmp_file.name, 'rb') as f:
+                         pdf_data = f.read()
+
+                     st.download_button(
+                         label="πŸ“₯ Download PDF",
+                         data=pdf_data,
+                         file_name=f"sign_language_report_{timestamp}.pdf",
+                         mime="application/pdf",
+                         use_container_width=True
+                     )
+                     os.unlink(tmp_file.name)
+                 else:
+                     st.error("Failed to export PDF")
+
+     # Summary report
+     if st.button("πŸ“ˆ Generate Summary Report", use_container_width=True):
+         summary = exporter.create_summary_report(results)
+
+         st.markdown("### πŸ“Š Processing Summary")
+
+         col_a, col_b, col_c, col_d = st.columns(4)
+         with col_a:
+             st.metric("Total Files", summary['total_files'])
+         with col_b:
+             st.metric("Successful", summary['successful_files'])
+         with col_c:
+             st.metric("Failed", summary['failed_files'])
+         with col_d:
+             st.metric("Hands Detected", summary['total_hands_detected'])
+
+         if summary['detected_letters']:
+             st.markdown("#### πŸ”€ Most Common Letters")
+             letters_df = pd.DataFrame(list(summary['detected_letters'].items()),
+                                       columns=['Letter', 'Count'])
+             letters_df = letters_df.sort_values('Count', ascending=False)
+
+             fig = px.bar(letters_df.head(10), x='Letter', y='Count',
+                          title='Top 10 Detected Letters')
+             st.plotly_chart(fig, use_container_width=True, key="top_letters_chart")
+
+         if summary['detected_words']:
+             st.markdown("#### πŸ“ Most Common Words")
+             words_df = pd.DataFrame(list(summary['detected_words'].items()),
+                                     columns=['Word', 'Count'])
+             words_df = words_df.sort_values('Count', ascending=False)
+
+             fig = px.bar(words_df.head(10), x='Word', y='Count',
+                          title='Top 10 Detected Words')
+             st.plotly_chart(fig, use_container_width=True, key="top_words_chart")
+
+
+ def get_single_prediction(result: Dict[str, Any]) -> str:
+     """
+     Extract a single, clear prediction from the result.
+
+     Args:
+         result: Processing result dictionary
+
+     Returns:
+         Single prediction string (letter, word, or "No prediction")
+     """
+     if not result.get('success') or not result.get('detections'):
+         return "No prediction"
+
+     # Collect all predictions from all detected hands
+     letters = []
+     words = []
+
+     for detection in result['detections']:
+         if 'classification' in detection and detection['classification'].get('success'):
+             classification = detection['classification']
+
+             # Get letter prediction
+             if classification.get('letter') and classification['letter'] != 'N/A':
+                 letters.append(classification['letter'])
+
+             # Get word prediction
+             if classification.get('word') and classification['word'] != 'N/A':
+                 words.append(classification['word'])
+
+     # Priority: Word > Letter > No prediction
+     if words:
+         # Return the first word if multiple were detected
+         return words[0].upper()
+     elif letters:
+         # Return the first letter if multiple were detected
+         return letters[0].upper()
+     else:
+         return "No prediction"
+
+ def display_single_prediction_card(result: Dict[str, Any]):
+     """Display a single, clear prediction card for the result."""
+     filename = result.get('filename') or os.path.basename(result.get('file_path', 'Unknown'))
+     prediction = get_single_prediction(result)
+
+     # Determine card color based on prediction
+     if prediction == "No prediction":
+         card_color = "#E74C3C"  # Red for no prediction
+         icon = "❌"
+         confidence_text = ""
+     else:
+         card_color = "#27AE60"  # Green for successful prediction
+         icon = "βœ…"
+
+         # Get confidence if available
+         confidence = 0.0
+         for detection in result.get('detections', []):
+             if 'classification' in detection and detection['classification'].get('success'):
+                 conf = detection['classification'].get('confidence', 0)
+                 if conf > confidence:
+                     confidence = conf
+
+         confidence_text = f" (Confidence: {confidence:.1%})" if confidence > 0 else ""
+
+     # Display the prediction card
+     st.markdown(f"""
+     <div style="
+         background: linear-gradient(135deg, {card_color}, {card_color}dd);
+         color: white;
+         padding: 2rem;
+         border-radius: 15px;
+         text-align: center;
+         margin: 1rem 0;
+         box-shadow: 0 8px 32px rgba(0,0,0,0.2);
+     ">
+         <h2 style="color: white !important; margin-bottom: 1rem; font-size: 2.5rem;">
+             {icon} {prediction}
+         </h2>
+         <p style="color: white !important; font-size: 1.2rem; margin: 0;">
+             πŸ“ {filename}{confidence_text}
+         </p>
+     </div>
+     """, unsafe_allow_html=True)
+
+ def display_results(results: List[Dict[str, Any]]):
+     """Display processing results with enhanced UI."""
+     if not results:
+         st.info("No results to display")
+         return
+
+     # Display single predictions first (most important)
+     st.markdown("## 🎯 **SIGN LANGUAGE PREDICTIONS**")
+
+     # Create a summary table of all predictions
+     prediction_data = []
+     for result in results:
+         filename = result.get('filename') or os.path.basename(result.get('file_path', 'Unknown'))
+         prediction = get_single_prediction(result)
+
+         # Get confidence
+         confidence = 0.0
+         for detection in result.get('detections', []):
+             if 'classification' in detection and detection['classification'].get('success'):
+                 conf = detection['classification'].get('confidence', 0)
+                 if conf > confidence:
+                     confidence = conf
+
+         prediction_data.append({
+             'File': filename,
+             'Prediction': prediction,
+             'Confidence': f"{confidence:.1%}" if confidence > 0 else "N/A"
+         })
+
+     if prediction_data:
+         # Display as a clean table (pandas is already imported at module level)
+         df = pd.DataFrame(prediction_data)
+         st.dataframe(df, use_container_width=True, hide_index=True)
+
+     st.markdown("### Individual Prediction Cards")
+
+     # Show single prediction cards for each file
+     for result in results:
+         display_single_prediction_card(result)
+
+     # Add separator
+     st.markdown("---")
+
+     # Create metrics dashboard
+     create_metrics_dashboard(results)
+
+     # Create visualizations
+     col1, col2 = st.columns(2)
+     with col1:
+         create_confidence_chart(results, "main_confidence_chart")
+     with col2:
+         create_gesture_analysis_chart(results, "main_gesture_analysis_chart")
+
+     # Display individual results
+     st.markdown("## πŸ“‹ Detailed Analysis")
+
+     for result in results:
+         if result.get('file_type') == 'image':
+             display_image_results(result)
+         elif result.get('file_type') == 'video':
+             display_video_results(result)
+         else:
+             st.error(f"❌ Failed to process {result.get('filename', 'unknown file')}: {result.get('error', 'Unknown error')}")
+
+
+ def display_quick_summary(results: List[Dict[str, Any]]):
+     """Display a quick summary of predictions at the top."""
+     if not results:
+         return
+
+     predictions = []
+     for result in results:
+         filename = result.get('filename') or os.path.basename(result.get('file_path', 'Unknown'))
+         prediction = get_single_prediction(result)
+         if prediction != "No prediction":
+             predictions.append(f"**{filename}** β†’ **{prediction}**")
+
+     if predictions:
+         st.success("🎯 **Quick Results:** " + " | ".join(predictions))
+     else:
+         st.warning("⚠️ No clear predictions found in uploaded files")
+
+ def main():
+     """Enhanced Streamlit application with modern UI."""
+     # Create header
+     create_header()
+
+     # Show quick summary if results exist
+     if st.session_state.processing_results:
+         display_quick_summary(st.session_state.processing_results)
+
+     # Initialize components
+     initialize_components()
+
+     # Sidebar configuration
+     st.sidebar.markdown("# πŸŽ›οΈ Control Panel")
+
+     # AI API setup
+     api_key, use_gemini = setup_ai_api()
+
+     # Settings panel
+     settings = create_settings_panel()
+
+     # Main content area
+     tab1, tab2, tab3 = st.tabs(["πŸ“ File Processing", "πŸ“Š Analytics", "ℹ️ About"])
+
+     with tab1:
+         st.markdown("## πŸ“ File Processing")
+
+         # Enhanced file upload area
+         create_file_upload_area()
+
+         # Multiple file uploader
+         uploaded_files = st.file_uploader(
+             "Choose files",
+             type=['jpg', 'jpeg', 'png', 'bmp', 'mp4', 'avi', 'mov', 'mkv'],
+             accept_multiple_files=True,
+             help="Upload multiple images or videos for batch processing"
+         )
+
+         if uploaded_files:
+             st.success(f"βœ… {len(uploaded_files)} file(s) uploaded successfully")
+
+             # Show file details
+             with st.expander("πŸ“‹ File Details", expanded=True):
+                 for file in uploaded_files:
+                     file_size = len(file.getvalue())
+                     st.write(f"β€’ **{file.name}** ({file_size / 1024:.1f} KB)")
+
+             # Process button
+             col1, col2, col3 = st.columns([1, 2, 1])
+             with col2:
+                 if st.button("πŸš€ Process All Files", type="primary", use_container_width=True):
+                     if not api_key:
+                         st.error("❌ Please provide an AI API key to analyze gestures")
+                     else:
+                         with st.spinner("πŸ”„ Processing files..."):
+                             results = process_uploaded_files(uploaded_files, api_key, settings, use_gemini)
+                             st.session_state.processing_results = results
+
+                             if results:
+                                 st.success(f"βœ… Processing complete! {len(results)} files processed.")
+                                 display_results(results)
+
+                                 # Export options
+                                 st.markdown("### πŸ“€ Export Results")
+                                 col_a, col_b = st.columns(2)
+                                 with col_a:
+                                     export_results(results, settings['export_format'])
+                                 with col_b:
+                                     if st.button("πŸ—‘οΈ Clear Results"):
+                                         st.session_state.processing_results = []
+                                         st.rerun()
+
+         # Display previous results if available
+         elif st.session_state.processing_results:
+             st.markdown("### πŸ“Š Previous Results")
+             display_results(st.session_state.processing_results)
+
+             # Export options
+             st.markdown("### πŸ“€ Export Results")
+             col_a, col_b = st.columns(2)
+             with col_a:
+                 export_results(st.session_state.processing_results, settings['export_format'])
+             with col_b:
+                 if st.button("πŸ—‘οΈ Clear Results"):
+                     st.session_state.processing_results = []
+                     st.rerun()
+
+     with tab2:
+         st.markdown("## πŸ“Š Analytics Dashboard")
+
+         if st.session_state.processing_results:
+             results = st.session_state.processing_results
+
+             # Overall statistics
+             st.markdown("### πŸ“ˆ Overall Statistics")
+             create_metrics_dashboard(results)
+
+             # Detailed charts
+             st.markdown("### πŸ“Š Detailed Analysis")
+             col1, col2 = st.columns(2)
+
+             with col1:
+                 create_confidence_chart(results, "analytics_confidence_chart")
+
+             with col2:
+                 create_gesture_analysis_chart(results, "analytics_gesture_analysis_chart")
+
+             # File processing timeline
+             st.markdown("### ⏱️ Processing Timeline")
+             if results:
+                 timeline_data = []
+                 for i, result in enumerate(results):
+                     timeline_data.append({
+                         'File': result.get('filename', f'File {i+1}'),
+                         'Success': result.get('success', False),
+                         'Hands': result.get('hands_detected', 0) if result.get('success') else 0,
+                         'Size (KB)': result.get('file_size', 0) / 1024
+                     })
+
+                 df = pd.DataFrame(timeline_data)
+
+                 fig = px.scatter(df, x='Size (KB)', y='Hands',
+                                  color='Success', size='Hands',
+                                  hover_data=['File'],
+                                  title='File Size vs Hands Detected')
+                 st.plotly_chart(fig, use_container_width=True, key="file_size_scatter")
+         else:
+             st.info("πŸ“Š No data available. Process some files to see analytics.")
+
+     with tab3:
+         st.markdown("## ℹ️ About Sign Language Detector Pro")
+
+         col1, col2 = st.columns(2)
+
+         with col1:
+             st.markdown("""
+             ### 🎯 Features
+             - **Advanced File Processing**: Batch analysis of images and videos
+             - **AI-Powered Classification**: Gemini/OpenAI API integration for accurate gesture recognition
+             - **Interactive Analytics**: Real-time charts and metrics
+             - **Multiple Export Formats**: JSON, CSV, and PDF reports
+             - **Professional UI**: Modern, responsive design
+             - **Comprehensive Analysis**: Hand landmarks, gesture features, and confidence scores
+
+             ### πŸ”§ How It Works
+             1. **Upload Files**: Drag and drop or select multiple files
+             2. **Hand Detection**: MediaPipe detects 21 hand landmarks
+             3. **Feature Extraction**: Advanced gesture analysis
+             4. **AI Classification**: The configured AI model interprets gestures
+             5. **Results Display**: Interactive charts and detailed analysis
+             """)
+
+         with col2:
+             st.markdown("""
+             ### πŸ“‹ Supported Formats
+             **Images:**
+             - JPG, JPEG, PNG, BMP
+
+             **Videos:**
+             - MP4, AVI, MOV, MKV
+
+             ### βš™οΈ System Requirements
+             - Python 3.8+
+             - Gemini or OpenAI API key
+             - Modern web browser
+
+             ### πŸš€ Performance
+             - Batch processing support
+             - Optimized video frame sampling
+             - Real-time progress tracking
+             - Memory-efficient processing
+             """)
+
+         # System information
+         st.markdown("### πŸ’» System Information")
+         info_col1, info_col2 = st.columns(2)
+
+         with info_col1:
+             st.info(f"**Python:** {sys.version.split()[0]}")
+             st.info(f"**OpenCV:** {cv2.__version__}")
+
+         with info_col2:
+             st.info(f"**Streamlit:** {st.__version__}")
+             api_status = "βœ… Configured" if api_key else "❌ Not configured"
+             st.info(f"**AI API:** {api_status}")
+
+     # Enhanced footer with improved text visibility
+     st.markdown("---")
+     st.markdown("""
+     <div style="text-align: center; padding: 2rem; background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+                 border-radius: 15px; color: #FFFFFF !important; margin-top: 2rem; box-shadow: 0 4px 15px rgba(0,0,0,0.1);">
+         <h4 style="color: #FFFFFF !important; margin-bottom: 1rem; font-weight: 600;">🀟 Sign Language Detector Pro</h4>
+         <p style="color: #FFFFFF !important; margin-bottom: 0.5rem; font-size: 1.1rem;">Empowering communication through AI-powered gesture recognition</p>
+         <p style="color: #FFFFFF !important; margin: 0; opacity: 0.9;"><small>Built with ❀️ using MediaPipe, OpenAI, and Streamlit</small></p>
+     </div>
+     """, unsafe_allow_html=True)




+ if __name__ == "__main__":
+     main()