RuslanKain committed
Commit e969f9c · 1 Parent(s): dd5ec97

Update Gradio version requirement and enhance theme handling in GradioApp

Files changed (3)
  1. app.py +10 -4
  2. app_oop_gradio.py +0 -914
  3. requirements.txt +1 -1
app.py CHANGED

```diff
@@ -531,12 +531,18 @@ class GradioApp:
         own responsibility. The final result is a complete interface.
         """
 
-        with gr.Blocks(
-            title="CISC 121 - OOP Sorting Visualizer",
-            theme=gr.themes.Soft(
+        # Create theme - handle older Gradio versions gracefully
+        try:
+            theme = gr.themes.Soft(
                 primary_hue="blue",
-                secondary_hue="red",
+                secondary_hue="gray",
             )
+        except Exception:
+            theme = "soft"  # Fallback for older versions
+
+        with gr.Blocks(
+            title="CISC 121 - OOP Sorting Visualizer",
+            theme=theme
         ) as demo:
 
             # Header
```
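The try/except pattern this commit introduces can be exercised on its own. A minimal sketch, assuming a helper function (the `make_theme` name is illustrative, not part of the commit):

```python
def make_theme():
    """Return a Soft theme object when the Gradio theme builders exist,
    otherwise fall back to the built-in named theme string."""
    try:
        import gradio as gr
        # gr.themes.Soft accepts hue names such as "blue" and "gray"
        return gr.themes.Soft(primary_hue="blue", secondary_hue="gray")
    except Exception:
        # Older Gradio (or Gradio missing entirely): named themes still work
        return "soft"

theme = make_theme()
```

Either return value can be passed straight to `gr.Blocks(theme=...)`, which is what makes the fallback transparent to the rest of `create_ui`.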
app_oop_gradio.py DELETED

```diff
@@ -1,914 +0,0 @@
-"""
-╔══════════════════════════════════════════════════════════════════════════════╗
-║                                                                              ║
-║              🎓 CISC 121 - OOP Sorting & Searching Visualizer                ║
-║                                                                              ║
-║          Queen's University - Introduction to Computing Science I            ║
-║                                                                              ║
-║     This application demonstrates Object-Oriented Programming concepts       ║
-║     through interactive visualization of sorting and searching algorithms.   ║
-║                                                                              ║
-║     HOW TO RUN:  python app_oop_gradio.py                                    ║
-║                                                                              ║
-╚══════════════════════════════════════════════════════════════════════════════╝
-
-📚 PHASE 5: Gradio UI
-
-This is the final phase - creating a user-friendly web interface that:
-1. Allows capturing/uploading gesture images
-2. Displays the image list with gesture recognition
-3. Lets users run sorting/searching algorithms
-4. Visualizes each step of the algorithm
-
-The UI demonstrates COMPOSITION - the GradioApp class composes:
-- ImageList (data management)
-- SortingAlgorithm / SearchAlgorithm (algorithm execution)
-- Visualizer (step-by-step display)
-"""
-
-# ==============================================================================
-# IMPORTS
-# ==============================================================================
-
-import gradio as gr
-from PIL import Image
-import os
-from typing import List, Tuple, Optional
-
-# Import our OOP package
-from oop_sorting_teaching import (
-    # Models
-    GestureRanking,
-    GestureImage,
-    ImageList,
-    StepType,
-    Step,
-    # Sorting
-    BubbleSort,
-    MergeSort,
-    QuickSort,
-    PivotStrategy,
-    PartitionScheme,
-    # Searching
-    LinearSearch,
-    BinarySearch,
-    # Visualization
-    Visualizer,
-    VisualizationConfig,
-    RendererFactory,
-)
-
-# Try to import transformers for gesture recognition
-try:
-    from transformers import pipeline
-    CLASSIFIER_AVAILABLE = True
-except ImportError:
-    CLASSIFIER_AVAILABLE = False
-    print("⚠️ transformers not installed. Using manual gesture selection.")
-
-
-# ==============================================================================
-# CONFIGURATION
-# ==============================================================================
-
-MODEL_NAME = "dima806/hand_gestures_image_detection"
-HF_TOKEN = os.environ.get("HF_TOKEN", None)
-
-APP_TITLE = "## 🎓 CISC 121 - OOP Sorting & Searching Visualizer"
-APP_DESCRIPTION = """
-**Learn Object-Oriented Programming through Algorithm Visualization!**
-
-This app demonstrates key OOP concepts:
-- 📦 **Classes & Objects**: GestureImage, ImageList, Algorithms
-- 🎭 **Inheritance**: All sorting algorithms inherit from SortingAlgorithm
-- 🔄 **Polymorphism**: Swap between algorithms seamlessly
-- 🏭 **Factory Pattern**: RendererFactory creates the right visualizer
-
-**How to use:**
-1. **Add images** using the buttons below (capture or manual)
-2. **View your list** of gesture images
-3. **Run an algorithm** to see step-by-step visualization
-4. **Navigate steps** to understand how the algorithm works
-"""
-
-
-# ==============================================================================
-# GRADIO APP CLASS
-# ==============================================================================
-
-class GradioApp:
-    """
-    📚 CONCEPT: Composition
-
-    The GradioApp class COMPOSES (contains) other objects:
-    - ImageList for managing captured images
-    - Visualizer for displaying algorithm steps
-    - Classifier for gesture recognition (if available)
-
-    This is the Controller in MVC pattern - it coordinates
-    between user interface (View) and data/logic (Model).
-    """
-
-    def __init__(self):
-        """Initialize the application state."""
-        self.image_list = ImageList()
-        self.visualizer = Visualizer(VisualizationConfig(
-            show_statistics=True,
-            show_legend=True,
-            image_size=60
-        ))
-        self._capture_count = 0
-
-        # Initialize classifier if available
-        self.classifier = None
-        if CLASSIFIER_AVAILABLE:
-            try:
-                self.classifier = pipeline(
-                    "image-classification",
-                    model=MODEL_NAME,
-                    token=HF_TOKEN
-                )
-                print(f"✅ Loaded model: {MODEL_NAME}")
-            except Exception as e:
-                print(f"⚠️ Could not load model: {e}")
-
-    # -------------------------------------------------------------------------
-    # Image Management Methods
-    # -------------------------------------------------------------------------
-
-    def add_manual_gesture(self, gesture_name: str) -> Tuple[str, str]:
-        """
-        Add a gesture image manually (without camera).
-
-        Returns:
-            Tuple of (image_list_html, status_message)
-        """
-        if not gesture_name:
-            return self._render_image_list(), "⚠️ Please select a gesture"
-
-        self._capture_count += 1
-        self.image_list.add_new(gesture_name)
-
-        return (
-            self._render_image_list(),
-            f"✅ Added {GestureRanking.get_emoji(gesture_name)} {gesture_name} (#{self._capture_count})"
-        )
-
-    def add_from_image(self, image: Image.Image) -> Tuple[str, str]:
-        """
-        Add a gesture from an uploaded/captured image.
-        Uses AI classification if available, otherwise prompts for manual selection.
-        """
-        if image is None:
-            return self._render_image_list(), "⚠️ No image provided"
-
-        if self.classifier:
-            try:
-                # Classify the image
-                results = self.classifier(image)
-                if results:
-                    top_result = results[0]
-                    gesture_name = top_result['label'].lower()
-                    confidence = top_result['score']
-
-                    self._capture_count += 1
-                    img = GestureImage.create_from_prediction(
-                        gesture_name=gesture_name,
-                        capture_id=self._capture_count,
-                        image=image,
-                        confidence=confidence
-                    )
-                    self.image_list._save_state()  # Save before modifying
-                    self.image_list._images.append(img)
-
-                    return (
-                        self._render_image_list(),
-                        f"✅ Detected: {img.emoji} {gesture_name} ({confidence:.1%} confidence)"
-                    )
-            except Exception as e:
-                return self._render_image_list(), f"⚠️ Classification error: {e}"
-
-        return self._render_image_list(), "⚠️ No classifier available. Use manual gesture selection."
-
-    def remove_image(self, index: int) -> Tuple[str, str]:
-        """Remove an image at the given index."""
-        if 0 <= index < len(self.image_list):
-            removed = self.image_list[index]
-            self.image_list.remove(index)
-            return self._render_image_list(), f"✅ Removed {removed}"
-        return self._render_image_list(), "⚠️ Invalid index"
-
-    def shuffle_images(self) -> Tuple[str, str]:
-        """Shuffle the image list."""
-        self.image_list.shuffle()
-        return self._render_image_list(), "🔀 Shuffled!"
-
-    def clear_images(self) -> Tuple[str, str]:
-        """Clear all images."""
-        count = len(self.image_list)
-        self.image_list.clear()
-        self._capture_count = 0
-        self.visualizer.reset()
-        return self._render_image_list(), f"🗑️ Cleared {count} images"
-
-    def undo_action(self) -> Tuple[str, str]:
-        """Undo the last action."""
-        if self.image_list.undo():
-            return self._render_image_list(), "↩️ Undone!"
-        return self._render_image_list(), "⚠️ Nothing to undo"
-
-    def add_sample_data(self) -> Tuple[str, str]:
-        """Add sample data for testing."""
-        gestures = ['fist', 'peace', 'like', 'peace', 'ok', 'fist']
-        for g in gestures:
-            self._capture_count += 1
-            self.image_list.add_new(g)
-        return self._render_image_list(), f"✅ Added {len(gestures)} sample gestures"
-
-    def add_instability_demo(self) -> Tuple[str, str]:
-        """
-        Add data specifically designed to demonstrate Quick Sort instability.
-
-        📚 EDUCATIONAL PURPOSE:
-        This creates a scenario where Quick Sort will reorder equal elements,
-        demonstrating that it's an UNSTABLE sorting algorithm.
-
-        Setup: [✌️₁] [✌️₂] [✌️₃] [✊₄]
-        After Quick Sort: The peace signs may be reordered (e.g., ₂,₃,₁)
-        After Bubble/Merge Sort: Order preserved (₁,₂,₃)
-        """
-        self.clear_images()
-        # Three peace signs followed by a lower-ranked fist
-        demo_gestures = ['peace', 'peace', 'peace', 'fist']
-        for g in demo_gestures:
-            self._capture_count += 1
-            self.image_list.add_new(g)
-
-        return (
-            self._render_image_list(),
-            "🎓 Instability Demo: [✌️₁][✌️₂][✌️₃][✊₄]\n"
-            "Try Quick Sort vs Bubble Sort - watch the subscript order!"
-        )
-
-    def add_worst_case_demo(self) -> Tuple[str, str]:
-        """
-        Add already-sorted data to demonstrate worst-case for Quick Sort.
-
-        📚 EDUCATIONAL PURPOSE:
-        When data is already sorted and we use First Pivot strategy,
-        Quick Sort degrades to O(n²) - its worst case!
-        """
-        self.clear_images()
-        # Sorted order: fist(1) < peace(2) < like(3) < ok(4) < call(5)
-        sorted_gestures = ['fist', 'peace', 'like', 'ok', 'call']
-        for g in sorted_gestures:
-            self._capture_count += 1
-            self.image_list.add_new(g)
-
-        return (
-            self._render_image_list(),
-            "🎓 Worst-Case Demo: Already sorted data!\n"
-            "Quick Sort with First Pivot → O(n²)\n"
-            "Try Median-of-3 or Random pivot to see the difference."
-        )
-
-    def add_binary_search_demo(self) -> Tuple[str, str]:
-        """
-        Add sorted data for binary search demonstration.
-
-        📚 EDUCATIONAL PURPOSE:
-        Binary search requires sorted data. This preset shows
-        how O(log n) is much faster than O(n) linear search.
-        """
-        self.clear_images()
-        # Create larger sorted dataset for more dramatic comparison
-        gestures = ['fist', 'fist', 'peace', 'peace', 'like', 'like',
-                    'ok', 'ok', 'call', 'call', 'palm', 'palm']
-        for g in gestures:
-            self._capture_count += 1
-            self.image_list.add_new(g)
-
-        return (
-            self._render_image_list(),
-            "🎓 Search Demo: 12 sorted elements\n"
-            "Linear Search: up to 12 comparisons\n"
-            "Binary Search: at most 4 comparisons (log₂12 ≈ 3.6)"
-        )
-
-    # -------------------------------------------------------------------------
-    # Algorithm Execution Methods
-    # -------------------------------------------------------------------------
-
-    def run_sort(self, algorithm_name: str, pivot_strategy: str = "first",
-                 partition_scheme: str = "2-way") -> Tuple[str, str, str]:
-        """
-        Run a sorting algorithm on the image list.
-
-        Returns:
-            Tuple of (visualization_html, image_list_html, status_message)
-        """
-        if len(self.image_list) < 2:
-            return (
-                self.visualizer.render_current(),
-                self._render_image_list(),
-                "⚠️ Need at least 2 images to sort"
-            )
-
-        # Create the algorithm instance
-        if algorithm_name == "Bubble Sort":
-            algo = BubbleSort()
-        elif algorithm_name == "Merge Sort":
-            algo = MergeSort()
-        elif algorithm_name == "Quick Sort":
-            # Map string to enum
-            pivot_map = {
-                "first": PivotStrategy.FIRST,
-                "last": PivotStrategy.LAST,
-                "median": PivotStrategy.MEDIAN_OF_THREE,
-                "random": PivotStrategy.RANDOM,
-            }
-            partition_map = {
-                "2-way": PartitionScheme.TWO_WAY,
-                "3-way": PartitionScheme.THREE_WAY,
-            }
-            algo = QuickSort(
-                pivot_strategy=pivot_map.get(pivot_strategy, PivotStrategy.FIRST),
-                partition_scheme=partition_map.get(partition_scheme, PartitionScheme.TWO_WAY)
-            )
-        else:
-            return (
-                self.visualizer.render_current(),
-                self._render_image_list(),
-                f"⚠️ Unknown algorithm: {algorithm_name}"
-            )
-
-        # Get data copy and run algorithm
-        data = list(self.image_list)
-        sorted_data, steps = algo.run_full(data)
-
-        # Load into visualizer
-        self.visualizer.load_steps(steps, sorted_data, algo.name)
-
-        # Update the image list to sorted order
-        self.image_list._save_state()  # Save before modifying
-        self.image_list._images = list(sorted_data)
-
-        return (
-            self.visualizer.render_current(),
-            self._render_image_list(),
-            f"✅ {algo.name}: {len(steps)} steps"
-        )
-
-    def run_search(self, algorithm_name: str, target_index: int) -> Tuple[str, str]:
-        """
-        Run a search algorithm.
-
-        Args:
-            algorithm_name: "Linear Search" or "Binary Search"
-            target_index: Index of the target element to search for
-
-        Returns:
-            Tuple of (visualization_html, status_message)
-        """
-        if len(self.image_list) < 1:
-            return self.visualizer.render_current(), "⚠️ Need at least 1 image to search"
-
-        if not (0 <= target_index < len(self.image_list)):
-            return self.visualizer.render_current(), "⚠️ Invalid target index"
-
-        data = list(self.image_list)
-        target = data[target_index]
-
-        # For binary search, we need sorted data
-        if algorithm_name == "Binary Search":
-            if not self.image_list.is_sorted():
-                return (
-                    self.visualizer.render_current(),
-                    "⚠️ Binary Search requires sorted data! Run a sort first."
-                )
-            algo = BinarySearch(variant="iterative")
-        else:
-            algo = LinearSearch()
-
-        # Run the search
-        result_index, steps = algo.run_full(data, target)
-
-        # Load into visualizer
-        self.visualizer.load_steps(steps, data, algo.name)
-
-        if result_index is not None:
-            status = f"✅ {algo.name}: Found {target} at index {result_index}"
-        else:
-            status = f"❌ {algo.name}: {target} not found"
-
-        return self.visualizer.render_current(), status
-
-    # -------------------------------------------------------------------------
-    # Visualization Navigation Methods
-    # -------------------------------------------------------------------------
-
-    def viz_next(self) -> str:
-        """Go to next visualization step."""
-        return self.visualizer.next_step()
-
-    def viz_prev(self) -> str:
-        """Go to previous visualization step."""
-        return self.visualizer.prev_step()
-
-    def viz_start(self) -> str:
-        """Go to first step."""
-        return self.visualizer.go_to_start()
-
-    def viz_end(self) -> str:
-        """Go to last step."""
-        return self.visualizer.go_to_end()
-
-    def viz_goto(self, step: int) -> str:
-        """Go to a specific step."""
-        return self.visualizer.go_to_step(int(step) - 1)  # Convert to 0-based
-
-    # -------------------------------------------------------------------------
-    # Rendering Methods
-    # -------------------------------------------------------------------------
-
-    def _render_image_list(self) -> str:
-        """Render the current image list as HTML."""
-        if len(self.image_list) == 0:
-            return """
-            <div style="
-                text-align: center;
-                padding: 40px;
-                color: #666;
-                background: #f8f9fa;
-                border-radius: 12px;
-                border: 2px dashed #ddd;
-            ">
-                <div style="font-size: 48px; margin-bottom: 15px;">📷</div>
-                <h3 style="margin: 0 0 10px 0;">No Images Yet</h3>
-                <p style="margin: 0;">Add gestures using the buttons above!</p>
-            </div>
-            """
-
-        # Build image cards
-        cards = []
-        for i, img in enumerate(self.image_list):
-            card = f"""
-            <div style="
-                display: inline-flex;
-                flex-direction: column;
-                align-items: center;
-                margin: 6px;
-                padding: 12px;
-                border-radius: 10px;
-                background: white;
-                border: 2px solid #ddd;
-                min-width: 70px;
-                box-shadow: 0 2px 4px rgba(0,0,0,0.1);
-            ">
-                <div style="font-size: 32px; margin-bottom: 4px;">{img.emoji}</div>
-                <div style="font-size: 11px; color: #666;">₍{img.capture_id}₎</div>
-                <div style="font-size: 10px; color: #999;">rank {img.rank}</div>
-                <div style="font-size: 9px; color: #aaa; margin-top: 4px;">[{i}]</div>
-            </div>
-            """
-            cards.append(card)
-
-        # Analysis
-        analysis = self.image_list.get_analysis()
-        is_sorted = "✅ Sorted" if self.image_list.is_sorted() else "❌ Not Sorted"
-
-        return f"""
-        <div style="
-            background: linear-gradient(135deg, #002D62 0%, #9B2335 100%);
-            color: white;
-            padding: 15px;
-            border-radius: 12px 12px 0 0;
-        ">
-            <div style="display: flex; justify-content: space-between; align-items: center;">
-                <strong>Image List ({len(self.image_list)} items)</strong>
-                <span>{is_sorted}</span>
-            </div>
-        </div>
-        <div style="
-            background: #f8f9fa;
-            padding: 15px;
-            border-radius: 0 0 12px 12px;
-            border: 1px solid #ddd;
-            border-top: none;
-        ">
-            <div style="
-                display: flex;
-                flex-wrap: wrap;
-                justify-content: center;
-                gap: 4px;
-            ">
-                {''.join(cards)}
-            </div>
-            <div style="
-                margin-top: 15px;
-                padding-top: 10px;
-                border-top: 1px solid #ddd;
-                font-size: 12px;
-                color: #666;
-                text-align: center;
-            ">
-                {analysis}
-            </div>
-        </div>
-        """
-
-    # -------------------------------------------------------------------------
-    # Create Gradio UI
-    # -------------------------------------------------------------------------
-
-    def create_ui(self) -> gr.Blocks:
-        """
-        Create the Gradio interface.
-
-        📚 CONCEPT: Builder Pattern (light version)
-
-        We build up the UI component by component, each with its
-        own responsibility. The final result is a complete interface.
-        """
-
-        with gr.Blocks(
-            title="CISC 121 - OOP Sorting Visualizer",
-            theme=gr.themes.Soft(
-                primary_hue="blue",
-                secondary_hue="red",
-            )
-        ) as demo:
-
-            # Header
-            gr.Markdown(APP_TITLE)
-            gr.Markdown(APP_DESCRIPTION)
-
-            with gr.Tabs():
-                # ============================================================
-                # TAB 1: Image Management
-                # ============================================================
-                with gr.TabItem("📷 Capture & Manage"):
-                    with gr.Row():
-                        # Left column: Add images
-                        with gr.Column(scale=1):
-                            gr.Markdown("### Add Gestures")
-
-                            # Manual gesture selection
-                            gesture_dropdown = gr.Dropdown(
-                                choices=GestureRanking.get_all_gestures(),
-                                label="Select Gesture",
-                                info="Choose a gesture to add"
-                            )
-                            add_btn = gr.Button("➕ Add Gesture", variant="primary")
-
-                            gr.Markdown("---")
-
-                            # Image upload
-                            image_input = gr.Image(
-                                label="Upload Image",
-                                type="pil",
-                                sources=["upload", "webcam"]
-                            )
-                            classify_btn = gr.Button("🔍 Classify & Add")
-
-                            gr.Markdown("---")
-
-                            # Quick actions
-                            with gr.Row():
-                                sample_btn = gr.Button("📝 Add Samples")
-                                shuffle_btn = gr.Button("🔀 Shuffle")
-                            with gr.Row():
-                                undo_btn = gr.Button("↩️ Undo")
-                                clear_btn = gr.Button("🗑️ Clear", variant="stop")
-
-                            gr.Markdown("---")
-
-                            # Educational demos
-                            gr.Markdown("### 🎓 Educational Demos")
-                            instability_btn = gr.Button(
-                                "⚠️ Instability Demo",
-                                variant="secondary"
-                            )
-                            worst_case_btn = gr.Button(
-                                "📉 Worst-Case Demo",
-                                variant="secondary"
-                            )
-                            search_demo_btn = gr.Button(
-                                "🔍 Search Demo",
-                                variant="secondary"
-                            )
-
-                        # Right column: Image list display
-                        with gr.Column(scale=2):
-                            gr.Markdown("### Current Image List")
-                            image_list_display = gr.HTML(
-                                value=self._render_image_list()
-                            )
-                            status_msg = gr.Textbox(
-                                label="Status",
-                                interactive=False
-                            )
-
-                    # Wire up events for Tab 1
-                    add_btn.click(
-                        fn=self.add_manual_gesture,
-                        inputs=[gesture_dropdown],
-                        outputs=[image_list_display, status_msg]
-                    )
-                    classify_btn.click(
-                        fn=self.add_from_image,
-                        inputs=[image_input],
-                        outputs=[image_list_display, status_msg]
-                    )
-                    sample_btn.click(
-                        fn=self.add_sample_data,
-                        outputs=[image_list_display, status_msg]
-                    )
-                    shuffle_btn.click(
-                        fn=self.shuffle_images,
-                        outputs=[image_list_display, status_msg]
-                    )
-                    undo_btn.click(
-                        fn=self.undo_action,
-                        outputs=[image_list_display, status_msg]
-                    )
-                    clear_btn.click(
-                        fn=self.clear_images,
-                        outputs=[image_list_display, status_msg]
-                    )
-                    instability_btn.click(
-                        fn=self.add_instability_demo,
-                        outputs=[image_list_display, status_msg]
-                    )
-                    worst_case_btn.click(
-                        fn=self.add_worst_case_demo,
-                        outputs=[image_list_display, status_msg]
-                    )
-                    search_demo_btn.click(
-                        fn=self.add_binary_search_demo,
-                        outputs=[image_list_display, status_msg]
-                    )
-
-                # ============================================================
-                # TAB 2: Sorting Algorithms
-                # ============================================================
-                with gr.TabItem("📊 Sorting"):
-                    with gr.Row():
-                        # Left: Algorithm selection
-                        with gr.Column(scale=1):
-                            gr.Markdown("### Select Algorithm")
-
-                            sort_algo = gr.Radio(
-                                choices=["Bubble Sort", "Merge Sort", "Quick Sort"],
-                                value="Bubble Sort",
-                                label="Algorithm",
-                                info="Each has different time complexity and stability"
-                            )
-
-                            # Educational info accordion
-                            with gr.Accordion("📚 Algorithm Info", open=False):
-                                gr.Markdown("""
-                                **Bubble Sort** - O(n²) average, O(n) best
-                                - ✅ Stable (preserves order of equal elements)
-                                - Simple but slow for large lists
-                                - Best when: Nearly sorted data
-
-                                **Merge Sort** - O(n log n) always
-                                - ✅ Stable
-                                - Consistent performance
-                                - Uses extra memory for merging
-
-                                **Quick Sort** - O(n log n) average, O(n²) worst
-                                - ❌ Unstable (may reorder equal elements)
-                                - Fast in practice, in-place
-                                - Best when: Random data, good pivot
-                                """)
-
-                            # Quick Sort options (only shown when Quick Sort selected)
-                            with gr.Group() as quicksort_options:
-                                gr.Markdown("**Quick Sort Options**")
-                                pivot_strategy = gr.Radio(
-                                    choices=["first", "last", "median", "random"],
-                                    value="first",
-                                    label="Pivot Strategy",
-                                    info="Median/Random avoid worst-case O(n²)"
-                                )
-                                partition_scheme = gr.Radio(
-                                    choices=["2-way", "3-way"],
-                                    value="2-way",
-                                    label="Partition Scheme",
-                                    info="3-way handles duplicates better"
-                                )
-
-                            run_sort_btn = gr.Button("▶️ Run Sort", variant="primary", size="lg")
-
-                            gr.Markdown("---")
-                            gr.Markdown("### Current List")
-                            sort_list_display = gr.HTML(value=self._render_image_list())
-
-                        # Right: Visualization
-                        with gr.Column(scale=2):
-                            gr.Markdown("### Visualization")
-                            sort_viz_display = gr.HTML(
-                                value=self.visualizer.render_current()
-                            )
-
-                            # Navigation controls
-                            with gr.Row():
-                                viz_start_btn = gr.Button("⏮️ Start")
-                                viz_prev_btn = gr.Button("◀️ Prev")
-                                step_slider = gr.Slider(
-                                    minimum=1,
-                                    maximum=100,
-                                    step=1,
-                                    value=1,
-                                    label="Step"
-                                )
-                                viz_next_btn = gr.Button("Next ▶️")
-                                viz_end_btn = gr.Button("End ⏭️")
-
-                            sort_status = gr.Textbox(label="Status", interactive=False)
-
-                    # Wire up sorting events
-                    run_sort_btn.click(
-                        fn=self.run_sort,
-                        inputs=[sort_algo, pivot_strategy, partition_scheme],
-                        outputs=[sort_viz_display, sort_list_display, sort_status]
-                    )
-                    viz_next_btn.click(fn=self.viz_next, outputs=[sort_viz_display])
-                    viz_prev_btn.click(fn=self.viz_prev, outputs=[sort_viz_display])
-                    viz_start_btn.click(fn=self.viz_start, outputs=[sort_viz_display])
-                    viz_end_btn.click(fn=self.viz_end, outputs=[sort_viz_display])
-                    step_slider.change(fn=self.viz_goto, inputs=[step_slider], outputs=[sort_viz_display])
-
-                # ============================================================
-                # TAB 3: Searching Algorithms
-                # ============================================================
-                with gr.TabItem("🔍 Searching"):
-                    with gr.Row():
-                        # Left: Search controls
-                        with gr.Column(scale=1):
-                            gr.Markdown("### Search Settings")
-
-                            search_algo = gr.Radio(
-                                choices=["Linear Search", "Binary Search"],
-                                value="Linear Search",
-                                label="Algorithm",
-                                info="Binary Search is O(log n) but requires sorted data"
-                            )
-
-                            # Educational info accordion
-                            with gr.Accordion("📚 Algorithm Info", open=False):
-                                gr.Markdown("""
-                                **Linear Search** - O(n)
-                                - Works on ANY list (sorted or unsorted)
-                                - Checks each element one by one
-                                - Simple but slow for large lists
-
-                                **Binary Search** - O(log n)
-                                - ⚠️ REQUIRES SORTED DATA!
-                                - Halves the search space each step
-                                - Much faster: 1000 elements → only 10 comparisons!
-
-                                **Example (searching 1000 elements):**
-                                - Linear: up to 1000 checks
-                                - Binary: at most 10 checks (log₂1000 ≈ 10)
-                                """)
-
-                            target_index = gr.Number(
-                                label="Target Index",
-                                value=0,
-                                precision=0,
-                                info="Which element to search for (by index)"
-                            )
-
-                            run_search_btn = gr.Button("🔍 Run Search", variant="primary", size="lg")
-
-                            gr.Markdown("---")
-                            gr.Markdown("### Current List")
-                            search_list_display = gr.HTML(value=self._render_image_list())
-
-                        # Right: Visualization
-                        with gr.Column(scale=2):
-                            gr.Markdown("### Visualization")
-                            search_viz_display = gr.HTML(
-                                value=self.visualizer.render_current()
-                            )
-
-                            # Navigation controls
-                            with gr.Row():
-                                search_start_btn = gr.Button("⏮️ Start")
-                                search_prev_btn = gr.Button("◀️ Prev")
-                                search_next_btn = gr.Button("Next ▶️")
-                                search_end_btn = gr.Button("End ⏭️")
-
-                            search_status = gr.Textbox(label="Status", interactive=False)
-
-                    # Wire up search events
-                    run_search_btn.click(
-                        fn=self.run_search,
-                        inputs=[search_algo, target_index],
-                        outputs=[search_viz_display, search_status]
-                    )
-                    search_next_btn.click(fn=self.viz_next, outputs=[search_viz_display])
-                    search_prev_btn.click(fn=self.viz_prev, outputs=[search_viz_display])
-                    search_start_btn.click(fn=self.viz_start, outputs=[search_viz_display])
-                    search_end_btn.click(fn=self.viz_end, outputs=[search_viz_display])
-
-                # ============================================================
-                # TAB 4: Learn OOP
-                # ============================================================
-                with gr.TabItem("📚 Learn OOP"):
-                    gr.Markdown("""
-                    # Object-Oriented Programming Concepts
-
-                    This application demonstrates several key OOP concepts:
-
-                    ## 📦 Classes & Objects
-
-                    **Classes** are blueprints for creating objects. In this app:
-                    - `GestureImage` - represents a single captured gesture
-                    - `ImageList` - manages a collection of gestures
-                    - `BubbleSort`, `MergeSort`, `QuickSort` - sorting algorithms
-                    - `Visualizer` - handles step-by-step display
-
-                    ## 🎭 Inheritance
-
-                    **Inheritance** lets classes share code. All sorting algorithms inherit from `SortingAlgorithm`:
-
-                    ```python
-                    class SortingAlgorithm(ABC):  # Abstract Base Class
-                        @abstractmethod
-                        def sort(self, data): ...
-
-                    class BubbleSort(SortingAlgorithm):  # Inherits from SortingAlgorithm
-                        def sort(self, data):
-                            # Bubble sort implementation
-                    ```
-
-                    ## 🔄 Polymorphism
-
-                    **Polymorphism** means "same interface, different behavior":
-
-                    ```python
-                    # All these work the same way!
-                    algo = BubbleSort()
-                    algo = MergeSort()
-                    algo = QuickSort()
-
-                    # Same method call, different algorithms
-                    result, steps = algo.run_full(data)
-                    ```
-
-                    ## 🏭 Factory Pattern
-
-                    **Factory Pattern** creates objects without exposing creation logic:
-
-                    ```python
-                    # Factory creates the right renderer automatically
-                    renderer = RendererFactory.create("Bubble Sort")
-                    ```
-
-                    ## 📊 Algorithm Comparison
-
-                    | Algorithm | Time (Best) | Time (Worst) | Stable? | In-Place? |
-                    |-----------|-------------|--------------|---------|-----------|
-                    | Bubble Sort | O(n) | O(n²) | ✅ Yes | ✅ Yes |
-                    | Merge Sort | O(n log n) | O(n log n) | ✅ Yes | ❌ No |
-                    | Quick Sort | O(n log n) | O(n²) | ❌ No | ✅ Yes |
-                    | Linear Search | O(1) | O(n) | - | - |
-                    | Binary Search | O(1) | O(log n) | - | - |
-
-                    ## 🔍 Stability
-
-                    A **stable** sort preserves the relative order of equal elements.
-
-                    Example with two peace signs ✌️₁ and ✌️₂:
-                    - **Stable**: Always produces [✌️₁, ✌️₂] (original order kept)
-                    - **Unstable**: Might produce [✌️₂, ✌️₁] (order can change)
-
-                    Try Quick Sort with duplicate gestures to see instability!
-                    """)
-
-            # Footer
-            gr.Markdown("""
-            ---
-            *Built for CISC 121 - Queen's University*
-            """)
-
-        return demo
-
-
-# ==============================================================================
-# MAIN ENTRY POINT
-# ==============================================================================
-
-def main():
-    """Create and launch the Gradio app."""
-    app = GradioApp()
-    demo = app.create_ui()
-    demo.launch(share=False)
-
-
-if __name__ == "__main__":
-    main()
```
requirements.txt CHANGED

```diff
@@ -24,7 +24,7 @@
 # Gradio - Creates web interfaces for Python apps
 # Website: https://gradio.app
 # We use it to build the camera input and results display
-gradio>=4.44.0
+gradio>=4.0.0
 
 # Transformers - Hugging Face's AI/ML library
 # Website: https://huggingface.co/transformers
```
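With the floor relaxed to `gradio>=4.0.0`, code that still wants newer-only features can gate on the installed version at runtime instead of pinning a high minimum. A minimal sketch using only the standard library (the helper names are ours, not from this repo):

```python
from importlib import metadata

def installed_gradio_version():
    """Return the installed Gradio version as a (major, minor) tuple, or None."""
    try:
        raw = metadata.version("gradio")
    except metadata.PackageNotFoundError:
        return None  # Gradio is not installed in this environment
    major, minor = (raw.split(".") + ["0"])[:2]
    return (int(major), int(minor))

def meets_floor(floor=(4, 0)):
    """True when the installed Gradio satisfies the requirements.txt floor."""
    ver = installed_gradio_version()
    return ver is not None and ver >= floor
```

Pairing a loose pin like this with a runtime check keeps the app installable on older environments while still letting it light up newer features when they are present.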