RuslanKain committed
Commit d19858a · 1 Parent(s): 7bbef20
This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. app_oop_gradio.py +914 -0
  2. oop_sorting_teaching/__init__.py +123 -0
  3. oop_sorting_teaching/__pycache__/__init__.cpython-313.pyc +0 -0
  4. oop_sorting_teaching/algorithms/__init__.py +54 -0
  5. oop_sorting_teaching/algorithms/__pycache__/__init__.cpython-313.pyc +0 -0
  6. oop_sorting_teaching/algorithms/__pycache__/base.cpython-313.pyc +0 -0
  7. oop_sorting_teaching/algorithms/base.py +289 -0
  8. oop_sorting_teaching/algorithms/searching/__init__.py +18 -0
  9. oop_sorting_teaching/algorithms/searching/__pycache__/__init__.cpython-313.pyc +0 -0
  10. oop_sorting_teaching/algorithms/searching/__pycache__/binary_search.cpython-313.pyc +0 -0
  11. oop_sorting_teaching/algorithms/searching/__pycache__/linear_search.cpython-313.pyc +0 -0
  12. oop_sorting_teaching/algorithms/searching/binary_search.py +349 -0
  13. oop_sorting_teaching/algorithms/searching/linear_search.py +98 -0
  14. oop_sorting_teaching/algorithms/sorting/__init__.py +23 -0
  15. oop_sorting_teaching/algorithms/sorting/__pycache__/__init__.cpython-313.pyc +0 -0
  16. oop_sorting_teaching/algorithms/sorting/__pycache__/bubble_sort.cpython-313.pyc +0 -0
  17. oop_sorting_teaching/algorithms/sorting/__pycache__/merge_sort.cpython-313.pyc +0 -0
  18. oop_sorting_teaching/algorithms/sorting/__pycache__/quick_sort.cpython-313.pyc +0 -0
  19. oop_sorting_teaching/algorithms/sorting/bubble_sort.py +161 -0
  20. oop_sorting_teaching/algorithms/sorting/merge_sort.py +218 -0
  21. oop_sorting_teaching/algorithms/sorting/quick_sort.py +521 -0
  22. oop_sorting_teaching/models/__init__.py +22 -0
  23. oop_sorting_teaching/models/__pycache__/__init__.cpython-313.pyc +0 -0
  24. oop_sorting_teaching/models/__pycache__/gesture.cpython-313.pyc +0 -0
  25. oop_sorting_teaching/models/__pycache__/image_list.cpython-313.pyc +0 -0
  26. oop_sorting_teaching/models/__pycache__/step.cpython-313.pyc +0 -0
  27. oop_sorting_teaching/models/gesture.py +521 -0
  28. oop_sorting_teaching/models/image_list.py +444 -0
  29. oop_sorting_teaching/models/step.py +158 -0
  30. oop_sorting_teaching/visualization/__init__.py +46 -0
  31. oop_sorting_teaching/visualization/__pycache__/__init__.cpython-313.pyc +0 -0
  32. oop_sorting_teaching/visualization/__pycache__/factory.cpython-313.pyc +0 -0
  33. oop_sorting_teaching/visualization/__pycache__/state.cpython-313.pyc +0 -0
  34. oop_sorting_teaching/visualization/__pycache__/visualizer.cpython-313.pyc +0 -0
  35. oop_sorting_teaching/visualization/factory.py +123 -0
  36. oop_sorting_teaching/visualization/renderers/__init__.py +27 -0
  37. oop_sorting_teaching/visualization/renderers/__pycache__/__init__.cpython-313.pyc +0 -0
  38. oop_sorting_teaching/visualization/renderers/__pycache__/base.cpython-313.pyc +0 -0
  39. oop_sorting_teaching/visualization/renderers/__pycache__/binary_renderer.cpython-313.pyc +0 -0
  40. oop_sorting_teaching/visualization/renderers/__pycache__/bubble_renderer.cpython-313.pyc +0 -0
  41. oop_sorting_teaching/visualization/renderers/__pycache__/linear_renderer.cpython-313.pyc +0 -0
  42. oop_sorting_teaching/visualization/renderers/__pycache__/merge_renderer.cpython-313.pyc +0 -0
  43. oop_sorting_teaching/visualization/renderers/__pycache__/quick_renderer.cpython-313.pyc +0 -0
  44. oop_sorting_teaching/visualization/renderers/base.py +233 -0
  45. oop_sorting_teaching/visualization/renderers/binary_renderer.py +152 -0
  46. oop_sorting_teaching/visualization/renderers/bubble_renderer.py +111 -0
  47. oop_sorting_teaching/visualization/renderers/linear_renderer.py +113 -0
  48. oop_sorting_teaching/visualization/renderers/merge_renderer.py +128 -0
  49. oop_sorting_teaching/visualization/renderers/quick_renderer.py +154 -0
  50. oop_sorting_teaching/visualization/state.py +64 -0
app_oop_gradio.py ADDED
@@ -0,0 +1,914 @@
"""
╔══════════════════════════════════════════════════════════════════════════════╗
║                                                                              ║
║              🎓 CISC 121 - OOP Sorting & Searching Visualizer                ║
║                                                                              ║
║          Queen's University - Introduction to Computing Science I            ║
║                                                                              ║
║     This application demonstrates Object-Oriented Programming concepts       ║
║    through interactive visualization of sorting and searching algorithms.    ║
║                                                                              ║
║                   HOW TO RUN:  python app_oop_gradio.py                      ║
║                                                                              ║
╚══════════════════════════════════════════════════════════════════════════════╝

📚 PHASE 5: Gradio UI

This is the final phase - creating a user-friendly web interface that:
1. Allows capturing/uploading gesture images
2. Displays the image list with gesture recognition
3. Lets users run sorting/searching algorithms
4. Visualizes each step of the algorithm

The UI demonstrates COMPOSITION - the GradioApp class composes:
- ImageList (data management)
- SortingAlgorithm / SearchAlgorithm (algorithm execution)
- Visualizer (step-by-step display)
"""

# ==============================================================================
# IMPORTS
# ==============================================================================

import gradio as gr
from PIL import Image
import os
from typing import List, Tuple, Optional

# Import our OOP package
from oop_sorting_teaching import (
    # Models
    GestureRanking,
    GestureImage,
    ImageList,
    StepType,
    Step,
    # Sorting
    BubbleSort,
    MergeSort,
    QuickSort,
    PivotStrategy,
    PartitionScheme,
    # Searching
    LinearSearch,
    BinarySearch,
    # Visualization
    Visualizer,
    VisualizationConfig,
    RendererFactory,
)

# Try to import transformers for gesture recognition
try:
    from transformers import pipeline
    CLASSIFIER_AVAILABLE = True
except ImportError:
    CLASSIFIER_AVAILABLE = False
    print("⚠️ transformers not installed. Using manual gesture selection.")

# ==============================================================================
# CONFIGURATION
# ==============================================================================

MODEL_NAME = "dima806/hand_gestures_image_detection"
HF_TOKEN = os.environ.get("HF_TOKEN", None)

APP_TITLE = "## 🎓 CISC 121 - OOP Sorting & Searching Visualizer"
APP_DESCRIPTION = """
**Learn Object-Oriented Programming through Algorithm Visualization!**

This app demonstrates key OOP concepts:
- 📦 **Classes & Objects**: GestureImage, ImageList, Algorithms
- 🎭 **Inheritance**: All sorting algorithms inherit from SortingAlgorithm
- 🔄 **Polymorphism**: Swap between algorithms seamlessly
- 🏭 **Factory Pattern**: RendererFactory creates the right visualizer

**How to use:**
1. **Add images** using the buttons below (capture or manual)
2. **View your list** of gesture images
3. **Run an algorithm** to see step-by-step visualization
4. **Navigate steps** to understand how the algorithm works
"""

# ==============================================================================
# GRADIO APP CLASS
# ==============================================================================

class GradioApp:
    """
    📚 CONCEPT: Composition

    The GradioApp class COMPOSES (contains) other objects:
    - ImageList for managing captured images
    - Visualizer for displaying algorithm steps
    - Classifier for gesture recognition (if available)

    This is the Controller in the MVC pattern - it coordinates
    between the user interface (View) and the data/logic (Model).
    """

    def __init__(self):
        """Initialize the application state."""
        self.image_list = ImageList()
        self.visualizer = Visualizer(VisualizationConfig(
            show_statistics=True,
            show_legend=True,
            image_size=60
        ))
        self._capture_count = 0

        # Initialize classifier if available
        self.classifier = None
        if CLASSIFIER_AVAILABLE:
            try:
                self.classifier = pipeline(
                    "image-classification",
                    model=MODEL_NAME,
                    token=HF_TOKEN
                )
                print(f"✅ Loaded model: {MODEL_NAME}")
            except Exception as e:
                print(f"⚠️ Could not load model: {e}")

    # -------------------------------------------------------------------------
    # Image Management Methods
    # -------------------------------------------------------------------------

    def add_manual_gesture(self, gesture_name: str) -> Tuple[str, str]:
        """
        Add a gesture image manually (without camera).

        Returns:
            Tuple of (image_list_html, status_message)
        """
        if not gesture_name:
            return self._render_image_list(), "⚠️ Please select a gesture"

        self._capture_count += 1
        self.image_list.add_new(gesture_name)

        return (
            self._render_image_list(),
            f"✅ Added {GestureRanking.get_emoji(gesture_name)} {gesture_name} (#{self._capture_count})"
        )

    def add_from_image(self, image: Image.Image) -> Tuple[str, str]:
        """
        Add a gesture from an uploaded/captured image.
        Uses AI classification if available, otherwise prompts for manual selection.
        """
        if image is None:
            return self._render_image_list(), "⚠️ No image provided"

        if self.classifier:
            try:
                # Classify the image
                results = self.classifier(image)
                if results:
                    top_result = results[0]
                    gesture_name = top_result['label'].lower()
                    confidence = top_result['score']

                    self._capture_count += 1
                    img = GestureImage.create_from_prediction(
                        gesture_name=gesture_name,
                        capture_id=self._capture_count,
                        image=image,
                        confidence=confidence
                    )
                    self.image_list._save_state()  # Save before modifying
                    self.image_list._images.append(img)

                    return (
                        self._render_image_list(),
                        f"✅ Detected: {img.emoji} {gesture_name} ({confidence:.1%} confidence)"
                    )
            except Exception as e:
                return self._render_image_list(), f"⚠️ Classification error: {e}"

        return self._render_image_list(), "⚠️ No classifier available. Use manual gesture selection."

    def remove_image(self, index: int) -> Tuple[str, str]:
        """Remove an image at the given index."""
        if 0 <= index < len(self.image_list):
            removed = self.image_list[index]
            self.image_list.remove(index)
            return self._render_image_list(), f"✅ Removed {removed}"
        return self._render_image_list(), "⚠️ Invalid index"

    def shuffle_images(self) -> Tuple[str, str]:
        """Shuffle the image list."""
        self.image_list.shuffle()
        return self._render_image_list(), "🔀 Shuffled!"

    def clear_images(self) -> Tuple[str, str]:
        """Clear all images."""
        count = len(self.image_list)
        self.image_list.clear()
        self._capture_count = 0
        self.visualizer.reset()
        return self._render_image_list(), f"🗑️ Cleared {count} images"

    def undo_action(self) -> Tuple[str, str]:
        """Undo the last action."""
        if self.image_list.undo():
            return self._render_image_list(), "↩️ Undone!"
        return self._render_image_list(), "⚠️ Nothing to undo"

    def add_sample_data(self) -> Tuple[str, str]:
        """Add sample data for testing."""
        gestures = ['fist', 'peace', 'like', 'peace', 'ok', 'fist']
        for g in gestures:
            self._capture_count += 1
            self.image_list.add_new(g)
        return self._render_image_list(), f"✅ Added {len(gestures)} sample gestures"

    def add_instability_demo(self) -> Tuple[str, str]:
        """
        Add data specifically designed to demonstrate Quick Sort instability.

        📚 EDUCATIONAL PURPOSE:
        This creates a scenario where Quick Sort will reorder equal elements,
        demonstrating that it's an UNSTABLE sorting algorithm.

        Setup: [✌️₁] [✌️₂] [✌️₃] [✊₄]
        After Quick Sort: The peace signs may be reordered (e.g., ₂,₃,₁)
        After Bubble/Merge Sort: Order preserved (₁,₂,₃)
        """
        self.clear_images()
        # Three peace signs followed by a lower-ranked fist
        demo_gestures = ['peace', 'peace', 'peace', 'fist']
        for g in demo_gestures:
            self._capture_count += 1
            self.image_list.add_new(g)

        return (
            self._render_image_list(),
            "🎓 Instability Demo: [✌️₁][✌️₂][✌️₃][✊₄]\n"
            "Try Quick Sort vs Bubble Sort - watch the subscript order!"
        )

    def add_worst_case_demo(self) -> Tuple[str, str]:
        """
        Add already-sorted data to demonstrate the worst case for Quick Sort.

        📚 EDUCATIONAL PURPOSE:
        When data is already sorted and we use the First Pivot strategy,
        Quick Sort degrades to O(n²) - its worst case!
        """
        self.clear_images()
        # Sorted order: fist(1) < peace(2) < like(3) < ok(4) < call(5)
        sorted_gestures = ['fist', 'peace', 'like', 'ok', 'call']
        for g in sorted_gestures:
            self._capture_count += 1
            self.image_list.add_new(g)

        return (
            self._render_image_list(),
            "🎓 Worst-Case Demo: Already sorted data!\n"
            "Quick Sort with First Pivot → O(n²)\n"
            "Try Median-of-3 or Random pivot to see the difference."
        )

    def add_binary_search_demo(self) -> Tuple[str, str]:
        """
        Add sorted data for a binary search demonstration.

        📚 EDUCATIONAL PURPOSE:
        Binary search requires sorted data. This preset shows
        how O(log n) is much faster than O(n) linear search.
        """
        self.clear_images()
        # Create a larger sorted dataset for a more dramatic comparison
        gestures = ['fist', 'fist', 'peace', 'peace', 'like', 'like',
                    'ok', 'ok', 'call', 'call', 'palm', 'palm']
        for g in gestures:
            self._capture_count += 1
            self.image_list.add_new(g)

        return (
            self._render_image_list(),
            "🎓 Search Demo: 12 sorted elements\n"
            "Linear Search: up to 12 comparisons\n"
            "Binary Search: at most 4 comparisons (log₂12 ≈ 3.6)"
        )

    # -------------------------------------------------------------------------
    # Algorithm Execution Methods
    # -------------------------------------------------------------------------

    def run_sort(self, algorithm_name: str, pivot_strategy: str = "first",
                 partition_scheme: str = "2-way") -> Tuple[str, str, str]:
        """
        Run a sorting algorithm on the image list.

        Returns:
            Tuple of (visualization_html, image_list_html, status_message)
        """
        if len(self.image_list) < 2:
            return (
                self.visualizer.render_current(),
                self._render_image_list(),
                "⚠️ Need at least 2 images to sort"
            )

        # Create the algorithm instance
        if algorithm_name == "Bubble Sort":
            algo = BubbleSort()
        elif algorithm_name == "Merge Sort":
            algo = MergeSort()
        elif algorithm_name == "Quick Sort":
            # Map string to enum
            pivot_map = {
                "first": PivotStrategy.FIRST,
                "last": PivotStrategy.LAST,
                "median": PivotStrategy.MEDIAN_OF_THREE,
                "random": PivotStrategy.RANDOM,
            }
            partition_map = {
                "2-way": PartitionScheme.TWO_WAY,
                "3-way": PartitionScheme.THREE_WAY,
            }
            algo = QuickSort(
                pivot_strategy=pivot_map.get(pivot_strategy, PivotStrategy.FIRST),
                partition_scheme=partition_map.get(partition_scheme, PartitionScheme.TWO_WAY)
            )
        else:
            return (
                self.visualizer.render_current(),
                self._render_image_list(),
                f"⚠️ Unknown algorithm: {algorithm_name}"
            )

        # Get a data copy and run the algorithm
        data = list(self.image_list)
        sorted_data, steps = algo.run_full(data)

        # Load into visualizer
        self.visualizer.load_steps(steps, sorted_data, algo.name)

        # Update the image list to sorted order
        self.image_list._save_state()  # Save before modifying
        self.image_list._images = list(sorted_data)

        return (
            self.visualizer.render_current(),
            self._render_image_list(),
            f"✅ {algo.name}: {len(steps)} steps"
        )

    def run_search(self, algorithm_name: str, target_index: int) -> Tuple[str, str]:
        """
        Run a search algorithm.

        Args:
            algorithm_name: "Linear Search" or "Binary Search"
            target_index: Index of the target element to search for

        Returns:
            Tuple of (visualization_html, status_message)
        """
        if len(self.image_list) < 1:
            return self.visualizer.render_current(), "⚠️ Need at least 1 image to search"

        if not (0 <= target_index < len(self.image_list)):
            return self.visualizer.render_current(), "⚠️ Invalid target index"

        data = list(self.image_list)
        target = data[target_index]

        # For binary search, we need sorted data
        if algorithm_name == "Binary Search":
            if not self.image_list.is_sorted():
                return (
                    self.visualizer.render_current(),
                    "⚠️ Binary Search requires sorted data! Run a sort first."
                )
            algo = BinarySearch(variant="iterative")
        else:
            algo = LinearSearch()

        # Run the search
        result_index, steps = algo.run_full(data, target)

        # Load into visualizer
        self.visualizer.load_steps(steps, data, algo.name)

        if result_index is not None:
            status = f"✅ {algo.name}: Found {target} at index {result_index}"
        else:
            status = f"❌ {algo.name}: {target} not found"

        return self.visualizer.render_current(), status

    # -------------------------------------------------------------------------
    # Visualization Navigation Methods
    # -------------------------------------------------------------------------

    def viz_next(self) -> str:
        """Go to next visualization step."""
        return self.visualizer.next_step()

    def viz_prev(self) -> str:
        """Go to previous visualization step."""
        return self.visualizer.prev_step()

    def viz_start(self) -> str:
        """Go to first step."""
        return self.visualizer.go_to_start()

    def viz_end(self) -> str:
        """Go to last step."""
        return self.visualizer.go_to_end()

    def viz_goto(self, step: int) -> str:
        """Go to a specific step."""
        return self.visualizer.go_to_step(int(step) - 1)  # Convert to 0-based

    # -------------------------------------------------------------------------
    # Rendering Methods
    # -------------------------------------------------------------------------

    def _render_image_list(self) -> str:
        """Render the current image list as HTML."""
        if len(self.image_list) == 0:
            return """
            <div style="
                text-align: center;
                padding: 40px;
                color: #666;
                background: #f8f9fa;
                border-radius: 12px;
                border: 2px dashed #ddd;
            ">
                <div style="font-size: 48px; margin-bottom: 15px;">📷</div>
                <h3 style="margin: 0 0 10px 0;">No Images Yet</h3>
                <p style="margin: 0;">Add gestures using the buttons above!</p>
            </div>
            """

        # Build image cards
        cards = []
        for i, img in enumerate(self.image_list):
            card = f"""
            <div style="
                display: inline-flex;
                flex-direction: column;
                align-items: center;
                margin: 6px;
                padding: 12px;
                border-radius: 10px;
                background: white;
                border: 2px solid #ddd;
                min-width: 70px;
                box-shadow: 0 2px 4px rgba(0,0,0,0.1);
            ">
                <div style="font-size: 32px; margin-bottom: 4px;">{img.emoji}</div>
                <div style="font-size: 11px; color: #666;">₍{img.capture_id}₎</div>
                <div style="font-size: 10px; color: #999;">rank {img.rank}</div>
                <div style="font-size: 9px; color: #aaa; margin-top: 4px;">[{i}]</div>
            </div>
            """
            cards.append(card)

        # Analysis
        analysis = self.image_list.get_analysis()
        is_sorted = "✅ Sorted" if self.image_list.is_sorted() else "❌ Not Sorted"

        return f"""
        <div style="
            background: linear-gradient(135deg, #002D62 0%, #9B2335 100%);
            color: white;
            padding: 15px;
            border-radius: 12px 12px 0 0;
        ">
            <div style="display: flex; justify-content: space-between; align-items: center;">
                <strong>Image List ({len(self.image_list)} items)</strong>
                <span>{is_sorted}</span>
            </div>
        </div>
        <div style="
            background: #f8f9fa;
            padding: 15px;
            border-radius: 0 0 12px 12px;
            border: 1px solid #ddd;
            border-top: none;
        ">
            <div style="
                display: flex;
                flex-wrap: wrap;
                justify-content: center;
                gap: 4px;
            ">
                {''.join(cards)}
            </div>
            <div style="
                margin-top: 15px;
                padding-top: 10px;
                border-top: 1px solid #ddd;
                font-size: 12px;
                color: #666;
                text-align: center;
            ">
                {analysis}
            </div>
        </div>
        """

520
+ # -------------------------------------------------------------------------
521
+ # Create Gradio UI
522
+ # -------------------------------------------------------------------------
523
+
524
+ def create_ui(self) -> gr.Blocks:
525
+ """
526
+ Create the Gradio interface.
527
+
528
+ 📚 CONCEPT: Builder Pattern (light version)
529
+
530
+ We build up the UI component by component, each with its
531
+ own responsibility. The final result is a complete interface.
532
+ """
533
+
534
+ with gr.Blocks(
535
+ title="CISC 121 - OOP Sorting Visualizer",
536
+ theme=gr.themes.Soft(
537
+ primary_hue="blue",
538
+ secondary_hue="red",
539
+ )
540
+ ) as demo:
541
+
542
+ # Header
543
+ gr.Markdown(APP_TITLE)
544
+ gr.Markdown(APP_DESCRIPTION)
545
+
546
+ with gr.Tabs():
547
+ # ============================================================
548
+ # TAB 1: Image Management
549
+ # ============================================================
550
+ with gr.TabItem("📷 Capture & Manage"):
551
+ with gr.Row():
552
+ # Left column: Add images
553
+ with gr.Column(scale=1):
554
+ gr.Markdown("### Add Gestures")
555
+
556
+ # Manual gesture selection
557
+ gesture_dropdown = gr.Dropdown(
558
+ choices=GestureRanking.get_all_gestures(),
559
+ label="Select Gesture",
560
+ info="Choose a gesture to add"
561
+ )
562
+ add_btn = gr.Button("➕ Add Gesture", variant="primary")
563
+
564
+ gr.Markdown("---")
565
+
566
+ # Image upload
567
+ image_input = gr.Image(
568
+ label="Upload Image",
569
+ type="pil",
570
+ sources=["upload", "webcam"]
571
+ )
572
+ classify_btn = gr.Button("🔍 Classify & Add")
573
+
574
+ gr.Markdown("---")
575
+
576
+ # Quick actions
577
+ with gr.Row():
578
+ sample_btn = gr.Button("📝 Add Samples")
579
+ shuffle_btn = gr.Button("🔀 Shuffle")
580
+ with gr.Row():
581
+ undo_btn = gr.Button("↩️ Undo")
582
+ clear_btn = gr.Button("🗑️ Clear", variant="stop")
583
+
584
+ gr.Markdown("---")
585
+
586
+ # Educational demos
587
+ gr.Markdown("### 🎓 Educational Demos")
588
+ instability_btn = gr.Button(
589
+ "⚠️ Instability Demo",
590
+ variant="secondary"
591
+ )
592
+ worst_case_btn = gr.Button(
593
+ "📉 Worst-Case Demo",
594
+ variant="secondary"
595
+ )
596
+ search_demo_btn = gr.Button(
597
+ "🔍 Search Demo",
598
+ variant="secondary"
599
+ )
600
+
601
+ # Right column: Image list display
602
+ with gr.Column(scale=2):
603
+ gr.Markdown("### Current Image List")
604
+ image_list_display = gr.HTML(
605
+ value=self._render_image_list()
606
+ )
607
+ status_msg = gr.Textbox(
608
+ label="Status",
609
+ interactive=False
610
+ )
611
+
612
+ # Wire up events for Tab 1
613
+ add_btn.click(
614
+ fn=self.add_manual_gesture,
615
+ inputs=[gesture_dropdown],
616
+ outputs=[image_list_display, status_msg]
617
+ )
618
+ classify_btn.click(
619
+ fn=self.add_from_image,
620
+ inputs=[image_input],
621
+ outputs=[image_list_display, status_msg]
622
+ )
623
+ sample_btn.click(
624
+ fn=self.add_sample_data,
625
+ outputs=[image_list_display, status_msg]
626
+ )
627
+ shuffle_btn.click(
628
+ fn=self.shuffle_images,
629
+ outputs=[image_list_display, status_msg]
630
+ )
631
+ undo_btn.click(
632
+ fn=self.undo_action,
633
+ outputs=[image_list_display, status_msg]
634
+ )
635
+ clear_btn.click(
636
+ fn=self.clear_images,
637
+ outputs=[image_list_display, status_msg]
638
+ )
639
+ instability_btn.click(
640
+ fn=self.add_instability_demo,
641
+ outputs=[image_list_display, status_msg]
642
+ )
643
+ worst_case_btn.click(
644
+ fn=self.add_worst_case_demo,
645
+ outputs=[image_list_display, status_msg]
646
+ )
647
+ search_demo_btn.click(
648
+ fn=self.add_binary_search_demo,
649
+ outputs=[image_list_display, status_msg]
650
+ )
651
+
652
+ # ============================================================
653
+ # TAB 2: Sorting Algorithms
654
+ # ============================================================
655
+ with gr.TabItem("📊 Sorting"):
656
+ with gr.Row():
657
+ # Left: Algorithm selection
658
+ with gr.Column(scale=1):
659
+ gr.Markdown("### Select Algorithm")
660
+
661
+ sort_algo = gr.Radio(
662
+ choices=["Bubble Sort", "Merge Sort", "Quick Sort"],
663
+ value="Bubble Sort",
664
+ label="Algorithm",
665
+ info="Each has different time complexity and stability"
666
+ )
667
+
668
+ # Educational info accordion
669
+ with gr.Accordion("📚 Algorithm Info", open=False):
670
+ gr.Markdown("""
671
+ **Bubble Sort** - O(n²) average, O(n) best
672
+ - ✅ Stable (preserves order of equal elements)
673
+ - Simple but slow for large lists
674
+ - Best when: Nearly sorted data
675
+
676
+ **Merge Sort** - O(n log n) always
677
+ - ✅ Stable
678
+ - Consistent performance
679
+ - Uses extra memory for merging
680
+
681
+ **Quick Sort** - O(n log n) average, O(n²) worst
682
+ - ❌ Unstable (may reorder equal elements)
683
+ - Fast in practice, in-place
684
+ - Best when: Random data, good pivot
685
+ """)
686
+
687
+ # Quick Sort options (only shown when Quick Sort selected)
688
+ with gr.Group() as quicksort_options:
689
+ gr.Markdown("**Quick Sort Options**")
690
+ pivot_strategy = gr.Radio(
691
+ choices=["first", "last", "median", "random"],
692
+ value="first",
693
+ label="Pivot Strategy",
694
+ info="Median/Random avoid worst-case O(n²)"
695
+ )
696
+ partition_scheme = gr.Radio(
697
+ choices=["2-way", "3-way"],
698
+ value="2-way",
699
+ label="Partition Scheme",
700
+ info="3-way handles duplicates better"
701
+ )
702
+
703
+ run_sort_btn = gr.Button("▶️ Run Sort", variant="primary", size="lg")
704
+
705
+ gr.Markdown("---")
706
+ gr.Markdown("### Current List")
707
+ sort_list_display = gr.HTML(value=self._render_image_list())
708
+
709
+ # Right: Visualization
710
+ with gr.Column(scale=2):
711
+ gr.Markdown("### Visualization")
712
+ sort_viz_display = gr.HTML(
713
+ value=self.visualizer.render_current()
714
+ )
715
+
716
+ # Navigation controls
717
+ with gr.Row():
718
+ viz_start_btn = gr.Button("⏮️ Start")
719
+ viz_prev_btn = gr.Button("◀️ Prev")
720
+ step_slider = gr.Slider(
721
+ minimum=1,
722
+ maximum=100,
723
+ step=1,
724
+ value=1,
725
+ label="Step"
726
+ )
727
+ viz_next_btn = gr.Button("Next ▶️")
728
+ viz_end_btn = gr.Button("End ⏭️")
729
+
730
+ sort_status = gr.Textbox(label="Status", interactive=False)
731
+
732
+ # Wire up sorting events
733
+ run_sort_btn.click(
734
+ fn=self.run_sort,
735
+ inputs=[sort_algo, pivot_strategy, partition_scheme],
736
+ outputs=[sort_viz_display, sort_list_display, sort_status]
737
+ )
738
+ viz_next_btn.click(fn=self.viz_next, outputs=[sort_viz_display])
739
+ viz_prev_btn.click(fn=self.viz_prev, outputs=[sort_viz_display])
740
+ viz_start_btn.click(fn=self.viz_start, outputs=[sort_viz_display])
741
+ viz_end_btn.click(fn=self.viz_end, outputs=[sort_viz_display])
742
+ step_slider.change(fn=self.viz_goto, inputs=[step_slider], outputs=[sort_viz_display])
743
+
744
+ # ============================================================
745
+ # TAB 3: Searching Algorithms
746
+ # ============================================================
747
+ with gr.TabItem("🔍 Searching"):
748
+ with gr.Row():
749
+ # Left: Search controls
750
+ with gr.Column(scale=1):
751
+ gr.Markdown("### Search Settings")
752
+
753
+ search_algo = gr.Radio(
754
+ choices=["Linear Search", "Binary Search"],
755
+ value="Linear Search",
756
+ label="Algorithm",
757
+ info="Binary Search is O(log n) but requires sorted data"
758
+ )
759
+
760
+ # Educational info accordion
761
+ with gr.Accordion("📚 Algorithm Info", open=False):
762
+ gr.Markdown("""
763
+ **Linear Search** - O(n)
764
+ - Works on ANY list (sorted or unsorted)
765
+ - Checks each element one by one
766
+ - Simple but slow for large lists
767
+
768
+ **Binary Search** - O(log n)
769
+ - ⚠️ REQUIRES SORTED DATA!
770
+ - Halves the search space each step
771
+ - Much faster: 1000 elements → only 10 comparisons!
772
+
773
+ **Example (searching 1000 elements):**
774
+ - Linear: up to 1000 checks
775
+ - Binary: at most 10 checks (log₂1000 ≈ 10)
776
+ """)
777
+
778
+ target_index = gr.Number(
779
+ label="Target Index",
780
+ value=0,
781
+ precision=0,
782
+ info="Which element to search for (by index)"
783
+ )
784
+
785
+ run_search_btn = gr.Button("🔍 Run Search", variant="primary", size="lg")
786
+
787
+ gr.Markdown("---")
788
+ gr.Markdown("### Current List")
789
+ search_list_display = gr.HTML(value=self._render_image_list())
790
+
791
+ # Right: Visualization
792
+ with gr.Column(scale=2):
793
+ gr.Markdown("### Visualization")
794
+ search_viz_display = gr.HTML(
795
+ value=self.visualizer.render_current()
796
+ )
797
+
798
+ # Navigation controls
799
+ with gr.Row():
800
+ search_start_btn = gr.Button("⏮️ Start")
801
+ search_prev_btn = gr.Button("◀️ Prev")
802
+ search_next_btn = gr.Button("Next ▶️")
803
+ search_end_btn = gr.Button("End ⏭️")
804
+
805
+ search_status = gr.Textbox(label="Status", interactive=False)
806
+
807
+ # Wire up search events
808
+ run_search_btn.click(
809
+ fn=self.run_search,
810
+ inputs=[search_algo, target_index],
811
+ outputs=[search_viz_display, search_status]
812
+ )
813
+ search_next_btn.click(fn=self.viz_next, outputs=[search_viz_display])
814
+ search_prev_btn.click(fn=self.viz_prev, outputs=[search_viz_display])
815
+ search_start_btn.click(fn=self.viz_start, outputs=[search_viz_display])
816
+ search_end_btn.click(fn=self.viz_end, outputs=[search_viz_display])
817
+
818
+ # ============================================================
819
+ # TAB 4: Learn OOP
820
+ # ============================================================
821
+ with gr.TabItem("📚 Learn OOP"):
822
+ gr.Markdown("""
823
+ # Object-Oriented Programming Concepts
824
+
825
+ This application demonstrates several key OOP concepts:
826
+
827
+ ## 📦 Classes & Objects
828
+
829
+ **Classes** are blueprints for creating objects. In this app:
830
+ - `GestureImage` - represents a single captured gesture
831
+ - `ImageList` - manages a collection of gestures
832
+ - `BubbleSort`, `MergeSort`, `QuickSort` - sorting algorithms
833
+ - `Visualizer` - handles step-by-step display
834
+
835
+ ## 🎭 Inheritance
836
+
837
+ **Inheritance** lets classes share code. All sorting algorithms inherit from `SortingAlgorithm`:
838
+
839
+ ```python
840
+ class SortingAlgorithm(ABC): # Abstract Base Class
841
+ @abstractmethod
842
+ def sort(self, data): ...
843
+
844
+ class BubbleSort(SortingAlgorithm): # Inherits from SortingAlgorithm
845
+ def sort(self, data):
846
+ ...  # bubble sort implementation goes here
847
+ ```
848
+
849
+ ## 🔄 Polymorphism
850
+
851
+ **Polymorphism** means "same interface, different behavior":
852
+
853
+ ```python
854
+ # All these work the same way!
855
+ algo = BubbleSort()
856
+ algo = MergeSort()
857
+ algo = QuickSort()
858
+
859
+ # Same method call, different algorithms
860
+ result, steps = algo.run_full(data)
861
+ ```
862
+
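The same dispatch can be reproduced in a few self-contained lines (the `Sorter`, `Ascending`, and `Descending` names below are illustrative stand-ins, not classes from this app):

```python
from abc import ABC, abstractmethod

class Sorter(ABC):            # stand-in for SortingAlgorithm
    @abstractmethod
    def sort(self, data): ...

class Ascending(Sorter):
    def sort(self, data):
        return sorted(data)

class Descending(Sorter):
    def sort(self, data):
        return sorted(data, reverse=True)

def run(algo: Sorter, data):
    # run() neither knows nor cares which concrete class it received
    return algo.sort(data)

print(run(Ascending(), [3, 1, 2]))   # [1, 2, 3]
print(run(Descending(), [3, 1, 2]))  # [3, 2, 1]
```

Because both classes honour the same `sort()` contract, `run()` works unchanged for any future `Sorter` subclass.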
863
+ ## 🏭 Factory Pattern
864
+
865
+ **Factory Pattern** creates objects without exposing creation logic:
866
+
867
+ ```python
868
+ # Factory creates the right renderer automatically
869
+ renderer = RendererFactory.create("Bubble Sort")
870
+ ```
871
+
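The app's real `RendererFactory` isn't shown here, but the pattern itself fits in a short runnable sketch (every class name below is made up for illustration):

```python
class Renderer:
    def render(self) -> str:
        return "generic"

class BubbleRenderer(Renderer):
    def render(self) -> str:
        return "bubble"

class MergeRenderer(Renderer):
    def render(self) -> str:
        return "merge"

class RendererFactory:
    # Maps a display name to the class that handles it
    _registry = {"Bubble Sort": BubbleRenderer, "Merge Sort": MergeRenderer}

    @classmethod
    def create(cls, name: str) -> Renderer:
        try:
            return cls._registry[name]()  # look up the class, then instantiate it
        except KeyError:
            raise ValueError(f"Unknown renderer: {name}")

renderer = RendererFactory.create("Bubble Sort")
print(renderer.render())  # the caller never names BubbleRenderer directly
```

The caller only supplies a string; the factory decides which concrete class to build, so adding a new renderer means touching only the registry.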
872
+ ## 📊 Algorithm Comparison
873
+
874
+ | Algorithm | Time (Best) | Time (Worst) | Stable? | In-Place? |
875
+ |-----------|-------------|--------------|---------|-----------|
876
+ | Bubble Sort | O(n) | O(n²) | ✅ Yes | ✅ Yes |
877
+ | Merge Sort | O(n log n) | O(n log n) | ✅ Yes | ❌ No |
878
+ | Quick Sort | O(n log n) | O(n²) | ❌ No | ✅ Yes |
879
+ | Linear Search | O(1) | O(n) | - | - |
880
+ | Binary Search | O(1) | O(log n) | - | - |
881
+
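The O(log n) rows in the table can be checked empirically by counting loop iterations of a plain binary search over indices (this helper is illustrative, not part of the app):

```python
import math

def binary_search_steps(n: int, target: int) -> int:
    """Count comparisons while binary-searching `target` in range(n)."""
    left, right, steps = 0, n - 1, 0
    while left <= right:
        steps += 1
        mid = (left + right) // 2
        if mid == target:
            return steps
        elif mid < target:
            left = mid + 1
        else:
            right = mid - 1
    return steps

for n in (1_000, 1_000_000):
    worst = max(binary_search_steps(n, t) for t in (0, n // 2, n - 1))
    print(n, worst, math.floor(math.log2(n)) + 1)
```

For 1,000 elements the worst probe takes 10 comparisons; for 1,000,000 it stays at or below 20 — matching the table's O(log n) claim.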
882
+ ## 🔍 Stability
883
+
884
+ A **stable** sort preserves the relative order of equal elements.
885
+
886
+ Example with two peace signs ✌️₁ and ✌️₂:
887
+ - **Stable**: Always produces [✌️₁, ✌️₂] (original order kept)
888
+ - **Unstable**: Might produce [✌️₂, ✌️₁] (order can change)
889
+
890
+ Try Quick Sort with duplicate gestures to see instability!
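Python's built-in `sorted` is stable, which makes stability easy to demonstrate with tagged duplicates (the tuples below are illustrative stand-ins, not the app's `GestureImage` objects):

```python
# Each item is (rank, tag); sorting by rank only makes the rank-2 items "equal".
items = [(2, "peace-1"), (1, "fist"), (2, "peace-2")]

result = sorted(items, key=lambda item: item[0])  # Python's sort is stable
print(result)
# The two rank-2 items keep their original relative order:
# [(1, 'fist'), (2, 'peace-1'), (2, 'peace-2')]
```

A stable sort guarantees `peace-1` always precedes `peace-2`; an unstable one (like Quick Sort) offers no such guarantee.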
891
+ """)
892
+
893
+ # Footer
894
+ gr.Markdown("""
895
+ ---
896
+ *Built for CISC 121 - Queen's University*
897
+ """)
898
+
899
+ return demo
900
+
901
+
902
+ # ==============================================================================
903
+ # MAIN ENTRY POINT
904
+ # ==============================================================================
905
+
906
+ def main():
907
+ """Create and launch the Gradio app."""
908
+ app = GradioApp()
909
+ demo = app.create_ui()
910
+ demo.launch(share=False)
911
+
912
+
913
+ if __name__ == "__main__":
914
+ main()
oop_sorting_teaching/__init__.py ADDED
@@ -0,0 +1,123 @@
1
+ """
2
+ ╔══════════════════════════════════════════════════════════════════════════════╗
3
+ ║ ║
4
+ ║ 🎓 CISC 121 - OOP Sorting & Searching Visualizer ║
5
+ ║ ║
6
+ ║ Queen's University - Introduction to Computing Science I ║
7
+ ║ ║
8
+ ║ Package: oop_sorting_teaching ║
9
+ ║ Purpose: Learn Object-Oriented Programming through visual algorithm demos ║
10
+ ║ ║
11
+ ╚══════════════════════════════════════════════════════════════════════════════╝
12
+
13
+ 📚 CONCEPT: Python Packages
14
+ ═══════════════════════════
15
+
16
+ A PACKAGE is a way to organize related Python code into a folder structure.
17
+
18
+ Think of it like a filing cabinet:
19
+ • The cabinet (package) holds related folders
20
+ • Each folder (subpackage) holds related files
21
+ • Each file (module) holds related code
22
+
23
+ WHY USE PACKAGES?
24
+ • Organization: Related code lives together
25
+ • Reusability: Import just what you need
26
+ • Maintainability: Smaller files are easier to understand
27
+ • Collaboration: Different people can work on different modules
28
+
29
+ PACKAGE STRUCTURE:
30
+ ├── oop_sorting_teaching/ # Main package
31
+ │ ├── __init__.py # This file - makes it a package
32
+ │ ├── models/ # Data structures
33
+ │ │ ├── gesture.py # GestureRanking, GestureImage
34
+ │ │ ├── step.py # StepType, Step
35
+ │ │ └── image_list.py # ImageList
36
+ │ ├── algorithms/ # Sorting & searching
37
+ │ │ ├── sorting/ # Sorting algorithms
38
+ │ │ └── searching/ # Search algorithms
39
+ │ ├── visualization/ # Display logic
40
+ │ │ ├── renderers/ # HTML renderers
41
+ │ │ └── visualizer.py # Main visualizer
42
+ │ └── tests/ # Test functions
43
+
44
+ IMPORTING FROM THIS PACKAGE:
45
+ # Import specific classes
46
+ from oop_sorting_teaching.models import GestureImage, GestureRanking
47
+
48
+ # Import algorithm
49
+ from oop_sorting_teaching.algorithms.sorting import BubbleSort
50
+
51
+ # Or use the convenient shortcuts below:
52
+ from oop_sorting_teaching import GestureImage, BubbleSort
53
+ """
54
+
55
+ # ==============================================================================
56
+ # CONVENIENT IMPORTS
57
+ # ==============================================================================
58
+ # These imports let users do:
59
+ # from oop_sorting_teaching import GestureImage
60
+ # instead of:
61
+ # from oop_sorting_teaching.models.gesture import GestureImage
62
+ # ==============================================================================
63
+
64
+ # Core models
65
+ from .models import (
66
+ GestureRanking,
67
+ GestureImage,
68
+ StepType,
69
+ Step,
70
+ ImageList,
71
+ )
72
+
73
+ # Sorting algorithms
74
+ from .algorithms import (
75
+ SortingAlgorithm,
76
+ SearchAlgorithm,
77
+ BubbleSort,
78
+ MergeSort,
79
+ QuickSort,
80
+ PivotStrategy,
81
+ PartitionScheme,
82
+ LinearSearch,
83
+ BinarySearch,
84
+ )
85
+
86
+ # Visualization
87
+ from .visualization import (
88
+ VisualizationState,
89
+ VisualizationConfig,
90
+ Visualizer,
91
+ StepRenderer,
92
+ RendererFactory,
93
+ )
94
+
95
+ # Define what gets exported with "from oop_sorting_teaching import *"
96
+ __all__ = [
97
+ # Models
98
+ "GestureRanking",
99
+ "GestureImage",
100
+ "StepType",
101
+ "Step",
102
+ "ImageList",
103
+ # Sorting
104
+ "SortingAlgorithm",
105
+ "BubbleSort",
106
+ "MergeSort",
107
+ "QuickSort",
108
+ "PivotStrategy",
109
+ "PartitionScheme",
110
+ # Searching
111
+ "SearchAlgorithm",
112
+ "LinearSearch",
113
+ "BinarySearch",
114
+ # Visualization
115
+ "VisualizationState",
116
+ "VisualizationConfig",
117
+ "Visualizer",
118
+ "StepRenderer",
119
+ "RendererFactory",
120
+ ]
121
+
122
+ # Package version
123
+ __version__ = "1.0.0"
oop_sorting_teaching/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (3.82 kB). View file
 
oop_sorting_teaching/algorithms/__init__.py ADDED
@@ -0,0 +1,54 @@
1
+ """
2
+ Algorithms subpackage - Sorting and searching algorithms.
3
+
4
+ This package contains:
5
+ • SortingAlgorithm - Abstract base class for sorting
6
+ • SearchAlgorithm - Abstract base class for searching
7
+ • BubbleSort, MergeSort, QuickSort - Sorting implementations
8
+ • LinearSearch, BinarySearch - Search implementations
9
+
10
+ 📚 PACKAGE ORGANIZATION:
11
+ algorithms/
12
+ ├── __init__.py (this file)
13
+ ├── base.py (abstract base classes)
14
+ ├── sorting/ (sorting algorithms)
15
+ │ ├── bubble_sort.py
16
+ │ ├── merge_sort.py
17
+ │ └── quick_sort.py
18
+ └── searching/ (search algorithms)
19
+ ├── linear_search.py
20
+ └── binary_search.py
21
+ """
22
+
23
+ # Import base classes
24
+ from .base import SortingAlgorithm, SearchAlgorithm
25
+
26
+ # Import sorting algorithms
27
+ from .sorting import (
28
+ BubbleSort,
29
+ MergeSort,
30
+ QuickSort,
31
+ PivotStrategy,
32
+ PartitionScheme,
33
+ )
34
+
35
+ # Import search algorithms
36
+ from .searching import (
37
+ LinearSearch,
38
+ BinarySearch,
39
+ )
40
+
41
+ __all__ = [
42
+ # Base classes
43
+ "SortingAlgorithm",
44
+ "SearchAlgorithm",
45
+ # Sorting
46
+ "BubbleSort",
47
+ "MergeSort",
48
+ "QuickSort",
49
+ "PivotStrategy",
50
+ "PartitionScheme",
51
+ # Searching
52
+ "LinearSearch",
53
+ "BinarySearch",
54
+ ]
oop_sorting_teaching/algorithms/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (1.28 kB). View file
 
oop_sorting_teaching/algorithms/__pycache__/base.cpython-313.pyc ADDED
Binary file (11.7 kB). View file
 
oop_sorting_teaching/algorithms/base.py ADDED
@@ -0,0 +1,289 @@
1
+ """
2
+ Base classes for sorting and searching algorithms.
3
+
4
+ This module defines the abstract base classes (interfaces) that all
5
+ sorting and searching algorithms must implement.
6
+
7
+ OOP Concepts Demonstrated:
8
+ - Abstract Base Classes (ABC)
9
+ - Abstract methods (@abstractmethod)
10
+ - Properties (@property)
11
+ - Generator functions (yield)
12
+ - Type hints with Generator
13
+ """
14
+
15
+ from abc import ABC, abstractmethod
16
+ from typing import List, Generator, Tuple, Optional
17
+
18
+ from ..models import GestureImage, Step, StepType
19
+
20
+
21
+ # ==============================================================================
22
+ # ABSTRACT CLASS: SortingAlgorithm (The Interface)
23
+ # ==============================================================================
24
+
25
+ class SortingAlgorithm(ABC):
26
+ """
27
+ Abstract base class (interface) for all sorting algorithms.
28
+
29
+ This defines the CONTRACT that all sorting algorithms must follow:
30
+ - They must have a name
31
+ - They must indicate if they're stable
32
+ - They must indicate if they sort in-place
33
+ - They must implement a sort() method that yields steps
34
+
35
+ ┌─────────────────────────────────────────────────────────────────────────┐
36
+ │ 🔄 PROCEDURAL vs OOP: Algorithm Organization │
37
+ │ │
38
+ │ PROCEDURAL (scattered functions): │
39
+ │ def bubble_sort(arr): ... │
40
+ │ def merge_sort(arr): ... │
41
+ │ def quick_sort(arr): ... │
42
+ │ # No clear structure, hard to add new algorithms │
43
+ │ │
44
+ │ OOP (organized hierarchy): │
45
+ │ class SortingAlgorithm(ABC): # The contract │
46
+ │ def sort(self): ... │
47
+ │ │
48
+ │ class BubbleSort(SortingAlgorithm): # Implements contract │
49
+ │ class MergeSort(SortingAlgorithm): # Implements contract │
50
+ │ class QuickSort(SortingAlgorithm): # Implements contract │
51
+ │ │
52
+ │ # Easy to add new algorithms, all follow same pattern! │
53
+ └─────────────────────────────────────────────────────────────────────────┘
54
+ """
55
+
56
+ # -------------------------------------------------------------------------
57
+ # Abstract Properties (MUST be implemented by subclasses)
58
+ # -------------------------------------------------------------------------
59
+
60
+ @property
61
+ @abstractmethod
62
+ def name(self) -> str:
63
+ """The display name of the algorithm (e.g., 'Bubble Sort')."""
64
+ pass
65
+
66
+ @property
67
+ @abstractmethod
68
+ def is_stable(self) -> bool:
69
+ """
70
+ Whether the algorithm is stable.
71
+
72
+ A STABLE algorithm preserves the relative order of equal elements.
73
+
74
+ Example with [✌️₁, ✌️₂, ✊]:
75
+ - Stable: Always produces [✊, ✌️₁, ✌️₂] (original order of peace signs kept)
76
+ - Unstable: Might produce [✊, ✌️₂, ✌️₁] (order can change)
77
+ """
78
+ pass
79
+
80
+ @property
81
+ @abstractmethod
82
+ def is_in_place(self) -> bool:
83
+ """
84
+ Whether the algorithm sorts in-place (modifies the original array).
85
+
86
+ In-place: Uses O(1) extra memory (just swaps elements)
87
+ Not in-place: Creates new arrays (uses O(n) extra memory)
88
+ """
89
+ pass
90
+
91
+ @property
92
+ def description(self) -> str:
93
+ """Human-readable description of the algorithm."""
94
+ stability = "Stable" if self.is_stable else "Unstable"
95
+ memory = "In-place" if self.is_in_place else "Out-of-place"
96
+ return f"{self.name} ({stability}, {memory})"
97
+
98
+ # -------------------------------------------------------------------------
99
+ # Abstract Method: sort (MUST be implemented by subclasses)
100
+ # -------------------------------------------------------------------------
101
+
102
+ @abstractmethod
103
+ def sort(self, data: List[GestureImage]) -> Generator[Step, None, List[GestureImage]]:
104
+ """
105
+ Sort the data and yield steps for visualization.
106
+
107
+ This is a GENERATOR function (uses yield instead of return).
108
+ It allows us to pause the algorithm after each step for visualization.
109
+
110
+ Args:
111
+ data: List of GestureImage objects to sort
112
+
113
+ Yields:
114
+ Step objects describing each operation
115
+
116
+ Returns:
117
+ The sorted list
118
+ """
119
+ pass
120
+
121
+ # -------------------------------------------------------------------------
122
+ # Concrete Methods (shared by all subclasses)
123
+ # -------------------------------------------------------------------------
124
+
125
+ def run_full(self, data: List[GestureImage]) -> Tuple[List[GestureImage], List[Step]]:
126
+ """
127
+ Run the sort and collect all steps (non-generator version).
128
+
129
+ Use this when you want all steps at once, not one at a time.
130
+
131
+ Args:
132
+ data: List to sort
133
+
134
+ Returns:
135
+ Tuple of (sorted_list, list_of_all_steps)
136
+ """
137
+ steps = []
138
+ result = None
139
+
140
+ # Consume the generator and collect steps
141
+ generator = self.sort(data.copy())
142
+ try:
143
+ while True:
144
+ step = next(generator)
145
+ steps.append(step)
146
+ except StopIteration as e:
147
+ result = e.value # The return value of the generator
148
+
149
+ return (result if result is not None else data), steps
150
+
151
+ def _create_step(
152
+ self,
153
+ step_type: StepType,
154
+ indices: List[int],
155
+ description: str,
156
+ data: List[GestureImage],
157
+ depth: int = 0,
158
+ highlight: Optional[List[int]] = None,
159
+ metadata: Optional[dict] = None
160
+ ) -> Step:
161
+ """
162
+ Helper method to create a Step object.
163
+
164
+ The underscore prefix indicates this is for internal use.
165
+ """
166
+ return Step(
167
+ step_type=step_type,
168
+ indices=indices,
169
+ description=description,
170
+ depth=depth,
171
+ array_state=[img for img in data], # Copy the current state
172
+ highlight_indices=highlight or [],
173
+ metadata=metadata or {}
174
+ )
175
+
176
+
177
+ # ==============================================================================
178
+ # ABSTRACT CLASS: SearchAlgorithm (The Interface for Search Algorithms)
179
+ # ==============================================================================
180
+
181
+ class SearchAlgorithm(ABC):
182
+ """
183
+ Abstract base class (interface) for all search algorithms.
184
+
185
+ This is similar to SortingAlgorithm but for searching.
186
+ By having a common interface, we can swap between different
187
+ search algorithms easily (Linear Search, Binary Search, etc.)
188
+
189
+ ┌─────────────────────────────────────────────────────────────────────────┐
190
+ │ 🔄 PROCEDURAL vs OOP: Search Functions │
191
+ │ │
192
+ │ PROCEDURAL: │
193
+ │ def linear_search(arr, target): ... │
194
+ │ def binary_search(arr, target): ... │
195
+ │ # No clear structure, different return types, etc. │
196
+ │ │
197
+ │ OOP: │
198
+ │ class SearchAlgorithm(ABC): │
199
+ │ def search(self, data, target) -> Generator[Step]: ... │
200
+ │ │
201
+ │ class LinearSearch(SearchAlgorithm): ... │
202
+ │ class BinarySearch(SearchAlgorithm): ... │
203
+ │ │
204
+ │ # All search algorithms follow the same pattern! │
205
+ └─────────────────────────────────────────────────────────────────────────┘
206
+ """
207
+
208
+ @property
209
+ @abstractmethod
210
+ def name(self) -> str:
211
+ """The display name of the algorithm."""
212
+ pass
213
+
214
+ @property
215
+ @abstractmethod
216
+ def requires_sorted(self) -> bool:
217
+ """Whether the algorithm requires sorted input."""
218
+ pass
219
+
220
+ @property
221
+ def description(self) -> str:
222
+ """Human-readable description."""
223
+ sorted_req = "requires sorted input" if self.requires_sorted else "works on unsorted input"
224
+ return f"{self.name} ({sorted_req})"
225
+
226
+ @abstractmethod
227
+ def search(
228
+ self,
229
+ data: List[GestureImage],
230
+ target: GestureImage
231
+ ) -> Generator[Step, None, Optional[int]]:
232
+ """
233
+ Search for target in data and yield steps for visualization.
234
+
235
+ Args:
236
+ data: List to search in
237
+ target: Element to find
238
+
239
+ Yields:
240
+ Step objects describing each operation
241
+
242
+ Returns:
243
+ Index of target if found, None otherwise
244
+ """
245
+ pass
246
+
247
+ def run_full(
248
+ self,
249
+ data: List[GestureImage],
250
+ target: GestureImage
251
+ ) -> Tuple[Optional[int], List[Step]]:
252
+ """
253
+ Run the search and collect all steps.
254
+
255
+ Returns:
256
+ Tuple of (result_index, list_of_all_steps)
257
+ """
258
+ steps = []
259
+ result = None
260
+
261
+ generator = self.search(data, target)
262
+ try:
263
+ while True:
264
+ step = next(generator)
265
+ steps.append(step)
266
+ except StopIteration as e:
267
+ result = e.value
268
+
269
+ return result, steps
270
+
271
+ def _create_step(
272
+ self,
273
+ step_type: StepType,
274
+ indices: List[int],
275
+ description: str,
276
+ data: List[GestureImage],
277
+ highlight: Optional[List[int]] = None,
278
+ metadata: Optional[dict] = None
279
+ ) -> Step:
280
+ """Helper to create Step objects."""
281
+ return Step(
282
+ step_type=step_type,
283
+ indices=indices,
284
+ description=description,
285
+ depth=0,
286
+ array_state=[img for img in data],
287
+ highlight_indices=highlight or [],
288
+ metadata=metadata or {}
289
+ )
oop_sorting_teaching/algorithms/searching/__init__.py ADDED
@@ -0,0 +1,18 @@
1
+ """
2
+ Searching algorithms package.
3
+
4
+ Contains implementations of various search algorithms:
5
+ - LinearSearch: Simple sequential search (works on unsorted data)
6
+ - BinarySearch: Efficient divide-and-conquer search (requires sorted data)
7
+
8
+ Each algorithm inherits from SearchAlgorithm and implements
9
+ the search() generator method.
10
+ """
11
+
12
+ from .linear_search import LinearSearch
13
+ from .binary_search import BinarySearch
14
+
15
+ __all__ = [
16
+ 'LinearSearch',
17
+ 'BinarySearch',
18
+ ]
oop_sorting_teaching/algorithms/searching/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (673 Bytes). View file
 
oop_sorting_teaching/algorithms/searching/__pycache__/binary_search.cpython-313.pyc ADDED
Binary file (17.7 kB). View file
 
oop_sorting_teaching/algorithms/searching/__pycache__/linear_search.cpython-313.pyc ADDED
Binary file (4.33 kB). View file
 
oop_sorting_teaching/algorithms/searching/binary_search.py ADDED
@@ -0,0 +1,349 @@
1
+ """
2
+ Binary Search implementation.
3
+
4
+ ╔══════════════════════════════════════════════════════════════════════════════╗
5
+ ║ 📚 ALGORITHM: Binary Search ║
6
+ ╠══════════════════════════════════════════════════════════════════════════════╣
7
+ ║ ║
8
+ ║ WHAT IS BINARY SEARCH? ║
9
+ ║ Binary search is an efficient algorithm for finding an item in a SORTED ║
10
+ ║ list. Instead of checking every element (linear search), it repeatedly ║
11
+ ║ divides the search space in half. ║
12
+ ║ ║
13
+ ║ HOW IT WORKS: ║
14
+ ║ 1. Look at the MIDDLE element ║
15
+ ║ 2. If it's the target, we're done! ║
16
+ ║ 3. If target is SMALLER, search the LEFT half ║
17
+ ║ 4. If target is LARGER, search the RIGHT half ║
18
+ ║ 5. Repeat until found or search space is empty ║
19
+ ║ ║
20
+ ║ VISUALIZATION: ║
21
+ ║ ║
22
+ ║ Target: 🖐️ (rank 6) ║
23
+ ║ ║
24
+ ║ Step 1: [✊] [☝️] [✌️] [🤟] [🖖] [🖐️] [👌] [👍] ║
25
+ ║ [=================↑==================] ║
26
+ ║ mid=3 (🤟, rank 4) ║
27
+ ║ 🤟 < 🖐️ → search RIGHT ║
28
+ ║ ║
29
+ ║ Step 2: [✊] [☝️] [✌️] [🤟] [🖖] [🖐️] [👌] [👍] ║
30
+ ║ [=====↑=====] ║
31
+ ║ mid=5 (🖐️, rank 6) ║
32
+ ║ FOUND! ✅ ║
33
+ ║ ║
34
+ ║ PROPERTIES: ║
35
+ ║ • Time: O(log n) - halves search space each step ║
36
+ ║ • Space: O(1) iterative, O(log n) recursive ║
37
+ ║ • Requirement: Data MUST be sorted! ║
38
+ ║ ║
39
+ ║ COMPARISON WITH LINEAR SEARCH: ║
40
+ ║ ───────────────────────────── ║
41
+ ║ For 1000 elements: ║
42
+ ║ • Linear Search: up to 1000 comparisons (O(n)) ║
43
+ ║ • Binary Search: at most 10 comparisons (O(log n)) ║
44
+ ║ ║
45
+ ║ For 1,000,000 elements: ║
46
+ ║ • Linear Search: up to 1,000,000 comparisons ║
47
+ ║ • Binary Search: at most 20 comparisons! ║
48
+ ║ ║
49
+ ╚══════════════════════════════════════════════════════════════════════════════╝
50
+ """
51
+
52
+ import math
53
+ from typing import List, Generator, Optional
54
+
55
+ from ..base import SearchAlgorithm
56
+ from ...models import GestureImage, Step, StepType
57
+
58
+
59
+ class BinarySearch(SearchAlgorithm):
60
+ """
61
+ Binary Search - efficient search for sorted data.
62
+
63
+ Repeatedly divides the search space in half.
64
+
65
+ ┌─────────────────────────────────────────────────────────────────────────┐
66
+ │ 📚 CONCEPT: Divide and Conquer │
67
+ │ │
68
+ │ Binary Search uses the same strategy as Merge Sort: │
69
+ │ 1. DIVIDE the problem in half │
70
+ │ 2. CONQUER by recursively solving smaller problem │
71
+ │ 3. COMBINE (trivial for search - just return the result) │
72
+ │ │
73
+ │ Why is this efficient? │
74
+ │ • Each step eliminates HALF of the remaining elements │
75
+ │ • After k steps, only n/2^k elements remain │
76
+ │ • When n/2^k = 1, we've found our answer: k = log₂(n) │
77
+ │ │
78
+ │ Example: │
79
+ │ • 1,000 elements → log₂(1000) ≈ 10 steps │
80
+ │ • 1,000,000 elements → log₂(1000000) ≈ 20 steps │
81
+ │ • 1,000,000,000 elements → log₂(10⁹) ≈ 30 steps! │
82
+ └─────────────────────────────────────────────────────────────────────────┘
83
+
84
+ ┌─────────────────────────────────────────────────────────────────────────┐
85
+ │ ⚠️ IMPORTANT: Binary Search REQUIRES SORTED DATA! │
86
+ │ │
87
+ │ If the data is not sorted, Binary Search will give WRONG results! │
88
+ │ │
89
+ │ Our implementation checks for this and warns the user. │
90
+ └─────────────────────────────────────────────────────────────────────────┘
91
+ """
92
+
93
+ def __init__(self, variant: str = "iterative"):
94
+ """
95
+ Initialize Binary Search.
96
+
97
+ Args:
98
+ variant: "iterative" or "recursive"
99
+ Both do the same thing, just different implementations.
100
+ Iterative uses a loop, Recursive uses function calls.
101
+ """
102
+ self.variant = variant
103
+ self._comparisons = 0
104
+
105
+ @property
106
+ def name(self) -> str:
107
+ return f"Binary Search ({self.variant.title()})"
108
+
109
+ @property
110
+ def requires_sorted(self) -> bool:
111
+ return True # MUST be sorted!
112
+
113
+ def search(
114
+ self,
115
+ data: List[GestureImage],
116
+ target: GestureImage
117
+ ) -> Generator[Step, None, Optional[int]]:
118
+ """
119
+ Search using binary search.
120
+
121
+ Time Complexity: O(log n)
122
+ Space Complexity: O(1) iterative, O(log n) recursive
123
+ """
124
+ self._comparisons = 0
125
+
126
+ # First, validate that data is sorted
127
+ if not self._is_sorted(data):
128
+ yield self._create_step(
129
+ step_type=StepType.NOT_FOUND,
130
+ indices=[],
131
+ description="⚠️ ERROR: Data is NOT sorted! Binary Search requires sorted input.",
132
+ data=data,
133
+ metadata={"error": "unsorted_input"}
134
+ )
135
+ return None
136
+
137
+ yield self._create_step(
138
+ step_type=StepType.SEARCH_RANGE,
139
+ indices=list(range(len(data))),
140
+ description=f"Binary Search for {target} (rank {target.rank}) in sorted list of {len(data)} elements",
141
+ data=data,
142
+ metadata={"target_rank": target.rank, "max_steps": self._calculate_max_steps(len(data))}
143
+ )
144
+
145
+ if self.variant == "iterative":
146
+ result = yield from self._search_iterative(data, target)
147
+ else:
148
+ result = yield from self._search_recursive(data, target, 0, len(data) - 1)
149
+
150
+ return result
151
+
152
+ def _search_iterative(
153
+ self,
154
+ data: List[GestureImage],
155
+ target: GestureImage
156
+ ) -> Generator[Step, None, Optional[int]]:
157
+ """
158
+ Iterative implementation of binary search.
159
+
160
+ Uses a while loop instead of recursion.
161
+ More memory efficient (O(1) space).
162
+ """
163
+ left = 0
164
+ right = len(data) - 1
165
+ step_num = 0
166
+ max_steps = self._calculate_max_steps(len(data))
167
+
168
+ while left <= right:
169
+ step_num += 1
170
+ mid = (left + right) // 2
171
+ self._comparisons += 1
172
+
173
+ # Show the current search range
174
+ yield self._create_step(
175
+ step_type=StepType.SEARCH_RANGE,
176
+ indices=list(range(left, right + 1)),
177
+ description=f"Step {step_num}/{max_steps}: Searching range [{left}:{right}], mid={mid}",
178
+ data=data,
179
+ highlight=[mid],
180
+ metadata={
181
+ "left": left,
182
+ "right": right,
183
+ "mid": mid,
184
+ "comparisons": self._comparisons,
185
+ "step": step_num
186
+ }
187
+ )
188
+
189
+ # Compare middle element with target
190
+ mid_value = data[mid]
191
+
192
+ yield self._create_step(
193
+ step_type=StepType.COMPARE,
194
+ indices=[mid],
195
+ description=f"Comparing: {mid_value} (rank {mid_value.rank}) vs target {target} (rank {target.rank})",
196
+ data=data,
197
+ highlight=[mid],
198
+ metadata={"comparisons": self._comparisons}
199
+ )
200
+
201
+ if mid_value.rank == target.rank:
202
+ # Found it!
203
+ yield self._create_step(
204
+ step_type=StepType.FOUND,
205
+ indices=[mid],
206
+ description=f"✅ FOUND at index {mid} in only {self._comparisons} comparisons!",
207
+ data=data,
208
+ highlight=[mid],
209
+ metadata={
210
+ "comparisons": self._comparisons,
211
+ "found": True,
212
+ "efficiency": f"Found in {step_num} steps (max possible: {max_steps})"
213
+ }
214
+ )
215
+ return mid
216
+
217
+ elif mid_value.rank < target.rank:
218
+ # Target is in the right half
219
+ yield self._create_step(
220
+ step_type=StepType.SEARCH_RANGE,
221
+ indices=list(range(mid + 1, right + 1)),
222
+ description=f"{mid_value} < {target} → Eliminating left half, searching [{mid + 1}:{right}]",
223
+ data=data,
224
+ highlight=list(range(mid + 1, right + 1)),
225
+ metadata={"eliminated": list(range(left, mid + 1))}
226
+ )
227
+ left = mid + 1
228
+
229
+ else:
230
+ # Target is in the left half
231
+ yield self._create_step(
232
+ step_type=StepType.SEARCH_RANGE,
233
+ indices=list(range(left, mid)),
234
+ description=f"{mid_value} > {target} → Eliminating right half, searching [{left}:{mid - 1}]",
235
+ data=data,
236
+ highlight=list(range(left, mid)),
237
+ metadata={"eliminated": list(range(mid, right + 1))}
238
+ )
239
+ right = mid - 1
240
+
241
+ # Not found
242
+ yield self._create_step(
243
+ step_type=StepType.NOT_FOUND,
244
+ indices=[],
245
+ description=f"❌ NOT FOUND after {self._comparisons} comparisons. Target {target} is not in the list.",
246
+ data=data,
247
+ metadata={"comparisons": self._comparisons, "found": False}
248
+ )
249
+ return None
250
+
251
+ def _search_recursive(
252
+ self,
253
+ data: List[GestureImage],
254
+ target: GestureImage,
255
+ left: int,
256
+ right: int,
257
+ depth: int = 0
258
+ ) -> Generator[Step, None, Optional[int]]:
259
+ """
260
+ Recursive implementation of binary search.
261
+
262
+ Uses function call stack instead of explicit loop.
263
+ Shows the recursive nature more clearly (good for teaching).
264
+ """
265
+ # Base case: empty range
266
+ if left > right:
267
+ yield self._create_step(
268
+ step_type=StepType.NOT_FOUND,
269
+ indices=[],
270
+ description=f"❌ NOT FOUND: Search range is empty (left={left} > right={right})",
271
+ data=data,
272
+ metadata={"comparisons": self._comparisons, "found": False, "depth": depth}
273
+ )
274
+ return None
275
+
276
+ mid = (left + right) // 2
277
+ self._comparisons += 1
278
+
279
+ # Show current recursive call
280
+        yield self._create_step(
+            step_type=StepType.SEARCH_RANGE,
+            indices=list(range(left, right + 1)),
+            description=f"Depth {depth}: binary_search(data, target, left={left}, right={right}), mid={mid}",
+            data=data,
+            highlight=[mid],
+            metadata={"depth": depth, "left": left, "right": right, "mid": mid}
+        )
+
+        mid_value = data[mid]
+
+        yield self._create_step(
+            step_type=StepType.COMPARE,
+            indices=[mid],
+            description=f"Depth {depth}: Comparing {mid_value} (rank {mid_value.rank}) vs {target} (rank {target.rank})",
+            data=data,
+            highlight=[mid],
+            metadata={"comparisons": self._comparisons, "depth": depth}
+        )
+
+        if mid_value.rank == target.rank:
+            yield self._create_step(
+                step_type=StepType.FOUND,
+                indices=[mid],
+                description=f"✅ FOUND at index {mid} (recursion depth {depth}, {self._comparisons} comparisons)",
+                data=data,
+                highlight=[mid],
+                metadata={"comparisons": self._comparisons, "found": True, "depth": depth}
+            )
+            return mid
+
+        elif mid_value.rank < target.rank:
+            yield self._create_step(
+                step_type=StepType.SEARCH_RANGE,
+                indices=list(range(mid + 1, right + 1)),
+                description=f"Depth {depth}: Recursing into RIGHT half [{mid + 1}:{right}]",
+                data=data,
+                highlight=list(range(mid + 1, right + 1)),
+                metadata={"depth": depth}
+            )
+            # Recursive call to right half
+            result = yield from self._search_recursive(data, target, mid + 1, right, depth + 1)
+            return result
+
+        else:
+            yield self._create_step(
+                step_type=StepType.SEARCH_RANGE,
+                indices=list(range(left, mid)),
+                description=f"Depth {depth}: Recursing into LEFT half [{left}:{mid - 1}]",
+                data=data,
+                highlight=list(range(left, mid)),
+                metadata={"depth": depth}
+            )
+            # Recursive call to left half
+            result = yield from self._search_recursive(data, target, left, mid - 1, depth + 1)
+            return result
+
+    def _is_sorted(self, data: List[GestureImage]) -> bool:
+        """Check if data is sorted in ascending order."""
+        for i in range(len(data) - 1):
+            if data[i].rank > data[i + 1].rank:
+                return False
+        return True
+
+    @staticmethod
+    def _calculate_max_steps(n: int) -> int:
+        """Calculate the maximum number of steps needed for binary search."""
+        if n <= 0:
+            return 0
+        return math.floor(math.log2(n)) + 1
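The `result = yield from self._search_recursive(...)` lines above lean on PEP 380 generator delegation: `yield from` forwards every `Step` the inner generator yields, and the whole expression evaluates to that inner generator's `return` value. A dependency-free sketch of the same pattern (plain ints and tuple steps standing in for `GestureImage`/`Step`; the names here are illustrative, not part of the diff):

```python
from typing import Generator, List, Optional, Tuple

def binary_search(data: List[int], target: int, left: int, right: int
                  ) -> Generator[Tuple[int, int, int], None, Optional[int]]:
    # Base case: empty range means the target is absent
    if left > right:
        return None
    mid = (left + right) // 2
    yield (left, right, mid)  # one "step" per probe
    if data[mid] == target:
        return mid
    if data[mid] < target:
        # Delegate to the right half; its return value becomes ours
        return (yield from binary_search(data, target, mid + 1, right))
    return (yield from binary_search(data, target, left, mid - 1))

def run(gen):
    """Drive a generator to exhaustion, collecting yields and the return value."""
    steps = []
    try:
        while True:
            steps.append(next(gen))
    except StopIteration as stop:
        return steps, stop.value

data = [1, 3, 5, 7, 9, 11]
steps, index = run(binary_search(data, 7, 0, len(data) - 1))
print(index)  # index of 7 in data
print(steps)  # every (left, right, mid) probe, in order
```

The driver mirrors what a visualization loop does with the real classes: each `next()` advances exactly one probe, and the final index arrives via `StopIteration.value`.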
oop_sorting_teaching/algorithms/searching/linear_search.py ADDED
@@ -0,0 +1,98 @@
+"""
+Linear Search implementation.
+
+The simplest search algorithm - just checks every element from start to end.
+Works on both sorted and unsorted data.
+
+┌─────────────────────────────────────────────────────────────────────────┐
+│ 💡 WHEN TO USE LINEAR SEARCH? │
+│ │
+│ ✓ Data is unsorted (or sorting is too expensive) │
+│ ✓ Data is very small (< 10 elements) │
+│ ✓ You only need to search once │
+│ ✓ You need to find ALL occurrences │
+│ │
+│ ✗ Large datasets with many searches → use Binary Search │
+└─────────────────────────────────────────────────────────────────────────┘
+"""
+
+from typing import List, Generator, Optional
+
+from ..base import SearchAlgorithm
+from ...models import GestureImage, Step, StepType
+
+
+class LinearSearch(SearchAlgorithm):
+    """
+    Linear Search - the simplest search algorithm.
+
+    Just checks every element from start to end.
+    Works on both sorted and unsorted data.
+
+    Time Complexity: O(n)
+    Space Complexity: O(1)
+    """
+
+    @property
+    def name(self) -> str:
+        return "Linear Search"
+
+    @property
+    def requires_sorted(self) -> bool:
+        return False  # Works on unsorted data!
+
+    def search(
+        self,
+        data: List[GestureImage],
+        target: GestureImage
+    ) -> Generator[Step, None, Optional[int]]:
+        """
+        Search by checking each element from left to right.
+
+        Time Complexity: O(n)
+        Space Complexity: O(1)
+        """
+        comparisons = 0
+
+        yield self._create_step(
+            step_type=StepType.SEARCH_RANGE,
+            indices=list(range(len(data))),
+            description=f"Searching for {target} (rank {target.rank}) using Linear Search",
+            data=data,
+            metadata={"target_rank": target.rank}
+        )
+
+        for i in range(len(data)):
+            comparisons += 1
+
+            # Show which element we're checking
+            yield self._create_step(
+                step_type=StepType.COMPARE,
+                indices=[i],
+                description=f"Checking index {i}: {data[i]} (rank {data[i].rank}) vs target {target} (rank {target.rank})",
+                data=data,
+                highlight=[i],
+                metadata={"comparisons": comparisons}
+            )
+
+            if data[i].rank == target.rank:
+                # Found it!
+                yield self._create_step(
+                    step_type=StepType.FOUND,
+                    indices=[i],
+                    description=f"FOUND at index {i} after {comparisons} comparisons!",
+                    data=data,
+                    highlight=[i],
+                    metadata={"comparisons": comparisons, "found": True}
+                )
+                return i
+
+        # Not found
+        yield self._create_step(
+            step_type=StepType.NOT_FOUND,
+            indices=[],
+            description=f"NOT FOUND after checking all {comparisons} elements",
+            data=data,
+            metadata={"comparisons": comparisons, "found": False}
+        )
+        return None
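One point the docstring banner makes — linear search is the right tool when you need ALL occurrences — is easy to show with a stripped-down generator (plain ints, no `Step` objects; names here are illustrative, not part of the diff):

```python
from typing import Generator, List

def linear_search_all(data: List[int], target: int) -> Generator[int, None, int]:
    """Yield the index of every occurrence; return the total comparison count."""
    comparisons = 0
    for i, value in enumerate(data):
        comparisons += 1
        if value == target:
            yield i  # keep going instead of returning on the first hit
    return comparisons

gen = linear_search_all([4, 2, 4, 7, 4], 4)
hits = []
try:
    while True:
        hits.append(next(gen))
except StopIteration as stop:
    comparisons = stop.value

print(hits)         # indices of every match
print(comparisons)  # one comparison per element: O(n)
```

Binary search cannot do this in one pass: it stops at one match, and equal elements can sit on either side of the probe.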
oop_sorting_teaching/algorithms/sorting/__init__.py ADDED
@@ -0,0 +1,23 @@
+"""
+Sorting algorithms package.
+
+Contains implementations of various sorting algorithms:
+- BubbleSort: Simple comparison-based sort (stable, in-place)
+- MergeSort: Divide-and-conquer sort (stable, not in-place)
+- QuickSort: Fast divide-and-conquer sort (unstable, in-place)
+
+Each algorithm inherits from SortingAlgorithm and implements
+the sort() generator method.
+"""
+
+from .bubble_sort import BubbleSort
+from .merge_sort import MergeSort
+from .quick_sort import QuickSort, PivotStrategy, PartitionScheme
+
+__all__ = [
+    'BubbleSort',
+    'MergeSort',
+    'QuickSort',
+    'PivotStrategy',
+    'PartitionScheme',
+]
oop_sorting_teaching/algorithms/sorting/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (810 Bytes).
oop_sorting_teaching/algorithms/sorting/__pycache__/bubble_sort.cpython-313.pyc ADDED
Binary file (7.61 kB).
oop_sorting_teaching/algorithms/sorting/__pycache__/merge_sort.cpython-313.pyc ADDED
Binary file (10.1 kB).
oop_sorting_teaching/algorithms/sorting/__pycache__/quick_sort.cpython-313.pyc ADDED
Binary file (24.1 kB).
oop_sorting_teaching/algorithms/sorting/bubble_sort.py ADDED
@@ -0,0 +1,161 @@
+"""
+Bubble Sort implementation.
+
+╔══════════════════════════════════════════════════════════════════════════════╗
+║ 📚 ALGORITHM: Bubble Sort ║
+╠══════════════════════════════════════════════════════════════════════════════╣
+║ ║
+║ HOW IT WORKS: ║
+║ 1. Compare adjacent elements ║
+║ 2. If they're in wrong order, swap them ║
+║ 3. Repeat until no more swaps needed ║
+║ ║
+║ VISUALIZATION: ║
+║ [5] [3] [8] [1] ← Compare [5] and [3] ║
+║ [3] [5] [8] [1] ← Swapped! Compare [5] and [8] ║
+║ [3] [5] [8] [1] ← OK. Compare [8] and [1] ║
+║ [3] [5] [1] [8] ← Swapped! [8] "bubbled up" to the end ✓ ║
+║ ║
+║ PROPERTIES: ║
+║ • Time: O(n²) average/worst, O(n) best (already sorted) ║
+║ • Space: O(1) - in-place ║
+║ • Stable: YES - equal elements keep their relative order ║
+║ • Early Exit: We stop if a pass makes no swaps (already sorted!) ║
+║ ║
+╚══════════════════════════════════════════════════════════════════════════════╝
+"""
+
+from typing import List, Generator
+
+from ..base import SortingAlgorithm
+from ...models import GestureImage, Step, StepType
+
+
+class BubbleSort(SortingAlgorithm):
+    """
+    Bubble Sort with early exit optimization.
+
+    The simplest sorting algorithm - great for learning!
+
+    ┌─────────────────────────────────────────────────────────────────────────┐
+    │ 💡 WHY BUBBLE SORT? │
+    │ │
+    │ It's not the fastest, but it's: │
+    │ ✓ Easy to understand │
+    │ ✓ Easy to implement │
+    │ ✓ Stable (preserves order of equal elements) │
+    │ ✓ Efficient for nearly-sorted data (with early exit) │
+    │ ✓ Great for teaching sorting concepts │
+    └─────────────────────────────────────────────────────────────────────────┘
+    """
+
+    @property
+    def name(self) -> str:
+        return "Bubble Sort"
+
+    @property
+    def is_stable(self) -> bool:
+        return True  # Bubble sort is stable!
+
+    @property
+    def is_in_place(self) -> bool:
+        return True  # Only uses swaps, no extra arrays
+
+    def sort(self, data: List[GestureImage]) -> Generator[Step, None, List[GestureImage]]:
+        """
+        Sort using bubble sort with early exit.
+
+        📚 CONCEPT: Generator Functions (yield)
+
+        A generator function uses 'yield' instead of 'return'.
+        Each yield PAUSES the function and returns a value.
+        The function resumes when next() is called again.
+
+        This lets us:
+        1. Execute one step of the algorithm
+        2. Pause and show that step to the user
+        3. Continue to the next step
+
+        Without generators, we'd need to pre-compute ALL steps,
+        which wastes memory and prevents real-time visualization.
+        """
+        n = len(data)
+
+        # Track statistics for educational display
+        comparisons = 0
+        swaps = 0
+
+        # Outer loop: each pass "bubbles" the largest unsorted element up
+        for i in range(n - 1):
+            swapped = False  # Track if we made any swaps this pass
+
+            # Yield a step showing we're starting a new pass
+            yield self._create_step(
+                step_type=StepType.COMPARE,
+                indices=[],
+                description=f"Pass {i + 1}: Scanning from left to right",
+                data=data,
+                highlight=list(range(n - i, n))  # Highlight already-sorted portion
+            )
+
+            # Inner loop: compare adjacent elements
+            for j in range(n - 1 - i):
+                comparisons += 1
+
+                # Yield a step showing the comparison
+                yield self._create_step(
+                    step_type=StepType.COMPARE,
+                    indices=[j, j + 1],
+                    description=f"Comparing {data[j]} and {data[j + 1]}",
+                    data=data,
+                    highlight=list(range(n - i, n)),
+                    metadata={"comparisons": comparisons, "swaps": swaps}
+                )
+
+                # If left > right, swap them
+                if data[j] > data[j + 1]:
+                    # Perform the swap
+                    data[j], data[j + 1] = data[j + 1], data[j]
+                    swapped = True
+                    swaps += 1
+
+                    # Yield a step showing the swap
+                    yield self._create_step(
+                        step_type=StepType.SWAP,
+                        indices=[j, j + 1],
+                        description=f"Swapped! {data[j]} ↔ {data[j + 1]}",
+                        data=data,
+                        highlight=list(range(n - i, n)),
+                        metadata={"comparisons": comparisons, "swaps": swaps}
+                    )
+
+            # Mark the element that bubbled to its final position
+            yield self._create_step(
+                step_type=StepType.MARK_SORTED,
+                indices=[n - 1 - i],
+                description=f"{data[n - 1 - i]} is now in its final position",
+                data=data,
+                highlight=list(range(n - 1 - i, n))
+            )
+
+            # EARLY EXIT: If no swaps occurred, the array is sorted!
+            if not swapped:
+                yield self._create_step(
+                    step_type=StepType.COMPLETE,
+                    indices=[],
+                    description="No swaps in this pass - array is sorted! (Early exit)",
+                    data=data,
+                    metadata={"comparisons": comparisons, "swaps": swaps, "early_exit": True}
+                )
+                return data
+
+        # Final step: algorithm complete
+        yield self._create_step(
+            step_type=StepType.COMPLETE,
+            indices=[],
+            description=f"Sorting complete! {comparisons} comparisons, {swaps} swaps",
+            data=data,
+            metadata={"comparisons": comparisons, "swaps": swaps, "early_exit": False}
+        )
+
+        return data
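The "Generator Functions (yield)" note in the docstring above boils down to a pattern that works without any of the package's classes. A minimal sketch (plain ints, tuple steps instead of `Step` objects; all names here are illustrative, not part of the diff):

```python
from typing import Generator, List, Tuple

Event = Tuple[str, List[int]]  # (event name, snapshot of the list)

def bubble_sort(data: List[int]) -> Generator[Event, None, List[int]]:
    """Bubble sort that pauses after every comparison and swap."""
    n = len(data)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            yield ("compare", list(data))        # pause: show current state
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
                yield ("swap", list(data))       # pause: show the swap result
        if not swapped:
            return data  # early exit: a clean pass means already sorted
    return data

# The caller can replay snapshots one at a time for visualization
steps = list(bubble_sort([5, 3, 8, 1]))
print(steps[-1][1])  # final snapshot: the sorted list
```

Each `yield` hands control back to the caller, so a UI can render one snapshot per tick instead of precomputing the whole trace.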
oop_sorting_teaching/algorithms/sorting/merge_sort.py ADDED
@@ -0,0 +1,218 @@
+"""
+Merge Sort implementation.
+
+╔══════════════════════════════════════════════════════════════════════════════╗
+║ 📚 ALGORITHM: Merge Sort ║
+╠══════════════════════════════════════════════════════════════════════════════╣
+║ ║
+║ HOW IT WORKS (Divide and Conquer): ║
+║ 1. DIVIDE: Split the array in half ║
+║ 2. CONQUER: Recursively sort each half ║
+║ 3. COMBINE: Merge the sorted halves back together ║
+║ ║
+║ VISUALIZATION: ║
+║ ║
+║ Depth 0: [5, 3, 8, 1] ║
+║ ↓ split ║
+║ Depth 1: [5, 3] [8, 1] ║
+║ ↓ ↓ ║
+║ Depth 2: [5] [3] [8] [1] ║
+║ ↓ merge ↓ merge ║
+║ Depth 1: [3, 5] [1, 8] ║
+║ ↓ merge ║
+║ Depth 0: [1, 3, 5, 8] ← SORTED! ║
+║ ║
+║ PROPERTIES: ║
+║ • Time: O(n log n) always (best = average = worst) ║
+║ • Space: O(n) - needs extra array for merging ║
+║ • Stable: YES - equal elements keep their relative order ║
+║ ║
+╚══════════════════════════════════════════════════════════════════════════════╝
+"""
+
+from typing import List, Generator
+
+from ..base import SortingAlgorithm
+from ...models import GestureImage, Step, StepType
+
+
+class MergeSort(SortingAlgorithm):
+    """
+    Merge Sort - a stable, efficient divide-and-conquer algorithm.
+
+    ┌─────────────────────────────────────────────────────────────────────────┐
+    │ 💡 WHY MERGE SORT? │
+    │ │
+    │ ✓ Guaranteed O(n log n) - no worst case! │
+    │ ✓ Stable - perfect for our stability demonstrations │
+    │ ✓ Parallelizable - each half can be sorted independently │
+    │ ✓ Great for linked lists (no random access needed) │
+    │ │
+    │ ✗ Uses O(n) extra space (not in-place) │
+    │ ✗ Slower than quicksort in practice (more memory operations) │
+    └─────────────────────────────────────────────────────────────────────────┘
+    """
+
+    def __init__(self):
+        """Initialize with tracking variables."""
+        self._comparisons = 0
+        self._moves = 0
+
+    @property
+    def name(self) -> str:
+        return "Merge Sort"
+
+    @property
+    def is_stable(self) -> bool:
+        return True  # Merge sort is stable!
+
+    @property
+    def is_in_place(self) -> bool:
+        return False  # Needs extra space for merging
+
+    def sort(self, data: List[GestureImage]) -> Generator[Step, None, List[GestureImage]]:
+        """
+        Sort using merge sort.
+
+        This is a wrapper that starts the recursive process.
+        """
+        self._comparisons = 0
+        self._moves = 0
+
+        if len(data) <= 1:
+            return data
+
+        # Start the recursive sorting
+        yield from self._merge_sort_recursive(data, 0, len(data) - 1, 0)
+
+        # Final step
+        yield self._create_step(
+            step_type=StepType.COMPLETE,
+            indices=[],
+            description=f"Sorting complete! {self._comparisons} comparisons, {self._moves} moves",
+            data=data,
+            metadata={"comparisons": self._comparisons, "moves": self._moves}
+        )
+
+        return data
+
+    def _merge_sort_recursive(
+        self,
+        data: List[GestureImage],
+        left: int,
+        right: int,
+        depth: int
+    ) -> Generator[Step, None, None]:
+        """
+        Recursive merge sort implementation.
+
+        📚 CONCEPT: Recursion
+
+        Recursion is when a function calls itself.
+        Each call works on a smaller piece of the problem.
+
+        Base case: When to stop (array of size 1)
+        Recursive case: Split, sort halves, merge
+        """
+        # BASE CASE: Array of 1 element is already sorted
+        if left >= right:
+            return
+
+        # Calculate middle point
+        mid = (left + right) // 2
+
+        # Yield step showing the split
+        yield self._create_step(
+            step_type=StepType.SPLIT,
+            indices=list(range(left, right + 1)),
+            description=f"Depth {depth}: Splitting [{left}:{right}] into [{left}:{mid}] and [{mid+1}:{right}]",
+            data=data,
+            depth=depth,
+            metadata={"left": left, "mid": mid, "right": right}
+        )
+
+        # RECURSIVE CASE: Sort left half
+        yield from self._merge_sort_recursive(data, left, mid, depth + 1)
+
+        # Sort right half
+        yield from self._merge_sort_recursive(data, mid + 1, right, depth + 1)
+
+        # Merge the sorted halves
+        yield from self._merge(data, left, mid, right, depth)
+
+    def _merge(
+        self,
+        data: List[GestureImage],
+        left: int,
+        mid: int,
+        right: int,
+        depth: int
+    ) -> Generator[Step, None, None]:
+        """
+        Merge two sorted subarrays.
+
+        Left subarray: data[left:mid+1]
+        Right subarray: data[mid+1:right+1]
+        """
+        # Create temporary arrays (this is why merge sort needs O(n) space)
+        left_arr = data[left:mid + 1]
+        right_arr = data[mid + 1:right + 1]
+
+        yield self._create_step(
+            step_type=StepType.MERGE,
+            indices=list(range(left, right + 1)),
+            description=f"Depth {depth}: Merging [{left}:{mid}] and [{mid+1}:{right}]",
+            data=data,
+            depth=depth
+        )
+
+        i = 0     # Index for left subarray
+        j = 0     # Index for right subarray
+        k = left  # Index for merged array
+
+        # Merge while both subarrays have elements
+        while i < len(left_arr) and j < len(right_arr):
+            self._comparisons += 1
+
+            # Compare elements from both subarrays
+            # Using <= (not <) to maintain stability!
+            if left_arr[i] <= right_arr[j]:
+                data[k] = left_arr[i]
+                i += 1
+            else:
+                data[k] = right_arr[j]
+                j += 1
+
+            self._moves += 1
+            k += 1
+
+            yield self._create_step(
+                step_type=StepType.MOVE,
+                indices=[k - 1],
+                description=f"Placed {data[k - 1]} at position {k - 1}",
+                data=data,
+                depth=depth,
+                metadata={"comparisons": self._comparisons, "moves": self._moves}
+            )
+
+        # Copy remaining elements from left subarray
+        while i < len(left_arr):
+            data[k] = left_arr[i]
+            self._moves += 1
+            i += 1
+            k += 1
+
+        # Copy remaining elements from right subarray
+        while j < len(right_arr):
+            data[k] = right_arr[j]
+            self._moves += 1
+            j += 1
+            k += 1
+
+        yield self._create_step(
+            step_type=StepType.MARK_SORTED,
+            indices=list(range(left, right + 1)),
+            description=f"Merged: positions {left} to {right} are now sorted",
+            data=data,
+            depth=depth
+        )
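The comment "Using <= (not <) to maintain stability!" in `_merge` is the whole story of merge sort's stability: on a tie, the element from the left half (which came first in the original order) wins. A standalone sketch with `(rank, capture_id)` tuples standing in for `GestureImage` (names illustrative, not part of the diff):

```python
from typing import List, Tuple

Item = Tuple[int, str]  # (rank, capture_id) stand-in for GestureImage

def merge(left: List[Item], right: List[Item]) -> List[Item]:
    """Stable merge: on equal ranks, the left element is taken first."""
    out: List[Item] = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i][0] <= right[j][0]:  # <= keeps equal elements in order
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # copy any leftovers from the left half
    out.extend(right[j:])  # copy any leftovers from the right half
    return out

# Two sorted halves, each containing a rank-2 element
merged = merge([(1, "a"), (2, "b")], [(2, "c"), (3, "d")])
print(merged)  # rank-2 "b" stays ahead of rank-2 "c"
```

Change the `<=` to `<` and "c" would jump ahead of "b", silently making the sort unstable.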
oop_sorting_teaching/algorithms/sorting/quick_sort.py ADDED
@@ -0,0 +1,521 @@
+"""
+Quick Sort implementation.
+
+╔══════════════════════════════════════════════════════════════════════════════╗
+║ 📚 ALGORITHM: Quick Sort ║
+╠══════════════════════════════════════════════════════════════════════════════╣
+║ ║
+║ HOW IT WORKS: ║
+║ 1. Pick a PIVOT element ║
+║ 2. PARTITION: Move smaller elements left, larger elements right ║
+║ 3. RECURSE: Sort the left and right partitions ║
+║ ║
+║ VISUALIZATION (2-way partitioning): ║
+║ ║
+║ [3, 8, 1, 5, 2, 9, 4] pivot = 5 ║
+║ ↑ ↑ ║
+║ L R ║
+║ ║
+║ After partition: ║
+║ [3, 4, 1, 2] [5] [8, 9] ║
+║ < pivot = > pivot ║
+║ ║
+║ PROPERTIES: ║
+║ • Time: O(n log n) average, O(n²) worst case ║
+║ • Space: O(log n) for recursion stack ║
+║ • ⚠️ UNSTABLE: Equal elements may be reordered! ║
+║ ║
+╚══════════════════════════════════════════════════════════════════════════════╝
+"""
+
+import random
+from enum import Enum
+from typing import List, Generator, Tuple
+
+from ..base import SortingAlgorithm
+from ...models import GestureImage, Step, StepType
+
+
+class PivotStrategy(Enum):
+    """
+    Strategies for selecting the pivot in Quick Sort.
+
+    The pivot choice significantly affects performance:
+    - Bad pivot: O(n²) worst case
+    - Good pivot: O(n log n) average case
+    """
+    FIRST = "first"             # Always pick first element (simple but risky)
+    LAST = "last"               # Always pick last element
+    MEDIAN_OF_THREE = "median"  # Pick median of first, middle, last (balanced)
+    RANDOM = "random"           # Pick randomly (good average case)
+
+
+class PartitionScheme(Enum):
+    """
+    Partitioning schemes for Quick Sort.
+    """
+    TWO_WAY = "2-way"    # Classic: elements < pivot, elements >= pivot
+    THREE_WAY = "3-way"  # Dutch National Flag: <, ==, > (better for duplicates)
+
+
+class QuickSort(SortingAlgorithm):
+    """
+    Quick Sort with configurable pivot selection and partitioning.
+
+    ┌─────────────────────────────────────────────────────────────────────────┐
+    │ ⚠️ IMPORTANT: Quick Sort is UNSTABLE! │
+    │ │
+    │ This is the KEY algorithm for demonstrating instability. │
+    │ │
+    │ Example: │
+    │ Before: [✌️₁, ✌️₂, ✌️₃, ✊] (three peace signs in order 1,2,3) │
+    │ After: [✊, ✌️₂, ✌️₁, ✌️₃] (order changed to 2,1,3!) │
+    │ │
+    │ The capture_id subscripts let us SEE this instability! │
+    └─────────────────────────────────────────────────────────────────────────┘
+    """
+
+    def __init__(
+        self,
+        pivot_strategy: PivotStrategy = PivotStrategy.FIRST,
+        partition_scheme: PartitionScheme = PartitionScheme.TWO_WAY
+    ):
+        """
+        Initialize Quick Sort with configuration.
+
+        Args:
+            pivot_strategy: How to choose the pivot element
+            partition_scheme: How to partition around the pivot
+        """
+        self.pivot_strategy = pivot_strategy
+        self.partition_scheme = partition_scheme
+        self._comparisons = 0
+        self._swaps = 0
+        self._instability_detected = False
+        self._original_order: dict = {}  # Track original positions for stability check
+
+    @property
+    def name(self) -> str:
+        pivot_name = self.pivot_strategy.value.title()
+        partition_name = self.partition_scheme.value
+        return f"Quick Sort ({pivot_name} Pivot, {partition_name})"
+
+    @property
+    def is_stable(self) -> bool:
+        return False  # Quick sort is NOT stable!
+
+    @property
+    def is_in_place(self) -> bool:
+        return True  # Only uses swaps
+
+    def sort(self, data: List[GestureImage]) -> Generator[Step, None, List[GestureImage]]:
+        """Sort using quick sort."""
+        self._comparisons = 0
+        self._swaps = 0
+        self._instability_detected = False
+
+        # Record original positions for stability checking
+        self._original_order = {img.capture_id: i for i, img in enumerate(data)}
+
+        if len(data) <= 1:
+            return data
+
+        yield from self._quick_sort_recursive(data, 0, len(data) - 1, 0)
+
+        # Check for instability in final result
+        instability_msg = ""
+        if self._instability_detected:
+            instability_msg = " ⚠️ INSTABILITY DETECTED: Equal elements changed order!"
+
+        yield self._create_step(
+            step_type=StepType.COMPLETE,
+            indices=[],
+            description=f"Sorting complete! {self._comparisons} comparisons, {self._swaps} swaps{instability_msg}",
+            data=data,
+            metadata={
+                "comparisons": self._comparisons,
+                "swaps": self._swaps,
+                "instability_detected": self._instability_detected
+            }
+        )
+
+        return data
+
+    def _select_pivot_index(self, data: List[GestureImage], left: int, right: int) -> int:
+        """
+        Select pivot based on the configured strategy.
+
+        Different strategies have different trade-offs:
+        - FIRST: Simple but O(n²) on sorted data
+        - MEDIAN_OF_THREE: Good balance, avoids worst case
+        - RANDOM: Probabilistically good
+        """
+        if self.pivot_strategy == PivotStrategy.FIRST:
+            return left
+
+        elif self.pivot_strategy == PivotStrategy.LAST:
+            return right
+
+        elif self.pivot_strategy == PivotStrategy.RANDOM:
+            return random.randint(left, right)
+
+        elif self.pivot_strategy == PivotStrategy.MEDIAN_OF_THREE:
+            mid = (left + right) // 2
+
+            # Find median of first, middle, last
+            a, b, c = data[left], data[mid], data[right]
+
+            if a <= b <= c or c <= b <= a:
+                return mid
+            elif b <= a <= c or c <= a <= b:
+                return left
+            else:
+                return right
+
+        return left  # Default
+
+    def _quick_sort_recursive(
+        self,
+        data: List[GestureImage],
+        left: int,
+        right: int,
+        depth: int
+    ) -> Generator[Step, None, None]:
+        """Recursive quick sort implementation."""
+
+        if left >= right:
+            return
+
+        # Select and show pivot
+        pivot_idx = self._select_pivot_index(data, left, right)
+
+        yield self._create_step(
+            step_type=StepType.PIVOT_SELECT,
+            indices=[pivot_idx],
+            description=f"Depth {depth}: Selected pivot {data[pivot_idx]} at index {pivot_idx}",
+            data=data,
+            depth=depth,
+            metadata={"pivot_strategy": self.pivot_strategy.value}
+        )
+
+        # Partition based on scheme
+        if self.partition_scheme == PartitionScheme.TWO_WAY:
+            pivot_final = yield from self._partition_two_way(data, left, right, pivot_idx, depth)
+
+            # Recurse on partitions
+            yield from self._quick_sort_recursive(data, left, pivot_final - 1, depth + 1)
+            yield from self._quick_sort_recursive(data, pivot_final + 1, right, depth + 1)
+
+        else:  # THREE_WAY
+            lt, gt = yield from self._partition_three_way(data, left, right, pivot_idx, depth)
+
+            # Recurse on partitions (skip the equal section)
+            yield from self._quick_sort_recursive(data, left, lt - 1, depth + 1)
+            yield from self._quick_sort_recursive(data, gt + 1, right, depth + 1)
+
+    def _partition_two_way(
+        self,
+        data: List[GestureImage],
+        left: int,
+        right: int,
+        pivot_idx: int,
+        depth: int
+    ) -> Generator[Step, None, int]:
+        """
+        Standard two-way partitioning (Lomuto scheme).
+
+        Returns the final position of the pivot.
+        """
+        # Move pivot to the end
+        data[pivot_idx], data[right] = data[right], data[pivot_idx]
+        pivot = data[right]
+
+        i = left  # Boundary for elements < pivot
+
+        yield self._create_step(
+            step_type=StepType.PARTITION,
+            indices=list(range(left, right + 1)),
+            description=f"Partitioning around pivot {pivot}",
+            data=data,
+            depth=depth
+        )
+
+        for j in range(left, right):
+            self._comparisons += 1
+
+            if data[j] < pivot:
+                # Check for instability before swap
+                if i != j and data[i].rank == data[j].rank:
+                    self._check_stability(data[i], data[j])
+
+                data[i], data[j] = data[j], data[i]
+                self._swaps += 1
+
+                yield self._create_step(
+                    step_type=StepType.SWAP,
+                    indices=[i, j],
+                    description=f"Moving {data[i]} to left partition",
+                    data=data,
+                    depth=depth
+                )
+
+                i += 1
+
+        # Move pivot to final position
+        data[i], data[right] = data[right], data[i]
+        self._swaps += 1
+
+        yield self._create_step(
+            step_type=StepType.MARK_SORTED,
+            indices=[i],
+            description=f"Pivot {data[i]} is now in final position {i}",
+            data=data,
+            depth=depth
+        )
+
+        return i
+
+    def _partition_three_way(
+        self,
+        data: List[GestureImage],
+        left: int,
+        right: int,
+        pivot_idx: int,
+        depth: int
+    ) -> Generator[Step, None, Tuple[int, int]]:
+        """
+        Three-way partitioning (Dutch National Flag).
+
+        Creates three regions: < pivot, == pivot, > pivot
+        Better for arrays with many duplicates!
+
+        Returns (lt, gt) where:
+        - data[left:lt] < pivot
+        - data[lt:gt+1] == pivot
+        - data[gt+1:right+1] > pivot
+        """
+        pivot = data[pivot_idx]
+
+        lt = left   # data[left:lt] < pivot
+        gt = right  # data[gt+1:right+1] > pivot
+        i = left    # Current element
+
+        yield self._create_step(
+            step_type=StepType.PARTITION,
+            indices=list(range(left, right + 1)),
+            description=f"3-way partitioning around pivot {pivot} (Dutch National Flag)",
+            data=data,
+            depth=depth
+        )
+
+        while i <= gt:
+            self._comparisons += 1
+
+            if data[i] < pivot:
+                data[lt], data[i] = data[i], data[lt]
+                lt += 1
+                i += 1
+                self._swaps += 1
+
+            elif data[i] > pivot:
+                # Check stability before swap
+                if data[gt].rank == data[i].rank:
+                    self._check_stability(data[i], data[gt])
+
+                data[gt], data[i] = data[i], data[gt]
+                gt -= 1
+                self._swaps += 1
+
+            else:
+                i += 1
+
+            yield self._create_step(
+                step_type=StepType.MOVE,
+                indices=[lt - 1, i - 1, gt + 1] if lt > left else [i - 1],
+                description=f"< pivot: [{left}:{lt}], == pivot: [{lt}:{i}], > pivot: [{gt + 1}:{right + 1}]",
+                data=data,
+                depth=depth
+            )
+
+        yield self._create_step(
+            step_type=StepType.MARK_SORTED,
+            indices=list(range(lt, gt + 1)),
+            description=f"All elements equal to pivot are in final positions [{lt}:{gt + 1}]",
+            data=data,
+            depth=depth
+        )
+
+        return lt, gt
+
+    def _check_stability(self, img1: GestureImage, img2: GestureImage) -> None:
+        """
+        Check if swapping these elements violates stability.
+
+        Stability is violated if:
+        - Two elements have the same rank (are "equal")
+        - But their relative order changes from the original
+        """
+        if img1.rank == img2.rank:  # Equal elements
+            orig_pos1 = self._original_order.get(img1.capture_id, 0)
+            orig_pos2 = self._original_order.get(img2.capture_id, 0)
+
+            # If originally img1 came before img2, but now img2 will come first
+            if orig_pos1 < orig_pos2:
+                self._instability_detected = True
+
+    # -------------------------------------------------------------------------
+    # Worst Case Analysis Methods
+    # -------------------------------------------------------------------------
+
+    @staticmethod
+    def analyze_input_for_worst_case(
+        data: List[GestureImage],
+        pivot_strategy: PivotStrategy,
+        partition_scheme: PartitionScheme
+    ) -> dict:
+        """
+        Analyze input data to predict if it will cause worst-case behavior.
+
+        ╔═════════════════════════════════════════════════════════════════════╗
+        ║ 📚 WORST CASE SCENARIOS FOR QUICK SORT ║
+        ╠═════════════════════════════════════════════════════════════════════╣
+        ║ ║
+        ║ Quick Sort degrades to O(n²) when partitions are UNBALANCED. ║
+        ║ ║
+        ║ SCENARIO 1: Already Sorted + First/Last Pivot ║
+        ║ ───────────────────────────────────────────────────────────── ║
+        ║ Input: [1, 2, 3, 4, 5] Pivot = 1 (first) ║
+        ║ After partition: [] [1] [2, 3, 4, 5] ║
+        ║ → Left partition is EMPTY, right has n-1 elements ║
+        ║ → Recursion depth = n (not log n) ║
+        ║ → Time = n + (n-1) + (n-2) + ... = O(n²) ║
+        ║ ║
+        ║ SCENARIO 2: Reverse Sorted + First/Last Pivot ║
+        ║ ───────────────────────────────────────────────────────────── ║
+        ║ Same problem, just on the other side. ║
+        ║ ║
+        ║ SCENARIO 3: Many Duplicates + 2-Way Partitioning ║
+        ║ ───────────────────────────────────────────────────────────── ║
+        ║ Input: [3, 3, 3, 3, 3] All elements equal ║
+        ║ 2-way puts duplicates on ONE side → unbalanced! ║
+        ║ 3-way groups duplicates in MIDDLE → balanced! ║
+        ║ ║
+        ╚═════════════════════════════════════════════════════════════════════╝
+
+        Returns:
+            Dictionary with analysis results including:
+            - is_worst_case: bool
+            - risk_level: "low", "medium", "high"
+            - reasons: list of strings explaining the risks
+            - recommendations: list of strings with suggestions
+        """
+        n = len(data)
+        if n <= 2:
+            return {
+                "is_worst_case": False,
+                "risk_level": "low",
+                "reasons": ["Array too small for worst case to matter"],
+                "recommendations": []
+            }
+
+        reasons = []
+        recommendations = []
+        risk_score = 0
+
+        # Check 1: Is the data already sorted or reverse sorted?
+        is_sorted_asc = all(data[i] <= data[i+1] for i in range(n-1))
+        is_sorted_desc = all(data[i] >= data[i+1] for i in range(n-1))
+        is_nearly_sorted = sum(1 for i in range(n-1) if data[i] <= data[i+1]) / (n-1) > 0.8
+
+        if is_sorted_asc or is_sorted_desc:
+            if pivot_strategy in [PivotStrategy.FIRST, PivotStrategy.LAST]:
+                reasons.append(
+                    f"Data is {'sorted' if is_sorted_asc else 'reverse sorted'} + "
+                    f"{pivot_strategy.value} pivot = WORST CASE! "
+                    f"Every partition will be maximally unbalanced (0 vs n-1 split)."
+                )
+                risk_score += 3
+                recommendations.append("Use MEDIAN_OF_THREE or RANDOM pivot strategy")
+        elif is_nearly_sorted:
+            if pivot_strategy in [PivotStrategy.FIRST, PivotStrategy.LAST]:
+                reasons.append(
+                    f"Data is nearly sorted ({sum(1 for i in range(n-1) if data[i] <= data[i+1])*100//(n-1)}% in order) + "
+                    f"{pivot_strategy.value} pivot = HIGH RISK of unbalanced partitions."
+                )
+                risk_score += 2
+                recommendations.append("Consider MEDIAN_OF_THREE pivot for nearly-sorted data")
+
+        # Check 2: How many duplicates?
+        unique_values = len(set(img.rank for img in data))
+        duplicate_ratio = 1 - (unique_values / n)
+
+        if duplicate_ratio > 0.5:  # More than 50% duplicates
+            if partition_scheme == PartitionScheme.TWO_WAY:
+                reasons.append(
+                    f"High duplicate ratio ({duplicate_ratio*100:.0f}%) + 2-way partitioning = "
+                    f"INEFFICIENT! All duplicates go to one side."
+                )
+                risk_score += 2
+                recommendations.append("Use 3-WAY partitioning for data with many duplicates")
+            else:
+                reasons.append(
+                    f"High duplicate ratio ({duplicate_ratio*100:.0f}%) but using 3-way partitioning - GOOD! "
+                    f"Duplicates will be grouped efficiently."
+                )
+
466
+ # Check 3: Pivot strategy effectiveness
467
+ if pivot_strategy == PivotStrategy.RANDOM:
468
+ reasons.append("RANDOM pivot provides probabilistic O(n log n) - safe choice!")
469
+ elif pivot_strategy == PivotStrategy.MEDIAN_OF_THREE:
470
+ reasons.append("MEDIAN_OF_THREE pivot avoids worst case for sorted/reverse-sorted data")
471
+
472
+ # Determine overall risk level
473
+ if risk_score >= 3:
474
+ risk_level = "high"
475
+ is_worst_case = True
476
+ elif risk_score >= 2:
477
+ risk_level = "medium"
478
+ is_worst_case = False
479
+ else:
480
+ risk_level = "low"
481
+ is_worst_case = False
482
+
483
+ return {
484
+ "is_worst_case": is_worst_case,
485
+ "risk_level": risk_level,
486
+ "reasons": reasons,
487
+ "recommendations": recommendations,
488
+ "details": {
489
+ "is_sorted_asc": is_sorted_asc,
490
+ "is_sorted_desc": is_sorted_desc,
491
+ "is_nearly_sorted": is_nearly_sorted,
492
+ "duplicate_ratio": duplicate_ratio,
493
+ "unique_values": unique_values,
494
+ "total_elements": n
495
+ }
496
+ }
497
+
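SCENARIO 1 from the box above can be measured directly. The sketch below is a standalone first-element-pivot quicksort on plain integers (not the visualizer's implementation) that counts comparisons, showing the quadratic blow-up on already-sorted input:

```python
import random

def quicksort_count(data):
    # First-element pivot, 2-way partition; returns number of comparisons.
    if len(data) <= 1:
        return 0
    pivot = data[0]
    rest = data[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_count(left) + quicksort_count(right)

n = 30
print(quicksort_count(list(range(n))))  # sorted input: 29 + 28 + ... + 1 = 435, i.e. O(n²)

random.seed(0)
shuffled = list(range(n))
random.shuffle(shuffled)
print(quicksort_count(shuffled))        # typically far fewer: O(n log n) on average
```

On sorted input every partition is the 0 vs n-1 split described above, so the comparison count is exactly n(n-1)/2.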
498
+ def analyze_partition_balance(self, left_size: int, right_size: int, total: int) -> str:
499
+ """
500
+ Analyze how balanced a partition is.
501
+
502
+ Perfect balance: 50/50 split
503
+ Worst case: 0/n or n/0 split
504
+
505
+ Returns a description of the partition quality.
506
+ """
507
+ if total <= 1:
508
+ return "trivial"
509
+
510
+ # Calculate balance ratio (0 = worst, 0.5 = perfect 50/50 split)
511
+ smaller = min(left_size, right_size)
512
+ balance_ratio = smaller / (total - 1) if total > 1 else 1
513
+
514
+ if balance_ratio >= 0.4:
515
+ return f"GOOD ({left_size}/{right_size} split, {balance_ratio*100:.0f}% balanced)"
516
+ elif balance_ratio >= 0.2:
517
+ return f"MODERATE ({left_size}/{right_size} split, {balance_ratio*100:.0f}% balanced)"
518
+ elif balance_ratio >= 0.1:
519
+ return f"POOR ({left_size}/{right_size} split, {balance_ratio*100:.0f}% balanced)"
520
+ else:
521
+ return f"WORST CASE ({left_size}/{right_size} split - maximally unbalanced!)"
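The thresholds in `analyze_partition_balance` can be exercised in isolation. This standalone mirror of the method (labels shortened, f-string detail dropped) assumes the same ratio definition:

```python
def partition_balance(left_size: int, right_size: int, total: int) -> str:
    # Mirror of analyze_partition_balance: the smaller side divided by
    # total - 1 (the pivot itself sits in neither partition, so the
    # best achievable ratio is 0.5 for a perfect 50/50 split).
    if total <= 1:
        return "trivial"
    ratio = min(left_size, right_size) / (total - 1)
    if ratio >= 0.4:
        return "GOOD"
    if ratio >= 0.2:
        return "MODERATE"
    if ratio >= 0.1:
        return "POOR"
    return "WORST CASE"

print(partition_balance(50, 49, 100))  # GOOD
print(partition_balance(0, 99, 100))   # WORST CASE
```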
oop_sorting_teaching/models/__init__.py ADDED
@@ -0,0 +1,22 @@
1
+ """
2
+ Models subpackage - Core data structures for the sorting visualizer.
3
+
4
+ This package contains the fundamental data types:
5
+ • GestureRanking - Defines ordering of gestures
6
+ • GestureImage - Represents a captured gesture image
7
+ • StepType - Types of algorithm steps
8
+ • Step - A single step in algorithm execution
9
+ • ImageList - Managed collection of gesture images
10
+ """
11
+
12
+ from .gesture import GestureRanking, GestureImage
13
+ from .step import StepType, Step
14
+ from .image_list import ImageList
15
+
16
+ __all__ = [
17
+ "GestureRanking",
18
+ "GestureImage",
19
+ "StepType",
20
+ "Step",
21
+ "ImageList",
22
+ ]
oop_sorting_teaching/models/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (781 Bytes). View file
 
oop_sorting_teaching/models/__pycache__/gesture.cpython-313.pyc ADDED
Binary file (15.9 kB). View file
 
oop_sorting_teaching/models/__pycache__/image_list.cpython-313.pyc ADDED
Binary file (17.5 kB). View file
 
oop_sorting_teaching/models/__pycache__/step.cpython-313.pyc ADDED
Binary file (5.37 kB). View file
 
oop_sorting_teaching/models/gesture.py ADDED
@@ -0,0 +1,521 @@
1
+ """
2
+ ╔══════════════════════════════════════════════════════════════════════════════╗
3
+ ║ Models: gesture.py ║
4
+ ║ Core classes for representing hand gestures ║
5
+ ╚══════════════════════════════════════════════════════════════════════════════╝
6
+
7
+ This module contains:
8
+ • GestureRanking - The ranking system for gestures
9
+ • GestureImage - Represents a captured gesture with its data
10
+
11
+ 📚 WHY SEPARATE FILES?
12
+ In the procedural style, you might put everything in one big file.
13
+ In OOP, we organize related classes into modules (files).
14
+
15
+ Benefits:
16
+ • Easier to find code (gesture stuff is in gesture.py)
17
+ • Easier to test (can test gesture.py independently)
18
+ • Easier to reuse (import just what you need)
19
+ • Easier to collaborate (different people work on different files)
20
+ """
21
+
22
+ from dataclasses import dataclass
23
+ from typing import List, Optional
24
+ from PIL import Image
25
+
26
+
27
+ # ==============================================================================
28
+ # CLASS: GestureRanking
29
+ # ==============================================================================
30
+ # This class holds the RANKING SYSTEM for gestures.
31
+ # It's like a rulebook that says "fist comes before one, one comes before peace..."
32
+ #
33
+ # 💡 WHY A CLASS?
34
+ # In procedural code, this would be a dictionary floating around globally.
35
+ # As a class, we can:
36
+ # 1. Add METHODS to work with the data (get_rank, get_emoji, compare)
37
+ # 2. PROTECT the data from accidental changes
38
+ # 3. Keep related functions TOGETHER with the data they use
39
+ # ==============================================================================
40
+
41
+ class GestureRanking:
42
+ """
43
+ Defines the ordering of hand gestures for sorting purposes.
44
+
45
+ This class encapsulates (bundles together):
46
+ - The ranking of each gesture (which comes first in sorted order)
47
+ - The emoji representation of each gesture
48
+ - Methods to compare gestures
49
+
50
+ ┌─────────────────────────────────────────────────────────────────────────┐
51
+ │ 📚 CONCEPT: Class Attributes vs Instance Attributes │
52
+ │ │
53
+ │ CLASS ATTRIBUTES: Shared by ALL instances (objects) of the class │
54
+ │ - Defined directly in the class body │
55
+ │ - Like a shared resource everyone can read │
56
+ │ - Here: RANKINGS and EMOJIS are class attributes │
57
+ │ │
58
+ │ INSTANCE ATTRIBUTES: Unique to EACH instance │
59
+ │ - Defined in __init__ using self.attribute_name │
60
+ │ - Like personal belongings each person carries │
61
+ │ - Here: GestureRanking doesn't have instance attributes │
62
+ │ (it's a utility class with shared data) │
63
+ └─────────────────────────────────────────────────────────────────────────┘
64
+ """
65
+
66
+ # -------------------------------------------------------------------------
67
+ # Class Attribute: RANKINGS
68
+ # -------------------------------------------------------------------------
69
+ # This dictionary maps gesture names to their rank (sorting order).
70
+ # Lower rank = comes first when sorted in ascending order.
71
+ #
72
+ # 💭 Design Decision: We ordered gestures roughly by "finger count"
73
+ # fist (0 fingers) → one → peace (2) → three → four → palm (5) → special signs
74
+ # -------------------------------------------------------------------------
75
+ RANKINGS = {
76
+ "fist": 1, # ✊ Closed fist (0 fingers showing)
77
+ "one": 2, # ☝️ One finger up
78
+ "two_up": 3, # ✌️ Two fingers (peace sign)
79
+ "peace": 3, # ✌️ Alias for two_up (same gesture, different name)
80
+ "three": 4, # 🤟 Three fingers
81
+ "four": 5, # 🖖 Four fingers
82
+ "palm": 6, # 🖐️ Open palm (5 fingers)
83
+ "stop": 6, # 🖐️ Alias for palm
84
+ "ok": 7, # 👌 OK sign
85
+ "like": 8, # 👍 Thumbs up
86
+ "dislike": 9, # 👎 Thumbs down
87
+ "rock": 10, # 🤘 Rock sign
88
+ "call": 11, # 🤙 Call me sign
89
+ "mute": 12, # 🤫 Shush/mute gesture
90
+ "no_gesture": 99, # Unknown or no gesture detected
91
+ }
92
+
93
+ # -------------------------------------------------------------------------
94
+ # Class Attribute: EMOJIS
95
+ # -------------------------------------------------------------------------
96
+ # Visual representation of each gesture.
97
+ # Makes the UI more engaging and helps identify gestures quickly.
98
+ # -------------------------------------------------------------------------
99
+ EMOJIS = {
100
+ "fist": "✊",
101
+ "one": "☝️",
102
+ "two_up": "✌️",
103
+ "peace": "✌️",
104
+ "three": "🤟",
105
+ "four": "🖖",
106
+ "palm": "🖐️",
107
+ "stop": "🖐️",
108
+ "ok": "👌",
109
+ "like": "👍",
110
+ "dislike": "👎",
111
+ "rock": "🤘",
112
+ "call": "🤙",
113
+ "mute": "🤫",
114
+ "no_gesture": "❓",
115
+ }
116
+
117
+ # -------------------------------------------------------------------------
118
+ # Class Method: get_rank
119
+ # -------------------------------------------------------------------------
120
+ # 📚 CONCEPT: @classmethod
121
+ #
122
+ # A classmethod belongs to the CLASS, not to an instance.
123
+ # - Regular method: needs an object to be called (object.method())
124
+ # - Class method: can be called on the class itself (ClassName.method())
125
+ #
126
+ # Use @classmethod when the method needs CLASS data but not INSTANCE data.
127
+ # -------------------------------------------------------------------------
128
+ @classmethod
129
+ def get_rank(cls, gesture_name: str) -> int:
130
+ """
131
+ Get the sorting rank of a gesture.
132
+
133
+ Args:
134
+ gesture_name: The name of the gesture (e.g., "peace", "fist")
135
+
136
+ Returns:
137
+ The rank (1-99) of the gesture. Lower = earlier in sorted order.
138
+ Returns 99 if gesture is unknown.
139
+
140
+ Example:
141
+ >>> GestureRanking.get_rank("peace")
142
+ 3
143
+ >>> GestureRanking.get_rank("fist")
144
+ 1
145
+ """
146
+ # .get() returns the value if key exists, otherwise the default (99)
147
+ # This prevents crashes if someone passes an unknown gesture name
148
+ return cls.RANKINGS.get(gesture_name.lower(), 99)
149
+
150
+ @classmethod
151
+ def get_emoji(cls, gesture_name: str) -> str:
152
+ """
153
+ Get the emoji representation of a gesture.
154
+
155
+ Args:
156
+ gesture_name: The name of the gesture
157
+
158
+ Returns:
159
+ The emoji string for this gesture, or ❓ if unknown.
160
+ """
161
+ return cls.EMOJIS.get(gesture_name.lower(), "❓")
162
+
163
+ @classmethod
164
+ def compare(cls, gesture_a: str, gesture_b: str) -> int:
165
+ """
166
+ Compare two gestures for sorting order.
167
+
168
+ This follows the standard comparison convention:
169
+ - Returns NEGATIVE if a < b (a comes before b)
170
+ - Returns ZERO if a == b (same rank)
171
+ - Returns POSITIVE if a > b (a comes after b)
172
+
173
+ Args:
174
+ gesture_a: First gesture name
175
+ gesture_b: Second gesture name
176
+
177
+ Returns:
178
+ Negative, zero, or positive integer.
179
+
180
+ Example:
181
+ >>> GestureRanking.compare("fist", "peace")
182
+ -2 # Negative: fist comes before peace
183
+ >>> GestureRanking.compare("peace", "fist")
184
+ 2 # Positive: peace comes after fist
185
+ """
186
+ return cls.get_rank(gesture_a) - cls.get_rank(gesture_b)
187
+
188
+ @classmethod
189
+ def get_all_gestures(cls) -> List[str]:
190
+ """
191
+ Get a list of all known gestures, sorted by rank.
192
+
193
+ Returns:
194
+ List of gesture names in sorted order.
195
+ """
196
+ # Sort the gesture names by their rank value
197
+ # This uses a lambda function as the sorting key
198
+ sorted_gestures = sorted(
199
+ cls.RANKINGS.keys(),
200
+ key=lambda name: cls.RANKINGS[name]
201
+ )
202
+ # Remove duplicates while preserving order
203
+ seen = set()
204
+ unique = []
205
+ for gesture in sorted_gestures:
206
+ if gesture not in seen:
207
+ seen.add(gesture)
208
+ unique.append(gesture)
209
+ return unique
210
+
211
+
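A quick check of the classmethod behaviour described above. The snippet uses a trimmed stand-in dictionary (a subset of `RANKINGS`) so it runs on its own, without importing the package:

```python
# Trimmed stand-in for GestureRanking.RANKINGS
RANKINGS = {"fist": 1, "one": 2, "peace": 3, "palm": 6, "no_gesture": 99}

def get_rank(name: str) -> int:
    # .get() falls back to 99 for unknown names, so lookups never crash
    return RANKINGS.get(name.lower(), 99)

def compare(a: str, b: str) -> int:
    # Negative: a sorts before b; zero: tie; positive: a sorts after b
    return get_rank(a) - get_rank(b)

print(get_rank("PEACE"))         # 3 (lookup is case-insensitive)
print(compare("fist", "peace"))  # -2
print(get_rank("wave"))          # 99 (unknown gesture -> default)
```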
212
+ # ==============================================================================
213
+ # CLASS: GestureImage (using @dataclass)
214
+ # ==============================================================================
215
+ """
216
+ ╔══════════════════════════════════════════════════════════════════════════════╗
217
+ ║ 📚 CONCEPT: What is a @dataclass? ║
218
+ ╠══════════════════════════════════════════════════════════════════════════════╣
219
+ ║ ║
220
+ ║ A @dataclass is a shortcut for creating classes that mainly hold DATA. ║
221
+ ║ ║
222
+ ║ WITHOUT @dataclass (the long way): ║
223
+ ║ ───────────────────────────────── ║
224
+ ║ class GestureImage: ║
225
+ ║ def __init__(self, gesture, rank, emoji, image, capture_id): ║
226
+ ║ self.gesture = gesture ║
227
+ ║ self.rank = rank ║
228
+ ║ self.emoji = emoji ║
229
+ ║ self.image = image ║
230
+ ║ self.capture_id = capture_id ║
231
+ ║ ║
232
+ ║ def __repr__(self): ║
233
+ ║ return f"GestureImage(gesture={self.gesture}, ...)" ║
234
+ ║ ║
235
+ ║ def __eq__(self, other): ║
236
+ ║ return self.gesture == other.gesture and ... ║
237
+ ║ ║
238
+ ║ WITH @dataclass (the shortcut): ║
239
+ ║ ─────────────────────────────── ║
240
+ ║ @dataclass ║
241
+ ║ class GestureImage: ║
242
+ ║ gesture: str ║
243
+ ║ rank: int ║
244
+ ║ emoji: str ║
245
+ ║ image: Image ║
246
+ ║ capture_id: int ║
247
+ ║ ║
248
+ ║ The @dataclass automatically generates __init__, __repr__, __eq__, etc! ║
249
+ ║ ║
250
+ ╚══════════════════════════════════════════════════════════════════════════════╝
251
+ """
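The equivalence the box describes is easy to verify with a toy dataclass (a hypothetical `Point`, not part of this module):

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)
print(p)                 # Point(x=1, y=2)   (__repr__ was generated)
print(p == Point(1, 2))  # True              (__eq__ was generated)
```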
252
+
253
+ @dataclass
254
+ class GestureImage:
255
+ """
256
+ Represents a captured hand gesture image with its classification.
257
+
258
+ This is the CORE DATA STRUCTURE of our application.
259
+ Each GestureImage bundles together:
260
+ - The actual image (pixels)
261
+ - The AI's prediction of what gesture it shows
262
+ - A unique ID for tracking (important for stability testing)
263
+ - Visual representations (emoji, rank)
264
+
265
+ ┌─────────────────────────────────────────────────────────────────────────┐
266
+ │ 💡 WHY THIS MATTERS: Encapsulation │
267
+ │ │
268
+ │ In procedural code, you'd pass around separate variables: │
269
+ │ process_gesture(image, name, rank, emoji, id) # 5 parameters! │
270
+ │ │
271
+ │ With OOP, you pass ONE object that contains everything: │
272
+ │ process_gesture(gesture_image) # 1 parameter! │
273
+ │ │
274
+ │ Benefits: │
275
+ │ ✓ Less room for errors (can't mix up parameter order) │
276
+ │ ✓ Easier to add new attributes later │
277
+ │ ✓ Methods travel WITH the data they operate on │
278
+ └─────────────────────────────────────────────────────────────────────────┘
279
+
280
+ Attributes:
281
+ gesture: The name of the detected gesture (e.g., "peace", "fist")
282
+ rank: Numeric rank for sorting (lower = comes first)
283
+ emoji: Visual emoji representation
284
+ image: The actual PIL Image (can be None if not needed)
285
+ capture_id: Unique ID from capture order (for stability testing)
286
+ thumbnail: Smaller version for display (generated automatically)
287
+ """
288
+
289
+ # -------------------------------------------------------------------------
290
+ # Dataclass Fields (Attributes)
291
+ # -------------------------------------------------------------------------
292
+ # These define what data each GestureImage object will hold.
293
+ # The type hints (: str, : int, etc.) help document and catch errors.
294
+ # -------------------------------------------------------------------------
295
+
296
+ gesture: str # Name of the gesture
297
+ rank: int # Sorting rank (from GestureRanking)
298
+ emoji: str # Emoji representation
299
+ capture_id: int # Unique ID (for stability tracking)
300
+ image: Optional[Image.Image] = None # The actual image (optional)
301
+ thumbnail: Optional[Image.Image] = None # Small version for display
302
+ confidence: float = 0.0 # AI's confidence in the prediction
303
+
304
+ # -------------------------------------------------------------------------
305
+ # Special Method: __post_init__
306
+ # -------------------------------------------------------------------------
307
+ # This runs AFTER the automatic __init__ created by @dataclass.
308
+ # We use it to create the thumbnail from the full image.
309
+ # -------------------------------------------------------------------------
310
+ def __post_init__(self):
311
+ """
312
+ Called automatically after the object is created.
313
+ Generates a thumbnail if an image is provided.
314
+ """
315
+ if self.image is not None and self.thumbnail is None:
316
+ self._create_thumbnail()
317
+
318
+ def _create_thumbnail(self, max_size: int = 80):
319
+ """
320
+ Create a smaller version of the image for display.
321
+
322
+ The underscore prefix (_create_thumbnail) is a Python convention
323
+ meaning "this is an internal method, not meant to be called from outside".
324
+
325
+ Args:
326
+ max_size: Maximum width/height of the thumbnail
327
+ """
328
+ if self.image is not None:
329
+ # Create a copy so we don't modify the original
330
+ thumb = self.image.copy()
331
+ # Resize while maintaining aspect ratio
332
+ thumb.thumbnail((max_size, max_size), Image.Resampling.LANCZOS)
333
+ self.thumbnail = thumb
334
+
335
+ # -------------------------------------------------------------------------
336
+ # Comparison Methods: Making objects sortable
337
+ # -------------------------------------------------------------------------
338
+ """
339
+ ╔═════════════════════════════════════════════════════════════════════════╗
340
+ ║ 📚 CONCEPT: Magic Methods (Dunder Methods) ║
341
+ ╠═════════════════════════════════════════════════════════════════════════╣
342
+ ║ ║
343
+ ║ Python has special method names surrounded by double underscores. ║
344
+ ║ These are called "magic methods" or "dunder methods" (double under). ║
345
+ ║ ║
346
+ ║ They let your objects work with Python's built-in operations: ║
347
+ ║ ║
348
+ ║ __lt__(self, other) → enables: object1 < object2 ║
349
+ ║ __le__(self, other) → enables: object1 <= object2 ║
350
+ ║ __eq__(self, other) → enables: object1 == object2 ║
351
+ ║ __gt__(self, other) → enables: object1 > object2 ║
352
+ ║ __ge__(self, other) → enables: object1 >= object2 ║
353
+ ║ __str__(self) → enables: str(object) or print(object) ║
354
+ ║ __repr__(self) → enables: repr(object) (for debugging) ║
355
+ ║ ║
356
+ ║ 💡 WHY THIS MATTERS: ║
357
+ ║ With these methods, Python's built-in sorted() function ║
358
+ ║ automatically works with our GestureImage objects! ║
359
+ ║ ║
360
+ ║ gestures = [gesture1, gesture2, gesture3] ║
361
+ ║ sorted_gestures = sorted(gestures) # Just works! ✨ ║
362
+ ║ ║
363
+ ╚═════════════════════════════════════════════════════════════════════════╝
364
+ """
365
+
366
+ def __lt__(self, other: 'GestureImage') -> bool:
367
+ """
368
+ Less than comparison. Enables: gesture1 < gesture2
369
+
370
+ Compares by rank. If ranks are equal, maintains stability
371
+ by comparing capture_id (earlier captured = smaller).
372
+ """
373
+ if self.rank != other.rank:
374
+ return self.rank < other.rank
375
+ # If same rank, compare by capture_id for stable sorting
376
+ return self.capture_id < other.capture_id
377
+
378
+ def __le__(self, other: 'GestureImage') -> bool:
379
+ """Less than or equal. Enables: gesture1 <= gesture2"""
380
+ return self.rank <= other.rank
381
+
382
+ def __gt__(self, other: 'GestureImage') -> bool:
383
+ """Greater than. Enables: gesture1 > gesture2"""
384
+ if self.rank != other.rank:
385
+ return self.rank > other.rank
386
+ return self.capture_id > other.capture_id
387
+
388
+ def __ge__(self, other: 'GestureImage') -> bool:
389
+ """Greater than or equal. Enables: gesture1 >= gesture2"""
390
+ return self.rank >= other.rank
391
+
392
+ def __eq__(self, other: object) -> bool:
393
+ """
394
+ Equality comparison. Enables: gesture1 == gesture2
395
+
396
+ Two gestures are equal if they have the same rank.
397
+ Note: We compare RANKS, not capture_ids, for sorting purposes.
398
+ """
399
+ if not isinstance(other, GestureImage):
400
+ return False
401
+ return self.rank == other.rank
402
+
403
+ def __hash__(self) -> int:
404
+ """
405
+ Hash function. Required for using objects in sets or as dict keys.
406
+ We hash by capture_id since it's unique.
407
+ """
408
+ return hash(self.capture_id)
409
+
410
+ # -------------------------------------------------------------------------
411
+ # Display Methods
412
+ # -------------------------------------------------------------------------
413
+
414
+ def __str__(self) -> str:
415
+ """
416
+ Human-readable string representation.
417
+ Called by print() and str().
418
+
419
+ Example: "✌️₁" (peace sign, capture #1)
420
+ """
421
+ # Subscript numbers for capture_id
422
+ subscripts = "₀₁₂₃₄₅₆₇₈₉"
423
+ sub_id = ''.join(subscripts[int(d)] for d in str(self.capture_id))
424
+ return f"{self.emoji}{sub_id}"
425
+
426
+ def __repr__(self) -> str:
427
+ """
428
+ Developer-friendly representation (for debugging).
429
+ Called by repr() and shown in interactive Python.
430
+ """
431
+ return f"GestureImage(gesture='{self.gesture}', rank={self.rank}, id={self.capture_id})"
432
+
433
+ def display_label(self) -> str:
434
+ """
435
+ Get a label for UI display.
436
+ Shows emoji, gesture name, and capture ID.
437
+ """
438
+ return f"{self.emoji} {self.gesture} (#{self.capture_id})"
439
+
440
+ # -------------------------------------------------------------------------
441
+ # Factory Methods
442
+ # -------------------------------------------------------------------------
443
+ """
444
+ ╔═════════════════════════════════════════════════════════════════════════╗
445
+ ║ 📚 CONCEPT: Factory Methods ║
446
+ ╠═════════════════════════════════════════════════════════════════════════╣
447
+ ║ ║
448
+ ║ A Factory Method is a class method that CREATES instances. ║
449
+ ║ ║
450
+ ║ Instead of: ║
451
+ ║ gesture = GestureImage( ║
452
+ ║ gesture="peace", ║
453
+ ║ rank=GestureRanking.get_rank("peace"), ║
454
+ ║ emoji=GestureRanking.get_emoji("peace"), ║
455
+ ║ capture_id=1, ║
456
+ ║ image=my_image, ║
457
+ ║ confidence=0.95 ║
458
+ ║ ) ║
459
+ ║ ║
460
+ ║ You can use: ║
461
+ ║ gesture = GestureImage.create_from_prediction( ║
462
+ ║ gesture_name="peace", ║
463
+ ║ capture_id=1, ║
464
+ ║ image=my_image, ║
465
+ ║ confidence=0.95 ║
466
+ ║ ) ║
467
+ ║ ║
468
+ ║ The factory method handles the details of looking up rank/emoji! ║
469
+ ║ ║
470
+ ╚═════════════════════════════════════════════════════════════════════════╝
471
+ """
472
+
473
+ @classmethod
474
+ def create_from_prediction(
475
+ cls,
476
+ gesture_name: str,
477
+ capture_id: int,
478
+ image: Optional[Image.Image] = None,
479
+ confidence: float = 0.0
480
+ ) -> 'GestureImage':
481
+ """
482
+ Factory method to create a GestureImage from an AI prediction.
483
+
484
+ This is a convenient way to create GestureImage objects without
485
+ needing to manually look up ranks and emojis.
486
+
487
+ Args:
488
+ gesture_name: The predicted gesture name
489
+ capture_id: Unique identifier for this capture
490
+ image: The original image (optional)
491
+ confidence: AI confidence score (0.0 to 1.0)
492
+
493
+ Returns:
494
+ A new GestureImage instance
495
+ """
496
+ return cls(
497
+ gesture=gesture_name.lower(),
498
+ rank=GestureRanking.get_rank(gesture_name),
499
+ emoji=GestureRanking.get_emoji(gesture_name),
500
+ capture_id=capture_id,
501
+ image=image,
502
+ confidence=confidence
503
+ )
504
+
505
+ @classmethod
506
+ def create_manual(
507
+ cls,
508
+ gesture_name: str,
509
+ capture_id: int,
510
+ image: Optional[Image.Image] = None
511
+ ) -> 'GestureImage':
512
+ """
513
+ Create a GestureImage with manual gesture assignment (no AI).
514
+ Same as create_from_prediction but with 100% confidence.
515
+ """
516
+ return cls.create_from_prediction(
517
+ gesture_name=gesture_name,
518
+ capture_id=capture_id,
519
+ image=image,
520
+ confidence=1.0 # Manual assignment = 100% confident
521
+ )
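The comparison and factory behaviour above can be sketched with a minimal stand-in (the rank-then-capture_id tiebreak mirrors `GestureImage.__lt__`; the real class also carries image data and confidence):

```python
from dataclasses import dataclass

@dataclass
class MiniGesture:
    gesture: str
    rank: int
    capture_id: int

    def __lt__(self, other: "MiniGesture") -> bool:
        # Same tiebreak as GestureImage.__lt__: rank first, then
        # capture order, so equal ranks sort stably.
        if self.rank != other.rank:
            return self.rank < other.rank
        return self.capture_id < other.capture_id

    @classmethod
    def create(cls, gesture: str, capture_id: int) -> "MiniGesture":
        # Factory looks up the rank so callers don't have to
        ranks = {"fist": 1, "one": 2, "peace": 3}
        return cls(gesture, ranks.get(gesture, 99), capture_id)

items = [MiniGesture.create("peace", 1),
         MiniGesture.create("fist", 2),
         MiniGesture.create("peace", 3)]
print([g.capture_id for g in sorted(items)])  # [2, 1, 3]
```

Python's built-in `sorted()` only needs `__lt__`, so the fist (rank 1) moves to the front while the two peace signs keep their capture order.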
oop_sorting_teaching/models/image_list.py ADDED
@@ -0,0 +1,444 @@
1
+ """
2
+ ╔══════════════════════════════════════════════════════════════════════════════╗
3
+ ║ Models: image_list.py ║
4
+ ║ A managed collection of GestureImage objects ║
5
+ ╚══════════════════════════════════════════════════════════════════════════════╝
6
+
7
+ This module contains:
8
+ • ImageList - A class that manages a list of GestureImage objects
9
+
10
+ 📚 WHY A SPECIAL LIST CLASS?
11
+ Python already has lists, so why create ImageList?
12
+
13
+ 1. ENCAPSULATION: Bundle the list with its operations
14
+ 2. VALIDATION: Enforce rules (max 100 elements)
15
+ 3. HISTORY: Built-in undo functionality
16
+ 4. CONVENIENCE: Methods like shuffle(), duplicate(), is_sorted()
17
+
18
+ This is the OOP way: data + behavior together in one package.
19
+ """
20
+
21
+ from typing import List, Optional
22
+ from copy import deepcopy
23
+ import random
24
+
25
+ from PIL import Image
26
+
27
+ from oop_sorting_teaching.models.gesture import GestureImage
28
+
29
+
30
+ # ==============================================================================
31
+ # CLASS: ImageList
32
+ # ==============================================================================
33
+
34
+ class ImageList:
35
+ """
36
+ A managed collection of GestureImage objects.
37
+
38
+ This class handles:
39
+ - Adding and removing images
40
+ - Duplicating images (for testing)
41
+ - Tracking history (for undo)
42
+ - Enforcing the 100-element limit
43
+
44
+ ┌─────────────────────────────────────────────────────────────────────────┐
45
+ │ 🔄 PROCEDURAL vs OOP: Managing a list of items │
46
+ │ │
47
+ │ PROCEDURAL: │
48
+ │ images = [] │
49
+ │ def add_image(images, img): │
50
+ │ if len(images) < 100: │
51
+ │ images.append(img) │
52
+ │ def remove_image(images, index): │
53
+ │ if 0 <= index < len(images): │
54
+ │ images.pop(index) │
55
+ │ # Functions scattered, list is just raw data │
56
+ │ │
57
+ │ OOP: │
58
+ │ class ImageList: │
59
+ │ def add(self, img): ... │
60
+ │ def remove(self, index): ... │
61
+ │ # List AND its operations are bundled together! │
62
+ │ # Can't accidentally use wrong function on wrong list. │
63
+ └─────────────────────────────────────────────────────────────────────────┘
64
+ """
65
+
66
+ # Class constant: maximum number of elements allowed
67
+ MAX_SIZE = 100
68
+
69
+ def __init__(self):
70
+ """
71
+ Initialize an empty ImageList.
72
+
73
+ 📚 CONCEPT: __init__ is the Constructor
74
+
75
+ The constructor is called when you create a new object:
76
+ my_list = ImageList() # This calls __init__
77
+
78
+ It sets up the initial state of the object.
79
+ """
80
+ self._images: List[GestureImage] = [] # The actual list (private)
81
+ self._history: List[List[GestureImage]] = [] # For undo functionality
82
+ self._next_capture_id: int = 1 # Counter for unique IDs
83
+
84
+ # -------------------------------------------------------------------------
85
+ # Properties: Controlled access to data
86
+ # -------------------------------------------------------------------------
87
+ """
88
+ ╔═════════════════════════════════════════════════════════════════════════╗
89
+ ║ 📚 CONCEPT: Properties (@property) ║
90
+ ╠═════════════════════════════════════════════════════════════════════════╣
91
+ ║ ║
92
+ ║ A property looks like an attribute but is actually a method. ║
93
+ ║ ║
94
+ ║ WITHOUT property: ║
95
+ ║ length = my_list.get_length() # Method call (ugly) ║
96
+ ║ ║
97
+ ║ WITH property: ║
98
+ ║ length = my_list.length # Looks like attribute (clean!) ║
99
+ ║ ║
100
+ ║ Benefits: ║
101
+ ║ ✓ Clean syntax (no parentheses needed) ║
102
+ ║ ✓ Can add validation/computation without changing interface ║
103
+ ║ ✓ Can be read-only (no setter = cannot modify) ║
104
+ ║ ║
105
+ ╚═════════════════════════════════════════════════════════════════════════╝
106
+ """
107
+
108
+ @property
109
+ def length(self) -> int:
110
+ """Number of images in the list."""
111
+ return len(self._images)
112
+
113
+ @property
114
+ def is_empty(self) -> bool:
115
+ """Check if the list is empty."""
116
+ return len(self._images) == 0
117
+
118
+ @property
119
+ def is_full(self) -> bool:
120
+ """Check if the list has reached maximum capacity."""
121
+ return len(self._images) >= self.MAX_SIZE
122
+
123
+ @property
124
+ def images(self) -> List[GestureImage]:
125
+ """
126
+ Get a COPY of the images list.
127
+
128
+ Why a copy? To prevent external code from modifying
129
+ our internal list directly. This is called ENCAPSULATION.
130
+ """
131
+ return self._images.copy()
132
+
133
+ # -------------------------------------------------------------------------
134
+ # Core Operations
135
+ # -------------------------------------------------------------------------
136
+
137
+ def add(self, image: GestureImage) -> bool:
138
+ """
139
+ Add an image to the list.
140
+
141
+ Args:
142
+ image: The GestureImage to add
143
+
144
+ Returns:
145
+ True if added successfully, False if list is full
146
+ """
147
+ if self.is_full:
148
+ print(f"⚠️ Cannot add: list is at maximum capacity ({self.MAX_SIZE})")
149
+ return False
150
+
151
+ self._save_state() # Save for undo
152
+ self._images.append(image)
153
+ return True
154
+
155
+ def add_new(
156
+ self,
157
+ gesture_name: str,
158
+ image: Optional[Image.Image] = None,
159
+ confidence: float = 0.0
160
+ ) -> Optional[GestureImage]:
161
+ """
162
+ Create and add a new GestureImage.
163
+
164
+ This is a convenience method that:
165
+ 1. Creates a new GestureImage with auto-generated capture_id
166
+ 2. Adds it to the list
167
+
168
+ Args:
169
+ gesture_name: Name of the gesture
170
+ image: The image data
171
+ confidence: AI confidence score
172
+
173
+ Returns:
174
+ The created GestureImage, or None if list is full
175
+ """
176
+ if self.is_full:
177
+ return None
178
+
179
+ gesture_image = GestureImage.create_from_prediction(
180
+ gesture_name=gesture_name,
181
+ capture_id=self._next_capture_id,
182
+ image=image,
183
+ confidence=confidence
184
+ )
185
+
186
+ if self.add(gesture_image):
187
+ self._next_capture_id += 1
188
+ return gesture_image
189
+ return None
190
+
191
+ def remove(self, index: int) -> Optional[GestureImage]:
192
+ """
193
+ Remove and return the image at the given index.
194
+
195
+ Args:
196
+ index: Position of the image to remove (0-based)
197
+
198
+ Returns:
199
+ The removed GestureImage, or None if index invalid
200
+ """
201
+ if not 0 <= index < len(self._images):
202
+ print(f"⚠️ Invalid index: {index}")
203
+ return None
204
+
205
+ self._save_state()
206
+ return self._images.pop(index)
207
+
208
+ def duplicate(self, index: int) -> Optional[GestureImage]:
209
+ """
210
+ Create a duplicate of the image at the given index.
211
+
212
+ The duplicate gets a NEW capture_id (important for stability testing).
213
+
214
+ Args:
215
+ index: Position of the image to duplicate
216
+
217
+ Returns:
218
+ The new duplicate GestureImage, or None if failed
219
+ """
220
+ if not 0 <= index < len(self._images):
221
+ print(f"⚠️ Invalid index: {index}")
222
+ return None
223
+
224
+ if self.is_full:
225
+ print(f"⚠️ Cannot duplicate: list is at maximum capacity")
226
+ return None
227
+
228
+ original = self._images[index]
229
+
230
+ # Create duplicate with new capture_id
231
+ duplicate = GestureImage(
232
+ gesture=original.gesture,
233
+ rank=original.rank,
234
+ emoji=original.emoji,
235
+ capture_id=self._next_capture_id,
236
+ image=original.image,
237
+ thumbnail=original.thumbnail,
238
+ confidence=original.confidence
239
+ )
240
+
241
+ self._save_state()
242
+ # Insert right after the original
243
+ self._images.insert(index + 1, duplicate)
244
+ self._next_capture_id += 1
245
+
246
+ return duplicate
247
+
248
+ def clear(self) -> None:
249
+ """Remove all images from the list."""
250
+ self._save_state()
251
+ self._images.clear()
252
+
253
+ def swap(self, i: int, j: int) -> bool:
254
+ """
255
+ Swap elements at indices i and j.
256
+
257
+ This is the fundamental operation for in-place sorting algorithms.
258
+
259
+ Args:
260
+ i: First index
261
+ j: Second index
262
+
263
+ Returns:
264
+ True if swap successful, False if indices invalid
265
+ """
266
+ if not (0 <= i < len(self._images) and 0 <= j < len(self._images)):
267
+ return False
268
+
269
+ self._images[i], self._images[j] = self._images[j], self._images[i]
270
+ return True
271
+
272
+ # -------------------------------------------------------------------------
273
+ # List Manipulation
274
+ # -------------------------------------------------------------------------
275
+
276
+ def shuffle(self) -> None:
277
+ """Randomly shuffle the images."""
278
+ self._save_state()
279
+ random.shuffle(self._images)
280
+
281
+ def reverse(self) -> None:
282
+ """Reverse the order of images."""
283
+ self._save_state()
284
+ self._images.reverse()
285
+
286
+ def sort_ascending(self) -> None:
287
+ """Sort images in ascending order (by rank)."""
288
+ self._save_state()
289
+ self._images.sort() # Uses __lt__ we defined!
290
+
291
+ def sort_descending(self) -> None:
292
+ """Sort images in descending order (by rank)."""
293
+ self._save_state()
294
+ self._images.sort(reverse=True)
295
+
296
+ # -------------------------------------------------------------------------
297
+ # Analysis Methods
298
+ # -------------------------------------------------------------------------
299
+
300
+ def is_sorted(self, ascending: bool = True) -> bool:
301
+ """
302
+ Check if the list is sorted.
303
+
304
+ Args:
305
+ ascending: If True, check ascending order; else descending
306
+
307
+ Returns:
308
+ True if sorted in the specified order
309
+ """
310
+ if len(self._images) <= 1:
311
+ return True
312
+
313
+ for i in range(len(self._images) - 1):
314
+ if ascending:
315
+ if self._images[i] > self._images[i + 1]:
316
+ return False
317
+ else:
318
+ if self._images[i] < self._images[i + 1]:
319
+ return False
320
+ return True
321
+
322
+ def count_unique_gestures(self) -> int:
323
+ """Count how many unique gestures are in the list."""
324
+ return len(set(img.gesture for img in self._images))
325
+
326
+ def count_duplicates(self) -> int:
327
+ """Count how many duplicate gestures exist."""
328
+ return len(self._images) - self.count_unique_gestures()
329
+
330
+ def get_sortedness_percentage(self) -> float:
331
+ """
332
+ Calculate how sorted the list is (0% to 100%).
333
+
334
+ This counts how many adjacent pairs are in correct order.
335
+ """
336
+ if len(self._images) <= 1:
337
+ return 100.0
338
+
339
+ correct_pairs = 0
340
+ for i in range(len(self._images) - 1):
341
+ if self._images[i] <= self._images[i + 1]:
342
+ correct_pairs += 1
343
+
344
+ return (correct_pairs / (len(self._images) - 1)) * 100
345
+
346
+ def get_analysis(self) -> str:
347
+ """Get a human-readable analysis of the current list state."""
348
+ if self.is_empty:
349
+ return "Empty list"
350
+
351
+ unique = self.count_unique_gestures()
352
+ dupes = self.count_duplicates()
353
+ sorted_pct = self.get_sortedness_percentage()
354
+
355
+ analysis = f"{len(self._images)} elements, {unique} unique"
356
+ if dupes > 0:
357
+ analysis += f", {dupes} duplicates"
358
+ analysis += f", {sorted_pct:.0f}% sorted"
359
+
360
+ return analysis
361
+
362
+ # -------------------------------------------------------------------------
363
+ # History / Undo
364
+ # -------------------------------------------------------------------------
365
+
366
+ def _save_state(self) -> None:
367
+ """Save current state for undo. (Internal method)"""
368
+ # Keep only last 10 states to save memory
369
+ if len(self._history) >= 10:
370
+ self._history.pop(0)
371
+ self._history.append(deepcopy(self._images))
372
+
373
+ def undo(self) -> bool:
374
+ """
375
+ Restore the previous state.
376
+
377
+ Returns:
378
+ True if undo successful, False if no history available
379
+ """
380
+ if not self._history:
381
+ print("⚠️ Nothing to undo")
382
+ return False
383
+
384
+ self._images = self._history.pop()
385
+ return True
386
+
387
+ # -------------------------------------------------------------------------
388
+ # Iteration Support
389
+ # -------------------------------------------------------------------------
390
+ """
391
+ ╔═════════════════════════════════════════════════════════════════════════╗
392
+ ║ 📚 CONCEPT: Making Objects Iterable ║
393
+ ╠═════════════════════════════════════════════════════════════════════════╣
394
+ ║ ║
395
+ ║ By implementing __iter__ and __len__, our ImageList can be used ║
396
+ ║ just like a regular Python list: ║
397
+ ║ ║
398
+ ║ for image in my_image_list: # Works! ║
399
+ ║ print(image) ║
400
+ ║ ║
401
+ ║ length = len(my_image_list) # Works! ║
402
+ ║ ║
403
+ ║ first = my_image_list[0] # Works! ║
404
+ ║ ║
405
+ ╚═════════════════════════════════════════════════════════════════════════╝
406
+ """
407
+
408
+ def __len__(self) -> int:
409
+ """Enable len(image_list)."""
410
+ return len(self._images)
411
+
412
+ def __iter__(self):
413
+ """Enable for image in image_list."""
414
+ return iter(self._images)
415
+
416
+ def __getitem__(self, index: int) -> GestureImage:
417
+ """Enable image_list[0], image_list[1], etc."""
418
+ return self._images[index]
419
+
420
+ def __setitem__(self, index: int, value: GestureImage) -> None:
421
+ """Enable image_list[0] = new_image."""
422
+ self._images[index] = value
423
+
424
+ # -------------------------------------------------------------------------
425
+ # Display Methods
426
+ # -------------------------------------------------------------------------
427
+
428
+ def display_string(self) -> str:
429
+ """
430
+ Get a string representation showing all images with their emojis.
431
+
432
+ Example: "[✊₁] [✌️₂] [✌️₃] [👍₄]"
433
+ """
434
+ if self.is_empty:
435
+ return "(empty)"
436
+ return " ".join(f"[{img}]" for img in self._images)
437
+
438
+ def __str__(self) -> str:
439
+ """Human-readable representation."""
440
+ return f"ImageList({self.display_string()})"
441
+
442
+ def __repr__(self) -> str:
443
+ """Developer representation."""
444
+ return f"ImageList(length={len(self._images)}, max={self.MAX_SIZE})"
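The `@property` and dunder methods above follow a standard Python pattern. As a minimal, self-contained sketch (with a hypothetical `Box` class standing in for `ImageList`, and plain integers standing in for `GestureImage`), this is how a read-only property combines with `__len__`, `__iter__`, and `__getitem__`:

```python
class Box:
    """Tiny container illustrating @property plus the sequence protocol."""

    MAX_SIZE = 3

    def __init__(self):
        self._items = []          # "private" by convention (leading underscore)

    @property
    def is_full(self) -> bool:    # read-only: no setter defined
        return len(self._items) >= self.MAX_SIZE

    def add(self, item) -> bool:
        if self.is_full:          # accessed like an attribute, no parentheses
            return False
        self._items.append(item)
        return True

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        return iter(self._items)

    def __getitem__(self, index):
        return self._items[index]


box = Box()
for value in (10, 20, 30, 40):    # the fourth add is rejected: box is full
    box.add(value)

print(len(box), list(box), box[0])  # → 3 [10, 20, 30] 10
```

Because `is_full` has no setter, `box.is_full = True` raises `AttributeError` — the read-only guarantee mentioned in the concept box above.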
oop_sorting_teaching/models/step.py ADDED
@@ -0,0 +1,158 @@
+ """
+ ╔══════════════════════════════════════════════════════════════════════════════╗
+ ║ Models: step.py                                                              ║
+ ║ Classes for representing algorithm execution steps                           ║
+ ╚══════════════════════════════════════════════════════════════════════════════╝
+
+ This module contains:
+ • StepType (Enum) - Types of operations (compare, swap, merge, etc.)
+ • Step (dataclass) - A single step in algorithm execution
+
+ 📚 WHY RECORD STEPS?
+ To visualize algorithms step-by-step, we need to RECORD what happens.
+ Each Step captures:
+ - WHAT operation occurred (compare, swap, etc.)
+ - WHERE it happened (which indices)
+ - WHEN in the process (depth for recursive algorithms)
+ - Additional context (metadata)
+
+ This is the "data" that the visualization will render.
+ """
+
+ from dataclasses import dataclass, field
+ from enum import Enum, auto
+ from typing import List, TYPE_CHECKING
+
+ # Avoid a circular import - only import for type checking
+ if TYPE_CHECKING:
+     from oop_sorting_teaching.models.gesture import GestureImage
+
+
+ # ==============================================================================
+ # ENUM: StepType
+ # ==============================================================================
+ #
+ # 📚 CONCEPT: Enums for Type Safety
+ #
+ # Instead of using strings like "compare" or "swap", we use an Enum.
+ # This prevents bugs from typos:
+ #   - String: if step_type == "comprae"         # Typo! No error, silent bug
+ #   - Enum:   if step_type == StepType.COMPRAE  # Python error! Bug caught
+ # ==============================================================================
+
+ class StepType(Enum):
+     """
+     Types of algorithm steps that can be visualized.
+
+     Each step in our sorting/searching algorithms will have a type
+     that determines how it's displayed in the visualization.
+
+     📚 CONCEPT: auto()
+
+     The auto() function automatically assigns incrementing values.
+     We don't care what the actual numbers are - we just need
+     unique identifiers for each step type.
+     """
+     # Comparison operations
+     COMPARE = auto()              # Comparing two elements
+
+     # Movement operations
+     SWAP = auto()                 # Swapping two elements (in-place algorithms)
+     MOVE = auto()                 # Moving an element to a new position
+
+     # Merge sort specific
+     SPLIT = auto()                # Splitting array into subarrays
+     MERGE = auto()                # Merging sorted subarrays
+
+     # Quick sort specific
+     PIVOT_SELECT = auto()         # Selecting a pivot element
+     PARTITION = auto()            # Partitioning around pivot
+
+     # Binary search specific
+     SEARCH_RANGE = auto()         # Showing current search range
+     NARROW_LEFT = auto()          # Target is in left half
+     NARROW_RIGHT = auto()         # Target is in right half
+     FOUND = auto()                # Target element found
+     NOT_FOUND = auto()            # Target element not in array
+
+     # General
+     PASS_COMPLETE = auto()        # One pass through the data complete (Bubble Sort)
+     COMPLETE = auto()             # Algorithm finished
+     MARK_SORTED = auto()          # Mark element(s) as in final position
+
+     # Stability detection
+     INSTABILITY = auto()          # Stability violation detected!
+     INSTABILITY_WARNING = auto()  # Warning for potential instability
+
+
+ # ==============================================================================
+ # DATACLASS: Step
+ # ==============================================================================
+ #
+ # 📚 CONCEPT: Recording Algorithm Execution
+ #
+ # When an algorithm runs, we want to show EVERY step:
+ #   1. What operation happened (StepType)
+ #   2. Which elements were involved (indices)
+ #   3. What the array looks like now (array_state)
+ #   4. Any additional info (metadata)
+ #
+ # By recording steps, we can:
+ #   - Play back the algorithm visually
+ #   - Step forward and backward
+ #   - Analyze algorithm behavior
+ # ==============================================================================
+
+ @dataclass
+ class Step:
+     """
+     Represents a single step in an algorithm's execution.
+
+     This is used to record what the algorithm is doing at each point,
+     so we can visualize it step by step.
+
+     Think of it like frames in a movie:
+     - Each Step is one frame
+     - Together they tell the story of the algorithm
+
+     Attributes:
+         step_type: What kind of operation (compare, swap, merge, etc.)
+         indices: Which array positions are involved
+         description: Human-readable explanation
+         depth: Recursion depth (for merge sort / quick sort)
+         array_state: Copy of the array at this step
+         highlight_indices: Extra indices to highlight (e.g., sorted region)
+         metadata: Additional algorithm-specific data
+
+     Example:
+         step = Step(
+             step_type=StepType.COMPARE,
+             indices=[3, 4],
+             description="Comparing elements at positions 3 and 4",
+             depth=0,
+             array_state=[...],
+             metadata={"comparison_count": 5}
+         )
+     """
+     step_type: StepType
+     indices: List[int]
+     description: str
+     depth: int = 0
+     array_state: List['GestureImage'] = field(default_factory=list)
+     highlight_indices: List[int] = field(default_factory=list)
+     metadata: dict = field(default_factory=dict)
+
+     @property
+     def type(self) -> StepType:
+         """
+         Alias for step_type for cleaner access in renderers.
+
+         This allows: step.type instead of step.step_type
+         Makes the code more readable in visualization code.
+         """
+         return self.step_type
+
+     def __str__(self) -> str:
+         """Human-readable string for debugging."""
+         indices_str = ', '.join(str(i) for i in self.indices)
+         return f"[{self.step_type.name}] indices=[{indices_str}] depth={self.depth}: {self.description}"
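The `Enum`/`auto()` and `field(default_factory=...)` idioms used in `step.py` can be demonstrated in isolation. A hedged sketch with stand-in names (`Op` for `StepType`, `Frame` for `Step` — not the package's real classes) shows why mutable defaults need `default_factory`:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class Op(Enum):                 # stand-in for StepType
    COMPARE = auto()            # auto() assigns unique incrementing values
    SWAP = auto()


@dataclass
class Frame:                    # stand-in for Step
    op: Op
    indices: List[int]
    depth: int = 0
    # field(default_factory=dict) gives each instance its OWN dict.
    # A plain `metadata: dict = {}` is rejected by @dataclass with a
    # ValueError, precisely because it would be shared across instances.
    metadata: dict = field(default_factory=dict)


a = Frame(Op.COMPARE, [0, 1])
b = Frame(Op.SWAP, [1, 2])
a.metadata["comparison_count"] = 5

print(a.metadata, b.metadata)   # → {'comparison_count': 5} {}
```

Mutating `a.metadata` leaves `b.metadata` untouched — the behavior the `Step` dataclass above relies on when many steps are recorded per run.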
oop_sorting_teaching/visualization/__init__.py ADDED
@@ -0,0 +1,46 @@
+ """
+ Visualization package for algorithm step rendering.
+
+ This package provides visualization tools for sorting and searching algorithms:
+ - VisualizationState: Enum for visualization states
+ - StepRenderer: Abstract base class for renderers
+ - Individual renderers for each algorithm
+ - RendererFactory: Factory for creating renderers
+ - Visualizer: Main controller for visualization
+
+ 📚 CONCEPT: Strategy Pattern
+
+ Different algorithms need different visualizations:
+ - Bubble Sort: Highlight two adjacent elements being compared
+ - Merge Sort: Show depth levels with indentation
+ - Quick Sort: Show pivot and partition regions
+ - Binary Search: Show the search range narrowing
+
+ Instead of one giant "if algorithm == X then do Y" block,
+ we create separate RENDERER classes for each visualization style.
+ """
+
+ from .state import VisualizationState, VisualizationConfig
+ from .renderers import (
+     StepRenderer,
+     BubbleSortRenderer,
+     MergeSortRenderer,
+     QuickSortRenderer,
+     LinearSearchRenderer,
+     BinarySearchRenderer,
+ )
+ from .factory import RendererFactory
+ from .visualizer import Visualizer
+
+ __all__ = [
+     'VisualizationState',
+     'VisualizationConfig',
+     'StepRenderer',
+     'BubbleSortRenderer',
+     'MergeSortRenderer',
+     'QuickSortRenderer',
+     'LinearSearchRenderer',
+     'BinarySearchRenderer',
+     'RendererFactory',
+     'Visualizer',
+ ]
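The Strategy Pattern described in this package docstring reduces to a few lines. A minimal sketch — the renderer names here are illustrative stand-ins, not the package's real classes:

```python
from abc import ABC, abstractmethod
from typing import List


class Renderer(ABC):                      # the shared interface (the "strategy")
    @abstractmethod
    def render(self, indices: List[int]) -> str: ...


class ArrowRenderer(Renderer):            # one interchangeable strategy
    def render(self, indices: List[int]) -> str:
        return " ".join(f"->{i}" for i in indices)


class BracketRenderer(Renderer):          # another strategy, same interface
    def render(self, indices: List[int]) -> str:
        return " ".join(f"[{i}]" for i in indices)


def show(renderer: Renderer, indices: List[int]) -> str:
    # The caller never branches on the concrete type: no if/elif chain.
    return renderer.render(indices)


print(show(ArrowRenderer(), [0, 1]))      # → ->0 ->1
print(show(BracketRenderer(), [0, 1]))    # → [0] [1]
```

Swapping the visualization style means passing a different object, not editing `show` — the same reason this package ships one renderer class per algorithm.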
oop_sorting_teaching/visualization/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (1.44 kB). View file
 
oop_sorting_teaching/visualization/__pycache__/factory.cpython-313.pyc ADDED
Binary file (4.18 kB). View file
 
oop_sorting_teaching/visualization/__pycache__/state.cpython-313.pyc ADDED
Binary file (2.29 kB). View file
 
oop_sorting_teaching/visualization/__pycache__/visualizer.cpython-313.pyc ADDED
Binary file (15.1 kB). View file
 
oop_sorting_teaching/visualization/factory.py ADDED
@@ -0,0 +1,123 @@
+ """
+ Renderer Factory for creating algorithm-specific renderers.
+
+ 📚 CONCEPT: Factory Pattern
+
+ A Factory is a class that creates objects for you.
+
+ Benefits:
+ 1. ENCAPSULATION: The client doesn't need to know about specific classes
+ 2. EXTENSIBILITY: Adding a new algorithm just needs a new entry here
+ 3. CONSISTENCY: All renderers are created the same way
+ """
+
+ from typing import Dict, Type
+
+ from .renderers import (
+     StepRenderer,
+     BubbleSortRenderer,
+     MergeSortRenderer,
+     QuickSortRenderer,
+     LinearSearchRenderer,
+     BinarySearchRenderer,
+ )
+
+
+ class RendererFactory:
+     """
+     📚 CONCEPT: Factory Pattern
+
+     A Factory is a class that creates objects for you.
+
+     Benefits:
+     1. ENCAPSULATION: The client doesn't need to know about specific classes
+     2. EXTENSIBILITY: Adding a new algorithm just needs a new entry here
+     3. CONSISTENCY: All renderers are created the same way
+
+     PROCEDURAL APPROACH:
+         # Scattered if-else throughout the code
+         # Hard to maintain, easy to miss cases
+
+     OOP FACTORY APPROACH:
+         # Single place to manage all renderer creation
+         # Easy to extend with new algorithms
+     """
+
+     # Class-level mapping of algorithm names to renderer classes
+     # 📚 CONCEPT: Class Variables
+     # These belong to the CLASS, not to instances
+     _renderers: Dict[str, Type[StepRenderer]] = {
+         "Bubble Sort": BubbleSortRenderer,
+         "Bubble Sort (Early Exit)": BubbleSortRenderer,
+         "Merge Sort": MergeSortRenderer,
+         "Quick Sort": QuickSortRenderer,
+         "Linear Search": LinearSearchRenderer,
+         "Binary Search": BinarySearchRenderer,
+         "Binary Search (Iterative)": BinarySearchRenderer,
+         "Binary Search (Recursive)": BinarySearchRenderer,
+     }
+
+     @classmethod
+     def create(cls, algorithm_name: str) -> StepRenderer:
+         """
+         Create the appropriate renderer for an algorithm.
+
+         📚 CONCEPT: @classmethod
+
+         A class method receives the CLASS (cls) instead of an instance (self).
+         This is perfect for factories because:
+         - We don't need an instance of RendererFactory
+         - We're creating OTHER objects, not modifying ourselves
+
+         Args:
+             algorithm_name: Name of the algorithm (from algorithm.name)
+
+         Returns:
+             The appropriate StepRenderer subclass instance
+
+         Raises:
+             ValueError: If no renderer exists for the algorithm
+         """
+         # Check if we have a renderer for this algorithm
+         renderer_class = cls._renderers.get(algorithm_name)
+
+         if renderer_class is None:
+             # Try partial matching (in case of configuration details in the name)
+             for key, value in cls._renderers.items():
+                 if key in algorithm_name or algorithm_name in key:
+                     renderer_class = value
+                     break
+
+         if renderer_class is None:
+             raise ValueError(
+                 f"No renderer found for algorithm: {algorithm_name}\n"
+                 f"Available renderers: {list(cls._renderers.keys())}"
+             )
+
+         # Create and return an instance of the renderer
+         return renderer_class()
+
+     @classmethod
+     def register(cls, algorithm_name: str, renderer_class: Type[StepRenderer]) -> None:
+         """
+         Register a new renderer for an algorithm.
+
+         📚 CONCEPT: Open/Closed Principle
+
+         This method lets us ADD new renderers without MODIFYING
+         the factory class itself. The factory is:
+         - OPEN for extension (add new renderers)
+         - CLOSED for modification (don't change existing code)
+
+         Example:
+             class MyCustomRenderer(StepRenderer):
+                 ...
+
+             RendererFactory.register("My Algorithm", MyCustomRenderer)
+         """
+         cls._renderers[algorithm_name] = renderer_class
+
+     @classmethod
+     def available_renderers(cls) -> list:
+         """Return the list of available renderer names."""
+         return list(cls._renderers.keys())
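The registry-plus-`@classmethod` shape of `RendererFactory` works for any product hierarchy. A hedged, self-contained sketch with hypothetical names (`Shape`, `ShapeFactory` — nothing from this package):

```python
from typing import Dict, Type


class Shape:                                  # minimal product hierarchy
    name = "shape"


class Circle(Shape):
    name = "circle"


class ShapeFactory:
    # Class variable: one registry shared by the whole class, no instance needed
    _registry: Dict[str, Type[Shape]] = {"circle": Circle}

    @classmethod
    def create(cls, key: str) -> Shape:
        shape_cls = cls._registry.get(key)
        if shape_cls is None:
            raise ValueError(f"No shape registered for {key!r}")
        return shape_cls()

    @classmethod
    def register(cls, key: str, shape_cls: Type[Shape]) -> None:
        # Open/Closed Principle: extend the factory without editing create()
        cls._registry[key] = shape_cls


class Square(Shape):                          # added later, from client code
    name = "square"


ShapeFactory.register("square", Square)
print(ShapeFactory.create("square").name)     # → square
```

As in `RendererFactory.create`, an unknown key raises `ValueError` rather than returning `None`, so misconfiguration fails loudly.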
oop_sorting_teaching/visualization/renderers/__init__.py ADDED
@@ -0,0 +1,27 @@
+ """
+ Step renderers for algorithm visualization.
+
+ Contains:
+ - StepRenderer: Abstract base class
+ - BubbleSortRenderer: For Bubble Sort visualization
+ - MergeSortRenderer: For Merge Sort visualization
+ - QuickSortRenderer: For Quick Sort visualization
+ - LinearSearchRenderer: For Linear Search visualization
+ - BinarySearchRenderer: For Binary Search visualization
+ """
+
+ from .base import StepRenderer
+ from .bubble_renderer import BubbleSortRenderer
+ from .merge_renderer import MergeSortRenderer
+ from .quick_renderer import QuickSortRenderer
+ from .linear_renderer import LinearSearchRenderer
+ from .binary_renderer import BinarySearchRenderer
+
+ __all__ = [
+     'StepRenderer',
+     'BubbleSortRenderer',
+     'MergeSortRenderer',
+     'QuickSortRenderer',
+     'LinearSearchRenderer',
+     'BinarySearchRenderer',
+ ]
oop_sorting_teaching/visualization/renderers/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (961 Bytes). View file
 
oop_sorting_teaching/visualization/renderers/__pycache__/base.cpython-313.pyc ADDED
Binary file (7.65 kB). View file
 
oop_sorting_teaching/visualization/renderers/__pycache__/binary_renderer.cpython-313.pyc ADDED
Binary file (5.53 kB). View file
 
oop_sorting_teaching/visualization/renderers/__pycache__/bubble_renderer.cpython-313.pyc ADDED
Binary file (3.93 kB). View file
 
oop_sorting_teaching/visualization/renderers/__pycache__/linear_renderer.cpython-313.pyc ADDED
Binary file (4.13 kB). View file
 
oop_sorting_teaching/visualization/renderers/__pycache__/merge_renderer.cpython-313.pyc ADDED
Binary file (4.62 kB). View file
 
oop_sorting_teaching/visualization/renderers/__pycache__/quick_renderer.cpython-313.pyc ADDED
Binary file (5.68 kB). View file
 
oop_sorting_teaching/visualization/renderers/base.py ADDED
@@ -0,0 +1,233 @@
+ """
+ Abstract base class for step renderers.
+
+ 📚 CONCEPT: Strategy Pattern
+
+ Different algorithms need different visualizations:
+ - Bubble Sort: Highlight two adjacent elements being compared
+ - Merge Sort: Show depth levels with indentation
+ - Quick Sort: Show pivot and partition regions
+ - Binary Search: Show the search range narrowing
+
+ The StepRenderer ABC defines the interface that all renderers must follow.
+ """
+
+ from abc import ABC, abstractmethod
+ from typing import Dict, List, Optional
+
+ from ...models import GestureImage, Step
+
+
+ class StepRenderer(ABC):
+     """
+     📚 CONCEPT: Abstract Base Class for Rendering
+
+     This defines WHAT a renderer must do, not HOW.
+     Each specific renderer (BubbleRenderer, MergeRenderer, etc.)
+     implements the HOW for its specific algorithm.
+
+     BEFORE OOP (procedural):
+         def render_step(step, algorithm_type):
+             if algorithm_type == "bubble":
+                 # 50 lines of bubble rendering code
+             elif algorithm_type == "merge":
+                 # 80 lines of merge rendering code
+             elif algorithm_type == "quick":
+                 # 100 lines of quick sort rendering code
+             # ... more and more elif blocks
+
+     AFTER OOP (Strategy Pattern):
+         renderer = BubbleSortRenderer()  # or MergeSortRenderer()
+         html = renderer.render(step, images)
+         # Each renderer has clean, focused code
+     """
+
+     # -------------------------------------------------------------------------
+     # Abstract Methods - MUST be implemented by subclasses
+     # -------------------------------------------------------------------------
+
+     @abstractmethod
+     def render_step(self, step: Step, images: List[GestureImage]) -> str:
+         """
+         Convert a single algorithm step into HTML for display.
+
+         Args:
+             step: The algorithm step to visualize
+             images: The current state of the image list
+
+         Returns:
+             HTML string ready to display in Gradio
+
+         📚 WHY HTML?
+
+         Gradio can display HTML directly, giving us full control over:
+         - Colors for highlighting
+         - Borders for indicating comparisons
+         - Spacing and layout
+         - Animations with CSS
+         """
+         pass
+
+     @abstractmethod
+     def get_legend(self) -> str:
+         """
+         Return HTML explaining the visual indicators used.
+
+         Each algorithm uses different colors/symbols:
+         - Yellow: Currently comparing
+         - Green: Already sorted
+         - Red: Swapped
+
+         The legend helps students understand what they're seeing.
+         """
+         pass
+
+     # -------------------------------------------------------------------------
+     # Shared Helper Methods
+     # -------------------------------------------------------------------------
+
+     def _image_to_html(
+         self,
+         image: GestureImage,
+         highlight: str = "none",
+         size: int = 60
+     ) -> str:
+         """
+         📚 CONCEPT: Template Method Pattern (light version)
+
+         Common code that all renderers share is in the BASE CLASS.
+         Specific rendering logic is in the SUBCLASSES.
+
+         This prevents code duplication across renderers.
+
+         Args:
+             image: The gesture image to render
+             highlight: Type of highlighting ("none", "compare", "swap",
+                        "sorted", "pivot", "found", "search_range")
+             size: Thumbnail size in pixels
+
+         Returns:
+             HTML for a single image card
+         """
+         # Define colors for different highlight types
+         # Using Queen's colors plus semantic colors
+         highlight_styles = {
+             "none": "border: 2px solid #ddd; background: white;",
+             "compare": "border: 3px solid #FABD0F; background: #FFF8E1;",       # Gold - comparing
+             "swap": "border: 3px solid #dc3545; background: #FFE4E4;",          # Red - swapping
+             "sorted": "border: 3px solid #28a745; background: #E8F5E9;",        # Green - sorted
+             "pivot": "border: 3px solid #9B2335; background: #FCE4EC;",         # Queen's red - pivot
+             "found": "border: 3px solid #28a745; background: #C8E6C9;",         # Green - found!
+             "search_range": "border: 3px solid #002D62; background: #E3F2FD;",  # Queen's blue - search
+             "merged": "border: 3px solid #6f42c1; background: #F3E5F5;",        # Purple - merging
+             "insert": "border: 3px solid #17a2b8; background: #E0F7FA;",        # Cyan - inserting
+             "mid": "border: 3px solid #fd7e14; background: #FFF3E0;",           # Orange - midpoint
+         }
+
+         style = highlight_styles.get(highlight, highlight_styles["none"])
+
+         # Create the image card HTML
+         # Using the emoji prominently since actual images may be placeholders
+         html = f"""
+         <div style="
+             display: inline-flex;
+             flex-direction: column;
+             align-items: center;
+             margin: 4px;
+             padding: 8px;
+             border-radius: 8px;
+             {style}
+             min-width: {size}px;
+             transition: all 0.3s ease;
+         ">
+             <div style="font-size: {size // 2}px; margin-bottom: 4px;">
+                 {image.emoji}
+             </div>
+             <div style="font-size: 10px; color: #666;">
+                 ₍{image.capture_id}₎
+             </div>
+             <div style="font-size: 9px; color: #999;">
+                 rank {image.rank}
+             </div>
+         </div>
+         """
+         return html
+
+     def _create_row(
+         self,
+         images: List[GestureImage],
+         highlights: Optional[Dict[int, str]] = None,
+         size: int = 60
+     ) -> str:
+         """
+         Create a horizontal row of images with optional highlighting.
+
+         Args:
+             images: List of images to display
+             highlights: Dict mapping index -> highlight type
+             size: Thumbnail size
+
+         Returns:
+             HTML for the complete row
+         """
+         if highlights is None:
+             highlights = {}
+
+         cards = []
+         for i, img in enumerate(images):
+             highlight = highlights.get(i, "none")
+             cards.append(self._image_to_html(img, highlight, size))
+
+         return f"""
+         <div style="
+             display: flex;
+             flex-wrap: wrap;
+             justify-content: center;
+             gap: 4px;
+             padding: 10px;
+         ">
+             {''.join(cards)}
+         </div>
+         """
+
+     def _create_indices_row(self, count: int, highlights: Optional[Dict[int, str]] = None) -> str:
+         """
+         Create index labels below the images (0, 1, 2, ...).
+
+         Useful for teaching students about array indices.
+         """
+         if highlights is None:
+             highlights = {}
+
+         indices = []
+         for i in range(count):
+             style = ""
+             if i in highlights:
+                 if highlights[i] == "compare":
+                     style = "color: #FABD0F; font-weight: bold;"
+                 elif highlights[i] == "pivot":
+                     style = "color: #9B2335; font-weight: bold;"
+                 elif highlights[i] == "mid":
+                     style = "color: #fd7e14; font-weight: bold;"
+
+             indices.append(f"""
+                 <span style="
+                     display: inline-block;
+                     width: 60px;
+                     text-align: center;
+                     font-family: monospace;
+                     font-size: 12px;
+                     {style}
+                 ">[{i}]</span>
+             """)
+
+         return f"""
+         <div style="
+             display: flex;
+             justify-content: center;
+             gap: 4px;
+             padding: 0 10px;
+         ">
+             {''.join(indices)}
+         </div>
+         """
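The ABC-plus-shared-helper structure of `StepRenderer` (abstract interface, concrete helpers in the base class) can be shown in miniature. The names below (`Report`, `HelloReport`) are hypothetical stand-ins, not classes from this package:

```python
from abc import ABC, abstractmethod


class Report(ABC):
    # Shared helper lives in the base class (no duplication across subclasses)...
    def _box(self, text: str) -> str:
        return f"| {text} |"

    # ...while each subclass MUST supply its specific piece.
    @abstractmethod
    def body(self) -> str: ...

    def render(self) -> str:
        # Base-class method composes the helper with the subclass hook
        return self._box(self.body())


class HelloReport(Report):
    def body(self) -> str:
        return "hello"


print(HelloReport().render())   # → | hello |

try:
    Report()                    # abstract class: cannot be instantiated
except TypeError as e:
    print("TypeError:", e)
```

The `TypeError` on direct instantiation is how `ABC` enforces the contract: a renderer that forgets to implement `render_step` or `get_legend` fails at construction time, not deep inside the UI code.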
oop_sorting_teaching/visualization/renderers/binary_renderer.py ADDED
@@ -0,0 +1,152 @@
+ """
+ Binary Search renderer.
+
+ Binary Search visualization shows:
+ - Current search range (blue brackets)
+ - Mid point being checked (orange)
+ - Narrowing of the search range
+ - Found / Not Found final state
+ """
+
+ from typing import List
+
+ from .base import StepRenderer
+ from ...models import GestureImage, Step, StepType
+
+
+ class BinarySearchRenderer(StepRenderer):
+     """
+     Renderer for Binary Search's divide-and-conquer visualization.
+
+     📚 TEACHING FOCUS:
+
+     Binary Search is a beautiful example of logarithmic efficiency.
+     Our visualization emphasizes:
+     1. The SORTED requirement (prerequisite)
+     2. How the search range halves each step
+     3. The mid-point calculation
+     4. Comparison with the target: go left or right?
+     5. The dramatic efficiency gain over linear search
+     """
+
+     def render_step(self, step: Step, images: List[GestureImage]) -> str:
+         """
+         Render a single Binary Search step.
+
+         Step Types:
+         - SEARCH_RANGE: Show current [left, right] bounds
+         - COMPARE: Compare the mid element with the target
+         - NARROW_LEFT: Target is in the left half
+         - NARROW_RIGHT: Target is in the right half
+         - FOUND: Element found!
+         - NOT_FOUND: Element not in the list
+         """
+         highlights = {}
+         n = len(images)
+
+         # Extract bounds from metadata if available
+         left = step.metadata.get("left", 0)
+         right = step.metadata.get("right", n - 1)
+         mid = step.metadata.get("mid", (left + right) // 2)
+
+         if step.type in [StepType.SEARCH_RANGE, StepType.NARROW_LEFT, StepType.NARROW_RIGHT]:
+             # Highlight the search range
+             for i in range(left, right + 1):
+                 highlights[i] = "search_range"
+             # Mid gets a special highlight
+             if 0 <= mid < n:
+                 highlights[mid] = "mid"
+
+         elif step.type == StepType.COMPARE:
+             # Highlight the mid element being compared
+             if step.indices:
+                 highlights[step.indices[0]] = "mid"
+             # Keep the search range visible
+             for i in range(left, right + 1):
+                 if i not in highlights:
+                     highlights[i] = "search_range"
+
+         elif step.type == StepType.FOUND:
70
+ # Highlight found element
71
+ for idx in step.indices:
72
+ highlights[idx] = "found"
73
+
74
+ elif step.type == StepType.NOT_FOUND:
75
+ # No highlighting - element not found
76
+ pass
77
+
78
+ html = f"""
79
+ <div style="
80
+ background: #f8f9fa;
81
+ border-radius: 12px;
82
+ padding: 15px;
83
+ margin: 10px 0;
84
+ border: 2px solid #002D62;
85
+ ">
86
+ <div style="
87
+ font-weight: bold;
88
+ color: #002D62;
89
+ margin-bottom: 10px;
90
+ font-size: 14px;
91
+ ">
92
+ {step.description}
93
+ </div>
94
+
95
+ {self._create_row(images, highlights)}
96
+ {self._create_indices_row(n, highlights)}
97
+ </div>
98
+ """
99
+
100
+ # Add found/not found banner
101
+ if step.type == StepType.FOUND:
102
+ html += """
103
+ <div style="
104
+ background: #C8E6C9;
105
+ border: 2px solid #28a745;
106
+ border-radius: 8px;
107
+ padding: 15px;
108
+ margin: 10px 0;
109
+ text-align: center;
110
+ color: #28a745;
111
+ font-weight: bold;
112
+ font-size: 16px;
113
+ ">
114
+ ✅ FOUND! Element located successfully.
115
+ </div>
116
+ """
117
+ elif step.type == StepType.NOT_FOUND:
118
+ html += """
119
+ <div style="
120
+ background: #FFE4E4;
121
+ border: 2px solid #dc3545;
122
+ border-radius: 8px;
123
+ padding: 15px;
124
+ margin: 10px 0;
125
+ text-align: center;
126
+ color: #dc3545;
127
+ font-weight: bold;
128
+ font-size: 16px;
129
+ ">
130
+ ❌ NOT FOUND: Element is not in the list.
131
+ </div>
132
+ """
133
+
134
+ return html
135
+
136
+ def get_legend(self) -> str:
137
+ """Return the legend explaining Binary Search visuals."""
138
+ return """
139
+ <div style="
140
+ display: flex;
141
+ gap: 20px;
142
+ justify-content: center;
143
+ padding: 10px;
144
+ background: #f0f0f0;
145
+ border-radius: 8px;
146
+ font-size: 12px;
147
+ ">
148
+ <span>🟦 <b>Search Range</b></span>
149
+ <span>🟧 <b>Mid Point</b></span>
150
+ <span>🟩 <b>Found!</b></span>
151
+ </div>
152
+ """
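The `left`/`right`/`mid` values that `render_step` reads from `step.metadata` come from the search algorithm itself. A minimal sketch of how such metadata could be produced (the `binary_search_bounds` helper is hypothetical, not part of the package):

```python
# Sketch: the {"left", "right", "mid"} bounds BinarySearchRenderer reads
# from step.metadata, produced by a plain binary search over a sorted list.
def binary_search_bounds(values, target):
    """Yield one bounds dict per comparison step."""
    left, right = 0, len(values) - 1
    steps = []
    while left <= right:
        mid = (left + right) // 2
        steps.append({"left": left, "right": right, "mid": mid})
        if values[mid] == target:
            break
        if values[mid] < target:
            left = mid + 1   # target is in the right half
        else:
            right = mid - 1  # target is in the left half
    return steps

# Searching 7 sorted values for 11: the range halves from 7 elements
# (mid index 3) to 3 elements (mid index 5), where the target is found.
steps = binary_search_bounds([1, 3, 5, 7, 9, 11, 13], 11)
```

Each dict here corresponds to one rendered frame: the renderer paints `[left, right]` as the blue search range and `mid` in orange.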
oop_sorting_teaching/visualization/renderers/bubble_renderer.py ADDED
@@ -0,0 +1,111 @@
+"""
+Bubble Sort renderer.
+
+Bubble Sort visualization shows:
+- Two adjacent elements being compared (yellow highlight)
+- Swap operation (red arrows/highlight)
+- Sorted portion growing from the right (green)
+- Early exit detection
+"""
+
+from typing import List
+
+from .base import StepRenderer
+from ...models import GestureImage, Step, StepType
+
+
+class BubbleSortRenderer(StepRenderer):
+    """
+    Renderer specifically designed for Bubble Sort visualization.
+
+    📚 TEACHING FOCUS:
+
+    Bubble Sort is often the first sorting algorithm students learn.
+    Our visualization emphasizes:
+    1. The "bubbling" motion of larger elements to the right
+    2. The sorted portion growing from the end
+    3. Why early exit is an optimization
+    4. Stability - equal elements maintain order
+    """
+
+    def render_step(self, step: Step, images: List[GestureImage]) -> str:
+        """
+        Render a single Bubble Sort step.
+
+        Step Types we handle:
+        - COMPARE: Highlight two adjacent elements
+        - SWAP: Show elements being exchanged
+        - PASS_COMPLETE: End of a pass through the array
+        - COMPLETE: Algorithm finished
+        """
+        n = len(images)
+
+        # Determine which elements to highlight based on step type
+        highlights = {}
+
+        if step.type == StepType.COMPARE:
+            # Highlight the two elements being compared
+            for idx in step.indices:
+                highlights[idx] = "compare"
+
+        elif step.type == StepType.SWAP:
+            # Highlight swapped elements in red
+            for idx in step.indices:
+                highlights[idx] = "swap"
+
+        elif step.type == StepType.PASS_COMPLETE:
+            # Mark the newly sorted element
+            if step.indices:
+                highlights[step.indices[0]] = "sorted"
+
+        elif step.type == StepType.MARK_SORTED:
+            # Mark element in final position
+            for idx in step.indices:
+                highlights[idx] = "sorted"
+
+        elif step.type == StepType.COMPLETE:
+            # Everything is sorted!
+            for i in range(n):
+                highlights[i] = "sorted"
+
+        # Build the visualization HTML
+        html = f"""
+        <div style="
+            background: #f8f9fa;
+            border-radius: 12px;
+            padding: 15px;
+            margin: 10px 0;
+        ">
+            <div style="
+                font-weight: bold;
+                color: #002D62;
+                margin-bottom: 10px;
+                font-size: 14px;
+            ">
+                {step.description}
+            </div>
+
+            {self._create_row(images, highlights)}
+            {self._create_indices_row(n, highlights)}
+        </div>
+        """
+
+        return html
+
+    def get_legend(self) -> str:
+        """Return the legend explaining Bubble Sort visuals."""
+        return """
+        <div style="
+            display: flex;
+            gap: 20px;
+            justify-content: center;
+            padding: 10px;
+            background: #f0f0f0;
+            border-radius: 8px;
+            font-size: 12px;
+        ">
+            <span>🟨 <b>Comparing</b></span>
+            <span>🟥 <b>Swapping</b></span>
+            <span>🟩 <b>Sorted</b></span>
+        </div>
+        """
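This renderer consumes a stream of COMPARE/SWAP/MARK_SORTED/COMPLETE steps. A minimal sketch of such a stream, with plain `(type, indices)` tuples standing in for the package's `Step` objects (the `bubble_sort_steps` helper is hypothetical):

```python
# Sketch: the event stream BubbleSortRenderer renders, one frame per tuple.
def bubble_sort_steps(values):
    a = list(values)
    steps = []
    n = len(a)
    for end in range(n - 1, 0, -1):
        for i in range(end):
            steps.append(("COMPARE", (i, i + 1)))       # yellow highlight
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                steps.append(("SWAP", (i, i + 1)))      # red highlight
        steps.append(("MARK_SORTED", (end,)))           # green, from the right
    steps.append(("COMPLETE", ()))                      # everything green
    return a, steps

sorted_a, steps = bubble_sort_steps([3, 1, 2])
```

Note how `MARK_SORTED` always fires at the current `end` index: that is the "sorted portion growing from the right" the docstring describes.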
oop_sorting_teaching/visualization/renderers/linear_renderer.py ADDED
@@ -0,0 +1,113 @@
+"""
+Linear Search renderer.
+
+Linear Search visualization shows:
+- Sequential checking of each element
+- Current element being compared
+- Found/Not Found final state
+"""
+
+from typing import List
+
+from .base import StepRenderer
+from ...models import GestureImage, Step, StepType
+
+
+class LinearSearchRenderer(StepRenderer):
+    """
+    Renderer for Linear Search visualization.
+
+    📚 TEACHING FOCUS:
+
+    Linear Search is simple but inefficient for large data.
+    Our visualization emphasizes:
+    1. Sequential checking of each element
+    2. Why this is O(n) - may need to check every element
+    3. Works on unsorted data (advantage over binary search)
+    """
+
+    def render_step(self, step: Step, images: List[GestureImage]) -> str:
+        """Render a Linear Search step."""
+        highlights = {}
+        n = len(images)
+
+        if step.type == StepType.SEARCH_RANGE:
+            # Show all elements in search range
+            for idx in step.indices:
+                highlights[idx] = "search_range"
+
+        elif step.type == StepType.COMPARE:
+            for idx in step.indices:
+                highlights[idx] = "compare"
+
+        elif step.type == StepType.FOUND:
+            for idx in step.indices:
+                highlights[idx] = "found"
+
+        html = f"""
+        <div style="
+            background: #f8f9fa;
+            border-radius: 12px;
+            padding: 15px;
+            margin: 10px 0;
+        ">
+            <div style="
+                font-weight: bold;
+                color: #002D62;
+                margin-bottom: 10px;
+                font-size: 14px;
+            ">
+                {step.description}
+            </div>
+
+            {self._create_row(images, highlights)}
+            {self._create_indices_row(n, highlights)}
+        </div>
+        """
+
+        if step.type == StepType.FOUND:
+            html += """
+            <div style="
+                background: #C8E6C9;
+                border: 2px solid #28a745;
+                border-radius: 8px;
+                padding: 10px;
+                text-align: center;
+                color: #28a745;
+                font-weight: bold;
+            ">
+                ✅ FOUND!
+            </div>
+            """
+        elif step.type == StepType.NOT_FOUND:
+            html += """
+            <div style="
+                background: #FFE4E4;
+                border: 2px solid #dc3545;
+                border-radius: 8px;
+                padding: 10px;
+                text-align: center;
+                color: #dc3545;
+                font-weight: bold;
+            ">
+                ❌ NOT FOUND: Checked all elements.
+            </div>
+            """
+
+        return html
+
+    def get_legend(self) -> str:
+        return """
+        <div style="
+            display: flex;
+            gap: 20px;
+            justify-content: center;
+            padding: 10px;
+            background: #f0f0f0;
+            border-radius: 8px;
+            font-size: 12px;
+        ">
+            <span>🟨 <b>Checking</b></span>
+            <span>🟩 <b>Found!</b></span>
+        </div>
+        """
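The O(n) claim in the docstring can be made concrete by counting comparisons. This sketch (both counting helpers are hypothetical, not package code) contrasts worst-case linear search with binary search over the same sorted data:

```python
# Sketch: comparison counts behind the "O(n) vs O(log n)" teaching point.
def linear_count(values, target):
    """Comparisons a linear search makes before giving up or finding target."""
    count = 0
    for v in values:
        count += 1
        if v == target:
            break
    return count

def binary_count(values, target):
    """Comparisons a binary search makes over sorted values."""
    left, right, count = 0, len(values) - 1, 0
    while left <= right:
        mid = (left + right) // 2
        count += 1
        if values[mid] == target:
            break
        if values[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return count

data = list(range(1024))
# Absent target: linear search checks all 1024 elements,
# binary search needs roughly log2(1024) + 1 = 11 at most.
```

This is the trade-off the two renderers visualize side by side: linear search pays per element but needs no sorting; binary search pays per halving but requires sorted input.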
oop_sorting_teaching/visualization/renderers/merge_renderer.py ADDED
@@ -0,0 +1,128 @@
+"""
+Merge Sort renderer.
+
+Merge Sort visualization shows:
+- Recursive splitting with depth levels (indentation/stacking)
+- Merging operation with elements moving back together
+- Stability preservation
+"""
+
+from typing import List
+
+from .base import StepRenderer
+from ...models import GestureImage, Step, StepType
+
+
+class MergeSortRenderer(StepRenderer):
+    """
+    Renderer for Merge Sort's divide-and-conquer visualization.
+
+    📚 TEACHING FOCUS:
+
+    Merge Sort is often the first O(n log n) algorithm students see.
+    Our visualization emphasizes:
+    1. The recursive splitting into smaller subproblems
+    2. The merging of sorted subarrays
+    3. Depth of recursion (visually stacked)
+    4. Stability - duplicates stay in original relative order
+
+    📚 CONCEPT: Depth Visualization
+
+    We use indentation/margin to show recursion depth:
+    - Depth 0: Full array (no indent)
+    - Depth 1: Two halves (slight indent)
+    - Depth 2: Four quarters (more indent)
+    - etc.
+
+    This helps students understand the "divide" part of divide-and-conquer.
+    """
+
+    def render_step(self, step: Step, images: List[GestureImage]) -> str:
+        """
+        Render a single Merge Sort step.
+
+        Step Types:
+        - SPLIT: Dividing array into halves
+        - MERGE: Combining sorted subarrays
+        - COMPARE: Comparing elements during merge
+        - COMPLETE: Algorithm finished
+        """
+        highlights = {}
+        depth = step.depth
+
+        # Calculate indentation based on depth
+        indent = depth * 30  # 30px per depth level
+
+        if step.type == StepType.SPLIT:
+            # Highlight the split point
+            if step.indices:
+                for idx in step.indices:
+                    highlights[idx] = "compare"
+
+        elif step.type == StepType.MERGE:
+            # Highlight merged elements
+            for idx in step.indices:
+                highlights[idx] = "merged"
+
+        elif step.type == StepType.MOVE:
+            # Element being placed
+            for idx in step.indices:
+                highlights[idx] = "insert"
+
+        elif step.type == StepType.COMPARE:
+            for idx in step.indices:
+                highlights[idx] = "compare"
+
+        elif step.type == StepType.MARK_SORTED:
+            for idx in step.indices:
+                highlights[idx] = "sorted"
+
+        elif step.type == StepType.COMPLETE:
+            for i in range(len(images)):
+                highlights[i] = "sorted"
+
+        # Build HTML with depth-based styling
+        depth_color = ["#002D62", "#1a4c8c", "#3366b3", "#4d80cc", "#6699e6"][min(depth, 4)]
+
+        html = f"""
+        <div style="
+            background: #f8f9fa;
+            border-left: 4px solid {depth_color};
+            border-radius: 0 12px 12px 0;
+            padding: 15px;
+            margin: 10px 0;
+            margin-left: {indent}px;
+        ">
+            <div style="
+                font-weight: bold;
+                color: {depth_color};
+                margin-bottom: 10px;
+                font-size: 14px;
+            ">
+                Depth {depth} | {step.description}
+            </div>
+
+            {self._create_row(images, highlights)}
+        </div>
+        """
+
+        return html
+
+    def get_legend(self) -> str:
+        """Return the legend explaining Merge Sort visuals."""
+        return """
+        <div style="
+            display: flex;
+            gap: 20px;
+            justify-content: center;
+            padding: 10px;
+            background: #f0f0f0;
+            border-radius: 8px;
+            font-size: 12px;
+        ">
+            <span>🟨 <b>Comparing</b></span>
+            <span>🟪 <b>Merging</b></span>
+            <span>🟩 <b>Sorted</b></span>
+            <span>📊 <b>Indent = Depth</b></span>
+        </div>
+        """
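The depth-to-style mapping is simple enough to state directly. This sketch mirrors the `indent = depth * 30` and `depth_color` lines above; the `depth_style` wrapper itself is hypothetical:

```python
# Sketch: MergeSortRenderer's depth styling - 30px of indent per recursion
# level, with the border color clamped to the lightest shade after depth 4.
DEPTH_COLORS = ["#002D62", "#1a4c8c", "#3366b3", "#4d80cc", "#6699e6"]

def depth_style(depth):
    """Return (left margin in px, border color) for a recursion depth."""
    return depth * 30, DEPTH_COLORS[min(depth, 4)]

# Depth 0 sits flush left in Queen's blue; deeper calls shift right and fade.
styles = [depth_style(d) for d in range(6)]
```

Clamping the color (but not the indent) keeps very deep recursions readable without needing an arbitrarily long palette.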
oop_sorting_teaching/visualization/renderers/quick_renderer.py ADDED
@@ -0,0 +1,154 @@
+"""
+Quick Sort renderer.
+
+Quick Sort visualization shows:
+- Pivot selection (special highlight)
+- Partition regions (left/right of pivot)
+- Element movements during partitioning
+- Instability when duplicates are reordered
+"""
+
+from typing import List
+
+from .base import StepRenderer
+from ...models import GestureImage, Step, StepType
+
+
+class QuickSortRenderer(StepRenderer):
+    """
+    Renderer for Quick Sort's partition-based visualization.
+
+    📚 TEACHING FOCUS:
+
+    Quick Sort is efficient but tricky to understand.
+    Our visualization emphasizes:
+    1. How the pivot is chosen (first, last, median-of-3, random)
+    2. Partitioning: elements < pivot go left, > pivot go right
+    3. Recursion on the partitions
+    4. INSTABILITY: duplicates may swap positions
+
+    📚 INSTABILITY VISUALIZATION:
+
+    When we detect that two equal elements have swapped,
+    we highlight this with a red warning. This teaches students
+    that Quick Sort is NOT stable.
+    """
+
+    def render_step(self, step: Step, images: List[GestureImage]) -> str:
+        """
+        Render a single Quick Sort step.
+
+        Step Types:
+        - PIVOT_SELECT: Highlighting the chosen pivot
+        - PARTITION: Showing partition boundaries
+        - COMPARE: Comparing element with pivot
+        - SWAP: Moving elements across partition
+        - INSTABILITY_WARNING: Duplicates reordered
+        - COMPLETE: Algorithm finished
+        """
+        highlights = {}
+        depth = step.depth
+        indent = depth * 30
+
+        if step.type == StepType.PIVOT_SELECT:
+            # Pivot gets special Queen's red highlight
+            for idx in step.indices:
+                highlights[idx] = "pivot"
+
+        elif step.type == StepType.PARTITION:
+            # Show partition boundaries
+            # First index is pivot, others are boundaries
+            if step.indices:
+                highlights[step.indices[0]] = "pivot"
+                for idx in step.indices[1:]:
+                    highlights[idx] = "search_range"
+
+        elif step.type == StepType.COMPARE:
+            for idx in step.indices:
+                highlights[idx] = "compare"
+
+        elif step.type == StepType.SWAP:
+            for idx in step.indices:
+                highlights[idx] = "swap"
+
+        elif step.type == StepType.MOVE:
+            for idx in step.indices:
+                highlights[idx] = "insert"
+
+        elif step.type == StepType.MARK_SORTED:
+            for idx in step.indices:
+                highlights[idx] = "sorted"
+
+        elif step.type == StepType.INSTABILITY_WARNING:
+            # Red warning for stability violation
+            for idx in step.indices:
+                highlights[idx] = "swap"
+
+        elif step.type == StepType.COMPLETE:
+            for i in range(len(images)):
+                highlights[i] = "sorted"
+
+        # Depth color (Queen's red shades)
+        depth_color = ["#9B2335", "#b54555", "#cc6675", "#e08895", "#f0aab5"][min(depth, 4)]
+
+        html = f"""
+        <div style="
+            background: #f8f9fa;
+            border-left: 4px solid {depth_color};
+            border-radius: 0 12px 12px 0;
+            padding: 15px;
+            margin: 10px 0;
+            margin-left: {indent}px;
+        ">
+            <div style="
+                font-weight: bold;
+                color: {depth_color};
+                margin-bottom: 10px;
+                font-size: 14px;
+            ">
+                Depth {depth} | {step.description}
+            </div>
+
+            {self._create_row(images, highlights)}
+        </div>
+        """
+
+        # Add instability warning if applicable
+        if step.type == StepType.INSTABILITY_WARNING:
+            html += """
+            <div style="
+                background: #FFE4E4;
+                border: 2px solid #dc3545;
+                border-radius: 8px;
+                padding: 10px;
+                margin: 5px 0;
+                text-align: center;
+                color: #dc3545;
+                font-weight: bold;
+            ">
+                ⚠️ INSTABILITY DETECTED: Equal elements have changed order!
+            </div>
+            """
+
+        return html
+
+    def get_legend(self) -> str:
+        """Return the legend explaining Quick Sort visuals."""
+        return """
+        <div style="
+            display: flex;
+            gap: 15px;
+            justify-content: center;
+            flex-wrap: wrap;
+            padding: 10px;
+            background: #f0f0f0;
+            border-radius: 8px;
+            font-size: 12px;
+        ">
+            <span>🔴 <b>Pivot</b></span>
+            <span>🟨 <b>Comparing</b></span>
+            <span>🟥 <b>Swapping</b></span>
+            <span>🟦 <b>Partition</b></span>
+            <span>🟩 <b>Sorted</b></span>
+        </div>
+        """
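The instability this renderer warns about is easy to reproduce. A minimal sketch using a textbook Lomuto partition (last element as pivot) on `(key, tag)` pairs; this helper is illustrative, and the package's own partition scheme may differ:

```python
# Sketch: one Lomuto partition reordering two equal keys - the situation
# QuickSortRenderer flags with INSTABILITY_WARNING.
def lomuto_partition(a, lo, hi):
    """Partition a[lo..hi] in place around a[hi]'s key; return pivot index."""
    pivot = a[hi][0]
    i = lo - 1
    for j in range(lo, hi):
        if a[j][0] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

# Before: (2, "a") precedes (2, "b"). The final pivot swap moves (2, "a")
# to the back, so the two equal keys end up in the opposite relative order.
items = [(2, "a"), (2, "b"), (1, "pivot")]
lomuto_partition(items, 0, 2)
```

A stable sort (like the merge sort rendered above) would never reorder the two `2`s; one partition step here is enough to do it.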
oop_sorting_teaching/visualization/state.py ADDED
@@ -0,0 +1,64 @@
+"""
+Visualization state and configuration.
+
+Contains:
+- VisualizationState: Enum for tracking visualization state
+- VisualizationConfig: Configuration dataclass for visualization options
+"""
+
+from dataclasses import dataclass
+from enum import Enum
+
+
+class VisualizationState(Enum):
+    """
+    📚 CONCEPT: Finite State Machine
+
+    A visualization can only be in ONE of these states at a time.
+    This prevents impossible states like "playing AND paused".
+
+    BEFORE OOP (with strings):
+        state = "idle"
+        if state == "plying":  # Typo! Hard to catch bug
+            ...
+
+    AFTER OOP (with Enum):
+        state = VisualizationState.IDLE
+        if state == VisualizationState.PLYING:  # Python error! Typo caught
+            ...
+    """
+    IDLE = "idle"          # No steps loaded
+    READY = "ready"        # Steps loaded, ready to start
+    PLAYING = "playing"    # Auto-playing animation
+    PAUSED = "paused"      # Paused mid-animation
+    STEPPING = "stepping"  # Manual step-by-step mode
+    COMPLETE = "complete"  # Reached the end
+
+
+@dataclass
+class VisualizationConfig:
+    """
+    Configuration options for the visualizer.
+
+    📚 CONCEPT: Configuration Object
+
+    Instead of passing many parameters to functions, we bundle
+    related settings into a configuration object.
+
+    BEFORE:
+        def start_visualization(speed, auto_play, loop, show_stats, ...):
+
+    AFTER:
+        def start_visualization(config: VisualizationConfig):
+
+    Benefits:
+    - Easy to add new options without changing function signatures
+    - Can have sensible defaults
+    - Can save/load configurations
+    """
+    animation_speed_ms: int = 1000   # Milliseconds per step
+    auto_play: bool = False          # Start playing immediately
+    loop: bool = False               # Restart when finished
+    show_statistics: bool = True     # Show step count, comparisons, etc.
+    show_legend: bool = True         # Show color legend
+    image_size: int = 60             # Size of image thumbnails
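Using the configuration object is as simple as the docstring suggests. This sketch redeclares an equivalent dataclass (field names and defaults mirror the ones above) to show defaults plus per-call overrides via `dataclasses.replace`:

```python
# Sketch: defaults and overrides for a config object shaped like
# VisualizationConfig above (redeclared here so the example is standalone).
from dataclasses import dataclass, replace

@dataclass
class VisualizationConfig:
    animation_speed_ms: int = 1000   # Milliseconds per step
    auto_play: bool = False          # Start playing immediately
    loop: bool = False               # Restart when finished
    show_statistics: bool = True     # Show step count, comparisons, etc.
    show_legend: bool = True         # Show color legend
    image_size: int = 60             # Size of image thumbnails

default = VisualizationConfig()
# replace() copies the config, touching only the named fields -
# the function taking a config never needs a new signature.
fast = replace(default, animation_speed_ms=250, auto_play=True)
```

Adding a seventh option later means adding one field with a default; every existing call site keeps working unchanged, which is the point of the pattern.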