3v324v23 committed
Commit 67f4ecf · 0 parents (root commit)

Initial commit for MotionScope Pro (Docker deployment)

Files changed (7)
  1. .gitignore +10 -0
  2. Dockerfile +26 -0
  3. README.md +77 -0
  4. app.py +340 -0
  5. detector.py +280 -0
  6. live_run.py +70 -0
  7. requirements.txt +4 -0
.gitignore ADDED
@@ -0,0 +1,10 @@
+ venv/
+ __pycache__/
+ *.pyc
+ .DS_Store
+ .env
+ output.mp4
+ output_recorded.mp4
+ motionscope_output.mp4
+ motionscope_snapshot.jpg
+ hand_landmarker.task
Dockerfile ADDED
@@ -0,0 +1,26 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install system dependencies for OpenCV
+ RUN apt-get update && apt-get install -y \
+     libgl1-mesa-glx \
+     libglib2.0-0 \
+     wget \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements and install Python dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Download the MediaPipe model file
+ RUN wget -q -O hand_landmarker.task https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/latest/hand_landmarker.task
+
+ # Copy the application code
+ COPY . .
+
+ # Expose the port Streamlit runs on (HF Spaces uses 7860)
+ EXPOSE 7860
+
+ # Run the application
+ CMD ["streamlit", "run", "app.py", "--server.port", "7860", "--server.address", "0.0.0.0"]
README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ title: MotionScope Pro
+ emoji: 🎥
+ colorFrom: blue
+ colorTo: purple
+ sdk: docker
+ app_port: 7860
+ app_file: app.py
+ pinned: false
+ ---
+
+ # 🎥 MotionScope Pro — Movement Detector
+
+ A professional Streamlit application combining **MediaPipe hand tracking** and **background subtraction motion detection**.
+
+ ## Features
+
+ | Feature | Description |
+ |---|---|
+ | 🖐️ Hand Tracking | Real-time MediaPipe hand landmark detection |
+ | 🚗 Motion Detection | Background subtraction (MOG2) with contour filtering |
+ | ⚡ Combined Mode | Both hand tracking + motion detection simultaneously |
+ | 📹 Video Upload | Upload MP4/AVI/MOV/MKV → process → download result |
+ | 📷 Webcam Snapshot | Capture a photo and process it instantly |
+
+ ## How to Download & Run
+
+ ### Option 1: Run on Hugging Face Spaces
+ Click the **App** tab above to use the application directly in your browser!
+
+ ### Option 2: Run Locally
+
+ 1. **Clone the repository:**
+    ```bash
+    git clone https://huggingface.co/spaces/Jack1808/MotionScope-Pro
+    cd MotionScope-Pro
+    ```
+
+ 2. **Create a virtual environment (recommended):**
+    ```bash
+    python -m venv venv
+    source venv/bin/activate  # On Windows: venv\Scripts\activate
+    ```
+
+ 3. **Install dependencies:**
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 4. **Run the application:**
+    - **Web Interface:**
+      ```bash
+      streamlit run app.py
+      ```
+    - **Real-time Webcam Window:**
+      ```bash
+      python live_run.py
+      ```
+
+ ## How to Use
+
+ 1. **Sidebar** — choose detection mode and tune parameters (threshold, min area, confidence).
+ 2. **Video Upload tab** — upload a video, click **Process Video**, watch the live preview, then download the result.
+ 3. **Webcam Snapshot tab** — take a photo from your webcam and see the detected landmarks / motion overlay.
+
+ ## Project Structure
+
+ ```
+ motion_detector/
+ ├── app.py             # Streamlit UI
+ ├── detector.py        # Core MovementDetector class
+ ├── live_run.py        # Real-time webcam window
+ ├── requirements.txt   # Python dependencies
+ ├── Dockerfile         # Container build for deployment
+ └── README.md          # This file
+ ```
+
+ ## Requirements
+
+ - Python 3.10+
+ - A webcam (for the snapshot tab)
app.py ADDED
@@ -0,0 +1,340 @@
+ """
+ MotionScope Pro — Streamlit front-end
+ Run with: streamlit run app.py
+ """
+
+ import tempfile
+ import os
+ import cv2
+ import numpy as np
+ import streamlit as st
+ from detector import MovementDetector, DetectionConfig, DetectionMode
+
+ # ---------------------------------------------------------------------------
+ # Page config
+ # ---------------------------------------------------------------------------
+ st.set_page_config(
+     page_title="MotionScope Pro",
+     page_icon="🎥",
+     layout="wide",
+     initial_sidebar_state="expanded",
+ )
+
+ # ---------------------------------------------------------------------------
+ # Custom CSS — dark, polished look
+ # ---------------------------------------------------------------------------
+ st.markdown(
+     """
+     <style>
+     /* ---- Global ---- */
+     .stApp {
+         background: linear-gradient(135deg, #0f0c29, #302b63, #24243e);
+     }
+
+     /* Hero header */
+     .hero {
+         text-align: center;
+         padding: 1.5rem 0 0.5rem;
+     }
+     .hero h1 {
+         font-size: 2.6rem;
+         background: linear-gradient(90deg, #00d2ff, #3a7bd5);
+         -webkit-background-clip: text;
+         -webkit-text-fill-color: transparent;
+         margin-bottom: 0.2rem;
+     }
+     .hero p {
+         color: #b0b0cc;
+         font-size: 1.05rem;
+     }
+
+     /* Sidebar */
+     section[data-testid="stSidebar"] {
+         background: rgba(15, 12, 41, 0.95);
+         border-right: 1px solid rgba(58, 123, 213, 0.3);
+     }
+
+     /* Cards */
+     .metric-card {
+         background: rgba(255,255,255,0.06);
+         border: 1px solid rgba(255,255,255,0.08);
+         border-radius: 12px;
+         padding: 1rem 1.2rem;
+         margin-bottom: 0.8rem;
+     }
+     .metric-card h3 {
+         margin: 0 0 0.3rem;
+         font-size: 0.95rem;
+         color: #7eb8f7;
+     }
+     .metric-card .val {
+         font-size: 1.6rem;
+         font-weight: 700;
+         color: #fff;
+     }
+
+     /* Feature badges */
+     .badge-row {
+         display: flex;
+         gap: 0.6rem;
+         flex-wrap: wrap;
+         justify-content: center;
+         margin-bottom: 1.2rem;
+     }
+     .badge {
+         background: rgba(58, 123, 213, 0.15);
+         border: 1px solid rgba(58, 123, 213, 0.35);
+         border-radius: 20px;
+         padding: 0.35rem 0.9rem;
+         font-size: 0.82rem;
+         color: #a0c4ff;
+     }
+
+     /* Hide default Streamlit branding */
+     #MainMenu, footer, header {visibility: hidden;}
+     </style>
+     """,
+     unsafe_allow_html=True,
+ )
+
+ # ---------------------------------------------------------------------------
+ # Hero header
+ # ---------------------------------------------------------------------------
+ st.markdown(
+     """
+     <div class="hero">
+         <h1>🎥 MotionScope Pro</h1>
+         <p>Advanced Movement Detection &mdash; Hand Tracking &amp; Motion Analysis</p>
+     </div>
+     """,
+     unsafe_allow_html=True,
+ )
+
+ # Feature badges
+ st.markdown(
+     """
+     <div class="badge-row">
+         <span class="badge">🖐️ Hand Tracking</span>
+         <span class="badge">🚗 Motion Detection</span>
+         <span class="badge">⚡ Combined Mode</span>
+         <span class="badge">📹 Video Upload</span>
+         <span class="badge">📷 Webcam Snapshots</span>
+     </div>
+     """,
+     unsafe_allow_html=True,
+ )
+
+ # ---------------------------------------------------------------------------
+ # Sidebar — settings
+ # ---------------------------------------------------------------------------
+ with st.sidebar:
+     st.markdown("## ⚙️ Detection Settings")
+
+     mode_label = st.selectbox(
+         "Detection Mode",
+         options=[m.value for m in DetectionMode],
+         index=1,
+         help="Choose what the detector should look for.",
+     )
+     mode = DetectionMode(mode_label)
+
+     st.markdown("---")
+     st.markdown("### 🔧 Motion Parameters")
+
+     motion_threshold = st.slider(
+         "Motion threshold",
+         min_value=50, max_value=255, value=180, step=5,
+         help="Higher → less sensitive (ignores faint motion).",
+     )
+     min_contour_area = st.slider(
+         "Min object area (px²)",
+         min_value=100, max_value=10000, value=1000, step=100,
+         help="Ignore contours smaller than this area.",
+     )
+
+     st.markdown("---")
+     st.markdown("### 🖐️ Hand Parameters")
+
+     max_hands = st.slider("Max hands to detect", 1, 4, 2)
+     det_confidence = st.slider(
+         "Detection confidence", 0.1, 1.0, 0.5, 0.05,
+     )
+     track_confidence = st.slider(
+         "Tracking confidence", 0.1, 1.0, 0.5, 0.05,
+     )
+
+     st.markdown("---")
+     st.markdown(
+         "<small style='color:#666'>Built with OpenCV · MediaPipe · Streamlit</small>",
+         unsafe_allow_html=True,
+     )
+
+ # Build config from sidebar values
+ config = DetectionConfig(
+     min_detection_confidence=det_confidence,
+     min_tracking_confidence=track_confidence,
+     max_num_hands=max_hands,
+     motion_threshold=motion_threshold,
+     min_contour_area=min_contour_area,
+ )
+
+ # ---------------------------------------------------------------------------
+ # Cached detector (rebuilt only when config changes)
+ # ---------------------------------------------------------------------------
+
+ @st.cache_resource
+ def get_detector():
+     return MovementDetector()
+
+ detector = get_detector()
+ if detector.config != config:
+     detector.rebuild(config)
+
+ # ---------------------------------------------------------------------------
+ # Tabs — Video Upload | Webcam Snapshot
+ # ---------------------------------------------------------------------------
+ tab_video, tab_webcam = st.tabs(["📹 Video Upload", "📷 Webcam Snapshot"])
+
+ # ======================== VIDEO UPLOAD TAB ==============================
+ with tab_video:
+     uploaded = st.file_uploader(
+         "Upload a video file",
+         type=["mp4", "avi", "mov", "mkv"],
+         help="Supported formats: MP4, AVI, MOV, MKV",
+     )
+
+     if uploaded is not None:
+         # Save upload to a temp file
+         tfile = tempfile.NamedTemporaryFile(delete=False, suffix=".mp4")
+         tfile.write(uploaded.read())
+         tfile.flush()
+         input_path = tfile.name
+
+         # Show the original video
+         with st.expander("🎬 Original video", expanded=False):
+             st.video(input_path)
+
+         # Process button
+         if st.button("🚀 Process Video", type="primary", use_container_width=True):
+             output_path = os.path.join(tempfile.gettempdir(), "motionscope_output.mp4")
+
+             progress_bar = st.progress(0, text="Processing…")
+             frame_placeholder = st.empty()
+             metrics_placeholder = st.empty()
+
+             frame_num = 0
+
+             try:
+                 for display_frame, result_path, progress in detector.process_video(
+                     input_path, mode=mode, output_path=output_path,
+                 ):
+                     if display_frame is not None:
+                         frame_num += 1
+                         # Show every 4th frame for speed
+                         if frame_num % 4 == 0 or progress >= 1.0:
+                             frame_placeholder.image(
+                                 display_frame,
+                                 caption=f"Frame {detector.frame_count}",
+                                 use_container_width=True,
+                             )
+                             progress_bar.progress(
+                                 progress,
+                                 text=f"Processing… {int(progress * 100)}%",
+                             )
+
+                     if result_path is not None:
+                         progress_bar.progress(1.0, text="✅ Done!")
+
+                         st.success(
+                             f"Processed **{detector.frame_count}** frames successfully!"
+                         )
+
+                         # Metrics row
+                         col1, col2, col3 = st.columns(3)
+                         col1.metric("Total Frames", detector.frame_count)
+                         col2.metric("Mode", mode.value)
+                         col3.metric("Status", "✅ Complete")
+
+                         # Download button
+                         with open(result_path, "rb") as f:
+                             st.download_button(
+                                 "⬇️ Download Processed Video",
+                                 data=f,
+                                 file_name="motionscope_output.mp4",
+                                 mime="video/mp4",
+                                 use_container_width=True,
+                             )
+
+             except Exception as e:
+                 st.error(f"❌ Error during processing: {e}")
+             finally:
+                 # Cleanup temp input
+                 try:
+                     os.unlink(input_path)
+                 except OSError:
+                     pass
+     else:
+         # Empty state
+         st.markdown(
+             """
+             <div style="text-align:center; padding:3rem 0; color:#888;">
+                 <p style="font-size:3rem; margin-bottom:0.5rem;">📹</p>
+                 <p>Upload a video above to get started</p>
+             </div>
+             """,
+             unsafe_allow_html=True,
+         )
+
+ # ======================== WEBCAM SNAPSHOT TAB ===========================
+ with tab_webcam:
+     st.markdown(
+         "Take a photo with your webcam and the detector will process it instantly."
+     )
+
+     camera_input = st.camera_input("📷 Take a photo")
+
+     if camera_input is not None:
+         # Decode the image
+         file_bytes = np.frombuffer(camera_input.getvalue(), dtype=np.uint8)
+         img_bgr = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
+
+         if img_bgr is not None:
+             # Flip for mirror effect
+             img_bgr = cv2.flip(img_bgr, 1)
+
+             # Process
+             processed_bgr = detector.process_frame(img_bgr, mode)
+             processed_rgb = cv2.cvtColor(processed_bgr, cv2.COLOR_BGR2RGB)
+
+             col_orig, col_proc = st.columns(2)
+             with col_orig:
+                 st.markdown("**Original**")
+                 original_rgb = cv2.cvtColor(
+                     cv2.flip(img_bgr, 1), cv2.COLOR_BGR2RGB  # undo our flip for display
+                 )
+                 st.image(original_rgb, use_container_width=True)
+             with col_proc:
+                 st.markdown("**Processed**")
+                 st.image(processed_rgb, use_container_width=True)
+
+             # Download processed image
+             _, buf = cv2.imencode(".jpg", processed_bgr)
+             st.download_button(
+                 "⬇️ Download Processed Image",
+                 data=buf.tobytes(),
+                 file_name="motionscope_snapshot.jpg",
+                 mime="image/jpeg",
+                 use_container_width=True,
+             )
+         else:
+             st.error("Could not decode the captured image.")
+     else:
+         st.markdown(
+             """
+             <div style="text-align:center; padding:3rem 0; color:#888;">
+                 <p style="font-size:3rem; margin-bottom:0.5rem;">📷</p>
+                 <p>Click the camera button above to capture a snapshot</p>
+             </div>
+             """,
+             unsafe_allow_html=True,
+         )
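The upload tab above throttles its live preview, rendering only every 4th frame plus the final one. The gating condition can be isolated as a small pure-Python sketch (the `should_display` helper name is illustrative — app.py inlines the check rather than naming it):

```python
def should_display(frame_num: int, progress: float, stride: int = 4) -> bool:
    """Render a preview frame only every `stride` frames, plus the last one."""
    return frame_num % stride == 0 or progress >= 1.0

print(should_display(4, 0.1))   # True  (every 4th frame is shown)
print(should_display(5, 0.1))   # False (skipped to keep the UI responsive)
print(should_display(7, 1.0))   # True  (the final frame is always shown)
```

Skipping UI updates like this matters because `st.image` redraws are far more expensive than the per-frame OpenCV work.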
detector.py ADDED
@@ -0,0 +1,280 @@
+ """
+ MotionScope Pro - Core Movement Detection Engine
+ Combines MediaPipe HandLandmarker (tasks API) with background subtraction.
+ """
+
+ import os
+ import cv2
+ import numpy as np
+ import mediapipe as mp
+ from enum import Enum
+ from dataclasses import dataclass
+ from typing import Tuple, Generator
+
+ # MediaPipe tasks API aliases
+ _BaseOptions = mp.tasks.BaseOptions
+ _HandLandmarker = mp.tasks.vision.HandLandmarker
+ _HandLandmarkerOptions = mp.tasks.vision.HandLandmarkerOptions
+ _RunningMode = mp.tasks.vision.RunningMode
+
+ # Path to the hand landmarker model (shipped alongside this file)
+ _MODEL_PATH = os.path.join(os.path.dirname(__file__), "hand_landmarker.task")
+
+
+ class DetectionMode(Enum):
+     """Available detection modes."""
+     HAND_TRACKING = "Hand Tracking"
+     MOTION_DETECTION = "Motion Detection"
+     COMBINED = "Combined"
+
+
+ @dataclass
+ class DetectionConfig:
+     """Tunable parameters for detection."""
+     # MediaPipe hand settings
+     min_detection_confidence: float = 0.5
+     min_tracking_confidence: float = 0.5
+     max_num_hands: int = 2
+
+     # Motion detection settings
+     motion_threshold: int = 180
+     min_contour_area: int = 1000
+     blur_kernel_size: Tuple[int, int] = (5, 5)
+     morph_kernel_size: Tuple[int, int] = (3, 3)
+
+     # Background subtractor settings
+     bg_history: int = 500
+     bg_var_threshold: int = 16
+     bg_detect_shadows: bool = True
+
+
+ class MovementDetector:
+     """
+     Professional movement detector combining MediaPipe hands + MOG2
+     background subtraction.
+     """
+
+     def __init__(self, config: DetectionConfig | None = None):
+         self.config = config or DetectionConfig()
+         self.hand_landmarker = self._build_hand_landmarker()
+         self.back_sub = self._build_back_sub()
+         self.frame_count: int = 0
+
+     # ------------------------------------------------------------------
+     # Builder helpers
+     # ------------------------------------------------------------------
+
+     def _build_hand_landmarker(self):
+         options = _HandLandmarkerOptions(
+             base_options=_BaseOptions(model_asset_path=_MODEL_PATH),
+             running_mode=_RunningMode.IMAGE,
+             num_hands=self.config.max_num_hands,
+             min_hand_detection_confidence=self.config.min_detection_confidence,
+             min_tracking_confidence=self.config.min_tracking_confidence,
+         )
+         return _HandLandmarker.create_from_options(options)
+
+     def _build_back_sub(self):
+         return cv2.createBackgroundSubtractorMOG2(
+             history=self.config.bg_history,
+             varThreshold=self.config.bg_var_threshold,
+             detectShadows=self.config.bg_detect_shadows,
+         )
+
+     def rebuild(self, config: DetectionConfig):
+         """Rebuild internal models when the user changes settings."""
+         self.config = config
+         self.hand_landmarker.close()
+         self.hand_landmarker = self._build_hand_landmarker()
+         self.back_sub = self._build_back_sub()
+         self.frame_count = 0
+
+     # ------------------------------------------------------------------
+     # Hand detection (tasks API)
+     # ------------------------------------------------------------------
+
+     def detect_hands(self, frame: np.ndarray) -> np.ndarray:
+         """
+         Detect hands and draw landmarks + labels on *frame* (BGR).
+         Uses the MediaPipe tasks API HandLandmarker.
+         """
+         rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+         mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
+
+         result = self.hand_landmarker.detect(mp_image)
+
+         h, w, _ = frame.shape
+
+         if result.hand_landmarks:
+             for idx, landmarks in enumerate(result.hand_landmarks):
+                 # Draw connections manually since draw_landmarks expects a
+                 # NormalizedLandmarkList but we have a plain list of landmarks
+                 self._draw_hand_skeleton(frame, landmarks, w, h)
+
+                 # Label near wrist (landmark 0)
+                 wrist = landmarks[0]
+                 cx, cy = int(wrist.x * w), int(wrist.y * h)
+
+                 label = "Hand"
+                 if result.handedness and idx < len(result.handedness):
+                     label = result.handedness[idx][0].category_name
+
+                 cv2.putText(
+                     frame, label, (cx - 30, cy - 20),
+                     cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2,
+                 )
+
+         return frame
+
+     def _draw_hand_skeleton(self, frame, landmarks, w, h):
+         """Draw landmark points and connections on *frame*."""
+         # The 21-landmark hand topology (pairs of indices)
+         connections = [
+             (0, 1), (1, 2), (2, 3), (3, 4),         # Thumb
+             (0, 5), (5, 6), (6, 7), (7, 8),         # Index
+             (0, 9), (9, 10), (10, 11), (11, 12),    # Middle
+             (0, 13), (13, 14), (14, 15), (15, 16),  # Ring
+             (0, 17), (17, 18), (18, 19), (19, 20),  # Pinky
+             (5, 9), (9, 13), (13, 17),              # Palm
+         ]
+
+         # Convert normalized landmarks to pixel coordinates
+         pts = []
+         for lm in landmarks:
+             px, py = int(lm.x * w), int(lm.y * h)
+             pts.append((px, py))
+
+         # Draw connections
+         for start, end in connections:
+             cv2.line(frame, pts[start], pts[end], (0, 255, 0), 2)
+
+         # Draw landmark dots
+         for px, py in pts:
+             cv2.circle(frame, (px, py), 5, (255, 0, 128), -1)
+             cv2.circle(frame, (px, py), 5, (255, 255, 255), 1)
+
+     # ------------------------------------------------------------------
+     # Motion detection
+     # ------------------------------------------------------------------
+
+     def detect_motion(self, frame: np.ndarray) -> Tuple[np.ndarray, np.ndarray, int]:
+         """
+         Background-subtraction motion detection.
+
+         Returns
+         -------
+         processed : BGR frame with bounding boxes
+         mask : cleaned foreground mask
+         count : number of detected moving objects
+         """
+         fg_mask = self.back_sub.apply(frame)
+
+         _, mask_thresh = cv2.threshold(
+             fg_mask, self.config.motion_threshold, 255, cv2.THRESH_BINARY,
+         )
+
+         mask_blur = cv2.GaussianBlur(mask_thresh, self.config.blur_kernel_size, 0)
+
+         kernel = cv2.getStructuringElement(
+             cv2.MORPH_ELLIPSE, self.config.morph_kernel_size,
+         )
+         mask_clean = cv2.morphologyEx(mask_blur, cv2.MORPH_OPEN, kernel)
+         mask_clean = cv2.morphologyEx(mask_clean, cv2.MORPH_CLOSE, kernel)
+
+         contours, _ = cv2.findContours(
+             mask_clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE,
+         )
+
+         valid = []
+         for cnt in contours:
+             area = cv2.contourArea(cnt)
+             if area > self.config.min_contour_area:
+                 valid.append(cnt)
+                 x, y, bw, bh = cv2.boundingRect(cnt)
+                 cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 0, 255), 2)
+                 cv2.putText(
+                     frame, f"Area: {int(area)}", (x, y - 10),
+                     cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2,
+                 )
+
+         cv2.putText(
+             frame, f"Moving objects: {len(valid)}", (10, 30),
+             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2,
+         )
+         return frame, mask_clean, len(valid)
+
+     # ------------------------------------------------------------------
+     # High-level frame dispatcher
+     # ------------------------------------------------------------------
+
+     def process_frame(self, frame: np.ndarray, mode: DetectionMode) -> np.ndarray:
+         """Process a single frame according to the selected *mode*."""
+         self.frame_count += 1
+         out = frame.copy()
+
+         if mode == DetectionMode.HAND_TRACKING:
+             return self.detect_hands(out)
+         elif mode == DetectionMode.MOTION_DETECTION:
+             processed, _, _ = self.detect_motion(out)
+             return processed
+         elif mode == DetectionMode.COMBINED:
+             motion_frame, _, _ = self.detect_motion(out)
+             return self.detect_hands(motion_frame)
+         return out
+
+     # ------------------------------------------------------------------
+     # Full-video processing generator
+     # ------------------------------------------------------------------
+
+     def process_video(
+         self,
+         source: str,
+         mode: DetectionMode = DetectionMode.MOTION_DETECTION,
+         output_path: str = "output.mp4",
+     ) -> Generator[Tuple[np.ndarray | None, str | None, float], None, None]:
+         """
+         Iterate over every frame in *source*, yielding processed RGB frames.
+
+         Yields
+         ------
+         (display_frame_rgb | None, output_path | None, progress)
+         """
+         self.frame_count = 0
+         self.back_sub = self._build_back_sub()  # fresh background model
+
+         cap = cv2.VideoCapture(source)
+         if not cap.isOpened():
+             raise ValueError(f"Cannot open video: {source}")
+
+         frame_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
+         frame_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
+         fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30
+         total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) or 1
+
+         fourcc = cv2.VideoWriter_fourcc(*"mp4v")
+         out = cv2.VideoWriter(output_path, fourcc, fps, (frame_w, frame_h))
+
+         try:
+             while True:
+                 ret, frame = cap.read()
+                 if not ret:
+                     break
+
+                 processed = self.process_frame(frame, mode)
+                 out.write(processed)
+
+                 display = cv2.cvtColor(processed, cv2.COLOR_BGR2RGB)
+                 progress = min(self.frame_count / total_frames, 1.0)
+                 yield display, None, progress
+         finally:
+             cap.release()
+             out.release()
+         # Final yield kept outside the finally block: yielding inside
+         # finally raises RuntimeError if the generator is closed early.
+         yield None, output_path, 1.0
+
+     # ------------------------------------------------------------------
+     # Cleanup
+     # ------------------------------------------------------------------
+
+     def release(self):
+         """Free resources."""
+         self.hand_landmarker.close()
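The contour filter inside `detect_motion` keeps only contours whose area is strictly greater than `min_contour_area`. That selection step can be sketched without OpenCV, using plain numbers in place of contours (`keep_large` is an illustrative name, not part of detector.py):

```python
from dataclasses import dataclass


@dataclass
class DetectionConfig:
    # Same default as detector.py's DetectionConfig
    min_contour_area: int = 1000


def keep_large(areas: list, config: DetectionConfig) -> list:
    # Mirrors the strict `area > min_contour_area` comparison in
    # detect_motion: a contour exactly at the threshold is discarded.
    return [a for a in areas if a > config.min_contour_area]


print(keep_large([250.0, 1000.0, 1500.5, 4200.0], DetectionConfig()))
# [1500.5, 4200.0]
```

Note the strict comparison: raising the sidebar's "Min object area" slider to exactly an object's area drops that object from the count.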
live_run.py ADDED
@@ -0,0 +1,70 @@
+ """
+ MotionScope Pro - Live Webcam Analysis
+ Run this script to see real-time updates in a new window.
+ Usage: python live_run.py
+ """
+
+ import cv2
+ import time
+ from detector import MovementDetector, DetectionMode
+
+ def main():
+     print("Initializing...")
+     detector = MovementDetector()
+
+     # Try to open webcam
+     cap = cv2.VideoCapture(0)
+     if not cap.isOpened():
+         print("Error: Could not open webcam.")
+         return
+
+     print("Webcam started.")
+     print("Controls:")
+     print("  'q' - Quit")
+     print("  'm' - Switch Mode (Hand -> Motion -> Combined)")
+
+     mode = DetectionMode.MOTION_DETECTION
+     print(f"Current Mode: {mode.value}")
+
+     try:
+         while True:
+             ret, frame = cap.read()
+             if not ret:
+                 print("Failed to grab frame.")
+                 break
+
+             # Mirror effect
+             frame = cv2.flip(frame, 1)
+
+             # Process frame
+             processed = detector.process_frame(frame, mode)
+
+             # Draw HUD
+             cv2.putText(
+                 processed, f"Mode: {mode.value} (Press 'm' to switch)",
+                 (10, processed.shape[0] - 20),
+                 cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 0), 2
+             )
+
+             cv2.imshow("MotionScope Pro (Live)", processed)
+
+             # Handle keys
+             key = cv2.waitKey(1) & 0xFF
+             if key == ord('q'):
+                 break
+             elif key == ord('m'):
+                 # Cycle modes
+                 modes = list(DetectionMode)
+                 current_idx = modes.index(mode)
+                 next_idx = (current_idx + 1) % len(modes)
+                 mode = modes[next_idx]
+                 print(f"Switched to: {mode.value}")
+
+     finally:
+         cap.release()
+         cv2.destroyAllWindows()
+         detector.release()
+         print("Resources released.")
+
+ if __name__ == "__main__":
+     main()
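The `'m'` key handler above cycles through `DetectionMode` in declaration order, wrapping from the last mode back to the first. The wrap-around arithmetic can be factored into a standalone sketch (`next_mode` is an illustrative helper; live_run.py performs the same steps inline):

```python
from enum import Enum


class DetectionMode(Enum):
    # Same members and values as detector.py's DetectionMode
    HAND_TRACKING = "Hand Tracking"
    MOTION_DETECTION = "Motion Detection"
    COMBINED = "Combined"


def next_mode(mode: DetectionMode) -> DetectionMode:
    # Advance in declaration order; the modulo wraps COMBINED back
    # to HAND_TRACKING, exactly as the 'm' key handler does.
    modes = list(DetectionMode)
    return modes[(modes.index(mode) + 1) % len(modes)]


print(next_mode(DetectionMode.MOTION_DETECTION).value)  # Combined
print(next_mode(DetectionMode.COMBINED).value)          # Hand Tracking
```

Because `list(Enum)` preserves declaration order, adding a fourth mode to the enum extends the cycle without touching the key handler.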
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ streamlit>=1.28.0
+ opencv-python-headless>=4.8.0
+ mediapipe>=0.10.0
+ numpy>=1.24.0