BladeSzaSza committed
Commit 36537b9 · 1 Parent(s): 50e3018

added overlay video
.gradio/certificate.pem ADDED
@@ -0,0 +1,31 @@
+ -----BEGIN CERTIFICATE-----
+ MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
+ TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
+ cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
+ WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
+ ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
+ MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
+ h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
+ 0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
+ A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
+ T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
+ B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
+ B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
+ KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
+ OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
+ jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
+ qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
+ rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
+ HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
+ hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
+ ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
+ 3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
+ NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
+ ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
+ TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
+ jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
+ oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
+ 4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
+ mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
+ emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
+ -----END CERTIFICATE-----
CLAUDE.md ADDED
@@ -0,0 +1,120 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Project Overview
+
+ The Laban Movement Analysis project is a Gradio 5 custom component that performs video movement analysis using Laban Movement Analysis (LMA) principles combined with modern pose estimation models. It provides both a web UI and MCP-compatible API for AI agents.
+
+ ### Core Architecture
+
+ - **Backend**: Custom Gradio component in `backend/gradio_labanmovementanalysis/`
+ - **Frontend**: Svelte components in `frontend/` for the Gradio UI
+ - **Demo**: Standalone Gradio app in `demo/` for testing and deployment
+ - **Main Entry**: `app.py` serves as the primary entry point for Hugging Face Spaces
+
+ ### Key Components
+
+ 1. **LabanMovementAnalysis**: Main Gradio component (`labanmovementanalysis.py`)
+ 2. **Pose Estimation**: Multi-model support (MediaPipe, MoveNet, YOLO variants)
+ 3. **Notation Engine**: LMA analysis logic (`notation_engine.py`)
+ 4. **Visualizer**: Video annotation and overlay generation (`visualizer.py`)
+ 5. **Agent API**: MCP-compatible interface for AI agents (`agent_api.py`)
+ 6. **Video Processing**: Smart input handling including YouTube/Vimeo downloads (`video_downloader.py`)
+
+ ## Development Commands
+
+ ### Running the Application
+ ```bash
+ # Main application (Hugging Face Spaces compatible)
+ python app.py
+
+ # Demo version
+ cd demo && python app.py
+
+ # Alternative demo with space configuration
+ python demo/space.py
+ ```
+
+ ### Package Management
+ ```bash
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Install in development mode
+ pip install -e .
+
+ # Build package
+ python -m build
+
+ # Upload to PyPI
+ python -m twine upload dist/*
+ ```
+
+ ### Frontend Development
+ ```bash
+ cd frontend
+ npm install
+ npm run build
+ ```
+
+ ## Pose Estimation Models
+
+ The system supports 15+ pose estimation variants:
+
+ - **MediaPipe**: `mediapipe-lite`, `mediapipe-full`, `mediapipe-heavy`
+ - **MoveNet**: `movenet-lightning`, `movenet-thunder`
+ - **YOLO v8**: `yolo-v8-n`, `yolo-v8-s`, `yolo-v8-m`, `yolo-v8-l`, `yolo-v8-x`
+ - **YOLO v11**: `yolo-v11-n`, `yolo-v11-s`, `yolo-v11-m`, `yolo-v11-l`, `yolo-v11-x`
+
+ ## API Usage Patterns
+
+ ### Standard Processing
+ ```python
+ from gradio_labanmovementanalysis import LabanMovementAnalysis
+
+ analyzer = LabanMovementAnalysis()
+ result = analyzer.process(video_path, model="mediapipe-full")
+ ```
+
+ ### Agent API (MCP Compatible)
+ ```python
+ from gradio_labanmovementanalysis.agent_api import LabanAgentAPI
+
+ api = LabanAgentAPI()
+ result = await api.analyze_video(video_path, model="mediapipe-full")
+ ```
+
+ ### Enhanced Processing with Visualization
+ ```python
+ json_result, viz_video = analyzer.process_video(
+     video_path,
+     model="mediapipe-full",
+     enable_visualization=True,
+     include_keypoints=True
+ )
+ ```
+
+ ## File Organization
+
+ - **Examples**: JSON output samples in `examples/` (mediapipe.json, yolo*.json, etc.)
+ - **Version Info**: `version.py` contains package metadata
+ - **Configuration**: `pyproject.toml` for package building and dependencies
+ - **Deployment**: Both standalone (`app.py`) and demo (`demo/`) configurations
+
+ ## Important Implementation Notes
+
+ - The component inherits from Gradio's base `Component` class
+ - Video processing supports both file uploads and URL inputs (YouTube, Vimeo, direct URLs)
+ - MCP server capability is enabled via `mcp_server=True` in launch configurations
+ - Error handling includes graceful fallbacks when optional features (like the Agent API) are unavailable
+ - The system uses temporary files for video processing and cleanup
+ - JSON output includes both LMA analysis and optional raw keypoint data
+
+ ## Development Considerations
+
+ - The codebase maintains backward compatibility between demo and main app versions
+ - Component registration follows Gradio 5 patterns with proper export definitions
+ - Frontend uses modern Svelte with Gradio's component system
+ - Dependencies are managed through both requirements.txt and pyproject.toml
+ - The system is designed for both local development and cloud deployment (HF Spaces)
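CLAUDE.md's Agent API snippet uses `await` at top level; a minimal runnable sketch would wrap it in `asyncio.run` (this assumes `analyze_video` is a coroutine, as the snippet implies, and `dance.mp4` is a placeholder path):

```python
# Minimal sketch, assuming LabanAgentAPI.analyze_video is a coroutine
# as CLAUDE.md's snippet implies; "dance.mp4" is a placeholder path.
import asyncio

from gradio_labanmovementanalysis.agent_api import LabanAgentAPI


async def main() -> None:
    api = LabanAgentAPI()
    result = await api.analyze_video("dance.mp4", model="mediapipe-full")
    print(result)


asyncio.run(main())
```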
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  title: Laban Movement Analysis
- emoji: 🏃
+ emoji: 🩰
  colorFrom: purple
  colorTo: green
  app_file: app.py
@@ -59,15 +59,10 @@ try:
      MovementDirection,
      MovementIntensity
  )
-     HAS_AGENT_API = True
-
-     try:
-         agent_api = LabanAgentAPI()
-     except Exception as e:
-         print(f"Warning: Agent API not available: {e}")
-         agent_api = None
- except ImportError:
-     HAS_AGENT_API = False
+     agent_api = LabanAgentAPI()
+ except Exception as e:
+     print(f"Warning: Agent API not available: {e}")
+     agent_api = None
  # Initialize components
  try:
      analyzer = LabanMovementAnalysis(
@@ -99,21 +94,38 @@ def process_video_enhanced(video_input, model, enable_viz, include_keypoints):
          error_result = {"error": str(e)}
          return error_result, None

- def process_video_standard(video, model, enable_viz, include_keypoints):
-     """Standard video processing function."""
+ def process_video_standard(video: str, model: str, include_keypoints: bool) -> dict:
+     """
+     Processes a video file using the specified pose estimation model and returns movement analysis results.
+
+     Args:
+         video (str): Path to the video file to be analyzed.
+         model (str): The name of the pose estimation model to use (e.g., "mediapipe-full", "movenet-thunder", etc.).
+         include_keypoints (bool): Whether to include raw keypoint data in the output.
+
+     Returns:
+         dict: A dictionary containing the movement analysis results in JSON format, or an error message if processing fails.
+
+     Notes:
+         - Visualization is disabled in this standard processing function.
+         - If the input video is None, the return value will be None.
+         - If an error occurs during processing, the return value will be a dictionary with an "error" key.
+     """
      if video is None:
-         return None, None
-
+         return None
      try:
-         json_output, video_output = analyzer.process_video(
+         json_output = analyzer.process(
              video,
              model=model,
-             enable_visualization=enable_viz,
              include_keypoints=include_keypoints
          )
-         return json_output, video_output
-     except Exception as e:
-         return {"error": str(e)}, None
+         return json_output
+     except (RuntimeError, ValueError, OSError) as e:
+         return {"error": str(e)}

  # ── 4. Build UI ─────────────────────────────────────────────────
  def create_demo() -> gr.Blocks:
@@ -122,7 +134,7 @@ def create_demo() -> gr.Blocks:
          theme='gstaff/sketch',
          fill_width=True,
      ) as demo:
-
+         # gr.api(process_video_standard, api_name="process_video")  # <-- Remove from here
          # ── Hero banner ──
          gr.Markdown(
              """
@@ -214,10 +226,15 @@ def create_demo() -> gr.Blocks:
          """
      )
      return demo
-
+
+ # Register API endpoint OUTSIDE the UI
+
+ gr.api(process_video_standard, api_name="process_video")
+
  if __name__ == "__main__":
      demo = create_demo()
      demo.launch(server_name="0.0.0.0",
+                 share=True,
                  server_port=int(os.getenv("PORT", 7860)),
                  mcp_server=True)

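A `gr.api` endpoint registered like the one above is typically reachable through `gradio_client`; here is a hedged sketch (the URL is a local-dev assumption, and the endpoint name and keyword arguments are inferred from `process_video_standard`'s signature):

```python
# Sketch only: assumes the app is running locally and that gr.api
# exposed process_video_standard under the /process_video route.
from gradio_client import Client

client = Client("http://localhost:7860")
result = client.predict(
    video="path/to/video.mp4",  # local path, per the docstring above
    model="mediapipe-full",
    include_keypoints=False,
    api_name="/process_video",
)
print(result)
```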
backend/gradio_labanmovementanalysis/__pycache__/__init__.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/__init__.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/__init__.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/json_generator.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/json_generator.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/json_generator.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/labanmovementanalysis.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/labanmovementanalysis.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/labanmovementanalysis.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/notation_engine.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/notation_engine.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/notation_engine.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/pose_estimation.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/pose_estimation.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/pose_estimation.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/video_downloader.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/video_downloader.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/video_downloader.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/video_utils.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/video_utils.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/video_utils.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/__pycache__/visualizer.cpython-312.pyc CHANGED
Binary files a/backend/gradio_labanmovementanalysis/__pycache__/visualizer.cpython-312.pyc and b/backend/gradio_labanmovementanalysis/__pycache__/visualizer.cpython-312.pyc differ
 
backend/gradio_labanmovementanalysis/labanmovementanalysis.py CHANGED
@@ -283,7 +283,25 @@ class LabanMovementAnalysis(Component):
          """
          return self.process_video(video_path, **kwargs)

+     def process(self, video_input: Union[str, os.PathLike], model: str = DEFAULT_MODEL, include_keypoints: bool = False) -> Dict[str, Any]:
+         """
+         Processes a video and returns only the JSON analysis result.
+
+         Args:
+             video_input: Path to input video, video URL, or file object
+             model: Pose estimation model to use
+             include_keypoints: Whether to include keypoints in JSON
+
+         Returns:
+             dict: Movement analysis results in JSON format
+         """
+         json_output, _ = self.process_video(
+             video_input,
+             model=model,
+             enable_visualization=False,
+             include_keypoints=include_keypoints
+         )
+         return json_output

      # SkateFormer methods moved to Version 2 development
      # get_skateformer_compatibility() and get_skateformer_status_report()
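The new `process()` wrapper makes JSON-only analysis a one-liner; a small sketch (the file name is a placeholder, and the `"movement_analysis"` key path follows the `example_value()` structure shown in the .pyi below):

```python
# Sketch: JSON-only analysis via the new process() wrapper.
# "dance.mp4" is a placeholder; the key path mirrors example_value().
from gradio_labanmovementanalysis import LabanMovementAnalysis

analyzer = LabanMovementAnalysis()
result = analyzer.process("dance.mp4", model="mediapipe-full", include_keypoints=False)
print(result["movement_analysis"]["summary"])
```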
backend/gradio_labanmovementanalysis/labanmovementanalysis.pyi ADDED
@@ -0,0 +1,402 @@
+ """
+ Custom Gradio v5 component for video-based pose analysis with LMA-inspired metrics.
+ """
+
+ import gradio as gr
+ from gradio.components.base import Component
+ from typing import Dict, Any, Optional, Tuple, List, Union
+ import tempfile
+ import os
+ import numpy as np
+
+ from .video_utils import extract_frames, get_video_info
+ from .pose_estimation import get_pose_estimator
+ from .notation_engine import analyze_pose_sequence
+ from .json_generator import generate_json, format_for_display
+ from .visualizer import PoseVisualizer
+ from .video_downloader import SmartVideoInput
+
+ # Advanced features reserved for Version 2
+ # SkateFormer AI integration will be available in future release
+
+
+
+ # SkateFormerCompatibility class removed for Version 1 stability
+ # Will be reimplemented in Version 2 with enhanced AI features
+
+ from gradio.events import Dependency
+
+ class LabanMovementAnalysis(Component):
+     """
+     Gradio component for video-based pose analysis with Laban Movement Analysis metrics.
+     """
+
+     # Component metadata
+     COMPONENT_TYPE = "composite"
+     DEFAULT_MODEL = "mediapipe"
+
+     def __init__(self,
+                  default_model: str = DEFAULT_MODEL,
+                  enable_visualization: bool = True,
+                  include_keypoints: bool = False,
+
+                  label: Optional[str] = None,
+                  every: Optional[float] = None,
+                  show_label: Optional[bool] = None,
+                  container: bool = True,
+                  scale: Optional[int] = None,
+                  min_width: int = 160,
+                  interactive: Optional[bool] = None,
+                  visible: bool = True,
+                  elem_id: Optional[str] = None,
+                  elem_classes: Optional[List[str]] = None,
+                  render: bool = True,
+                  **kwargs):
+         # print("[TRACE] LabanMovementAnalysis.__init__ called")
+         """
+         Initialize the Laban Movement Analysis component.
+
+         Args:
+             default_model: Default pose estimation model ("mediapipe", "movenet", "yolo")
+             enable_visualization: Whether to generate visualization video by default
+             include_keypoints: Whether to include raw keypoints in JSON output
+
+             label: Component label
+             ... (other standard Gradio component args)
+         """
+         super().__init__(
+             label=label,
+             every=every,
+             show_label=show_label,
+             container=container,
+             scale=scale,
+             min_width=min_width,
+             interactive=interactive,
+             visible=visible,
+             elem_id=elem_id,
+             elem_classes=elem_classes,
+             render=render,
+             **kwargs
+         )
+
+         self.default_model = default_model
+         self.enable_visualization = enable_visualization
+         self.include_keypoints = include_keypoints
+         # Cache for pose estimators
+         self._estimators = {}
+
+         # Video input handler for URLs
+         self.video_input = SmartVideoInput()
+
+         # SkateFormer features reserved for Version 2
+
+     def preprocess(self, payload: Dict[str, Any]) -> Dict[str, Any]:
+         # print("[TRACE] LabanMovementAnalysis.preprocess called")
+         """
+         Preprocess input from the frontend.
+
+         Args:
+             payload: Input data containing video file and options
+
+         Returns:
+             Processed data for analysis
+         """
+         if not payload:
+             return None
+
+         # Extract video file path
+         video_data = payload.get("video")
+         if not video_data:
+             return None
+
+         # Handle different input formats
+         if isinstance(video_data, str):
+             video_path = video_data
+         elif isinstance(video_data, dict):
+             video_path = video_data.get("path") or video_data.get("name")
+         else:
+             # Assume it's a file object
+             video_path = video_data.name if hasattr(video_data, "name") else str(video_data)
+
+         # Extract options
+         options = {
+             "video_path": video_path,
+             "model": payload.get("model", self.default_model),
+             "enable_visualization": payload.get("enable_visualization", self.enable_visualization),
+             "include_keypoints": payload.get("include_keypoints", self.include_keypoints)
+         }
+
+         return options
+
+     def postprocess(self, value: Any) -> Dict[str, Any]:
+         # print("[TRACE] LabanMovementAnalysis.postprocess called")
+         """
+         Postprocess analysis results for the frontend.
+
+         Args:
+             value: Analysis results
+
+         Returns:
+             Formatted output for display
+         """
+         if value is None:
+             return {"json_output": {}, "video_output": None}
+
+         # Ensure we have the expected format
+         if isinstance(value, tuple) and len(value) == 2:
+             json_data, video_path = value
+         else:
+             json_data = value
+             video_path = None
+
+         return {
+             "json_output": json_data,
+             "video_output": video_path
+         }
+
+     def process_video(self, video_input: Union[str, os.PathLike], model: str = DEFAULT_MODEL,
+                       enable_visualization: bool = True,
+                       include_keypoints: bool = False) -> Tuple[Dict[str, Any], Optional[str]]:
+         # print(f"[TRACE] LabanMovementAnalysis.process_video called with model={model}, enable_visualization={enable_visualization}, include_keypoints={include_keypoints}")
+         """
+         Main processing function that performs pose analysis on a video.
+
+         Args:
+             video_input: Path to input video, video URL (YouTube/Vimeo), or file object
+             model: Pose estimation model to use (supports enhanced syntax like "yolo-v11-s")
+             enable_visualization: Whether to generate visualization video
+             include_keypoints: Whether to include keypoints in JSON
+
+         Returns:
+             Tuple of (analysis_json, visualization_video_path)
+         """
+         # Handle video input (local file, URL, etc.)
+         try:
+             video_path, video_metadata = self.video_input.process_input(str(video_input))
+             print(f"Processing video: {video_metadata.get('title', 'Unknown')}")
+             if video_metadata.get('platform') in ['youtube', 'vimeo']:
+                 print(f"Downloaded from {video_metadata['platform']}")
+         except Exception as e:
+             raise ValueError(f"Failed to process video input: {str(e)}")
+         # Get video metadata
+         frame_count, fps, (width, height) = get_video_info(video_path)
+
+         # Create or get pose estimator
+         if model not in self._estimators:
+             self._estimators[model] = get_pose_estimator(model)
+         estimator = self._estimators[model]
+
+         # Process video frame by frame
+         print(f"Processing {frame_count} frames with {model} model...")
+
+         all_frames = []
+         all_pose_results = []
+
+         for i, frame in enumerate(extract_frames(video_path)):
+             # Store frame if visualization is needed
+             if enable_visualization:
+                 all_frames.append(frame)
+
+             # Detect poses
+             pose_results = estimator.detect(frame)
+
+             # Update frame indices
+             for result in pose_results:
+                 result.frame_index = i
+
+             all_pose_results.append(pose_results)
+
+             # Progress indicator
+             if i % 30 == 0:
+                 print(f"Processed {i}/{frame_count} frames...")
+
+         print("Analyzing movement patterns...")
+
+         # Analyze movement
+         movement_metrics = analyze_pose_sequence(all_pose_results, fps=fps)
+
+         # Enhanced AI analysis reserved for Version 2
+         print("LMA analysis complete - advanced AI features coming in Version 2!")
+
+         # Generate JSON output
+         video_metadata = {
+             "fps": fps,
+             "width": width,
+             "height": height,
+             "frame_count": frame_count,
+             "model_info": {
+                 "name": model,
+                 "type": "pose_estimation"
+             },
+             "input_metadata": video_metadata  # Include video source metadata
+         }
+
+         json_output = generate_json(
+             movement_metrics,
+             all_pose_results if include_keypoints else None,
+             video_metadata,
+             include_keypoints=include_keypoints
+         )
+
+         # Enhanced AI analysis will be added in Version 2
+
+         # Generate visualization if requested
+         visualization_path = None
+         if enable_visualization:
+             print("Generating visualization video...")
+
+             # Create temporary output file
+             with tempfile.NamedTemporaryFile(suffix='.mp4', delete=False) as tmp:
+                 visualization_path = tmp.name
+
+             # Create visualizer
+             visualizer = PoseVisualizer(
+                 show_trails=True,
+                 show_skeleton=True,
+                 show_direction_arrows=True,
+                 show_metrics=True
+             )
+
+             # Generate overlay video
+             visualization_path = visualizer.generate_overlay_video(
+                 all_frames,
+                 all_pose_results,
+                 movement_metrics,
+                 visualization_path,
+                 fps
+             )
+
+             print(f"Visualization saved to: {visualization_path}")
+
+         return json_output, visualization_path
+
+     def __call__(self, video_path: str, **kwargs) -> Tuple[Dict[str, Any], Optional[str]]:
+         # print(f"[TRACE] LabanMovementAnalysis.__call__ called with video_path={video_path}")
+         """
+         Make the component callable for easy use.
+
+         Args:
+             video_path: Path to video file
+             **kwargs: Additional options
+
+         Returns:
+             Analysis results
+         """
+         return self.process_video(video_path, **kwargs)
+
+     def process(self, video_input: Union[str, os.PathLike], model: str = DEFAULT_MODEL, include_keypoints: bool = False) -> Dict[str, Any]:
+         """
+         Processes a video and returns only the JSON analysis result.
+
+         Args:
+             video_input: Path to input video, video URL, or file object
+             model: Pose estimation model to use
+             include_keypoints: Whether to include keypoints in JSON
+
+         Returns:
+             dict: Movement analysis results in JSON format
+         """
+         json_output, _ = self.process_video(
+             video_input,
+             model=model,
+             enable_visualization=False,
+             include_keypoints=include_keypoints
+         )
+         return json_output
+
+     # SkateFormer methods moved to Version 2 development
+     # get_skateformer_compatibility() and get_skateformer_status_report()
+     # will be available in the next major release
+
+     def cleanup(self):
+         # print("[TRACE] LabanMovementAnalysis.cleanup called")
+         """Clean up temporary files and resources."""
+         # Clean up video input handler
+         if hasattr(self, 'video_input'):
+             self.video_input.cleanup()
+
+     def example_payload(self) -> Dict[str, Any]:
+         # print("[TRACE] LabanMovementAnalysis.example_payload called")
+         """Example input payload for documentation."""
+         return {
+             "video": {"path": "/path/to/video.mp4"},
+             "model": "mediapipe",
+             "enable_visualization": True,
+             "include_keypoints": False
+         }
+
+     def example_value(self) -> Dict[str, Any]:
+         # print("[TRACE] LabanMovementAnalysis.example_value called")
+         """Example output value for documentation."""
+         return {
+             "json_output": {
+                 "analysis_metadata": {
+                     "timestamp": "2024-01-01T00:00:00",
+                     "version": "1.0.0",
+                     "model_info": {"name": "mediapipe", "type": "pose_estimation"}
+                 },
+                 "video_info": {
+                     "fps": 30.0,
+                     "duration_seconds": 5.0,
+                     "width": 1920,
+                     "height": 1080,
+                     "frame_count": 150
+                 },
+                 "movement_analysis": {
+                     "frame_count": 150,
+                     "frames": [
+                         {
+                             "frame_index": 0,
+                             "timestamp": 0.0,
+                             "metrics": {
+                                 "direction": "stationary",
+                                 "intensity": "low",
+                                 "speed": "slow",
+                                 "velocity": 0.0,
+                                 "acceleration": 0.0,
+                                 "fluidity": 1.0,
+                                 "expansion": 0.5
+                             }
+                         }
+                     ],
+                     "summary": {
+                         "direction": {
+                             "distribution": {"stationary": 50, "up": 30, "down": 20},
+                             "dominant": "stationary"
+                         },
+                         "intensity": {
+                             "distribution": {"low": 80, "medium": 15, "high": 5},
+                             "dominant": "low"
+                         }
+                     }
+                 }
+             },
+             "video_output": "/tmp/visualization.mp4"
+         }
+
+     def api_info(self) -> Dict[str, Any]:
+         # print("[TRACE] LabanMovementAnalysis.api_info called")
+         """API information for the component."""
+         return {
+             "type": "composite",
+             "description": "Video-based pose analysis with Laban Movement Analysis metrics",
+             "parameters": {
+                 "video": {"type": "file", "description": "Input video file or URL (YouTube/Vimeo)"},
+                 "model": {"type": "string", "description": "Pose model: mediapipe, movenet, or yolo variants"},
+                 "enable_visualization": {"type": "integer", "description": "Generate visualization video (1=yes, 0=no)"},
+                 "include_keypoints": {"type": "integer", "description": "Include keypoints in JSON (1=yes, 0=no)"}
+             },
+             "returns": {
+                 "json_output": {"type": "object", "description": "LMA analysis results"},
+                 "video_output": {"type": "file", "description": "Visualization video (optional)"}
+             },
+             "version_2_preview": {
+                 "planned_features": ["SkateFormer AI integration", "Enhanced movement recognition", "Real-time analysis"],
+                 "note": "Advanced AI features coming in Version 2!"
+             }
+         }
+ from typing import Callable, Literal, Sequence, Any, TYPE_CHECKING
+ from gradio.blocks import Block
+ if TYPE_CHECKING:
+     from gradio.components import Timer
+     from gradio.components.base import Component
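Since the stub above shows a standard Gradio `Component` subclass, mounting it in a Blocks app looks like any other component; a brief sketch (the constructor arguments mirror `__init__` in the stub, the layout itself is illustrative):

```python
# Sketch: mounting the custom component in a Blocks app.
# Constructor arguments mirror __init__ in the stub above.
import gradio as gr

from gradio_labanmovementanalysis import LabanMovementAnalysis

with gr.Blocks() as demo:
    laban = LabanMovementAnalysis(
        default_model="mediapipe",
        enable_visualization=True,
        include_keypoints=False,
        label="Laban Movement Analysis",
    )

demo.launch()
```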
backend/gradio_labanmovementanalysis/notation_engine.py CHANGED
@@ -17,8 +17,8 @@ class Direction(Enum):
      DOWN = "down"
      LEFT = "left"
      RIGHT = "right"
-     FORWARD = "forward"
-     BACKWARD = "backward"
+     FORWARD = "forward"    # Note: Not fully implemented with 2D keypoints
+     BACKWARD = "backward"  # Note: Not fully implemented with 2D keypoints
      STATIONARY = "stationary"


@@ -64,20 +64,20 @@ class MovementAnalyzer:
      """Analyzes pose sequences to extract LMA-style movement metrics."""

      def __init__(self, fps: float = 30.0,
-                  velocity_threshold_slow: float = 0.01,
-                  velocity_threshold_fast: float = 0.1,
-                  intensity_accel_threshold: float = 0.05):
+                  velocity_threshold_slow: float = 0.01,    # Normalized units / frame; will be scaled by fps
+                  velocity_threshold_fast: float = 0.1,     # Normalized units / frame; will be scaled by fps
+                  intensity_accel_threshold: float = 0.05):  # Normalized units / frame^2; will be scaled by fps^2 (approx)
          """
          Initialize movement analyzer.

          Args:
              fps: Frames per second of the video
-             velocity_threshold_slow: Threshold for slow movement (normalized)
-             velocity_threshold_fast: Threshold for fast movement (normalized)
-             intensity_accel_threshold: Acceleration threshold for intensity
+             velocity_threshold_slow: Threshold for slow movement (normalized units per second)
+             velocity_threshold_fast: Threshold for fast movement (normalized units per second)
+             intensity_accel_threshold: Acceleration threshold for intensity (normalized units per second^2)
          """
          self.fps = fps
-         self.frame_duration = 1.0 / fps
+         self.frame_duration = 1.0 / fps if fps > 0 else 0.0
          self.velocity_threshold_slow = velocity_threshold_slow
          self.velocity_threshold_fast = velocity_threshold_fast
          self.intensity_accel_threshold = intensity_accel_threshold
@@ -87,43 +87,56 @@ class MovementAnalyzer:
          Analyze a sequence of poses to compute movement metrics.

          Args:
-             pose_sequence: List of pose results per frame
+             pose_sequence: List of pose results per frame. Each inner list can contain
+                 multiple PoseResult if multiple people are detected.

          Returns:
-             List of movement metrics per frame
+             List of movement metrics per frame (currently for the first detected person).
          """
          if not pose_sequence:
              return []

          metrics = []
-         prev_centers = None
-         prev_velocity = None
+         # For multi-person, these would need to be dictionaries mapping person_id to values
+         prev_centers = None   # Store as {person_id: center_coords} for multi-person
+         prev_velocity = None  # Store as {person_id: velocity_value} for multi-person

          for frame_idx, frame_poses in enumerate(pose_sequence):
              if not frame_poses:
                  # No pose detected in this frame
                  metrics.append(MovementMetrics(
                      frame_index=frame_idx,
-                     timestamp=frame_idx * self.frame_duration
+                     timestamp=frame_idx * self.frame_duration if self.frame_duration else None
                  ))
                  continue

-             # For now, analyze first person only
-             # TODO: Extend to multi-person analysis
-             pose = frame_poses[0]
-
-             # Compute body center and limb positions
+             # --- CURRENT: Analyze first person only ---
+             # TODO: Extend to multi-person analysis. This would involve iterating
+             # through frame_poses and tracking metrics for each person_id.
+             pose = frame_poses[0]
+             # --- END CURRENT ---
+
+             if not pose.keypoints:  # Ensure pose object has keypoints
+                 metrics.append(MovementMetrics(
+                     frame_index=frame_idx,
+                     timestamp=frame_idx * self.frame_duration if self.frame_duration else None
+                 ))
+                 # Reset for next frame if this person was being tracked
+                 prev_centers = None   # Or prev_centers.pop(person_id, None)
+                 prev_velocity = None  # Or prev_velocity.pop(person_id, None)
+                 continue
+
+             # Compute body center
              center = self._compute_body_center(pose.keypoints)
-             limb_positions = self._get_limb_positions(pose.keypoints)

              # Initialize metrics for this frame
              frame_metrics = MovementMetrics(
                  frame_index=frame_idx,
-                 timestamp=frame_idx * self.frame_duration
+                 timestamp=frame_idx * self.frame_duration if self.frame_duration else None
              )

-             if prev_centers is not None and frame_idx > 0:
-                 # Compute displacement and velocity
+             # Displacement, velocity, etc. can only be computed if there's a previous frame's center
+             if prev_centers is not None and frame_idx > 0 and self.fps > 0:
                  displacement = (
                      center[0] - prev_centers[0],
                      center[1] - prev_centers[1]
@@ -133,102 +146,86 @@ class MovementAnalyzer:
                      displacement[0]**2 + displacement[1]**2
                  )

-                 # Velocity (normalized units per second)
                  frame_metrics.velocity = frame_metrics.total_displacement * self.fps
-
-                 # Direction
                  frame_metrics.direction = self._compute_direction(displacement)
-
-                 # Speed category
                  frame_metrics.speed = self._categorize_speed(frame_metrics.velocity)

-                 # Acceleration and intensity
                  if prev_velocity is not None:
-                     frame_metrics.acceleration = abs(
-                         frame_metrics.velocity - prev_velocity
-                     ) * self.fps
+                     # Acceleration (change in velocity per second)
+                     delta_velocity = frame_metrics.velocity - prev_velocity
+                     frame_metrics.acceleration = delta_velocity * self.fps  # (units/s)/s = units/s^2
+
                      frame_metrics.intensity = self._compute_intensity(
                          frame_metrics.acceleration,
                          frame_metrics.velocity
                      )
-
-                 # Fluidity (based on acceleration smoothness)
-                 frame_metrics.fluidity = self._compute_fluidity(
-                     frame_metrics.acceleration
-                 )
+                     frame_metrics.fluidity = self._compute_fluidity(
+                         frame_metrics.acceleration
+                     )

-             # Expansion (how spread out the pose is)
              frame_metrics.expansion = self._compute_expansion(pose.keypoints)
-
              metrics.append(frame_metrics)

-             # Update previous values
+             # Update for next iteration (for the currently tracked person)
              prev_centers = center
              prev_velocity = frame_metrics.velocity

-         # Post-process to smooth metrics if needed
          metrics = self._smooth_metrics(metrics)
-
          return metrics

      def _compute_body_center(self, keypoints: List[Keypoint]) -> Tuple[float, float]:
          """Compute the center of mass of the body."""
-         # Use major body joints for center calculation
          major_joints = ["left_hip", "right_hip", "left_shoulder", "right_shoulder"]

          x_coords = []
          y_coords = []

          for kp in keypoints:
-             if kp.name in major_joints and kp.confidence > 0.5:
+             if kp.confidence > 0.5 and kp.name in major_joints:  # Ensure kp.name is not None
                  x_coords.append(kp.x)
                  y_coords.append(kp.y)

-         if not x_coords:
-             # Fallback to all keypoints
+         if not x_coords or not y_coords:  # If no major joints, try all keypoints
              x_coords = [kp.x for kp in keypoints if kp.confidence > 0.3]
              y_coords = [kp.y for kp in keypoints if kp.confidence > 0.3]

-         if x_coords:
+         if x_coords and y_coords:  # Check if lists are non-empty
              return (np.mean(x_coords), np.mean(y_coords))
-         return (0.5, 0.5)  # Default center
+         return (0.5, 0.5)  # Default center if no reliable keypoints

      def _get_limb_positions(self, keypoints: List[Keypoint]) -> Dict[str, Tuple[float, float]]:
-         """Get positions of major limbs."""
+         """Get positions of major limbs. (Currently not heavily used beyond potential debugging)"""
          positions = {}
          for kp in keypoints:
-             if kp.confidence > 0.3:
+             if kp.confidence > 0.3 and kp.name:
                  positions[kp.name] = (kp.x, kp.y)
          return positions

      def _compute_direction(self, displacement: Tuple[float, float]) -> Direction:
          """Compute movement direction from displacement vector."""
          dx, dy = displacement
-
-         # Threshold for considering movement
-         threshold = 0.005
+         threshold = 0.005  # Normalized units per frame

          if abs(dx) < threshold and abs(dy) < threshold:
              return Direction.STATIONARY

-         # Determine primary direction
          if abs(dx) > abs(dy):
              return Direction.RIGHT if dx > 0 else Direction.LEFT
          else:
-             return Direction.DOWN if dy > 0 else Direction.UP
+             return Direction.DOWN if dy > 0 else Direction.UP  # dy positive is typically down in image coords

      def _categorize_speed(self, velocity: float) -> Speed:
-         """Categorize velocity into speed levels."""
+         """Categorize velocity into speed levels (velocity is in normalized units/sec)."""
          if velocity < self.velocity_threshold_slow:
              return Speed.SLOW
          elif velocity < self.velocity_threshold_fast:
-             return Speed.FAST
+             return Speed.MODERATE  # Corrected from Speed.FAST
          else:
              return Speed.FAST

      def _compute_intensity(self, acceleration: float, velocity: float) -> Intensity:
-         """Compute movement intensity based on acceleration and velocity."""
-         # High acceleration or high velocity indicates high intensity
+         """Compute movement intensity (accel in norm_units/sec^2, vel in norm_units/sec)."""
+         # Thresholds are relative to normalized space and per-second metrics
          if acceleration > self.intensity_accel_threshold * 2 or velocity > self.velocity_threshold_fast:
              return Intensity.HIGH
          elif acceleration > self.intensity_accel_threshold or velocity > self.velocity_threshold_slow:
@@ -238,14 +235,12 @@ class MovementAnalyzer:

      def _compute_fluidity(self, acceleration: float) -> float:
          """
-         Compute fluidity score (0-1) based on acceleration.
-         Lower acceleration = higher fluidity (smoother movement).
+         Compute fluidity score (0-1) based on acceleration (norm_units/sec^2).
+         Lower acceleration = higher fluidity.
          """
-         # Normalize acceleration to 0-1 range
-         max_accel = 0.2  # Expected maximum acceleration
-         norm_accel = min(acceleration / max_accel, 1.0)
-
-         # Invert so low acceleration = high fluidity
+         max_expected_accel = 0.2  # This is an assumption for normalization, might need tuning.
+                                   # Represents a fairly high acceleration in normalized units/sec^2.
+         norm_accel = min(abs(acceleration) / max_expected_accel, 1.0) if max_expected_accel > 0 else 0.0
          return 1.0 - norm_accel

      def _compute_expansion(self, keypoints: List[Keypoint]) -> float:
@@ -253,52 +248,62 @@ class MovementAnalyzer:
          Compute how expanded/contracted the pose is.
          Returns 0-1 where 1 is fully expanded.
          """
-         # Calculate distances between opposite limbs
          limb_pairs = [
              ("left_wrist", "right_wrist"),
              ("left_ankle", "right_ankle"),
-             ("left_wrist", "left_ankle"),
-             ("right_wrist", "right_ankle")
+             ("left_wrist", "left_ankle"),
+             ("right_wrist", "right_ankle"),
+             # Could add torso diagonals like ("left_shoulder", "right_hip")
          ]

-         kp_dict = {kp.name: kp for kp in keypoints if kp.confidence > 0.3}
-
+         kp_dict = {kp.name: kp for kp in keypoints if kp.confidence > 0.3 and kp.name}
+         if not kp_dict: return 0.5  # No reliable keypoints
+
          distances = []
-         for limb1, limb2 in limb_pairs:
-             if limb1 in kp_dict and limb2 in kp_dict:
-                 kp1 = kp_dict[limb1]
-                 kp2 = kp_dict[limb2]
-                 dist = np.sqrt((kp1.x - kp2.x)**2 + (kp1.y - kp2.y)**2)
-                 distances.append(dist)
+         for name1, name2 in limb_pairs:
+             if name1 in kp_dict and name2 in kp_dict:
+                 kp1 = kp_dict[name1]
+                 kp2 = kp_dict[name2]
+                 # Ensure coordinates are not NaN before calculation
+                 if not (np.isnan(kp1.x) or np.isnan(kp1.y) or np.isnan(kp2.x) or np.isnan(kp2.y)):
+                     dist = np.sqrt((kp1.x - kp2.x)**2 + (kp1.y - kp2.y)**2)
+                     distances.append(dist)

          if distances:
-             # Normalize by expected maximum distance
              avg_dist = np.mean(distances)
-             max_expected = 1.4  # Diagonal of normalized space
-             return min(avg_dist / max_expected, 1.0)
+             # Max expected distance (e.g., diagonal of normalized 1x1 space is sqrt(2) approx 1.414)
+             # This assumes keypoints are normalized.
+             max_possible_dist_heuristic = 1.0  # A more conservative heuristic than 1.4, as limbs rarely span the full diagonal.
+             return min(avg_dist / max_possible_dist_heuristic, 1.0) if max_possible_dist_heuristic > 0 else 0.0

-         return 0.5  # Default neutral expansion
+         return 0.5  # Default neutral expansion if no valid pairs

-     def _smooth_metrics(self, metrics: List[MovementMetrics]) -> List[MovementMetrics]:
-         """Apply smoothing to reduce noise in metrics."""
-         # Simple moving average for numeric values
+     def _smooth_metrics(self, metrics_list: List[MovementMetrics]) -> List[MovementMetrics]:
+         """Apply smoothing to reduce noise in metrics using a simple moving average."""
          window_size = 3
+         num_metrics = len(metrics_list)

-         if len(metrics) <= window_size:
-             return metrics
-
-         # Smooth velocity and acceleration
-         for i in range(window_size, len(metrics)):
-             velocities = [m.velocity for m in metrics[i-window_size:i+1]]
-             metrics[i].velocity = np.mean(velocities)
-
-             accels = [m.acceleration for m in metrics[i-window_size:i+1]]
-             metrics[i].acceleration = np.mean(accels)
-
-             fluidities = [m.fluidity for m in metrics[i-window_size:i+1]]
-             metrics[i].fluidity = np.mean(fluidities)
-
-         return metrics
+         if num_metrics <= window_size:
+             return metrics_list
+
+         smoothed_metrics_list = metrics_list[:]  # Work on a copy
+
+         # Fields to smooth
+         fields_to_smooth = ["velocity", "acceleration", "fluidity", "expansion"]
+
+         for i in range(num_metrics):
+             start_idx = max(0, i - window_size // 2)
+             end_idx = min(num_metrics, i + window_size // 2 + 1)
+             window = metrics_list[start_idx:end_idx]
+
+             if not window: continue
+
+             for field in fields_to_smooth:
+                 values = [getattr(m, field) for m in window if hasattr(m, field)]
+                 if values:
+                     setattr(smoothed_metrics_list[i], field, np.mean(values))
+
+         return smoothed_metrics_list


  def analyze_pose_sequence(pose_sequence: List[List[PoseResult]],
@@ -314,4 +319,4 @@ def analyze_pose_sequence(pose_sequence: List[List[PoseResult]],
          List of movement metrics
      """
      analyzer = MovementAnalyzer(fps=fps)
-     return analyzer.analyze_movement(pose_sequence)
+     return analyzer.analyze_movement(pose_sequence)
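The reworked `_smooth_metrics` switches from a trailing window to a centered moving average that truncates at the edges; a standalone sketch of the same windowing arithmetic (illustrative values):

```python
# Standalone sketch of the centered, edge-truncated moving average
# that the reworked _smooth_metrics applies per field (window_size=3).
import numpy as np

values = [0.0, 0.4, 0.1, 0.5, 0.2]
window_size = 3
smoothed = [
    float(np.mean(values[max(0, i - window_size // 2): i + window_size // 2 + 1]))
    for i in range(len(values))
]
print(smoothed)  # [0.2, 0.1667, 0.3333, 0.2667, 0.35] (approximately)
```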
cookies.txt ADDED
@@ -0,0 +1,12 @@
+ # Netscape HTTP Cookie File
+ # This file is generated by yt-dlp. Do not edit.
+
+ .youtube.com TRUE / FALSE 0 PREF hl=en&tz=UTC
+ .youtube.com TRUE / TRUE 0 SOCS CAI
+ .youtube.com TRUE / TRUE 1749571718 GPS 1
+ .youtube.com TRUE / TRUE 0 YSC Lynn9Gphl18
+ .youtube.com TRUE / TRUE 1765122596 VISITOR_INFO1_LIVE pOBV-yb5TwA
+ .youtube.com TRUE / TRUE 1765122596 VISITOR_PRIVACY_METADATA CgJTSxIhEh0SGwsMDg8QERITFBUWFxgZGhscHR4fICEiIyQlJiA5
+ .youtube.com TRUE / TRUE 1765121919 __Secure-ROLLOUT_TOKEN CIvW7bfurM-MgAEQm8aF7JfnjQMYuv2n7JfnjQM%3D
+ .youtube.com TRUE / TRUE 1812642596 __Secure-YT_TVFAS t=485991&s=2
+ .youtube.com TRUE / TRUE 1765122596 DEVICE_INFO ChxOelV4TkRNME5UVTROVGsyTWpnME1UazBNZz09EKSoocIGGP+iocIG
demo/app.py CHANGED
@@ -3,11 +3,11 @@
  Laban Movement Analysis – modernised Gradio Space
  Author: Csaba (BladeSzaSza)
  """
-
  import gradio as gr
  import os
- from backend.gradio_labanmovementanalysis import LabanMovementAnalysis
- # from gradio_labanmovementanalysis import LabanMovementAnalysis
+ # from backend.gradio_labanmovementanalysis import LabanMovementAnalysis
+ from gradio_labanmovementanalysis import LabanMovementAnalysis
+ from gradio_overlay_video import OverlayVideo

  # Import agent API if available
  # Initialize agent API if available
@@ -19,15 +19,10 @@ try:
      MovementDirection,
      MovementIntensity
  )
-     HAS_AGENT_API = True
-
-     try:
-         agent_api = LabanAgentAPI()
-     except Exception as e:
-         print(f"Warning: Agent API not available: {e}")
-         agent_api = None
- except ImportError:
-     HAS_AGENT_API = False
+     agent_api = LabanAgentAPI()
+ except Exception as e:
+     print(f"Warning: Agent API not available: {e}")
+     agent_api = None
  # Initialize components
  try:
      analyzer = LabanMovementAnalysis(
@@ -59,21 +54,37 @@ def process_video_enhanced(video_input, model, enable_viz, include_keypoints):
          error_result = {"error": str(e)}
          return error_result, None

- def process_video_standard(video, model, enable_viz, include_keypoints):
-     """Standard video processing function."""
+ def process_video_standard(video: str, model: str, include_keypoints: bool) -> dict:
+     """
+     Processes a video file using the specified pose estimation model and returns movement analysis results.
+
+     Args:
+         video (str): Path to the video file to be analyzed.
+         model (str): The name of the pose estimation model to use (e.g., "mediapipe-full", "movenet-thunder", etc.).
+         include_keypoints (bool): Whether to include raw keypoint data in the output.
+
+     Returns:
+         dict: A dictionary containing the movement analysis results in JSON format, or an error message if processing fails.
+
+     Notes:
+         - Visualization is disabled in this standard processing function.
+         - If the input video is None, the return value will be None.
+         - If an error occurs during processing, the return value will be a dictionary with an "error" key.
+     """
      if video is None:
-         return None, None
-
+         return None
      try:
-         json_output, video_output = analyzer.process_video(
+         json_output, _ = analyzer.process_video(
              video,
              model=model,
-             enable_visualization=enable_viz,
+             enable_visualization=False,
              include_keypoints=include_keypoints
          )
-         return json_output, video_output
-     except Exception as e:
-         return {"error": str(e)}, None
+         return json_output
+     except (RuntimeError, ValueError, OSError) as e:
+         return {"error": str(e)}

  # ── 4. Build UI ─────────────────────────────────────────────────
  def create_demo() -> gr.Blocks:
@@ -82,7 +93,7 @@ def create_demo() -> gr.Blocks:
          theme='gstaff/sketch',
          fill_width=True,
      ) as demo:
-
+         # gr.api(process_video_standard, api_name="process_video")
          # ── Hero banner ──
          gr.Markdown(
              """
@@ -130,8 +141,8 @@ def create_demo() -> gr.Blocks:
          )

          with gr.Accordion("Analysis Options", open=False):
-             enable_viz = gr.Radio([("Yes", 1), ("No", 0)], value=1, label="Visualization")
-             include_kp = gr.Radio([("Yes", 1), ("No", 0)], value=0, label="Raw Keypoints")
+             enable_viz = gr.Radio([("Create", 1), ("Dismiss", 0)], value=1, label="Visualization")
+             include_kp = gr.Radio([("Include", 1), ("Exclude", 0)], value=1, label="Raw Keypoints")

          gr.Examples(
              examples=[
@@ -156,7 +167,9 @@ def create_demo() -> gr.Blocks:
          def process_enhanced_input(file_input, url_input, model, enable_viz, include_keypoints):
              """Process either file upload or URL input."""
              video_source = file_input if file_input else url_input
-             return process_video_enhanced(video_source, model, enable_viz, include_keypoints)
+             [json_out, viz_out] = process_video_enhanced(video_source, model, enable_viz, include_keypoints)
+             overlay_video.value = (None, json_out)
+             return [json_out, viz_out]

          analyze_btn_enh.click(
              fn=process_enhanced_input,
@@ -165,18 +178,48 @@ def create_demo() -> gr.Blocks:
              api_name="analyze_enhanced"
          )

+         with gr.Tab("🎬 Overlayed Visualisation"):
+             gr.Markdown(
+                 "# 🩰 Interactive Pose Visualization\n"
+                 "## See the movement analysis in action with an interactive overlay. "
+                 "Analyze video @ 🎬 Standard Analysis tab"
+             )
+             with gr.Row(equal_height=True, min_height=400):
+                 overlay_video = OverlayVideo(
+                     value=(None, json_out),
+                     autoplay=True,
+                     interactive=False
+                 )
+
+
+         # Update overlay when JSON changes
+         def update_overlay(json_source):
+             """Update overlay video with JSON data from analysis or upload."""
+             if json_source:
+                 return OverlayVideo(value=("", json_source), autoplay=True, interactive=False)
+             return OverlayVideo(value=("", None), autoplay=True, interactive=False)
+
+         # Connect JSON output from analysis to overlay
+         json_out.change(
+             fn=update_overlay,
+             inputs=[json_out],
+             outputs=[overlay_video]
+         )
+
          # Footer
          with gr.Row():
              gr.Markdown(
                  """
                  **Built by Csaba Bolyós**
-                 [GitHub](https://github.com/bladeszasza) • [HF](https://huggingface.co/BladeSzaSza)
+                 [GitHub](https://github.com/bladeszasza) • [HF](https://huggingface.co/BladeSzaSza) • [LinkedIn](https://www.linkedin.com/in/csaba-bolyós-00a11767/)
                  """
              )
      return demo
-
+
+
  if __name__ == "__main__":
      demo = create_demo()
      demo.launch(server_name="0.0.0.0",
+                 share=True,
                  server_port=int(os.getenv("PORT", 7860)),
                  mcp_server=True)
demo/space.py CHANGED
@@ -59,15 +59,10 @@ try:
      MovementDirection,
      MovementIntensity
  )
-     HAS_AGENT_API = True
-
-     try:
-         agent_api = LabanAgentAPI()
-     except Exception as e:
-         print(f"Warning: Agent API not available: {e}")
-         agent_api = None
- except ImportError:
-     HAS_AGENT_API = False
+     agent_api = LabanAgentAPI()
+ except Exception as e:
+     print(f"Warning: Agent API not available: {e}")
+     agent_api = None
  # Initialize components
  try:
      analyzer = LabanMovementAnalysis(
@@ -99,21 +94,37 @@ def process_video_enhanced(video_input, model, enable_viz, include_keypoints):
          error_result = {"error": str(e)}
          return error_result, None

- def process_video_standard(video, model, enable_viz, include_keypoints):
-     \"\"\"Standard video processing function.\"\"\"
+ def process_video_standard(video: str, model: str, include_keypoints: bool) -> dict:
+     \"\"\"
+     Processes a video file using the specified pose estimation model and returns movement analysis results.
+
+     Args:
+         video (str): Path to the video file to be analyzed.
+         model (str): The name of the pose estimation model to use (e.g., "mediapipe-full", "movenet-thunder", etc.).
+         include_keypoints (bool): Whether to include raw keypoint data in the output.
+
+     Returns:
+         dict: A dictionary containing the movement analysis results in JSON format, or an error message if processing fails.
+
+     Notes:
+         - Visualization is disabled in this standard processing function.
+         - If the input video is None, the return value will be None.
+         - If an error occurs during processing, the return value will be a dictionary with an "error" key.
+     \"\"\"
      if video is None:
-         return None, None
-
+         return None
      try:
-         json_output, video_output = analyzer.process_video(
+         json_output, _ = analyzer.process_video(
              video,
              model=model,
-             enable_visualization=enable_viz,
+             enable_visualization=False,
              include_keypoints=include_keypoints
          )
-         return json_output, video_output
-     except Exception as e:
-         return {"error": str(e)}, None
+         return json_output
+     except (RuntimeError, ValueError, OSError) as e:
+         return {"error": str(e)}

  # ── 4. Build UI ─────────────────────────────────────────────────
  def create_demo() -> gr.Blocks:
@@ -122,7 +133,7 @@ def create_demo() -> gr.Blocks:
          theme='gstaff/sketch',
          fill_width=True,
      ) as demo:
-
+         gr.api(process_video_standard, api_name="process_video")
          # ── Hero banner ──
          gr.Markdown(
              \"\"\"
@@ -214,10 +225,12 @@ def create_demo() -> gr.Blocks:
          \"\"\"
      )
      return demo
-
+
+
  if __name__ == "__main__":
      demo = create_demo()
      demo.launch(server_name="0.0.0.0",
+                 share=True,
                  server_port=int(os.getenv("PORT", 7860)),
                  mcp_server=True)
examples/mediapipe_full.json ADDED
The diff for this file is too large to render. See raw diff
 
pyproject.toml CHANGED
@@ -8,7 +8,7 @@ build-backend = "hatchling.build"

  [project]
  name = "gradio_labanmovementanalysis"
- version = "0.0.4"
+ version = "0.0.5"
  description = "A Gradio 5 component for video movement analysis using Laban Movement Analysis (LMA) with MCP support for AI agents"
  readme = "README.md"
  license = "apache-2.0"
@@ -24,7 +24,8 @@ dependencies = [
      "ultralytics>=8.0.0",
      "tensorflow>=2.8.0",
      "tensorflow-hub>=0.12.0",
-     "yt-dlp>=2025.05.22"
+     "yt-dlp>=2025.05.22",
+     "gradio_overlay_video>=0.0.9"
  ]

  [project.optional-dependencies]