Csaba Bolyos
committed on
Commit · b0b5357
Parent(s): 2f589a5

purged unused files

Browse files:
- MCP_README.md +0 -704
- backend/requirements-mcp.txt +0 -27
- backend/requirements.txt +0 -5
- demo/app.py +1 -6
- demo/space.py +0 -85
- mcp.json +0 -57
- pyproject.toml +2 -18
- requirements.txt +2 -40
- run_mcp_server.bat +0 -28
- run_mcp_server.sh +0 -29

MCP_README.md
DELETED
@@ -1,704 +0,0 @@

# MCP & Agent Integration for Laban Movement Analysis

This project provides comprehensive MCP (Model Context Protocol) integration and agent-ready APIs for professional movement analysis with pose estimation, AI action recognition, and automation capabilities.

## 🚀 Quick Start

### 1. Install All Dependencies

```bash
# Clone the repository
git clone https://github.com/[your-repo]/labanmovementanalysis
cd labanmovementanalysis

# Install core dependencies
pip install -r backend/requirements.txt

# Install MCP and enhanced features
pip install -r backend/requirements-mcp.txt
```

### 2. Start the MCP Server

```bash
# Start the MCP server for AI assistants
python -m backend.mcp_server
```

### 3. Configure Your AI Assistant

Add to your Claude Desktop (or other MCP-compatible assistant) configuration:

```json
{
  "mcpServers": {
    "laban-movement-analysis": {
      "command": "python",
      "args": ["-m", "backend.mcp_server"],
      "env": {
        "PYTHONPATH": "/path/to/labanmovementanalysis"
      }
    }
  }
}
```
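
As a sanity check, the same server entry can be built and validated programmatically before writing it into the assistant's config file. This is a sketch: the `PYTHONPATH` value is a placeholder, and the config file location varies by assistant.

```python
import json

# Build the MCP server entry shown above (PYTHONPATH is a placeholder).
config = {
    "mcpServers": {
        "laban-movement-analysis": {
            "command": "python",
            "args": ["-m", "backend.mcp_server"],
            "env": {"PYTHONPATH": "/path/to/labanmovementanalysis"},
        }
    }
}

# Serialize and re-parse to confirm the entry is valid JSON.
text = json.dumps(config, indent=2)
parsed = json.loads(text)
server = parsed["mcpServers"]["laban-movement-analysis"]
print(server["command"], server["args"])
```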

## 🛠️ Enhanced MCP Tools

### 1. `analyze_video`
Comprehensive video analysis with enhanced features, including SkateFormer AI and multiple pose models.

**Parameters:**
- `video_path` (string): Path or URL to video (supports YouTube, Vimeo, local files)
- `model` (string, optional): Advanced pose model selection:
  - **MediaPipe**: `mediapipe-lite`, `mediapipe-full`, `mediapipe-heavy`
  - **MoveNet**: `movenet-lightning`, `movenet-thunder`
  - **YOLO**: `yolo-v8-n/s/m/l`, `yolo-v11-n/s/m/l`
- `enable_visualization` (boolean, optional): Generate an annotated video
- `include_keypoints` (boolean, optional): Include raw keypoint data
- `use_skateformer` (boolean, optional): Enable AI action recognition

**Examples:**
```
Analyze the dance video at https://youtube.com/watch?v=dQw4w9WgXcQ using SkateFormer AI
Analyze movement in video.mp4 using the yolo-v11-s model with visualization
Process the exercise video with mediapipe-full and include keypoints
```
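
Because the model names above form a fixed vocabulary, a thin client can validate a tool call before sending it. The helper below is illustrative, not part of the shipped API:

```python
# Documented model identifiers for the analyze_video tool.
VALID_MODELS = (
    ["mediapipe-lite", "mediapipe-full", "mediapipe-heavy",
     "movenet-lightning", "movenet-thunder"]
    + [f"yolo-v{v}-{s}" for v in (8, 11) for s in "nsml"]
)

def build_analyze_video_args(video_path, model="mediapipe-full", **flags):
    """Assemble a parameter dict for analyze_video, checking the model name."""
    if model not in VALID_MODELS:
        raise ValueError(f"unknown model: {model!r}")
    return {"video_path": video_path, "model": model, **flags}

args = build_analyze_video_args("video.mp4", model="yolo-v11-s",
                                enable_visualization=True)
print(args)
```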

### 2. `get_analysis_summary`
Get human-readable summaries with enhanced AI insights.

**Parameters:**
- `analysis_id` (string): ID from a previous analysis

**Enhanced Output Includes:**
- SkateFormer action recognition results
- Movement quality metrics (rhythm, complexity, symmetry)
- Temporal action segmentation
- Video source metadata (YouTube/Vimeo titles, etc.)

**Example:**
```
Get a detailed summary of analysis dance_2024-01-01T12:00:00 including AI insights
```

### 3. `list_available_models`
Comprehensive list of the 13 supported pose estimation models (3 MediaPipe, 2 MoveNet, 8 YOLO variants) with detailed specifications.

**Enhanced Model Information:**
- Performance characteristics (speed, accuracy, memory usage)
- Recommended use cases (real-time, research, production)
- Hardware requirements (CPU, GPU, memory)
- Keypoint specifications (17 COCO, 33 MediaPipe)

**Examples:**
```
What pose estimation models are available for real-time processing?
List all YOLO v11 model variants with their specifications
```
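
A client consuming `list_available_models` output might filter by use case along these lines. The record shape below is an assumption based on the fields listed above, not the tool's exact schema:

```python
# Hypothetical records mirroring the fields list_available_models describes.
models = [
    {"name": "mediapipe-lite", "keypoints": 33, "use_cases": ["real-time"]},
    {"name": "movenet-lightning", "keypoints": 17, "use_cases": ["real-time"]},
    {"name": "yolo-v11-l", "keypoints": 17, "use_cases": ["research", "production"]},
]

# Pick the models recommended for real-time processing.
realtime = [m["name"] for m in models if "real-time" in m["use_cases"]]
print(realtime)
```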

### 4. `batch_analyze`
Enhanced batch processing with parallel execution and progress tracking.

**Parameters:**
- `video_paths` (array): List of video paths/URLs (supports mixed sources)
- `model` (string, optional): Pose estimation model for all videos
- `parallel` (boolean, optional): Enable parallel processing
- `use_skateformer` (boolean, optional): Enable AI analysis for all videos
- `output_format` (string, optional): Output format ("summary", "structured", "full")

**Enhanced Features:**
- Mixed source support (local files + YouTube URLs)
- Progress tracking and partial results
- Resource management and optimization
- Failure recovery and retry logic

**Examples:**
```
Analyze all dance videos in the playlist with SkateFormer AI
Batch process exercise videos using yolo-v11-s with parallel execution
```
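
Mixed-source batches can be prepared on the client side before calling `batch_analyze`; one simple approach is to separate remote URLs from local files so downloads can be scheduled first. This is a sketch, not part of the tool itself:

```python
def split_sources(video_paths):
    """Partition a mixed batch into remote URLs and local files."""
    remote = [p for p in video_paths if p.startswith(("http://", "https://"))]
    local = [p for p in video_paths if p not in remote]
    return remote, local

remote, local = split_sources(
    ["video1.mp4", "https://youtube.com/watch?v=abc", "clip.mov"]
)
print(remote, local)
```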

### 5. `compare_movements`
Advanced movement comparison with AI-powered insights.

**Parameters:**
- `analysis_id1` (string): First analysis ID
- `analysis_id2` (string): Second analysis ID
- `comparison_type` (string, optional): Type of comparison ("basic", "detailed", "ai_enhanced")

**Enhanced Comparison Features:**
- SkateFormer action similarity analysis
- Movement quality comparisons (rhythm, complexity, symmetry)
- Temporal pattern matching
- Statistical significance testing

**Examples:**
```
Compare the movement patterns between the two dance analyses with AI insights
Detailed comparison of exercise form between beginner and expert videos
```
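
One plausible way to score how alike two analyses are is to compare their movement-quality vectors. The cosine-similarity sketch below is illustrative only; it is not necessarily how the tool computes its comparison:

```python
import math

def quality_similarity(q1, q2):
    """Cosine similarity between two movement-quality dicts over shared keys."""
    keys = sorted(set(q1) & set(q2))
    a = [q1[k] for k in keys]
    b = [q2[k] for k in keys]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

s = quality_similarity(
    {"rhythm": 0.89, "complexity": 0.76, "symmetry": 0.68},
    {"rhythm": 0.85, "complexity": 0.70, "symmetry": 0.72},
)
print(round(s, 3))
```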

### 6. `real_time_analysis` (New)
Start/stop real-time WebRTC analysis.

**Parameters:**
- `action` (string): "start" or "stop"
- `model` (string, optional): Real-time optimized model
- `stream_config` (object, optional): WebRTC configuration

**Example:**
```
Start real-time movement analysis using mediapipe-lite
```

### 7. `filter_videos_advanced` (New)
Advanced video filtering with AI-powered criteria.

**Parameters:**
- `video_paths` (array): List of video paths/URLs
- `criteria` (object): Enhanced filtering criteria, including:
  - Traditional LMA metrics (direction, intensity, fluidity)
  - SkateFormer actions (dancing, jumping, etc.)
  - Movement qualities (rhythm, complexity, symmetry)
  - Temporal characteristics (duration, segment count)

**Examples:**
```
Filter videos for high-energy dance movements with good rhythm
Find exercise videos with proper form (high fluidity and symmetry)
```
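
The criteria object mixes categorical matches with numeric thresholds. Conceptually the filter behaves like the sketch below; the field names follow the examples above, and the real implementation may differ:

```python
def passes(analysis, criteria):
    """Return True if an analysis result satisfies every criterion."""
    for key, wanted in criteria.items():
        value = analysis.get(key)
        if isinstance(wanted, (int, float)):   # numeric threshold: value >= wanted
            if value is None or value < wanted:
                return False
        elif value != wanted:                  # categorical: exact match
            return False
    return True

result = {"intensity": "high", "rhythm": 0.9, "fluidity": 0.8}
print(passes(result, {"intensity": "high", "rhythm": 0.8}))  # True
print(passes(result, {"fluidity": 0.85}))                    # False
```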

## 🤖 Enhanced Agent API

### Comprehensive Python Agent API

```python
from gradio_labanmovementanalysis import LabanMovementAnalysis
from gradio_labanmovementanalysis.agent_api import (
    LabanAgentAPI,
    PoseModel,
    MovementDirection,
    MovementIntensity,
    analyze_and_summarize
)

# Initialize with all features enabled
analyzer = LabanMovementAnalysis(
    enable_skateformer=True,
    enable_webrtc=True,
    enable_visualization=True
)

agent_api = LabanAgentAPI(analyzer=analyzer)
```

### Advanced Analysis Workflows

```python
# YouTube video analysis with AI
result = agent_api.analyze(
    "https://youtube.com/watch?v=...",
    model=PoseModel.YOLO_V11_S,
    use_skateformer=True,
    generate_visualization=True
)

# Enhanced batch processing
results = agent_api.batch_analyze(
    ["video1.mp4", "https://youtube.com/watch?v=...", "https://vimeo.com/..."],
    model=PoseModel.YOLO_V11_S,
    parallel=True,
    use_skateformer=True
)

# AI-powered movement filtering
filtered = agent_api.filter_by_movement_advanced(
    video_paths,
    skateformer_actions=["dancing", "jumping"],
    movement_qualities={"rhythm": 0.8, "complexity": 0.6},
    traditional_criteria={
        "direction": MovementDirection.UP,
        "intensity": MovementIntensity.HIGH,
        "min_fluidity": 0.7
    }
)

# Real-time analysis control
agent_api.start_realtime_analysis(model=PoseModel.MEDIAPIPE_LITE)
live_metrics = agent_api.get_realtime_metrics()
agent_api.stop_realtime_analysis()
```

### Enhanced Quick Functions

```python
from gradio_labanmovementanalysis import (
    quick_analyze_enhanced,
    analyze_and_summarize_with_ai,
    compare_videos_detailed
)

# Enhanced analysis with AI
data = quick_analyze_enhanced(
    "https://youtube.com/watch?v=...",
    model="yolo-v11-s",
    use_skateformer=True
)

# AI-powered summary
summary = analyze_and_summarize_with_ai(
    "dance_video.mp4",
    include_skateformer=True,
    detail_level="comprehensive"
)

# Detailed video comparison
comparison = compare_videos_detailed(
    "video1.mp4",
    "video2.mp4",
    include_ai_analysis=True
)
```

## 🌐 Enhanced Gradio 5 Agent Features

### Comprehensive API Endpoints

The unified Gradio 5 app exposes these endpoints, optimized for agents:

1. **`/analyze_standard`** - Basic LMA analysis
2. **`/analyze_enhanced`** - Advanced analysis with all features
3. **`/analyze_agent`** - Agent-optimized structured output
4. **`/batch_analyze`** - Efficient multiple-video processing
5. **`/filter_videos`** - Movement-based filtering
6. **`/compare_models`** - Model performance comparison
7. **`/real_time_start`** - Start WebRTC real-time analysis
8. **`/real_time_stop`** - Stop WebRTC real-time analysis

### Enhanced Gradio Client Usage

```python
from gradio_client import Client

# Connect to the unified demo
client = Client("http://localhost:7860")

# Enhanced single analysis
result = client.predict(
    video_input="https://youtube.com/watch?v=...",
    model="yolo-v11-s",
    enable_viz=True,
    use_skateformer=True,
    include_keypoints=False,
    api_name="/analyze_enhanced"
)

# Agent-optimized batch processing
batch_results = client.predict(
    files=["video1.mp4", "video2.mp4"],
    model="yolo-v11-s",
    api_name="/batch_analyze"
)

# Advanced movement filtering
filtered_results = client.predict(
    files=video_list,
    direction_filter="up",
    intensity_filter="high",
    fluidity_threshold=0.7,
    expansion_threshold=0.5,
    api_name="/filter_videos"
)

# Model comparison analysis
comparison = client.predict(
    video="test_video.mp4",
    model1="mediapipe-full",
    model2="yolo-v11-s",
    api_name="/compare_models"
)
```

## 📊 Enhanced Output Formats

### AI-Enhanced Summary Format
```
🎭 Movement Analysis Summary for "Dance Performance"
Source: YouTube (10.5 seconds, 30fps)
Model: YOLO-v11-S with SkateFormer AI

📊 Traditional LMA Metrics:
• Primary direction: up (65% of frames)
• Movement intensity: high (80% of frames)
• Average speed: fast (2.3 units/frame)
• Fluidity score: 0.85/1.00 (very smooth)
• Expansion score: 0.72/1.00 (moderately extended)

🤖 SkateFormer AI Analysis:
• Detected actions: dancing (95% confidence), jumping (78% confidence)
• Movement qualities:
  - Rhythm: 0.89/1.00 (highly rhythmic)
  - Complexity: 0.76/1.00 (moderately complex)
  - Symmetry: 0.68/1.00 (slightly asymmetric)
  - Smoothness: 0.85/1.00 (very smooth)
  - Energy: 0.88/1.00 (high energy)

⏱️ Temporal Analysis:
• 7 movement segments identified
• Average segment duration: 1.5 seconds
• Transition quality: smooth (0.82/1.00)

🎯 Overall Assessment: Excellent dance performance with high energy,
good rhythm, and smooth transitions. Slightly asymmetric, but shows
advanced movement complexity.
```

### Enhanced Structured Format
```json
{
  "success": true,
  "video_metadata": {
    "source": "youtube",
    "title": "Dance Performance",
    "duration": 10.5,
    "platform_id": "dQw4w9WgXcQ"
  },
  "model_info": {
    "pose_model": "yolo-v11-s",
    "ai_enhanced": true,
    "skateformer_enabled": true
  },
  "lma_metrics": {
    "direction": "up",
    "intensity": "high",
    "speed": "fast",
    "fluidity": 0.85,
    "expansion": 0.72
  },
  "skateformer_analysis": {
    "actions": [
      {"type": "dancing", "confidence": 0.95, "duration": 8.2},
      {"type": "jumping", "confidence": 0.78, "duration": 2.3}
    ],
    "movement_qualities": {
      "rhythm": 0.89,
      "complexity": 0.76,
      "symmetry": 0.68,
      "smoothness": 0.85,
      "energy": 0.88
    },
    "temporal_segments": 7,
    "transition_quality": 0.82
  },
  "performance_metrics": {
    "processing_time": 12.3,
    "frames_analyzed": 315,
    "keypoints_detected": 24
  }
}
```
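
An agent consuming the structured format typically extracts the top action and a few quality scores. For example, using a trimmed version of the sample payload above:

```python
import json

# Trimmed version of the structured payload shown above.
payload = json.loads("""
{
  "success": true,
  "skateformer_analysis": {
    "actions": [
      {"type": "dancing", "confidence": 0.95, "duration": 8.2},
      {"type": "jumping", "confidence": 0.78, "duration": 2.3}
    ],
    "movement_qualities": {"rhythm": 0.89, "energy": 0.88}
  }
}
""")

analysis = payload["skateformer_analysis"]
# Highest-confidence detected action.
top_action = max(analysis["actions"], key=lambda a: a["confidence"])
print(top_action["type"], analysis["movement_qualities"]["rhythm"])
```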

### Comprehensive JSON Format
Complete analysis including frame-by-frame data, SkateFormer attention maps, movement trajectories, and statistical summaries.

## 🏗️ Enhanced Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                  AI Assistant Integration                   │
│            (Claude, GPT, Local Models via MCP)              │
└─────────────────────┬───────────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────────┐
│                        MCP Server                           │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │
│ │   Video     │ │  Enhanced   │ │       Real-time         │ │
│ │  Analysis   │ │   Batch     │ │        WebRTC           │ │
│ │   Tools     │ │ Processing  │ │       Analysis          │ │
│ └─────────────┘ └─────────────┘ └─────────────────────────┘ │
└─────────────────────┬───────────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────────┐
│                Enhanced Agent API Layer                      │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │
│ │  Movement   │ │ AI-Enhanced │ │       Advanced          │ │
│ │  Filtering  │ │ Comparisons │ │      Workflows          │ │
│ └─────────────┘ └─────────────┘ └─────────────────────────┘ │
└─────────────────────┬───────────────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────────────┐
│                  Core Analysis Engine                        │
│                                                              │
│ 📹 Video Input    🤖 Pose Models    🎭 SkateFormer AI        │
│ ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐   │
│ │Local Files  │  │MediaPipe(3) │  │ Action Recognition  │   │
│ │YouTube URLs │  │MoveNet(2)   │  │ Movement Qualities  │   │
│ │Vimeo URLs   │  │YOLO(8)      │  │ Temporal Segments   │   │
│ │Direct URLs  │  │             │  │ Attention Analysis  │   │
│ └─────────────┘  └─────────────┘  └─────────────────────┘   │
│                                                              │
│ 📊 LMA Engine    📹 WebRTC        🎨 Visualization           │
│ ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐   │
│ │Direction    │  │Live Camera  │  │ Pose Overlays       │   │
│ │Intensity    │  │Real-time    │  │ Motion Trails       │   │
│ │Speed/Flow   │  │Sub-100ms    │  │ Metric Displays     │   │
│ │Expansion    │  │Adaptive FPS │  │ AI Visualizations   │   │
│ └─────────────┘  └─────────────┘  └─────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
```

## 📝 Advanced Agent Workflows

### 1. Comprehensive Dance Analysis Pipeline
```python
# Multi-source dance video analysis
videos = [
    "local_dance.mp4",
    "https://youtube.com/watch?v=dance1",
    "https://vimeo.com/dance2"
]

# Batch analyze with AI
results = agent_api.batch_analyze(
    videos,
    model=PoseModel.YOLO_V11_S,
    use_skateformer=True,
    parallel=True
)

# Filter for high-quality performances
excellent_dances = agent_api.filter_by_movement_advanced(
    videos,
    skateformer_actions=["dancing"],
    movement_qualities={
        "rhythm": 0.8,
        "complexity": 0.7,
        "energy": 0.8
    },
    traditional_criteria={
        "intensity": MovementIntensity.HIGH,
        "min_fluidity": 0.75
    }
)

# Generate a comprehensive report
report = agent_api.generate_analysis_report(
    results,
    include_comparisons=True,
    include_recommendations=True
)
```

### 2. Real-time Exercise Form Checker
```python
# Start real-time analysis
agent_api.start_realtime_analysis(
    model=PoseModel.MEDIAPIPE_FULL,
    enable_skateformer=True
)

# Monitor form in real time
while exercise_in_progress:
    metrics = agent_api.get_realtime_metrics()

    # Check form quality
    if metrics["fluidity"] < 0.6:
        send_feedback("Improve movement smoothness")

    if metrics["symmetry"] < 0.7:
        send_feedback("Balance left and right movements")

    time.sleep(0.1)  # 10 Hz monitoring

# Stop and get session summary
agent_api.stop_realtime_analysis()
session_summary = agent_api.get_session_summary()
```

### 3. Movement Pattern Research Workflow
```python
# Large-scale analysis for research
research_videos = get_research_dataset()

# Batch process with comprehensive analysis
results = agent_api.batch_analyze(
    research_videos,
    model=PoseModel.YOLO_V11_L,  # High accuracy for research
    use_skateformer=True,
    include_keypoints=True,      # Full data for research
    parallel=True
)

# Statistical analysis
patterns = agent_api.extract_movement_patterns(
    results,
    pattern_types=["temporal", "spatial", "quality"],
    clustering_method="hierarchical"
)

# Generate research insights
insights = agent_api.generate_research_insights(
    patterns,
    include_visualizations=True,
    statistical_tests=True
)
```

## 🔧 Advanced Configuration & Customization

### Environment Variables

```bash
# Core configuration
export LABAN_DEFAULT_MODEL="mediapipe-full"
export LABAN_CACHE_DIR="/path/to/cache"
export LABAN_MAX_WORKERS=4

# Enhanced features
export LABAN_ENABLE_SKATEFORMER=true
export LABAN_ENABLE_WEBRTC=true
export LABAN_SKATEFORMER_MODEL_PATH="/path/to/skateformer"

# Performance tuning
export LABAN_GPU_ENABLED=true
export LABAN_BATCH_SIZE=8
export LABAN_REALTIME_FPS=30

# Video download configuration
export LABAN_YOUTUBE_QUALITY="720p"
export LABAN_MAX_DOWNLOAD_SIZE="500MB"
export LABAN_TEMP_DIR="/tmp/laban_downloads"
```
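
In Python, these variables can be read with fallbacks matching the values shown above. The loader below is a minimal sketch, not a shipped helper; the defaults are assumptions taken from the example exports:

```python
import os

def load_laban_config(env=os.environ):
    """Read a subset of LABAN_* settings, falling back to documented defaults."""
    return {
        "default_model": env.get("LABAN_DEFAULT_MODEL", "mediapipe-full"),
        "max_workers": int(env.get("LABAN_MAX_WORKERS", "4")),
        "enable_skateformer": env.get("LABAN_ENABLE_SKATEFORMER", "false").lower() == "true",
        "realtime_fps": int(env.get("LABAN_REALTIME_FPS", "30")),
    }

# Pass a dict to test without touching the real environment.
cfg = load_laban_config({"LABAN_MAX_WORKERS": "8", "LABAN_ENABLE_SKATEFORMER": "true"})
print(cfg)
```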

### Custom MCP Tools

```python
# Add a custom MCP tool
from backend.mcp_server import server

@server.tool("custom_movement_analysis")
async def custom_analysis(
    video_path: str,
    custom_params: dict
) -> dict:
    """Custom movement analysis with specific parameters."""
    # Your custom implementation
    return results

# Register enhanced filters
@server.tool("filter_by_sport_type")
async def filter_by_sport(
    videos: list,
    sport_type: str
) -> dict:
    """Filter videos by detected sport type using SkateFormer."""
    # Implementation using SkateFormer sport classification
    return filtered_videos
```

### WebRTC Configuration

```python
# Custom WebRTC configuration
webrtc_config = {
    "video_constraints": {
        "width": 1280,
        "height": 720,
        "frameRate": 30
    },
    "processing_config": {
        "max_latency_ms": 100,
        "quality_adaptation": True,
        "model_switching": True
    }
}

agent_api.configure_webrtc(webrtc_config)
```

## 🤝 Contributing to Agent Features

### Adding New MCP Tools

1. Define the tool in `backend/mcp_server.py`
2. Implement core logic in the agent API
3. Add comprehensive documentation
4. Include usage examples
5. Write integration tests

### Extending the Agent API

1. Add methods to the `LabanAgentAPI` class
2. Ensure compatibility with existing workflows
3. Add structured output formats
4. Include error handling and validation
5. Update documentation

### Enhancing SkateFormer Integration

1. Extend action recognition types
2. Add custom movement quality metrics
3. Implement temporal analysis features
4. Add visualization components
5. Validate with research datasets

## 📚 Resources & References

- [MCP Specification](https://github.com/anthropics/mcp)
- [SkateFormer Research Paper](https://kaist-viclab.github.io/SkateFormer_site/)
- [Gradio 5 Documentation](https://www.gradio.app/docs)
- [Unified Demo Application](demo/app.py)
- [Core Component Code](backend/gradio_labanmovementanalysis/)

## 🎯 Production Deployment

### Docker Deployment

```dockerfile
FROM python:3.9-slim

COPY . /app
WORKDIR /app

RUN pip install -r backend/requirements.txt
RUN pip install -r backend/requirements-mcp.txt

EXPOSE 7860 8080

CMD ["python", "-m", "backend.mcp_server"]
```

### Kubernetes Configuration

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laban-mcp-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: laban-mcp
  template:
    metadata:
      labels:
        app: laban-mcp
    spec:
      containers:
      - name: mcp-server
        image: laban-movement-analysis:latest
        ports:
        - containerPort: 8080
        env:
        - name: LABAN_MAX_WORKERS
          value: "2"
        - name: LABAN_ENABLE_SKATEFORMER
          value: "true"
```

---

**🤖 Transform your AI assistant into a movement analysis expert with comprehensive MCP integration and agent-ready automation.**

backend/requirements-mcp.txt
DELETED
@@ -1,27 +0,0 @@

# MCP Server Dependencies
mcp>=1.0.0
aiofiles>=23.0.0
httpx>=0.24.0

# Core dependencies (included from main requirements)
gradio>=5.0,<6.0
opencv-python>=4.8.0
numpy>=1.24.0,<2.0.0  # Pin to 1.x for compatibility with MediaPipe/pandas
mediapipe>=0.10.0
tensorflow>=2.13.0  # For MoveNet
tensorflow-hub>=0.14.0  # For MoveNet models
ultralytics>=8.0.0  # For YOLO v8/v11
torch>=2.0.0
torchvision>=0.15.0

# Video platform support
yt-dlp>=2023.7.6  # YouTube/Vimeo downloads
requests>=2.31.0  # Direct video downloads

# Enhanced model support
transformers>=4.35.0
accelerate>=0.24.0  # For model optimization

# WebRTC support (official Gradio approach)
gradio-webrtc  # Official WebRTC component for Gradio
twilio>=8.2.0  # TURN servers for cloud deployment (optional)
backend/requirements.txt
CHANGED
```diff
@@ -1,8 +1,3 @@
-# Core Gradio and UI (Updated to latest stable version)
-pydantic==2.10.6
-gradio[oauth]>=5.23.2
-gradio-webrtc>=0.0.31
-
 # Computer Vision and Pose Estimation
 opencv-python>=4.8.0
 mediapipe>=0.10.21
```
demo/app.py
CHANGED
```diff
@@ -4,8 +4,7 @@ Laban Movement Analysis – modernised Gradio Space
 Author: Csaba (BladeSzaSza)
 """
 
-import
-from pathlib import Path
+import gradio as gr
 
 
 # ── 3. Dummy backend for local dev (replace with real fn) ───────
@@ -70,7 +69,3 @@ def create_demo() -> gr.Blocks:
     )
     return demo
 
-# ── 5. Launch ───────────────────────────────────────────────────
-if __name__ == "__main__":
-    demo = create_demo()
-    demo.launch(mcp_server=True, show_api=True)
```
demo/space.py
DELETED
```diff
@@ -1,85 +0,0 @@
-import os, gradio as gr
-from pathlib import Path
-from typing import Dict, Any, Optional, Tuple
-
-# —————————————————— backend hooks (no change) ——————————————————
-from space_backend import (
-    process_video_standard,  # your existing processing functions
-)
-
-
-def create_unified_demo() -> gr.Blocks:
-    """Modernised UI using latest Gradio layout primitives."""
-    with gr.Blocks(
-        title="Laban Movement Analysis – Complete Suite",
-        theme='gstaff/sketch',
-        fill_width=True,  # use full viewport width
-    ) as demo:
-
-        # ─── Hero ──────────────────────────────────────────────────
-        gr.HTML(
-            """
-            <div class="main-header">
-                <h1>🎭 Laban Movement Analysis – Complete Suite</h1>
-                <p>Professional movement analysis with pose estimation, AI action recognition & real-time processing.</p>
-                <p style="font-size:0.85rem;opacity:0.9">v0.01-beta • WebRTC • 20+ Pose Models • MCP Integration</p>
-            </div>
-            """
-        )
-
-        # ─── Single-screen workspace ──────────────────────────────
-        with gr.Row(equal_height=True):
-
-            # ——— Input column ——————————————————————
-            with gr.Column(scale=1, min_width=260, elem_id="input-pane"):
-                video_in = gr.Video(label="Upload Video", sources=["upload"], format="mp4")
-                model_sel = gr.Dropdown(
-                    ["mediapipe", "movenet", "yolo"], value="mediapipe", label="Pose Model"
-                )
-                with gr.Accordion("Options", open=False):
-                    enable_viz = gr.Radio(
-                        [("Yes", 1), ("No", 0)], value=1, label="Visualization"
-                    )
-                    include_kp = gr.Radio(
-                        [("Yes", 1), ("No", 0)], value=0, label="Raw Keypoints"
-                    )
-                analyze_btn = gr.Button("Analyze Movement", variant="primary")
-                gr.Examples(
-                    [["examples/balette.mp4"]],
-                    inputs=video_in,
-                    label="Example Video"
-                )
-
-            # ——— Output column ——————————————————————
-            with gr.Column(scale=2, min_width=320, elem_id="output-pane"):
-                viz_out = gr.Video(label="Annotated Video")
-                with gr.Accordion("Raw JSON", open=False):
-                    json_out = gr.JSON(label="Movement Analysis", elem_classes=["json-output"])
-
-        # wire up
-        analyze_btn.click(
-            fn=process_video_standard,
-            inputs=[video_in, model_sel, enable_viz, include_kp],
-            outputs=[json_out, viz_out],
-            api_name="analyze_standard"
-        )
-
-        # ─── Footer ───────────────────────────────────────────────
-        gr.HTML(
-            """
-            <div class="author-info">
-                Created by Csaba Bolyós •
-                <a href="https://github.com/bladeszasza">GitHub</a> •
-                <a href="https://huggingface.co/BladeSzaSza">Hugging Face</a>
-            </div>
-            """
-        )
-
-    return demo
-
-
-# --- Launch when run directly -------------------------------------------------
-if __name__ == "__main__":
-    demo = create_unified_demo()
-    demo.launch(server_name="0.0.0.0",
-                mcp_server=True, server_port=int(os.getenv("PORT", 7860)))
```
mcp.json
DELETED
```diff
@@ -1,57 +0,0 @@
-{
-  "mcpServers": {
-    "laban-movement-analysis": {
-      "command": "python",
-      "args": ["-m", "backend.mcp_server"],
-      "env": {
-        "PYTHONPATH": "."
-      },
-      "schema": {
-        "name": "Laban Movement Analysis",
-        "description": "Analyze human movement in videos using pose estimation and Laban Movement Analysis metrics",
-        "version": "1.0.0",
-        "tools": [
-          {
-            "name": "analyze_video",
-            "description": "Analyze movement in a video file",
-            "parameters": {
-              "video_path": "string",
-              "model": "string (optional)",
-              "enable_visualization": "boolean (optional)",
-              "include_keypoints": "boolean (optional)"
-            }
-          },
-          {
-            "name": "get_analysis_summary",
-            "description": "Get human-readable summary of analysis",
-            "parameters": {
-              "analysis_id": "string"
-            }
-          },
-          {
-            "name": "list_available_models",
-            "description": "List available pose estimation models",
-            "parameters": {}
-          },
-          {
-            "name": "batch_analyze",
-            "description": "Analyze multiple videos in batch",
-            "parameters": {
-              "video_paths": "array of strings",
-              "model": "string (optional)",
-              "parallel": "boolean (optional)"
-            }
-          },
-          {
-            "name": "compare_movements",
-            "description": "Compare movement patterns between videos",
-            "parameters": {
-              "analysis_id1": "string",
-              "analysis_id2": "string"
-            }
-          }
-        ]
-      }
-    }
-  }
-}
```
pyproject.toml
CHANGED
```diff
@@ -4,28 +4,12 @@ version = "0.0.3"
 description = "A Gradio 5 component for video movement analysis using Laban Movement Analysis (LMA) with MCP support for AI agents"
 readme = "README.md"
 license = "apache-2.0"
-requires-python = ">=3.12"
 authors = [{ name = "Csaba Bolyós", email = "bladeszasza@gmail.com" }]
 keywords = ["gradio-custom-component", "gradio-5", "laban-movement-analysis", "LMA", "pose-estimation", "movement-analysis", "mcp", "ai-agents", "webrtc"]
 # Core dependencies
-requires-python = ">=3.
+requires-python = ">=3.10"
 dependencies = [
-    "gradio[mcp]>=5.
+    "gradio[mcp]>=5.33.0",
     "mcp>=1.9.0"
 ]
-classifiers = [
-    'Development Status :: 4 - Beta',
-    'Operating System :: OS Independent',
-    'Programming Language :: Python :: 3',
-    'Programming Language :: Python :: 3 :: Only',
-    'Programming Language :: Python :: 3.12',
-    'Topic :: Scientific/Engineering',
-    'Topic :: Scientific/Engineering :: Artificial Intelligence',
-    'Topic :: Scientific/Engineering :: Visualization',
-]
-
-[tool.hatch.build]
-artifacts = ["/backend/gradio_labanmovementanalysis/templates", "*.pyi"]
 
-[tool.hatch.build.targets.wheel]
-packages = ["/backend/gradio_labanmovementanalysis"]
```
requirements.txt
CHANGED
```diff
@@ -3,48 +3,10 @@
 # Heavy Beta Version
 
 # Core Gradio and UI (Updated to latest stable version)
-
-
-gradio-webrtc>=0.0.31
+gradio[mcp]>=5.23.2
+mcp>=1.9.0
 
 # Computer Vision and Pose Estimation
 opencv-python>=4.8.0
 mediapipe>=0.10.21
 ultralytics>=8.0.0
-
-# Scientific Computing
-numpy>=1.21.0,<2.0.0
-scipy>=1.7.0
-pandas>=1.3.0
-
-# Image and Video Processing
-Pillow>=8.3.0
-imageio>=2.19.0
-imageio-ffmpeg>=0.4.7
-moviepy>=1.0.3
-
-# Machine Learning
-torch>=2.0.0
-torchvision>=0.15.0
-tensorflow>=2.10.0
-
-# WebRTC and Streaming
-twilio>=8.2.0
-aiortc>=1.4.0
-av>=10.0.0
-
-# Utilities
-requests>=2.28.0
-yt-dlp>=2023.1.6
-tqdm>=4.64.0
-matplotlib>=3.5.0
-seaborn>=0.11.0
-
-# Development and Deployment
-python-multipart>=0.0.5
-uvicorn>=0.18.0
-fastapi>=0.95.0
-
-# Optional WebRTC dependencies
-aiohttp>=3.8.0
-websockets>=10.0
```
run_mcp_server.bat
DELETED
```diff
@@ -1,28 +0,0 @@
-@echo off
-REM Run script for Laban Movement Analysis MCP Server (Windows)
-
-echo 🎭 Starting Laban Movement Analysis MCP Server...
-echo.
-
-REM Check if virtual environment exists
-if exist "venv\Scripts\activate.bat" (
-    echo Activating virtual environment...
-    call venv\Scripts\activate.bat
-) else if exist ".venv\Scripts\activate.bat" (
-    echo Activating virtual environment...
-    call .venv\Scripts\activate.bat
-)
-
-REM Install dependencies if needed
-echo Checking dependencies...
-pip install -q -r backend\requirements-mcp.txt
-
-REM Set Python path
-set PYTHONPATH=%PYTHONPATH%;%cd%
-
-REM Run MCP server
-echo.
-echo Starting MCP server...
-echo Use Ctrl+C to stop the server
-echo.
-python -m backend.mcp_server
```
run_mcp_server.sh
DELETED
```diff
@@ -1,29 +0,0 @@
-#!/bin/bash
-
-# Run script for Laban Movement Analysis MCP Server
-
-echo "🎭 Starting Laban Movement Analysis MCP Server..."
-echo ""
-
-# Check if virtual environment exists
-if [ -d "venv" ]; then
-    echo "Activating virtual environment..."
-    source venv/bin/activate
-elif [ -d ".venv" ]; then
-    echo "Activating virtual environment..."
-    source .venv/bin/activate
-fi
-
-# Install dependencies if needed
-echo "Checking dependencies..."
-pip install -q -r backend/requirements-mcp.txt
-
-# Set Python path
-export PYTHONPATH="${PYTHONPATH}:$(pwd)"
-
-# Run MCP server
-echo ""
-echo "Starting MCP server..."
-echo "Use Ctrl+C to stop the server"
-echo ""
-python -m backend.mcp_server
```