---
title: 3D Trajectory Tracker
emoji: 🎯
colorFrom: blue
colorTo: purple
sdk: docker
app_file: app.py
pinned: false
---

# 3D Trajectory Tracker

A web application that tracks objects in videos and visualizes their movement in 3D space using computer vision and Three.js.

## Features

- 🎥 Video upload and processing
- 🎯 Automatic object detection and tracking
- 📊 Real-time trajectory visualization in 3D
- 🎬 Playback controls for trajectory animation
- 📈 Statistics and analytics dashboard

## Technology Stack

- **Backend:** FastAPI, OpenCV, NumPy
- **Frontend:** Vanilla JavaScript, Three.js
- **Computer Vision:** Background subtraction for object detection
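
Background subtraction detects moving objects by comparing each frame against a model of the static scene and flagging pixels that differ. The app relies on OpenCV's background subtractors; as a minimal illustration of the idea, here is a numpy-only sketch using a fixed background and a simple intensity threshold (the real subtractors adapt the background over time):

```python
import numpy as np

def moving_mask(background, frame, threshold=25):
    """Return a boolean mask of pixels that differ from the background.

    background, frame: 2-D grayscale arrays of equal shape (uint8).
    threshold: minimum absolute intensity difference to count as motion.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# A static 8x8 background, with one bright 2x2 "object" in the new frame
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200  # the moving object

mask = moving_mask(background, frame)
print(mask.sum())  # 4 pixels flagged as moving
```

Contours of the flagged regions then yield the per-frame object centroids that the tracker links into trajectories.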

## Project Structure

```
trajectory-tracker/
├── backend/
│   ├── app.py              # FastAPI main application
│   ├── requirements.txt    # Python dependencies
│   ├── tracker.py          # Object tracking logic
│   └── utils.py            # Helper functions
├── frontend/
│   ├── index.html          # Main HTML file
│   ├── app.js              # Frontend logic
│   ├── styles.css          # Styling
│   └── visualizer.js       # 3D visualization with Three.js
├── uploads/                # Temporary video uploads (auto-created)
├── outputs/                # Processed trajectory data (auto-created)
└── README.md               # Documentation
```

## Installation & Setup

### Local Development (VS Code)

1. **Clone or create the project structure**

   ```bash
   mkdir trajectory-tracker
   cd trajectory-tracker
   ```

2. **Set up the backend**

   ```bash
   # Create directories
   mkdir backend frontend uploads outputs

   # Install Python dependencies
   pip install -r backend/requirements.txt
   ```

3. **Run the application**

   ```bash
   # From the project root directory
   python backend/app.py
   ```

4. **Access the application**

   Open a browser to: http://localhost:7860

### Hugging Face Spaces Deployment

1. **Create a new Space**

2. **Create a `Dockerfile` in the project root**

   ```dockerfile
   FROM python:3.9

   WORKDIR /app

   # Install system dependencies
   RUN apt-get update && apt-get install -y \
       libgl1-mesa-glx \
       libglib2.0-0 \
       && rm -rf /var/lib/apt/lists/*

   # Copy requirements and install
   COPY backend/requirements.txt .
   RUN pip install --no-cache-dir -r requirements.txt

   # Copy application files
   COPY backend/ ./backend/
   COPY frontend/ ./frontend/

   # Create directories
   RUN mkdir -p uploads outputs

   # Expose port
   EXPOSE 7860

   # Run the application
   CMD ["python", "backend/app.py"]
   ```

3. **Push to Hugging Face**

   ```bash
   git init
   git add .
   git commit -m "Initial commit"
   git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
   git push -u origin main
   ```
    

## Usage

1. **Upload Video**
   - Click the upload box or drag and drop a video file
   - Supported formats: MP4, AVI, MOV, MKV
2. **Process Video**
   - Click the "Process Video" button
   - Wait for processing to complete (may take a few moments)
3. **View Results**
   - See tracking statistics (objects tracked, frames, FPS, duration)
   - View the 3D visualization of trajectories
   - Use the playback controls:
     - ▶ Play/Pause: animate the trajectories
     - ↻ Reset: return to the start
     - Show Trails: toggle trajectory lines
4. **Interact with the 3D View**
   - Click and drag to rotate the camera
   - Scroll to zoom in/out

## Configuration

### Detection Methods

In `tracker.py`, you can choose between detection methods:

```python
# Background subtraction (default)
tracker = VideoTracker('video.mp4', detection_method='background')

# Color-based detection
tracker = VideoTracker('video.mp4', detection_method='color')
```
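
Color-based detection typically thresholds pixels in HSV space (OpenCV's `cv2.inRange` is the usual tool). The exact bounds used in `tracker.py` aren't shown here; as an illustration, the idea in plain numpy, with example bounds for a red-ish hue:

```python
import numpy as np

def color_mask(hsv, lower, upper):
    """Boolean mask of pixels whose HSV values all fall inside [lower, upper].

    hsv: array of shape (H, W, 3); lower/upper: 3-element bounds.
    Equivalent in spirit to cv2.inRange.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    return np.all((hsv >= lower) & (hsv <= upper), axis=-1)

# One red-ish pixel (low hue) among blue-ish pixels (hue ~120)
hsv = np.full((4, 4, 3), (120, 200, 200), dtype=np.uint8)
hsv[1, 1] = (5, 220, 210)

mask = color_mask(hsv, lower=(0, 100, 100), upper=(10, 255, 255))
print(mask.sum())  # 1
```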

### Tracking Parameters

Adjust in `tracker.py`:

- `area > 500`: minimum object size (pixels)
- `max_distance=50`: maximum movement between frames (pixels)
- `len(points) > 5`: minimum trajectory length (frames)
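
To see how a `max_distance` gate works, here is a hedged sketch of greedy nearest-neighbour matching between the centroids of consecutive frames; the function name and exact association logic in `tracker.py` may differ:

```python
import math

def match_centroids(prev, curr, max_distance=50):
    """Greedily match each previous centroid to its nearest new centroid.

    prev, curr: lists of (x, y) tuples.
    Returns a dict mapping prev-index -> curr-index; centroids farther
    than max_distance apart stay unmatched (treated as lost/new objects).
    """
    matches = {}
    used = set()
    for i, (px, py) in enumerate(prev):
        best_j, best_d = None, max_distance
        for j, (cx, cy) in enumerate(curr):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches

prev = [(10, 10), (100, 100)]
curr = [(12, 11), (300, 300)]   # second object jumped too far
print(match_centroids(prev, curr))  # {0: 0}
```

Raising `max_distance` tolerates faster objects but risks identity swaps when objects pass near each other; lowering it does the opposite.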

## API Endpoints

- `GET /` - Main application interface
- `POST /api/upload` - Upload and process a video
- `GET /api/trajectories/(unknown)` - Get trajectory data
- `GET /api/list` - List all processed files
- `DELETE /api/clear` - Clear all data

## Troubleshooting

### Video Processing Fails

- Ensure the video format is supported
- Check that the video file is not corrupted
- Verify OpenCV can read the codec

### No Objects Detected

- Adjust the detection parameters in `tracker.py`
- Try a different detection method
- Ensure objects have sufficient movement/contrast

### 3D Visualization Issues

- Check the browser console for JavaScript errors
- Ensure the Three.js CDN is accessible
- Try refreshing the page

## Development

### Add New Features

1. **Custom Detection Methods:** add to `tracker.py`
2. **UI Enhancements:** modify files under `frontend/`
3. **Export Options:** add endpoints in `app.py`
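
As an example of an export option, here is a hedged sketch of a helper that flattens trajectory data to CSV, which a new endpoint in `app.py` could return as a download. The field names and the `{object_id: [(frame, x, y), ...]}` shape are illustrative assumptions; adapt them to the actual trajectory schema:

```python
import csv
import io

def trajectories_to_csv(trajectories):
    """Serialize {object_id: [(frame, x, y), ...]} to a CSV string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["object_id", "frame", "x", "y"])
    for obj_id, points in trajectories.items():
        for frame, x, y in points:
            writer.writerow([obj_id, frame, x, y])
    return buf.getvalue()

csv_text = trajectories_to_csv({1: [(0, 10.0, 20.0), (1, 12.5, 21.0)]})
print(csv_text.splitlines()[0])  # object_id,frame,x,y
```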

### Testing

```bash
# Test with a sample video
python backend/app.py

# Access http://localhost:7860 and upload a test video
```

## Performance Tips

- Use smaller videos for faster processing
- Reduce video resolution before upload
- Adjust detection parameters for accuracy vs. speed
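
Downscaling pays off quadratically: halving each dimension cuts the per-frame pixel count by four. A crude numpy sketch of the effect (in practice `cv2.resize` or ffmpeg would do the resampling properly):

```python
import numpy as np

def downscale(frame, factor=2):
    """Crude downscale by keeping every `factor`-th pixel on each axis."""
    return frame[::factor, ::factor]

frame = np.zeros((480, 640), dtype=np.uint8)
small = downscale(frame, factor=2)
print(small.shape)  # (240, 320) -- a quarter of the pixels
```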

## License

MIT License

## Contributing

Pull requests are welcome! Please ensure that:

- Code follows the existing style
- Tests are added for new features
- Documentation is updated

## Support

For issues and questions:

- Open an issue on the repository
- Check existing issues for solutions

## Credits

- FastAPI for the backend framework
- Three.js for 3D visualization
- OpenCV for computer vision