---
title: SEMA Web Demo
sdk: docker
app_port: 7860
pinned: false
---

# SEMA Web Demo

This folder contains a web demo for the current SEMA pipeline, packaged so it can run directly as a Hugging Face Docker Space.

## What it does

- Accepts a user-uploaded archery video.
- Saves it only to a temporary file inside the container.
- Runs the existing SEMA analysis flow:
  - `Tokenize_SearchKeyword(...)`
  - `get_response(...)`
  - `answer_archery_question(...)`
- Deletes the uploaded file after analysis finishes.
- Keeps the evaluation context only in process memory for follow-up Q&A.
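The upload lifecycle above can be sketched roughly as follows. This is an illustrative sketch, not code from `backend/app.py`: the `analyze_upload` function and its signature are assumptions, and the real analysis calls (`Tokenize_SearchKeyword`, `get_response`, `answer_archery_question`) are stubbed out with a placeholder.

```python
import os
import tempfile

def analyze_upload(video_bytes: bytes) -> dict:
    """Illustrative sketch: persist the upload to a temp file,
    run the analysis, and always delete the file afterwards."""
    fd, path = tempfile.mkstemp(suffix=".mp4")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(video_bytes)
        # Placeholder for the real SEMA flow:
        # Tokenize_SearchKeyword(...), get_response(...),
        # answer_archery_question(...)
        result = {"summary": f"analyzed {os.path.getsize(path)} bytes"}
    finally:
        os.remove(path)  # no video is kept after analysis
    return result
```

The `finally` block is the important part: the temporary file is removed whether the analysis succeeds or raises, which is what keeps the container free of user videos.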

## Runtime model

- No persistent user data.
- No video archive.
- No login or settings storage.
- Single-container deployment.
- Best fit: small-scale research demo.

## Files

- `backend/app.py`: FastAPI backend, job queue, temporary upload handling, follow-up chat.
- `static/`: upload, result, and chat frontend.
- `Dockerfile`: container build for the Hugging Face Docker Space.
- `requirements.txt`: web-only Python dependencies.

## Environment variables

- `ALI_API_KEY`: required; used by the DashScope-compatible OpenAI endpoints.
- `SEMA_MAX_UPLOAD_MB`: optional; defaults to `80`.
- `SEMA_JOB_TTL_SECONDS`: optional; defaults to `7200`.
- `SEMA_MAX_WORKERS`: optional; defaults to `1`.
- `SEMA_SUBPIPELINE`: optional; defaults to `4`.

## Deploy to a Hugging Face Docker Space

1. Create a new Space and choose Docker as the SDK.
2. Use this repository as the source, or copy its contents into the Space repository.
3. If you want Hugging Face to pick up the included `README.md` and `Dockerfile` directly, place the contents of this `webapp/` folder at the Space repo root.
4. In the Space settings, add `ALI_API_KEY` as a secret.
5. Push and wait for the container build to finish.

## Local run

From the repository root:

```shell
docker build -f webapp/Dockerfile -t sema-web .
docker run --rm -p 7860:7860 -e ALI_API_KEY=your_key sema-web
```

Then open <http://localhost:7860>.