---
title: SEMA Web Demo
sdk: docker
app_port: 7860
pinned: false
---
# SEMA Web Demo

This folder contains a web demo of the current SEMA pipeline, ready to deploy as a Hugging Face Docker Space.
## What it does
- Accepts a user-uploaded archery video.
- Saves it only to a temporary file inside the container.
- Runs the existing SEMA analysis flow: `Tokenize_SearchKeyword(...)` → `get_response(...)` → `answer_archery_question(...)`.
- Deletes the uploaded file after analysis finishes.
- Keeps the evaluation context only in process memory for follow-up Q&A.
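The upload lifecycle above can be sketched roughly as follows. The pipeline functions here are hypothetical stand-ins (the real signatures live in the repository); only the temp-file handling pattern is the point:

```python
import os
import tempfile

# Hypothetical stand-ins for the SEMA pipeline functions; the real
# signatures in this repository may differ.
def Tokenize_SearchKeyword(video_path):
    return ["draw", "release"]

def get_response(keywords):
    return f"analysis of {keywords}"

def analyze_upload(video_bytes):
    """Save the upload to a temp file, run the pipeline, then delete the file."""
    tmp = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False)
    try:
        tmp.write(video_bytes)
        tmp.close()
        keywords = Tokenize_SearchKeyword(tmp.name)
        # The returned context stays in process memory for follow-up Q&A.
        return get_response(keywords)
    finally:
        os.unlink(tmp.name)  # no persistent copy of the video is kept
```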
## Runtime model
- No persistent user data.
- No video archive.
- No login or settings storage.
- Single-container deployment.
- Best fit: small-scale research demo.
## Files

- `backend/app.py`: FastAPI backend, job queue, temp upload handling, follow-up chat.
- `static/`: upload/result/chat frontend.
- `Dockerfile`: container build for the Hugging Face Docker Space.
- `requirements.txt`: web-only Python dependencies.
## Environment variables

- `ALI_API_KEY`: required; used by DashScope-compatible OpenAI endpoints.
- `SEMA_MAX_UPLOAD_MB`: optional, defaults to 80.
- `SEMA_JOB_TTL_SECONDS`: optional, defaults to 7200.
- `SEMA_MAX_WORKERS`: optional, defaults to 1.
- `SEMA_SUBPIPELINE`: optional, defaults to 4.
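A minimal sketch of how the backend might read these settings, assuming plain `os.environ` lookups with the defaults listed above (the actual parsing in `backend/app.py` may differ):

```python
import os

def load_settings(env=None):
    """Read SEMA settings from the environment, applying the documented defaults."""
    env = os.environ if env is None else env
    if "ALI_API_KEY" not in env:
        # The API key is required; fail fast instead of erroring mid-analysis.
        raise RuntimeError("ALI_API_KEY is required")
    return {
        "api_key": env["ALI_API_KEY"],
        "max_upload_mb": int(env.get("SEMA_MAX_UPLOAD_MB", "80")),
        "job_ttl_seconds": int(env.get("SEMA_JOB_TTL_SECONDS", "7200")),
        "max_workers": int(env.get("SEMA_MAX_WORKERS", "1")),
        "subpipeline": int(env.get("SEMA_SUBPIPELINE", "4")),
    }
```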
## Deploy to a Hugging Face Docker Space

- Create a new Space and choose Docker.
- Use the current repository as the source, or copy the repo contents into the Space repository.
- Place the contents of this `webapp/` folder at the Space repo root if you want Hugging Face to use the included `README.md` and `Dockerfile` directly.
- In Space settings, add `ALI_API_KEY` as a secret.
- Push and wait for the container build to finish.
## Local run
From the repository root:
```bash
docker build -f webapp/Dockerfile -t sema-web .
docker run --rm -p 7860:7860 -e ALI_API_KEY=your_key sema-web
```
Then open http://localhost:7860.