# SHARP Web: Frontend (Gradio) + Backend (FastAPI)
This directory provides a split deployment:
- Backend API (`api_server.py`) runs on a GPU cloud with FastAPI
- Frontend UI (`app.py`) runs on Hugging Face Spaces with Gradio
The UI calls the API via HTTP; the API performs model inference and returns PLY results.
## Repository Layout
- `src/sharp/web/api_server.py` — FastAPI backend hosting inference endpoints
- `src/sharp/web/app.py` — Gradio frontend calling the backend
- `requirements_api.txt` — Backend dependencies (GPU cloud)
- `requirements.txt` — Frontend dependencies (HF Spaces)
## Backend (GPU Cloud)
### Install
On your GPU cloud instance:
```bash
# From repository root
pip install -r requirements_api.txt
```
Notes:
- Ensure CUDA is available if using NVIDIA GPUs. The PyTorch build pinned in `requirements_api.txt` is compiled for CUDA 12 on Linux.
- On macOS, MPS (Apple Silicon) may be detected; otherwise CPU fallback is used.
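The fallback order described above (CUDA, then MPS, then CPU) can be sketched as follows; this is a minimal illustration, and the backend's actual selection code may differ:

```python
import torch

def pick_device() -> str:
    """Prefer CUDA on NVIDIA GPUs, then MPS on Apple Silicon, then CPU.
    A sketch of the fallback order described above."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"
```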
### Run
From repository root:
```bash
python src/sharp/web/api_server.py
```
or with Uvicorn:
```bash
uvicorn src.sharp.web.api_server:app --host 0.0.0.0 --port 8000
```
### Endpoints
- `GET /health` — Basic health check, device info, and model-loaded flag
- `POST /predict` — Multipart upload of one or more images (`files` field); returns JSON with per-image metadata and PLY contents base64-encoded
- `POST /predict/download` — Multipart upload of one or more images; returns a ZIP stream containing PLY files
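As an illustration, a minimal Python client for `POST /predict` might look like the sketch below. The `files` field name comes from the list above, but the response keys (`results`, `filename`, `ply_base64`) are assumptions about the JSON schema; verify them against `api_server.py`:

```python
import base64

def decode_ply(b64_data: str) -> bytes:
    """PLY contents come back base64-encoded; decode to raw bytes."""
    return base64.b64decode(b64_data)

def predict(image_paths, base_url="http://localhost:8000"):
    """Upload images to POST /predict and map each filename to its PLY bytes.
    The response keys used below ('results', 'filename', 'ply_base64') are
    assumptions; check them against api_server.py."""
    import requests  # imported lazily so decode_ply stays dependency-free

    files = [("files", (path, open(path, "rb"), "application/octet-stream"))
             for path in image_paths]
    resp = requests.post(f"{base_url}/predict", files=files, timeout=300)
    resp.raise_for_status()
    return {item["filename"]: decode_ply(item["ply_base64"])
            for item in resp.json()["results"]}
```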
CORS is enabled by default to allow calls from the Hugging Face Space. For production, set `allow_origins` to your specific Space domain.
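Restricting origins amounts to a one-line change in the middleware setup. A sketch, assuming the standard `CORSMiddleware` wiring in `api_server.py` (the Space URL below is a placeholder):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    # Replace the permissive default ("*") with your Space's origin.
    allow_origins=["https://your-space.hf.space"],
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```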
## Frontend (Hugging Face Spaces)
### Install
On HF Spaces:
```bash
pip install -r requirements.txt
```
This installs only Gradio and Requests.
### Configure
Set environment variable `API_BASE_URL` in your Space to point to the public backend URL, for example:
```
API_BASE_URL=https://your-api.example.com
```
If running locally for testing, `API_BASE_URL` defaults to `http://localhost:8000`.
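The lookup the frontend performs can be sketched as below; the helper name is hypothetical (`app.py` may read the variable directly):

```python
import os

def get_api_base_url(env=None) -> str:
    """Return API_BASE_URL if set, else the local-testing default.
    Trailing slashes are stripped so endpoint paths append cleanly."""
    env = os.environ if env is None else env
    return env.get("API_BASE_URL", "http://localhost:8000").rstrip("/")
```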
### Run
Locally:
```bash
python src/sharp/web/app.py
```
Gradio starts on port `7860` by default; the script binds to `0.0.0.0` so the UI is reachable externally.
On HF Spaces, setting the Space's "Run" command to `python src/sharp/web/app.py` is sufficient.
### Usage (Frontend)
- Single Image tab: upload one image and click Predict to download its PLY.
- Batch tab: upload multiple images and click Predict Batch to download a ZIP containing PLY files.
- The frontend calls the backend `POST /predict` and assembles results for user download.
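For the batch path, the ZIP returned by `POST /predict/download` can be unpacked with the standard library. A sketch (the endpoint and ZIP layout come from the description above; the helper name is hypothetical):

```python
import io
import zipfile

def extract_ply_files(zip_bytes: bytes) -> dict:
    """Map each PLY entry name in the returned ZIP to its raw bytes,
    skipping any non-PLY entries the archive might contain."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return {name: zf.read(name)
                for name in zf.namelist() if name.endswith(".ply")}
```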
## Quick Local Test
1. Start backend:
```bash
uvicorn src.sharp.web.api_server:app --host 0.0.0.0 --port 8000
```
2. Start frontend (in another terminal):
```bash
API_BASE_URL=http://localhost:8000 python src/sharp/web/app.py
```
3. Open the Gradio UI (http://localhost:7860), upload images, and verify outputs.
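When scripting this test, it helps to wait for the backend before opening the UI. A small sketch with an injectable probe (the probe itself would issue `GET /health` and return True on success):

```python
import time

def wait_until_healthy(probe, attempts: int = 10, delay: float = 1.0) -> bool:
    """Call `probe` (e.g. a GET /health request) until it succeeds
    or the attempts run out; returns whether the backend came up."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False
```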
## Notes & Troubleshooting
- If imports such as `fastapi` or `gradio` appear unresolved in your IDE, make sure the correct environment is selected and that dependencies were installed from the matching requirements file.
- Network access from HF Spaces to the GPU API must be allowed; serve the API over HTTPS where possible.
- For security, consider locking down CORS to your Space origin and adding authentication (e.g., an API key header) if needed.
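One way to add the suggested API-key check is a constant-time comparison against a key from the environment; this could then be wired into a FastAPI dependency that raises 401 on failure. A sketch, where the header name `X-API-Key` and env var `SHARP_API_KEY` are assumptions:

```python
import hmac
import os

def check_api_key(provided: str, expected=None) -> bool:
    """Constant-time check of an X-API-Key header value. Returns False
    when no key is configured, so the endpoint fails closed."""
    if expected is None:
        expected = os.environ.get("SHARP_API_KEY", "")  # assumed env var name
    return bool(expected) and hmac.compare_digest(provided, expected)
```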