---
title: Quantum
emoji: π
colorFrom: gray
colorTo: red
sdk: docker
hf_oauth: false
pinned: false
---
Check out the configuration reference at <https://huggingface.co/docs/hub/spaces-config-reference>
## Running locally

```powershell
python app.py
```
By default the UI listens on `http://127.0.0.1:7860`. The EM and QLBM pages start in the background and are served on `http://127.0.0.1:8701` and `http://127.0.0.1:8702`.
## Single-port / Hugging Face deployment

Spaces (and other single-port hosts) now work by running everything behind an internal reverse proxy. Set the following environment variables (the Dockerfile already does this):
| Variable | Purpose |
| --- | --- |
| `APP_HOST` / `APP_PORT` | Internal address that `app.py` binds to (defaults: `127.0.0.1:8700`). |
| `EM_APP_PORT` / `QLBM_APP_PORT` | Ports for the EM and QLBM subprocesses (defaults: `8701` and `8702`). |
| `EM_IFRAME_SRC` / `QLBM_IFRAME_SRC` | Public paths served by the proxy (e.g., `/em/` and `/qlbm/`). |
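As an illustration, the defaults above could be pinned in a Dockerfile like this (a hypothetical excerpt; the repository's actual Dockerfile may use different values or layout):

```dockerfile
# Hypothetical excerpt: bake in the defaults documented above.
ENV APP_HOST=127.0.0.1 \
    APP_PORT=8700 \
    EM_APP_PORT=8701 \
    QLBM_APP_PORT=8702 \
    EM_IFRAME_SRC=/em/ \
    QLBM_IFRAME_SRC=/qlbm/
```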
The bundled `docker/nginx.conf` proxies incoming traffic on `${PORT:-7860}` to the individual services so the browser never tries to contact `127.0.0.1` directly.
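The proxy layout could look roughly like the sketch below, built only from the ports and paths documented above; the real `docker/nginx.conf` may differ (e.g., in how it handles path prefixes or websockets):

```nginx
# Hypothetical sketch of the single-port reverse proxy.
server {
    listen 7860;

    # Main UI (app.py) on the internal default port.
    location / {
        proxy_pass http://127.0.0.1:8700;
    }

    # EM subprocess, exposed under the public /em/ path.
    location /em/ {
        proxy_pass http://127.0.0.1:8701/;
    }

    # QLBM subprocess, exposed under the public /qlbm/ path.
    location /qlbm/ {
        proxy_pass http://127.0.0.1:8702/;
    }
}
```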
## CUDA-Q backend on CPU-only hosts

The 3D lattice Boltzmann solver relies on CUDA-Q, which expects a CUDA-capable GPU. CPU-only runtimes such as Hugging Face Spaces automatically fall back to a lightweight "CPU demo" mode so the UI and preview still run. The plots update with an approximate synthetic evolution and clearly indicate the active backend. Use these environment variables to control the behavior:
| Variable | Effect |
| --- | --- |
| `DISABLE_CUDAQ=1` | Skip loading the CUDA-Q backend (the CPU demo remains available). |
| `FORCE_ENABLE_CUDAQ=1` | Attempt to load the CUDA-Q backend even if the platform was auto-detected as CPU-only. |
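The decision implied by the table above can be sketched as a small selection function. This is a hypothetical illustration, not the actual code in `app.py`; the function name and return values are invented for clarity:

```python
import os


def select_backend(cuda_gpu_detected: bool, env=os.environ) -> str:
    """Hypothetical sketch of the backend choice described above.

    Returns "cudaq" when the full CUDA-Q solver should be loaded,
    otherwise "cpu-demo" for the lightweight fallback mode.
    """
    if env.get("DISABLE_CUDAQ") == "1":
        return "cpu-demo"  # explicit opt-out always wins
    if cuda_gpu_detected:
        return "cudaq"     # GPU present: run the full solver
    if env.get("FORCE_ENABLE_CUDAQ") == "1":
        return "cudaq"     # override the CPU-only auto-detection
    return "cpu-demo"      # CPU-only host: synthetic demo mode
```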
On GPU-enabled machines, leave both variables unset to run the full CUDA-Q solver. On CPU-only hosts you can still explore the workflow in demo mode, or deploy to a GPU-backed Space or local workstation and set `FORCE_ENABLE_CUDAQ=1` to activate the quantum backend once hardware is available.