# ui/
Live OpenCV demo and inference pipelines used by the app.
**Files:** `pipeline.py` (FaceMesh, MLP, XGBoost, Hybrid pipelines), `live_demo.py` (webcam window with mesh + focus label).
**Pipelines:** FaceMesh = rule-based head/eye geometry; MLP = 10 features → PyTorch MLP (`checkpoints/mlp_best.pt` + scaler); XGBoost = same 10 features → `xgboost_face_orientation_best.json`; Hybrid blends the MLP/XGBoost output with the geometric scores.
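A minimal sketch of how a hybrid pipeline might blend a model probability with a geometric score; the function name and the 50/50 weighting are assumptions for illustration, not the repo's actual code:

```python
def hybrid_focus_score(model_prob: float, geometric_score: float,
                       weight: float = 0.5) -> float:
    """Blend an MLP/XGBoost probability with a rule-based geometric score.

    NOTE: the linear blend and default weight are hypothetical; see
    pipeline.py for the real combination logic.
    """
    return weight * model_prob + (1.0 - weight) * geometric_score

# Example: model says 0.9 "focused", geometry says 0.6
score = hybrid_focus_score(0.9, 0.6)  # 0.75 with the default weight
```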
**Run demo:**
```bash
python ui/live_demo.py
python ui/live_demo.py --xgb
```
`m` = cycle mesh, `p` = switch pipeline, `q` = quit. The same pipelines drive the FastAPI WebSocket video feed in `main.py`.
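The key handling above can be sketched as a small dispatch function, kept separate from the OpenCV loop so it is testable; the state keys and the number of mesh modes are assumptions (the four-pipeline count matches the list above):

```python
def handle_key(key: int, state: dict) -> bool:
    """Update demo state for one keypress; return False to quit.

    NOTE: 'mesh_mode'/'pipeline_idx' names and the 3 mesh modes are
    hypothetical; live_demo.py holds the real state. The 4 pipelines
    (FaceMesh, MLP, XGBoost, Hybrid) match this README.
    """
    if key == ord("m"):
        state["mesh_mode"] = (state.get("mesh_mode", 0) + 1) % 3
    elif key == ord("p"):
        state["pipeline_idx"] = (state.get("pipeline_idx", 0) + 1) % 4
    elif key == ord("q"):
        return False
    return True
```

In the demo loop this would be called with the result of `cv2.waitKey(1) & 0xFF` once per frame.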