# FocusGuard

Webcam-based focus detection: MediaPipe face mesh → 10 features (EAR, gaze, head pose, PERCLOS, etc.) → MLP or XGBoost classifies focused/unfocused. React + FastAPI app with WebSocket video.

## Project layout

```
├── data/               collected_<name>/*.npz
├── data_preparation/   loaders, split, scale
├── notebooks/          MLP/XGB training + LOPO
├── models/             face_mesh, head_pose, eye_scorer, train scripts
├── checkpoints/        mlp_best.pt, xgboost_*_best.json, scalers
├── evaluation/         logs, plots, justify_thresholds
├── ui/                 pipeline.py, live_demo.py
├── src/                React frontend
├── static/             built frontend (after npm run build)
├── main.py, app.py     FastAPI backend
├── requirements.txt
└── package.json
```

## Setup

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

To rebuild the frontend after changes:

```bash
npm install
npm run build
mkdir -p static && cp -r dist/* static/
```

## Run

**Web app:** activate the venv and run uvicorn via Python so it picks up your dependencies (otherwise you get `ModuleNotFoundError: aiosqlite`):

```bash
source venv/bin/activate
python -m uvicorn main:app --host 0.0.0.0 --port 7860
```

Then open http://localhost:7860.

**OpenCV demo:**

```bash
python ui/live_demo.py
python ui/live_demo.py --xgb
```

**Train:**

```bash
python -m models.mlp.train
python -m models.xgboost.train
```

## Data

9 participants, 144,793 samples, 10 features, binary labels. Collect with `python -m models.collect_features --name <name>`. Data lives in `data/collected_<name>/`.

## Model numbers (15% test split)

| Model | Accuracy | F1 | ROC-AUC |
|-------|----------|-----|---------|
| XGBoost (600 trees, depth 8) | 95.87% | 0.959 | 0.991 |
| MLP (64→32) | 92.92% | 0.929 | 0.971 |

## Pipeline

1. Face mesh (MediaPipe, 478 landmarks)
2. Head pose → yaw, pitch, roll, scores, gaze offset
3. Eye scorer → EAR, gaze ratio, MAR
4. Temporal → PERCLOS, blink rate, yawn
5. 10-d feature vector → MLP or XGBoost → focused / unfocused

**Stack:** FastAPI, aiosqlite, React/Vite, PyTorch, XGBoost, MediaPipe, OpenCV.
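The head-pose step of the pipeline reports yaw, pitch, and roll. A minimal sketch of Euler-angle extraction from a 3×3 rotation matrix (ZYX convention; the repo's actual convention, axis signs, and how the matrix is obtained, e.g. `cv2.solvePnP` on a few face-mesh landmarks, may differ):

```python
import numpy as np

def rotation_to_euler_deg(R: np.ndarray) -> tuple[float, float, float]:
    """Yaw, pitch, roll in degrees from a 3x3 rotation matrix (ZYX order).

    In a head-pose pipeline, large |yaw| or |pitch| suggests the user is
    looking away from the screen.
    """
    pitch = np.degrees(np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)))
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll

# Sanity check: a 30-degree rotation about z gives yaw = 30, pitch = roll = 0.
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
yaw, pitch, roll = rotation_to_euler_deg(Rz)
```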
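The eye scorer computes EAR per frame. A minimal sketch of the standard eye-aspect-ratio formula (Soukupová & Čech, 2016), assuming six eye landmarks have already been selected from the face mesh; the repo's landmark indices and any smoothing are not shown:

```python
import numpy as np

def eye_aspect_ratio(pts: np.ndarray) -> float:
    """EAR from six 2-D eye landmarks ordered p1..p6.

    Low EAR means the eye is closing; a per-frame "closed" flag derived
    from EAR feeds PERCLOS and blink-rate downstream.
    """
    v1 = np.linalg.norm(pts[1] - pts[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(pts[2] - pts[4])  # vertical distance p3-p5
    h = np.linalg.norm(pts[0] - pts[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

# Toy open eye: vertical span about 2/3 of the horizontal span -> EAR = 2/3.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
ear = eye_aspect_ratio(open_eye)
```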
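The temporal step aggregates per-frame EAR into PERCLOS and a blink count. A sketch of one way to do that with a sliding window; the window length and EAR threshold here are illustrative placeholders, not the repo's tuned values (see `evaluation/justify_thresholds`):

```python
from collections import deque

class TemporalEyeStats:
    """Rolling PERCLOS and blink count from per-frame EAR values."""

    def __init__(self, window: int = 900, ear_thresh: float = 0.2):
        self.ear_thresh = ear_thresh
        self.closed_flags = deque(maxlen=window)  # sliding window of closed/open
        self.blinks = 0
        self._was_closed = False

    def update(self, ear: float) -> float:
        closed = ear < self.ear_thresh
        self.closed_flags.append(closed)
        if self._was_closed and not closed:  # closed -> open edge = one blink
            self.blinks += 1
        self._was_closed = closed
        # PERCLOS: fraction of recent frames with eyes closed
        return sum(self.closed_flags) / len(self.closed_flags)

stats = TemporalEyeStats(window=4)
for ear in (0.3, 0.1, 0.1, 0.3):  # open, closed, closed, open = one blink
    perclos = stats.update(ear)
```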