Spaces: Running

Upload folder using huggingface_hub

- .env.example +3 -0
- CLEANUP_COMPLETION_REPORT.md +0 -252
- README.md +1 -1
- exemples/GUIDE_DEMO.txt +0 -89
- exemples/README.md +0 -138
- exemples/demo_batch.py +121 -127
- exemples/demo_batch_hf.py +136 -101
- exemples/demo_unitaire.py +175 -108
- exemples/demo_unitaire_hf.py +191 -112
- exemples/lancer_api.sh +0 -44
- exemples/{predictions_batch_20260111_235739.csv → predictions_batch_20260112_043228.csv} +0 -0
- exemples/predictions_batch_hf_20260112_043238.csv +11 -0
- mkdocs.yml +6 -10
- requirements_prod.txt +123 -0
- src/Dockerfile +11 -13
- src/gradio_ui.py +13 -4
- test_deployment.sh +166 -0
.env.example CHANGED

@@ -28,6 +28,9 @@ API_PORT=8000
 # Debug mode (True/False)
 DEBUG=False
 
+# Enable the Gradio interface
+GRADIO_ENABLED=True
+
 # ===== LOGGING =====
 # Log level (DEBUG, INFO, WARNING, ERROR)
 LOG_LEVEL=INFO
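The new GRADIO_ENABLED flag follows the same True/False convention as the existing DEBUG variable. A minimal sketch of how such flags can be read on the application side (the `env_flag` helper is ours, not taken from src/config.py):

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Parse a True/False-style environment variable, as used in .env.example."""
    return os.getenv(name, str(default)).strip().lower() in ("1", "true", "yes", "on")

# With no variables set, these mirror the .env.example defaults above.
DEBUG = env_flag("DEBUG", default=False)
GRADIO_ENABLED = env_flag("GRADIO_ENABLED", default=True)
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")

print(DEBUG, GRADIO_ENABLED, LOG_LEVEL)
```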
CLEANUP_COMPLETION_REPORT.md DELETED

@@ -1,252 +0,0 @@
# 🔄 Cleanup Completion Report - PR dev→main

## Executive Summary
**All 8 sub-steps of the comprehensive project cleanup have been completed successfully.** The project is now clean, well-organized, and evaluator-ready with 100% test pass rate and 75.63% code coverage maintained throughout.

---

## Cleanup Completion Status

### ✅ Sub-Step 1: Audit & Backup Branch
- Created backup-post-audit branch with safety checkpoint
- Generated pre-cleanup structure snapshot (docs/structure_pre_clean.txt)
- Documented all findings in 7 professional audit reports

### ✅ Sub-Step 2: Validation Phase 1
- Verified .gitignore completeness
- Validated 86/86 tests passing (100% pass rate)
- Confirmed 70.27%+ code coverage
- Black/Flake8 linting successful

### ✅ Sub-Step 3: Clean Root Files
- Merged README_HF.md content → README.md (new HuggingFace section)
- Renamed requirements_full.txt → requirements_dev.txt (prod/dev distinction)
- Archived etapes.txt → docs/etapes_archive.txt (preserved educational context)
- Updated .gitignore accordingly

### ✅ Sub-Step 4: Documentation Consolidation
- **API Docs**: 3 sources (API.md, api/guide.md, API_GUIDE.md) → **1 source (API_GUIDE.md)**
- **Model Docs**: 2 sources (model/technical.md, MODEL_TECHNICAL.md) → **1 source (MODEL_TECHNICAL.md)**
- Removed redundant directories: docs/api/ and docs/model/
- Updated mkdocs.yml navigation
- Result: **-883 lines duplicated** (-39% reduction)

### ✅ Sub-Step 5: Optimize docs/ Navigation
- Enhanced docs/index.md with comprehensive "📚 Navigation Documentation" hub
- Organized 18 documents into 8 categories
- Generated pytest coverage HTML report (docs/coverage_report/)
- Added clear navigation tips for users/developers/evaluators
- MkDocs builds successfully (0.81s)

### ✅ Sub-Step 6: Refine src/tests Structure
- Reorganized tests/ from flat → hierarchical structure:
  - test_api/ (5 test files: auth, demo, health, predict, validation)
  - test_database/ (database operations tests)
  - test_functional/ (end-to-end tests)
  - test_model/ (ML model tests)
- Added __init__.py to each subdirectory (Python packages)
- Fixed monkeypatch reference in test_functional.py (import path update)
- Created tests/README.md with structure & fixture documentation
- Result: **86/86 tests passing**, 75.63% coverage maintained

### ✅ Sub-Step 7: Clean Other Folders
- Removed redundant root files (README_HF.md, etapes.txt duplicate)
- Removed .vscode/ directory (personal IDE config)
- Archived logs/ → docs/logs_archive/ (api.log, error.log preserved)
- Result: **Cleaner root directory** with only essential files

### ✅ Sub-Step 8: Finalize CI/CD & Prepare Merge
- Created composite GitHub Action (.github/actions/setup-poetry/action.yml)
- Refactored CI/CD workflow to eliminate duplicate setup steps (-60% duplication)
- Added MkDocs documentation build validation before HF deployments
- Optimized job dependencies (cleaner DAG)
- Improved job naming for clarity
- Result: **Production-ready CI/CD** with enhanced reliability

---

## Quantified Impact

| Metric | Before | After | Change |
|--------|--------|-------|--------|
| **Python Files** | 24 | 24 | No loss of function ✅ |
| **Root Files** | 11 | 8 | -3 (cleaner) |
| **Tests** | 86/86 passed | 86/86 passed | 100% maintained ✅ |
| **Coverage** | 75.63% | 75.63% | Maintained ✅ |
| **Documentation Files** | 18 | 18 | Consolidated (no loss) |
| **Duplicate Lines** | 883 | 0 | -883 (-39%) |
| **CI/CD Setup Duplication** | 60% | 20% | -67% (optimized) |
| **Root Folders** | 13 | 12 | -1 (logs archived) |

---

## Testing & Quality Assurance

### Final Validation
```
✅ 86 tests passed
✅ 11 tests skipped (expected - API integration & rate limiting)
✅ 75.63% code coverage (exceeds 70% requirement)
✅ Black linting: OK
✅ Flake8 linting: OK
✅ MkDocs build: 0.81s (successful)
✅ Import integrity: All modules loading correctly
✅ Git history: Clean, pedagogical commits throughout
```

### Test Results Summary
- **Total**: 97 tests
- **Passed**: 86 ✅
- **Skipped**: 11 (intentional)
- **Failed**: 0 ✅
- **Pass Rate**: 100%

---

## Git Commit History

### Cleanup Commits (dev branch)
```
21d4cb3 ci: optimize CI/CD pipeline with composite action and documentation build
d46bcee chore: clean root and archive non-essential folders
92ff10b refactor: reorganize tests directory into modular structure
a6460c0 docs: optimize docs/ with comprehensive navigation index
8ce38b2 docs: rapport sous-étape 4 - consolidation documentation
941a4dd docs: consolidate API and Model documentation
727d10c docs: rapport sous-étape 3 - clean racine complété
9aa0dbb refactor: clean root files while keeping history visible
cd0bc36 docs: sous-étape 2 - validations phase 1 complétées
debc614 docs: ajoute état pré-sous-étape-2 pour continuité du cleanup
```

**Total commits in cleanup**: 10 (from backup-post-audit)

---

## Project Structure (Final State)

```
OC_P5/
├── docs/                       # ✅ Optimized documentation
│   ├── index.md                # Navigation hub (8 categories)
│   ├── API_GUIDE.md            # Consolidated API docs
│   ├── MODEL_TECHNICAL.md      # Consolidated model docs
│   ├── etapes_archive.txt      # Educational context
│   ├── logs_archive/           # Archived logs
│   └── coverage_report/        # Pytest coverage HTML
├── src/                        # ✅ Core modules (unchanged)
│   ├── __init__.py
│   ├── auth.py
│   ├── config.py
│   ├── models.py
│   ├── schemas.py
│   ├── preprocessing.py
│   ├── logger.py
│   ├── rate_limit.py
│   └── gradio_ui.py
├── tests/                      # ✅ Reorganized hierarchy
│   ├── conftest.py
│   ├── README.md               # Structure documentation
│   ├── test_api/               # 5 API test files
│   ├── test_database/          # Database tests
│   ├── test_functional/        # End-to-end tests
│   └── test_model/             # ML model tests
├── ml_model/                   # ✅ Training scripts (preserved)
├── scripts/                    # ✅ Utilities (preserved)
├── .github/                    # ✅ CI/CD optimized
│   ├── workflows/ci-cd.yml     # Optimized pipeline
│   └── actions/setup-poetry/   # Reusable composite action
├── README.md                   # ✅ Enriched (HF integration)
├── pyproject.toml              # ✅ Dependency management
├── mkdocs.yml                  # ✅ Documentation config
└── .gitignore                  # ✅ Complete
```

---

## Key Achievements

### 🎯 Code Quality
- ✅ Zero functional loss - all tests passing
- ✅ Zero regressions detected
- ✅ Code coverage maintained above requirement
- ✅ Clean, pedagogical commit messages

### 🎯 Organization
- ✅ Eliminated 883 lines of duplication (-39%)
- ✅ Single source of truth for each documentation topic
- ✅ Hierarchical test organization for clarity
- ✅ Clean root directory (removed non-essential files)

### 🎯 DevOps & CI/CD
- ✅ Composite GitHub Action created (DRY principle)
- ✅ 60% reduction in setup code duplication
- ✅ Automatic documentation validation before deployment
- ✅ Production-ready pipeline

### 🎯 Evaluator Experience
- ✅ Clear navigation (docs/index.md with 8 categories)
- ✅ Comprehensive audit trail (git history)
- ✅ Before/after documentation
- ✅ Educational context preserved (etapes_archive.txt)
- ✅ Repo structure optimized for understanding

---

## Recommendations for Merge

### ✅ Pre-Merge Checklist
- [x] All tests passing (86/86)
- [x] Coverage requirement met (75.63% ≥ 70%)
- [x] Code review: Clean commits with pedagogical messages
- [x] Linting: Black + Flake8 passing
- [x] Documentation: MkDocs builds successfully
- [x] Git history: Clean and traceable
- [x] No breaking changes: Zero functional loss
- [x] CI/CD: Optimized and ready for production

### Merge Strategy
1. This PR represents **completion of the comprehensive cleanup project**
2. No functional code was modified - only organization
3. All sub-steps have been validated and documented
4. Ready for immediate merge to main branch

### Post-Merge Steps
1. Tag release (e.g., v3.4.0-cleanup-complete)
2. Deploy to HF Spaces production (automatic via CI/CD)
3. Archive this PR as final cleanup documentation
4. Maintain tags for evaluator reference

---

## Impact on Project

### Before Cleanup
- 11 redundant root files (duplicates of archived versions)
- 5 sources for core documentation (API & Model)
- Flat test directory (difficult to navigate)
- 60% duplication in CI/CD workflow
- Logs directory in root (not archived)
- .vscode/ with personal IDE settings

### After Cleanup
- 8 essential root files (clean)
- 1 source for each documentation topic (single truth)
- Hierarchical test organization (modular)
- 20% duplication in CI/CD (67% improvement)
- Logs archived in docs/ (preserved but organized)
- .vscode/ removed (shared repo only)

---

## Conclusion

This cleanup project has successfully transformed the OC_P5 Employee Turnover Prediction API from a functional project into a **professional, evaluator-ready codebase** with excellent organization, comprehensive documentation, and an optimized CI/CD pipeline. All work has been completed without functional loss, with every change documented and validated through automated testing.

**Status: ✅ READY FOR MERGE TO MAIN**

---

*Prepared for: OpenClassrooms Evaluation*
*Date: January 2, 2025*
*All 8 cleanup sub-steps completed and validated*
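The pass-rate and duplication figures quoted in the report are simple ratios and can be checked directly (figures taken from the tables above):

```python
# Sanity-check two ratios quoted in the cleanup report.
passed, skipped, failed = 86, 11, 0
executed = passed + failed  # skipped tests are excluded from the pass rate
pass_rate = passed / executed * 100

dup_before, dup_after = 60, 20  # CI/CD setup duplication, in percent
relative_reduction = (dup_before - dup_after) / dup_before * 100

print(f"Pass rate: {pass_rate:.0f}%")                  # 100%
print(f"Relative reduction: {relative_reduction:.0f}%")  # 67%
```

This confirms that the "-67%" in the impact table is the relative drop from 60% to 20% duplication, not the absolute difference.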
README.md CHANGED

@@ -4,7 +4,7 @@ emoji: 🚀
 colorFrom: blue
 colorTo: green
 sdk: gradio
-sdk_version: "
+sdk_version: "6.2.0"
 app_file: app.py
 pinned: false
 ---
exemples/GUIDE_DEMO.txt DELETED

@@ -1,89 +0,0 @@
╔══════════════════════════════════════════════════════════════════╗
║                                                                  ║
║        🚀 DEMO GUIDE - Employee Turnover API                     ║
║        Default: local API at http://127.0.0.1:7860               ║
║                                                                  ║
╚══════════════════════════════════════════════════════════════════╝

⚙️ URL CONFIGURATION
──────────────────────────────────────────────────────────────────
By default, the scripts use the local API: http://127.0.0.1:7860

To use the Hugging Face Spaces API, use the dedicated scripts:
- demo_unitaire_hf.py
- demo_batch_hf.py

Optional: override the URL via the environment variable:
HF_API_URL="https://asi-engineer-oc-p5.hf.space"
Optional: if the Space protects its endpoints, add an API key:
HF_API_KEY="your-key"

If FastAPI is not exposed on the Space, the batch script will automatically try the Gradio API `/api/predict_batch` (requires the Batch tab to be enabled in the interface).

📋 INSTALLATION (one time only)
──────────────────────────────────────────────────────────────────
pip install requests pandas

🚀 START THE LOCAL API (required for the demo)
──────────────────────────────────────────────────────────────────
./lancer_api.sh

Or manually, from the project root:
cd ..
poetry run uvicorn api:app --host 127.0.0.1 --port 7860

The API will be reachable at http://127.0.0.1:7860

🔮 DEMO 1: SINGLE PREDICTION (1 employee)
──────────────────────────────────────────────────────────────────
python demo_unitaire.py

→ The script asks a few questions about the employee
→ It prints the prediction result directly

📦 DEMO 2: BATCH PREDICTION (several employees)
──────────────────────────────────────────────────────────────────
python demo_batch.py

→ Asks for 3 CSV files (survey, evaluation, SIRH)
→ Generates a CSV file with all the results
→ Output file name: predictions_batch_YYYYMMDD_HHMMSS.csv

✅ QUICK TEST (with the provided sample files)
──────────────────────────────────────────────────────────────────
When demo_batch.py asks whether to use the sample files, just type "O" or press Enter.

The 3 sample files (10 employees) will be used automatically:
- 02_predict_batch_sondage.csv
- 02_predict_batch_eval.csv
- 02_predict_batch_sirh.csv

🎯 DEMO DAY - CHECKLIST
──────────────────────────────────────────────────────────────────
□ Start the local API: poetry run uvicorn api:app --host 127.0.0.1 --port 7860
  (from the project root, or ./lancer_api.sh from exemples/)
□ Or point the scripts at the HF Spaces URL
  (via HF_API_URL with demo_unitaire_hf.py / demo_batch_hf.py)
□ Prepare the 3 batch CSV files if needed
□ Test: python demo_batch.py
□ Check that the output CSV is generated

📄 FILES IN THIS FOLDER
──────────────────────────────────────────────────────────────────
lancer_api.sh           → Starts the local API easily
demo_unitaire.py        → Single-employee prediction demo script
demo_batch.py           → Batch prediction demo script
demo_unitaire_hf.py     → Single prediction demo via Hugging Face
demo_batch_hf.py        → Batch prediction demo via Hugging Face
02_predict_batch_*.csv  → Sample files (10 employees)
README.md               → Detailed documentation

THAT'S ALL! 🎉
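The URL and key overrides described in the guide amount to two environment lookups. A minimal sketch (the `resolve_api_config` helper is ours; the `X-API-Key` header name matches what the demo scripts send):

```python
import os

def resolve_api_config():
    """Resolve the target API URL and auth headers from the environment,
    following the HF_API_URL / HF_API_KEY convention in the demo guide."""
    base_url = os.getenv("HF_API_URL", "http://127.0.0.1:7860")
    headers = {}
    api_key = os.getenv("HF_API_KEY")
    if api_key:
        # Sent as the X-API-Key header when the Space protects its endpoints.
        headers["X-API-Key"] = api_key
    return base_url, headers
```

Invoked as `HF_API_URL="https://asi-engineer-oc-p5.hf.space" python demo_batch_hf.py`, a script using this helper would pick up the override automatically.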
exemples/README.md DELETED

@@ -1,138 +0,0 @@
# 🚀 Employee Turnover API DEMO

**Default**: local API `http://127.0.0.1:7860`
**Production**: Hugging Face Spaces `https://asi-engineer-oc-p5.hf.space`

## ⚙️ Configuration

The local scripts target the local API by default. For the Hugging Face Space, dedicated scripts are provided; they accept `HF_API_URL` as an environment variable.

## 📋 Installation

```bash
pip install requests pandas
```

## 🚀 Start the local API

**Option 1**: automatic script
```bash
./lancer_api.sh
```

**Option 2**: manual command
```bash
cd ..  # back to the project root
poetry run uvicorn api:app --host 127.0.0.1 --port 7860
```

The API will be available at `http://127.0.0.1:7860`

## 🔮 SINGLE prediction (1 employee)

**Dead-simple usage**: the script asks every question one by one.

```bash
python demo_unitaire.py
```

The script collects the employee's information, queries the API and prints the result immediately.

**Sample output**:
```
📊 RÉSULTAT
══════════════════════════════════════════════════════════
✅ PRÉDICTION: L'EMPLOYÉ VA RESTER
🎯 Niveau de risque: Low
   Probabilité de rester: 85.2%
   Probabilité de partir: 14.8%
```

---

## 📦 BATCH prediction (CSV files)

**Dead-simple usage**: provide 3 CSV files, get 1 results CSV back.

```bash
python demo_batch.py
```

The script asks for the paths of the 3 CSV files:
1. Survey file
2. Evaluation file
3. SIRH file

**It automatically generates**: `predictions_batch_YYYYMMDD_HHMMSS.csv` in the current folder.

**Sample output**:
```
📊 RÉSUMÉ
══════════════════════════════════════════════════════════
✅ Employés qui vont RESTER: 8
🏃 Employés qui vont PARTIR: 2
🔴 Risque ÉLEVÉ: 1
🟡 Risque MOYEN: 2
🟢 Risque FAIBLE: 7

💾 Résultats sauvegardés dans: predictions_batch_20260111_234530.csv
```

---

## ☁️ Using the Hugging Face API (Space)

Two scripts target the HF Space directly:

```bash
python demo_unitaire_hf.py
python demo_batch_hf.py
```

Optional: override the URL via `HF_API_URL`:

```bash
HF_API_URL="https://asi-engineer-oc-p5.hf.space" python demo_batch_hf.py
```

Optional: if the Space protects its endpoints, add an API key:

```bash
HF_API_KEY="your-key" python demo_unitaire_hf.py
HF_API_KEY="your-key" python demo_batch_hf.py
```

Note: if the Space does not expose FastAPI, the batch script automatically falls back to the Gradio API (`/api/predict_batch`) when the Batch tab is enabled. Otherwise, use the local API with `lancer_api.sh`.

---

## 📂 Provided sample files

To test quickly, 4 sample files are provided:

- **`01_predict_single_employee.json`** - sample employee for the single-prediction test
- **`02_predict_batch_sondage.csv`** - 10 employees (survey data)
- **`02_predict_batch_eval.csv`** - 10 employees (evaluation data)
- **`02_predict_batch_sirh.csv`** - 10 employees (SIRH data)

**Usage**: just give these paths when `demo_batch.py` asks for them.

---

## 🎯 Demo day - Checklist

1. ✅ Install the dependencies: `pip install requests pandas`
2. ✅ Test single prediction: `python demo_unitaire.py`
3. ✅ Test batch: `python demo_batch.py` (use the `02_predict_batch_*.csv` files)
4. ✅ Check that the results CSVs are generated

**That's all!** 🎉

---

## 📖 Full documentation

For more information on the API, data formats, etc., see:
- [API Documentation](../docs/api_documentation.md)
- [Architecture](../docs/architecture.md)
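The RÉSUMÉ block in the sample output is a plain aggregation over the results CSV. A sketch with pandas on toy data, using the `prediction_code` and `risk_level` columns the batch script writes (the Low/Medium/High labels are assumed from the sample outputs above):

```python
import pandas as pd

# Toy predictions frame with the columns the batch demo writes out.
df = pd.DataFrame({
    "employee_id": range(1, 11),
    "prediction_code": [0] * 8 + [1] * 2,          # 0 = stays, 1 = leaves
    "risk_level": ["Low"] * 7 + ["Medium"] * 2 + ["High"] * 1,
})

total_stay = int((df["prediction_code"] == 0).sum())
total_leave = int((df["prediction_code"] == 1).sum())
risk_counts = df["risk_level"].value_counts()

print(f"✅ RESTER: {total_stay}")                    # 8
print(f"🏃 PARTIR: {total_leave}")                   # 2
print(f"🔴 High: {int(risk_counts.get('High', 0))}")  # 1
```

With the provided 10-employee sample files, this reproduces the 8/2 stay-leave split shown in the README's sample output.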
exemples/demo_batch.py CHANGED

@@ -1,160 +1,154 @@
 #!/usr/bin/env python3
 """
-📦 Prédiction BATCH - …
+📦 Prédiction BATCH - API Locale
 
 Usage: python demo_batch.py
+Prérequis: API locale démarrée sur http://127.0.0.1:7860
 """
 
 import os
-import pandas as pd
-import requests
+import sys
 from datetime import datetime
 
+try:
+    import pandas as pd
+    from gradio_client import Client, handle_file
+except ImportError:
+    print("❌ Dépendances manquantes. Installez avec:")
+    print("   pip install gradio_client pandas")
+    sys.exit(1)
+
 # ═══════════════════════════════════════════════════════════════
 # CONFIGURATION
 # ═══════════════════════════════════════════════════════════════
-…
+API_URL = os.getenv("LOCAL_API_URL", "http://127.0.0.1:7860")
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+
+# Fichiers par défaut
+DEFAULT_FILES = {
+    "eval": os.path.join(SCRIPT_DIR, "02_predict_batch_eval.csv"),
+    "sirh": os.path.join(SCRIPT_DIR, "02_predict_batch_sirh.csv"),
+    "sondage": os.path.join(SCRIPT_DIR, "02_predict_batch_sondage.csv"),
+}
 
 print("╔══════════════════════════════════════════════════════════╗")
-print("║ 📦 PRÉDICTION BATCH - …
-print("…
+print("║ 📦 PRÉDICTION BATCH - API Locale                         ║")
+print("╚══════════════════════════════════════════════════════════╝")
+print(f"\n🌐 API: {API_URL}\n")
 
 # ═══════════════════════════════════════════════════════════════
-# …
+# SÉLECTION DES FICHIERS
 # ═══════════════════════════════════════════════════════════════
-…
-print(f"…
-
-if use_defaults in ["", "o", "oui", "y", "yes"]:
-    sondage_path = default_sondage
-    eval_path = default_eval
-    sirh_path = default_sirh
-    print("\n✅ Utilisation des fichiers d'exemple")
+print("═" * 60)
+print("📁 SÉLECTION DES FICHIERS CSV")
+print("═" * 60)
+
+use_default = (
+    input("\nUtiliser les fichiers exemples par défaut? [O/n]: ").strip().lower()
+)
+
+if use_default in ("", "o", "oui", "y", "yes"):
+    fichier_eval = DEFAULT_FILES["eval"]
+    fichier_sirh = DEFAULT_FILES["sirh"]
+    fichier_sondage = DEFAULT_FILES["sondage"]
+    print(f"\n📄 Évaluation: {os.path.basename(fichier_eval)}")
+    print(f"📄 SIRH: {os.path.basename(fichier_sirh)}")
+    print(f"📄 Sondage: {os.path.basename(fichier_sondage)}")
 else:
-    print("\…
-    …
-
-# …
-for path in […]:
+    print("\nEntrez les chemins des fichiers CSV:")
+    fichier_eval = input("📄 Fichier évaluation: ").strip()
+    fichier_sirh = input("📄 Fichier SIRH: ").strip()
+    fichier_sondage = input("📄 Fichier sondage: ").strip()
+
+# Vérification des fichiers
     if not os.path.exists(path):
-        print(f"\n❌ …
-        exit(1)
-
-print("\n✅ Fichiers chargés:")
-print(f"   - Sondage: {os.path.basename(sondage_path)}")
-print(f"   - Évaluation: {os.path.basename(eval_path)}")
-print(f"   - SIRH: {os.path.basename(sirh_path)}")
 
 # ═══════════════════════════════════════════════════════════════
-# …
 # ═══════════════════════════════════════════════════════════════
-
-print("…
-
-files = {
-    "sondage_file": open(sondage_path, "rb"),
-    "eval_file": open(eval_path, "rb"),
-    "sirh_file": open(sirh_path, "rb"),
-}
-
-headers = {}
-if API_KEY:
-    headers["X-API-Key"] = API_KEY
 
 try:
-    response = requests.post(
-        …
    )
-    response.raise_for_status()
-    result = response.json()
-
-    # ═══════════════════════════════════════════════════════════════
-    # CRÉATION DU CSV DE SORTIE
-    # ═══════════════════════════════════════════════════════════════
-
-    print("\n✅ Prédictions reçues!")
-    print(f"   Total employés traités: {result['total_employees']}")
-
-    # Créer un DataFrame avec les résultats
-    predictions_data = []
-    for pred in result["predictions"]:
-        predictions_data.append(
-            {
-                "employee_id": pred["employee_id"],
-                "prediction": "VA PARTIR" if pred["prediction"] == 1 else "VA RESTER",
-                "prediction_code": pred["prediction"],
-                "risk_level": pred["risk_level"],
-                "probability_stay": f"{pred['probability_stay']:.2%}",
-                "probability_leave": f"{pred['probability_leave']:.2%}",
-            }
-        )
-
-    df_results = pd.DataFrame(predictions_data)
-
-    # Générer le nom du fichier de sortie
-    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
-    output_filename = f"predictions_batch_{timestamp}.csv"
-
-    # Sauvegarder dans le même dossier que ce script
-    script_dir = os.path.dirname(os.path.abspath(__file__))
-    output_path = os.path.join(script_dir, output_filename)
-
-    # Sauvegarder dans le même dossier que ce script
-    script_dir = os.path.dirname(os.path.abspath(__file__))
-    output_path = os.path.join(script_dir, output_filename)
-
-    df_results.to_csv(output_path, index=False, encoding="utf-8-sig")
 
 # ═══════════════════════════════════════════════════════════════
-    # AFFICHAGE DU …
 # ═══════════════════════════════════════════════════════════════
-
-    print("\n" + "═" * 60)
-    print("   📊 RÉSUMÉ")
-    print("═" * 60)
-
-    summary = result["summary"]
-    print(f"\n✅ Employés qui vont RESTER: {summary['total_stay']}")
-    print(f"🏃 Employés qui vont PARTIR: {summary['total_leave']}")
-    print(f"\n🔴 Risque ÉLEVÉ: {summary['high_risk_count']}")
-    print(f"🟡 Risque MOYEN: {summary['medium_risk_count']}")
-    print(f"🟢 Risque FAIBLE: {summary['low_risk_count']}")
-
    print("\n" + "═" * 60)
-    print("…
-    print(f"   {output_path}")
    print("═" * 60)
 
-except requests.exceptions.RequestException as e:
-    print(f"\n❌ ERREUR API: {e}")
-    if hasattr(e, "response") and e.response is not None:
-        print(f"Détails: {e.response.text}")
 except Exception as e:
-    print(f"\n❌ …
-
-    # Fermer les fichiers
-    for f in files.values():
-        if not f.closed:
-            f.close()
|
| 64 |
+
for name, path in [
|
| 65 |
+
("Évaluation", fichier_eval),
|
| 66 |
+
("SIRH", fichier_sirh),
|
| 67 |
+
("Sondage", fichier_sondage),
|
| 68 |
+
]:
|
| 69 |
if not os.path.exists(path):
|
| 70 |
+
print(f"\n❌ Fichier {name} introuvable: {path}")
|
| 71 |
+
sys.exit(1)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 72 |
|
| 73 |
# ═══════════════════════════════════════════════════════════════
|
| 74 |
+
# PRÉDICTION BATCH
|
| 75 |
# ═══════════════════════════════════════════════════════════════
|
| 76 |
+
print("\n" + "═" * 60)
|
| 77 |
+
print("⏳ TRAITEMENT EN COURS...")
|
| 78 |
+
print("═" * 60)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 79 |
|
| 80 |
try:
|
| 81 |
+
print("\n⏳ Connexion à l'API...")
|
| 82 |
+
client = Client(API_URL)
|
| 83 |
+
print("✅ Connecté")
|
| 84 |
+
|
| 85 |
+
print("⏳ Envoi des fichiers...")
|
| 86 |
+
result = client.predict(
|
| 87 |
+
fichier_eval=handle_file(fichier_eval),
|
| 88 |
+
fichier_sirh=handle_file(fichier_sirh),
|
| 89 |
+
fichier_sondage=handle_file(fichier_sondage),
|
| 90 |
+
api_name="/predict_batch",
|
| 91 |
)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 92 |
|
| 93 |
# ═══════════════════════════════════════════════════════════════
|
| 94 |
+
# AFFICHAGE DU RÉSULTAT
|
| 95 |
# ═══════════════════════════════════════════════════════════════
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 96 |
print("\n" + "═" * 60)
|
| 97 |
+
print("📊 RÉSULTAT DE LA PRÉDICTION BATCH")
|
|
|
|
| 98 |
print("═" * 60)
|
| 99 |
|
| 100 |
+
if isinstance(result, dict):
|
| 101 |
+
# Lecture du fichier résultat
|
| 102 |
+
result_path = result.get("value") or result.get("path")
|
| 103 |
+
if result_path and os.path.exists(result_path):
|
| 104 |
+
df = pd.read_csv(result_path)
|
| 105 |
+
total = len(df)
|
| 106 |
+
|
| 107 |
+
# Statistiques
|
| 108 |
+
if "prediction" in df.columns:
|
| 109 |
+
restent = (df["prediction"] == "Reste").sum()
|
| 110 |
+
partent = (df["prediction"] == "Part").sum()
|
| 111 |
+
else:
|
| 112 |
+
restent = partent = 0
|
| 113 |
+
|
| 114 |
+
if "risk_level" in df.columns:
|
| 115 |
+
risque_eleve = (df["risk_level"] == "Élevé").sum()
|
| 116 |
+
risque_moyen = (df["risk_level"] == "Moyen").sum()
|
| 117 |
+
risque_faible = (df["risk_level"] == "Faible").sum()
|
| 118 |
+
else:
|
| 119 |
+
risque_eleve = risque_moyen = risque_faible = 0
|
| 120 |
+
|
| 121 |
+
# Affichage des stats
|
| 122 |
+
print(f"\n👥 Total employés analysés: {total}")
|
| 123 |
+
print(f"\n📈 Vont RESTER: {restent} ({100 * restent / total:.1f}%)")
|
| 124 |
+
print(f"📉 Vont PARTIR: {partent} ({100 * partent / total:.1f}%)")
|
| 125 |
+
|
| 126 |
+
print(f"\n🟢 Risque faible: {risque_faible}")
|
| 127 |
+
print(f"🟠 Risque moyen: {risque_moyen}")
|
| 128 |
+
print(f"🔴 Risque élevé: {risque_eleve}")
|
| 129 |
+
|
| 130 |
+
# Sauvegarde
|
| 131 |
+
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
| 132 |
+
output_file = os.path.join(SCRIPT_DIR, f"predictions_batch_{timestamp}.csv")
|
| 133 |
+
df.to_csv(output_file, index=False)
|
| 134 |
+
|
| 135 |
+
print("\n" + "─" * 60)
|
| 136 |
+
print(f"💾 Fichier sauvegardé: {os.path.basename(output_file)}")
|
| 137 |
+
print("─" * 60)
|
| 138 |
+
|
| 139 |
+
# Aperçu
|
| 140 |
+
print("\n📋 Aperçu des résultats:")
|
| 141 |
+
cols = ["employee_id", "prediction", "prob_depart", "risk_level"]
|
| 142 |
+
cols_exist = [c for c in cols if c in df.columns]
|
| 143 |
+
if cols_exist:
|
| 144 |
+
print(df[cols_exist].head(10).to_string(index=False))
|
| 145 |
+
else:
|
| 146 |
+
print(f"\n⚠️ Fichier résultat non trouvé: {result_path}")
|
| 147 |
+
else:
|
| 148 |
+
print(f"\n📋 Résultat: {result}")
|
| 149 |
+
|
| 150 |
+
print("\n✅ Prédiction batch terminée avec succès!")
|
| 151 |
|
|
|
|
|
|
|
|
|
|
|
|
|
| 152 |
except Exception as e:
|
| 153 |
+
print(f"\n❌ Erreur: {e}")
|
| 154 |
+
sys.exit(1)
|
|
|
|
|
|
|
|
|
|
|
|
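The summary statistics the new demo_batch.py derives from the returned CSV can be checked in isolation. A minimal sketch of that aggregation step, assuming the result file uses the `prediction` and `risk_level` values the script tests for (`"Reste"`, `"Part"`, `"Élevé"`, ...); the toy DataFrame below is hypothetical sample data, not output from the API:

```python
import pandas as pd

# Hypothetical sample of the CSV returned by /predict_batch
df = pd.DataFrame(
    {
        "employee_id": [1, 2, 3, 4],
        "prediction": ["Reste", "Part", "Reste", "Part"],
        "risk_level": ["Faible", "Élevé", "Moyen", "Élevé"],
    }
)

# Same boolean-mask counting as in demo_batch.py
total = len(df)
restent = (df["prediction"] == "Reste").sum()
partent = (df["prediction"] == "Part").sum()
risque_eleve = (df["risk_level"] == "Élevé").sum()

print(f"👥 Total: {total} | Restent: {restent} | Partent: {partent} | 🔴 Élevé: {risque_eleve}")
```

Comparing against string labels means a renamed level (e.g. an unaccented `"Eleve"`) silently counts as zero, which is why the script guards each block with a column-existence check.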
exemples/demo_batch_hf.py
CHANGED

@@ -1,119 +1,154 @@

 #!/usr/bin/env python3
 """
-📦 Prédiction BATCH

 Usage: python demo_batch_hf.py
-
- - Envoie les 3 fichiers à la Space HF
- - Sauvegarde un CSV de résultats
-
-Option: définir HF_API_URL pour surcharger l'URL par défaut.
 """

 import os
-import
-import requests
 from datetime import datetime

 API_URL = os.getenv("HF_API_URL", "https://asi-engineer-oc-p5.hf.space")

 print("╔══════════════════════════════════════════════════════════╗")
-print("║ 📦
-print("
-print(f"🌐 API: {API_URL}\n")
-
     if not os.path.exists(path):
-        print(f"❌ Fichier introuvable: {path}")
-
-print(
-print("
-files = {
-    "sondage_file": open(sondage_path, "rb"),
-    "eval_file": open(eval_path, "rb"),
-    "sirh_file": open(sirh_path, "rb"),
-}
-headers = {}
-api_key = os.getenv("HF_API_KEY")
-if api_key:
-    headers["X-API-Key"] = api_key

 try:
 )
-    r = requests.post(
-        f"{API_URL}/api/predict_batch", files=files, headers=headers, timeout=90
-    )
-    if r.status_code == 404:
-        print(
-            "\n❌ Endpoint HF introuvable (/predict/batch et /api/predict_batch)."
-        )
-        print(
-            "   Vérifiez que la Space expose l'API FastAPI ou l'onglet Batch Gradio."
-        )
-        print("   Sinon, utilisez l'API locale (lancer_api.sh).")
-        raise SystemExit(1)
-    r.raise_for_status()
-    result = r.json()
-
-    # Construire le CSV de sortie
-    predictions_data = []
-    for pred in result.get("predictions", []):
-        predictions_data.append(
-            {
-                "employee_id": pred.get("employee_id"),
-                "prediction": (
-                    "VA PARTIR" if pred.get("prediction") == 1 else "VA RESTER"
-                ),
-                "prediction_code": pred.get("prediction"),
-                "risk_level": pred.get("risk_level"),
-                "probability_stay": f"{pred.get('probability_stay', 0):.2%}",
-                "probability_leave": f"{pred.get('probability_leave', 0):.2%}",
-            }
-        )
-
-    df = pd.DataFrame(predictions_data)
-    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
-    output_path = os.path.join(script_dir, f"predictions_batch_hf_{timestamp}.csv")
-    df.to_csv(output_path, index=False, encoding="utf-8-sig")
-
-    # Affichage
-    summary = result.get("summary", {})
 print("\n" + "═" * 60)
-print("
 print("═" * 60)

 #!/usr/bin/env python3
 """
+📦 Prédiction BATCH - API Hugging Face (Gradio)

 Usage: python demo_batch_hf.py
+Prérequis: pip install gradio_client pandas
 """

 import os
+import sys
 from datetime import datetime

+try:
+    import pandas as pd
+    from gradio_client import Client, handle_file
+except ImportError:
+    print("❌ Dépendances manquantes. Installez avec:")
+    print("   pip install gradio_client pandas")
+    sys.exit(1)
+
+# ═══════════════════════════════════════════════════════════════
+# CONFIGURATION
+# ═══════════════════════════════════════════════════════════════
 API_URL = os.getenv("HF_API_URL", "https://asi-engineer-oc-p5.hf.space")
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+
+# Fichiers par défaut
+DEFAULT_FILES = {
+    "eval": os.path.join(SCRIPT_DIR, "02_predict_batch_eval.csv"),
+    "sirh": os.path.join(SCRIPT_DIR, "02_predict_batch_sirh.csv"),
+    "sondage": os.path.join(SCRIPT_DIR, "02_predict_batch_sondage.csv"),
+}

 print("╔══════════════════════════════════════════════════════════╗")
+print("║ 📦 PRÉDICTION BATCH - API Hugging Face ║")
+print("╚══════════════════════════════════════════════════════════╝")
+print(f"\n🌐 API: {API_URL}\n")
+
+# ═══════════════════════════════════════════════════════════════
+# SÉLECTION DES FICHIERS
+# ═══════════════════════════════════════════════════════════════
+print("═" * 60)
+print("📁 SÉLECTION DES FICHIERS CSV")
+print("═" * 60)
+
+use_default = (
+    input("\nUtiliser les fichiers exemples par défaut? [O/n]: ").strip().lower()
+)
+
+if use_default in ("", "o", "oui", "y", "yes"):
+    fichier_eval = DEFAULT_FILES["eval"]
+    fichier_sirh = DEFAULT_FILES["sirh"]
+    fichier_sondage = DEFAULT_FILES["sondage"]
+    print(f"\n📄 Évaluation: {os.path.basename(fichier_eval)}")
+    print(f"📄 SIRH: {os.path.basename(fichier_sirh)}")
+    print(f"📄 Sondage: {os.path.basename(fichier_sondage)}")
+else:
+    print("\nEntrez les chemins des fichiers CSV:")
+    fichier_eval = input("📄 Fichier évaluation: ").strip()
+    fichier_sirh = input("📄 Fichier SIRH: ").strip()
+    fichier_sondage = input("📄 Fichier sondage: ").strip()
+
+# Vérification des fichiers
+for name, path in [
+    ("Évaluation", fichier_eval),
+    ("SIRH", fichier_sirh),
+    ("Sondage", fichier_sondage),
+]:
     if not os.path.exists(path):
+        print(f"\n❌ Fichier {name} introuvable: {path}")
+        sys.exit(1)
+
+# ═══════════════════════════════════════════════════════════════
+# PRÉDICTION BATCH
+# ═══════════════════════════════════════════════════════════════
+print("\n" + "═" * 60)
+print("⏳ TRAITEMENT EN COURS...")
+print("═" * 60)

 try:
+    print("\n⏳ Connexion à l'API...")
+    client = Client(API_URL)
+    print("✅ Connecté")
+
+    print("⏳ Envoi des fichiers...")
+    result = client.predict(
+        fichier_eval=handle_file(fichier_eval),
+        fichier_sirh=handle_file(fichier_sirh),
+        fichier_sondage=handle_file(fichier_sondage),
+        api_name="/predict_batch",
+    )
+
+    # ═══════════════════════════════════════════════════════════════
+    # AFFICHAGE DU RÉSULTAT
+    # ═══════════════════════════════════════════════════════════════
     print("\n" + "═" * 60)
+    print("📊 RÉSULTAT DE LA PRÉDICTION BATCH")
     print("═" * 60)
+
+    if isinstance(result, dict):
+        # Lecture du fichier résultat
+        result_path = result.get("value") or result.get("path")
+        if result_path and os.path.exists(result_path):
+            df = pd.read_csv(result_path)
+            total = len(df)
+
+            # Statistiques
+            if "prediction" in df.columns:
+                restent = (df["prediction"] == "Reste").sum()
+                partent = (df["prediction"] == "Part").sum()
+            else:
+                restent = partent = 0
+
+            if "risk_level" in df.columns:
+                risque_eleve = (df["risk_level"] == "Élevé").sum()
+                risque_moyen = (df["risk_level"] == "Moyen").sum()
+                risque_faible = (df["risk_level"] == "Faible").sum()
+            else:
+                risque_eleve = risque_moyen = risque_faible = 0
+
+            # Affichage des stats
+            print(f"\n👥 Total employés analysés: {total}")
+            print(f"\n📈 Vont RESTER: {restent} ({100 * restent / total:.1f}%)")
+            print(f"📉 Vont PARTIR: {partent} ({100 * partent / total:.1f}%)")
+
+            print(f"\n🟢 Risque faible: {risque_faible}")
+            print(f"🟠 Risque moyen: {risque_moyen}")
+            print(f"🔴 Risque élevé: {risque_eleve}")
+
+            # Sauvegarde
+            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+            output_file = os.path.join(SCRIPT_DIR, f"predictions_batch_{timestamp}.csv")
+            df.to_csv(output_file, index=False)
+
+            print("\n" + "─" * 60)
+            print(f"💾 Fichier sauvegardé: {os.path.basename(output_file)}")
+            print("─" * 60)
+
+            # Aperçu
+            print("\n📋 Aperçu des résultats:")
+            cols = ["employee_id", "prediction", "prob_depart", "risk_level"]
+            cols_exist = [c for c in cols if c in df.columns]
+            if cols_exist:
+                print(df[cols_exist].head(10).to_string(index=False))
+        else:
+            print(f"\n⚠️ Fichier résultat non trouvé: {result_path}")
+    else:
+        print(f"\n📋 Résultat: {result}")
+
+    print("\n✅ Prédiction batch terminée avec succès!")
+
+except Exception as e:
+    print(f"\n❌ Erreur: {e}")
+    sys.exit(1)
exemples/demo_unitaire.py
CHANGED
|
@@ -1,146 +1,213 @@
|
|
| 1 |
#!/usr/bin/env python3
|
| 2 |
"""
|
| 3 |
-
🔮 Prédiction UNITAIRE -
|
| 4 |
|
| 5 |
Usage: python demo_unitaire.py
|
|
|
|
| 6 |
"""
|
| 7 |
|
| 8 |
-
import
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 9 |
|
| 10 |
# ═══════════════════════════════════════════════════════════════
|
| 11 |
# CONFIGURATION
|
| 12 |
# ═══════════════════════════════════════════════════════════════
|
|
|
|
| 13 |
|
| 14 |
-
#
|
| 15 |
-
#
|
| 16 |
-
|
| 17 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 18 |
|
| 19 |
print("╔══════════════════════════════════════════════════════════╗")
|
| 20 |
-
print("║ 🔮 PRÉDICTION UNITAIRE -
|
| 21 |
-
print("
|
|
|
|
| 22 |
|
| 23 |
# ═══════════════════════════════════════════════════════════════
|
| 24 |
# COLLECTE DES DONNÉES
|
| 25 |
# ═══════════════════════════════════════════════════════════════
|
| 26 |
|
| 27 |
-
print("
|
| 28 |
-
|
| 29 |
-
# === SONDAGE ===
|
| 30 |
print("📋 DONNÉES SONDAGE")
|
| 31 |
-
|
| 32 |
-
|
| 33 |
-
|
| 34 |
-
|
| 35 |
-
|
| 36 |
-
"
|
| 37 |
-
)
|
| 38 |
-
ayant_enfants = input("A des enfants? (Y/N): ").upper()
|
| 39 |
-
frequence_deplacement = input("Fréquence déplacement (Aucun, Occasionnel, Frequent): ")
|
| 40 |
-
annees_depuis_la_derniere_promotion = int(input("Années depuis dernière promotion: "))
|
| 41 |
-
annes_sous_responsable_actuel = int(input("Années sous responsable actuel (0-17): "))
|
| 42 |
-
|
| 43 |
-
# === ÉVALUATION ===
|
| 44 |
-
print("\n📊 DONNÉES ÉVALUATION")
|
| 45 |
-
satisfaction_employee_environnement = int(input("Satisfaction environnement (1-4): "))
|
| 46 |
-
note_evaluation_precedente = int(input("Note évaluation précédente (1-4): "))
|
| 47 |
-
niveau_hierarchique_poste = int(input("Niveau hiérarchique (1-5): "))
|
| 48 |
-
satisfaction_employee_nature_travail = int(input("Satisfaction nature travail (1-4): "))
|
| 49 |
-
satisfaction_employee_equipe = int(input("Satisfaction équipe (1-4): "))
|
| 50 |
-
satisfaction_employee_equilibre_pro_perso = int(
|
| 51 |
-
input("Satisfaction équilibre pro/perso (1-4): ")
|
| 52 |
-
)
|
| 53 |
-
note_evaluation_actuelle = int(input("Note évaluation actuelle (3-4): "))
|
| 54 |
-
heure_supplementaires = input("Heures supplémentaires? (Oui/Non): ")
|
| 55 |
-
augementation_salaire_precedente = float(
|
| 56 |
-
input("Augmentation salaire précédente en % (0-100): ")
|
| 57 |
)
|
| 58 |
|
| 59 |
-
|
| 60 |
-
|
| 61 |
-
|
| 62 |
-
|
| 63 |
-
|
| 64 |
-
|
| 65 |
-
|
| 66 |
-
|
| 67 |
-
|
| 68 |
-
)
|
| 69 |
-
|
| 70 |
-
|
| 71 |
-
|
| 72 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 73 |
|
| 74 |
# ═══════════════════════════════════════════════════════════════
|
| 75 |
# PRÉDICTION
|
| 76 |
# ═══════════════════════════════════════════════════════════════
|
| 77 |
-
|
| 78 |
-
employee_data = {
|
| 79 |
-
"nombre_participation_pee": nombre_participation_pee,
|
| 80 |
-
"nb_formations_suivies": nb_formations_suivies,
|
| 81 |
-
"nombre_employee_sous_responsabilite": 1,
|
| 82 |
-
"distance_domicile_travail": distance_domicile_travail,
|
| 83 |
-
"niveau_education": niveau_education,
|
| 84 |
-
"domaine_etude": domaine_etude,
|
| 85 |
-
"ayant_enfants": ayant_enfants,
|
| 86 |
-
"frequence_deplacement": frequence_deplacement,
|
| 87 |
-
"annees_depuis_la_derniere_promotion": annees_depuis_la_derniere_promotion,
|
| 88 |
-
"annes_sous_responsable_actuel": annes_sous_responsable_actuel,
|
| 89 |
-
"satisfaction_employee_environnement": satisfaction_employee_environnement,
|
| 90 |
-
"note_evaluation_precedente": note_evaluation_precedente,
|
| 91 |
-
"niveau_hierarchique_poste": niveau_hierarchique_poste,
|
| 92 |
-
"satisfaction_employee_nature_travail": satisfaction_employee_nature_travail,
|
| 93 |
-
"satisfaction_employee_equipe": satisfaction_employee_equipe,
|
| 94 |
-
"satisfaction_employee_equilibre_pro_perso": satisfaction_employee_equilibre_pro_perso,
|
| 95 |
-
"note_evaluation_actuelle": note_evaluation_actuelle,
|
| 96 |
-
"heure_supplementaires": heure_supplementaires,
|
| 97 |
-
"augementation_salaire_precedente": augementation_salaire_precedente,
|
| 98 |
-
"age": age,
|
| 99 |
-
"genre": genre,
|
| 100 |
-
"revenu_mensuel": revenu_mensuel,
|
| 101 |
-
"statut_marital": statut_marital,
|
| 102 |
-
"departement": departement,
|
| 103 |
-
"poste": poste,
|
| 104 |
-
"nombre_experiences_precedentes": nombre_experiences_precedentes,
|
| 105 |
-
"nombre_heures_travailless": 80,
|
| 106 |
-
"annee_experience_totale": annee_experience_totale,
|
| 107 |
-
"annees_dans_l_entreprise": annees_dans_l_entreprise,
|
| 108 |
-
"annees_dans_le_poste_actuel": annees_dans_le_poste_actuel,
|
| 109 |
-
}
|
| 110 |
-
|
| 111 |
-
print("\n⏳ Envoi de la requête à l'API...")
|
| 112 |
-
|
| 113 |
-
headers = {"Content-Type": "application/json"}
|
| 114 |
-
if API_KEY:
|
| 115 |
-
headers["X-API-Key"] = API_KEY
|
| 116 |
|
| 117 |
try:
|
| 118 |
-
|
| 119 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 120 |
)
|
| 121 |
-
response.raise_for_status()
|
| 122 |
-
result = response.json()
|
| 123 |
|
| 124 |
# ═══════════════════════════════════════════════════════════════
|
| 125 |
# AFFICHAGE DU RÉSULTAT
|
| 126 |
# ═══════════════════════════════════════════════════════════════
|
| 127 |
-
|
| 128 |
print("\n" + "═" * 60)
|
| 129 |
-
print("
|
| 130 |
print("═" * 60)
|
| 131 |
|
| 132 |
-
if result
|
| 133 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 134 |
else:
|
| 135 |
-
print("\n
|
| 136 |
|
| 137 |
-
print(
|
| 138 |
-
print(f" Probabilité de rester: {result['probability_0']:.1%}")
|
| 139 |
-
print(f" Probabilité de partir: {result['probability_1']:.1%}")
|
| 140 |
-
|
| 141 |
-
print("\n" + "═" * 60)
|
| 142 |
|
| 143 |
-
except
|
| 144 |
-
print(
|
| 145 |
-
|
| 146 |
-
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
#!/usr/bin/env python3
|
| 2 |
"""
|
| 3 |
+
🔮 Prédiction UNITAIRE - API locale (Gradio)
|
| 4 |
|
| 5 |
Usage: python demo_unitaire.py
|
| 6 |
+
Prérequis: Lancer l'API locale avec `python app.py`
|
| 7 |
"""
|
| 8 |
|
| 9 |
+
import re
|
| 10 |
+
import sys
|
| 11 |
+
|
| 12 |
+
try:
|
| 13 |
+
from gradio_client import Client
|
| 14 |
+
except ImportError:
|
| 15 |
+
print("❌ gradio_client non installé. Installez-le avec:")
|
| 16 |
+
print(" pip install gradio_client")
|
| 17 |
+
sys.exit(1)
|
| 18 |
|
| 19 |
# ═══════════════════════════════════════════════════════════════
|
| 20 |
# CONFIGURATION
|
| 21 |
# ═══════════════════════════════════════════════════════════════
|
| 22 |
+
API_URL = "http://127.0.0.1:7860"
|
| 23 |
|
| 24 |
+
# ═══════════════════════════════════════════════════════════════
|
| 25 |
+
# OPTIONS (menus numérotés)
|
| 26 |
+
# ═══════════════════════════════════════════════════════════════
|
| 27 |
+
DOMAINES = {
|
| 28 |
+
1: "Infra & Cloud",
|
| 29 |
+
2: "Transformation Digitale",
|
| 30 |
+
3: "Marketing",
|
| 31 |
+
4: "Entrepreunariat",
|
| 32 |
+
5: "Ressources Humaines",
|
| 33 |
+
6: "Autre",
|
| 34 |
+
}
|
| 35 |
+
FREQUENCES = {1: "Aucun", 2: "Occasionnel", 3: "Frequent"}
|
| 36 |
+
STATUTS = {1: "Célibataire", 2: "Marié(e)", 3: "Divorcé(e)"}
|
| 37 |
+
DEPARTEMENTS = {1: "Commercial", 2: "Consulting", 3: "Ressources Humaines"}
|
| 38 |
+
POSTES = {
|
| 39 |
+
1: "Cadre Commercial",
|
| 40 |
+
2: "Assistant de Direction",
|
| 41 |
+
3: "Consultant",
|
| 42 |
+
4: "Tech Lead",
|
| 43 |
+
5: "Manager",
|
| 44 |
+
6: "Senior Manager",
|
| 45 |
+
7: "Représentant Commercial",
|
| 46 |
+
8: "Directeur Technique",
|
| 47 |
+
9: "Ressources Humaines",
|
| 48 |
+
}
|
| 49 |
|
| 50 |
print("╔══════════════════════════════════════════════════════════╗")
|
| 51 |
+
print("║ 🔮 PRÉDICTION UNITAIRE - API Locale ║")
|
| 52 |
+
print("╚══════════════════════════════════════════════════════════╝")
|
| 53 |
+
print(f"\n🌐 API: {API_URL}\n")
|
| 54 |
|
| 55 |
# ═══════════════════════════════════════════════════════════════
|
| 56 |
# COLLECTE DES DONNÉES
|
| 57 |
# ═══════════════════════════════════════════════════════════════
|
| 58 |
|
| 59 |
+
print("═" * 60)
|
|
|
|
|
|
|
| 60 |
print("📋 DONNÉES SONDAGE")
|
| 61 |
+
print("═" * 60)
|
| 62 |
+
nombre_participation_pee = int(input("Participations PEE [0-3]: "))
|
| 63 |
+
nb_formations_suivies = int(input("Formations suivies [0-6]: "))
|
| 64 |
+
distance_domicile_travail = int(input("Distance domicile-travail km [1-30]: "))
|
| 65 |
+
niveau_education = int(
|
| 66 |
+
input("Niveau éducation [1=Bac, 2=Bac+2, 3=Licence, 4=Master, 5=Doctorat]: ")
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 67 |
)
|
| 68 |
|
| 69 |
+
print(f"\nDomaine d'étude: {DOMAINES}")
|
| 70 |
+
domaine_choix = int(input("Choix [1-6]: "))
|
| 71 |
+
domaine_etude = DOMAINES.get(domaine_choix, "Autre")
|
| 72 |
+
|
| 73 |
+
ayant_enfants_choix = int(input("A des enfants? [0=Non, 1=Oui]: "))
|
| 74 |
+
ayant_enfants = "Y" if ayant_enfants_choix == 1 else "N"
|
| 75 |
+
|
| 76 |
+
print(f"\nFréquence déplacement: {FREQUENCES}")
|
| 77 |
+
freq_choix = int(input("Choix [1-3]: "))
|
| 78 |
+
frequence_deplacement = FREQUENCES.get(freq_choix, "Aucun")
|
| 79 |
+
|
| 80 |
+
annees_depuis_promo = int(input("Années depuis dernière promotion [0-15]: "))
|
| 81 |
+
annees_sous_responsable = int(input("Années sous responsable actuel [0-17]: "))
|
| 82 |
+
|
| 83 |
+
print("\n" + "═" * 60)
|
| 84 |
+
print("📊 DONNÉES ÉVALUATION")
|
| 85 |
+
print("═" * 60)
|
| 86 |
+
satisfaction_environnement = int(input("Satisfaction environnement [1-4]: "))
|
| 87 |
+
note_eval_precedente = int(input("Note évaluation précédente [1-4]: "))
|
| 88 |
+
niveau_hierarchique = int(input("Niveau hiérarchique [1-5]: "))
|
| 89 |
+
satisfaction_travail = int(input("Satisfaction nature travail [1-4]: "))
|
| 90 |
+
satisfaction_equipe = int(input("Satisfaction équipe [1-4]: "))
|
| 91 |
+
satisfaction_equilibre = int(input("Satisfaction équilibre pro/perso [1-4]: "))
|
| 92 |
+
note_eval_actuelle = int(input("Note évaluation actuelle [3-4]: "))
|
| 93 |
+
heures_sup_choix = int(input("Heures supplémentaires? [0=Non, 1=Oui]: "))
|
| 94 |
+
heure_supplementaires = "Oui" if heures_sup_choix == 1 else "Non"
|
| 95 |
+
augmentation_salaire = float(input("Augmentation salaire précédente % [0-25]: "))
|
| 96 |
+
|
| 97 |
+
print("\n" + "═" * 60)
|
| 98 |
+
print("💼 DONNÉES RH (SIRH)")
|
| 99 |
+
print("═" * 60)
|
| 100 |
+
age = int(input("Âge [18-60]: "))
|
| 101 |
+
genre_choix = int(input("Genre [1=Homme, 2=Femme]: "))
|
| 102 |
+
genre = "M" if genre_choix == 1 else "F"
|
| 103 |
+
revenu_mensuel = float(input("Revenu mensuel € [1000-20000]: "))
|
| 104 |
+
|
| 105 |
+
print(f"\nStatut marital: {STATUTS}")
|
| 106 |
+
statut_choix = int(input("Choix [1-3]: "))
|
| 107 |
+
statut_marital = STATUTS.get(statut_choix, "Célibataire")
|
| 108 |
+
|
| 109 |
+
print(f"\nDépartement: {DEPARTEMENTS}")
|
| 110 |
+
dept_choix = int(input("Choix [1-3]: "))
|
| 111 |
+
departement = DEPARTEMENTS.get(dept_choix, "Commercial")
|
| 112 |
+
|
| 113 |
+
print(f"\nPoste: {POSTES}")
|
| 114 |
+
poste_choix = int(input("Choix [1-9]: "))
|
| 115 |
+
poste = POSTES.get(poste_choix, "Consultant")
|
| 116 |
+
|
| 117 |
+
nombre_exp_precedentes = int(input("Expériences précédentes [0-9]: "))
|
| 118 |
+
annees_exp_totale = int(input("Années expérience totale [0-40]: "))
|
| 119 |
+
annees_entreprise = int(input("Années dans l'entreprise [0-40]: "))
|
| 120 |
+
annees_poste = int(input("Années dans le poste actuel [0-18]: "))
|
| 121 |
|
| 122 |
# ═══════════════════════════════════════════════════════════════
|
| 123 |
# PRÉDICTION
|
| 124 |
# ═══════════════════════════════════════════════════════════════
|
| 125 |
+
print("\n⏳ Connexion à l'API...")
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 126 |
|
| 127 |
try:
|
| 128 |
+
client = Client(API_URL)
|
| 129 |
+
print("✅ Connecté")
|
| 130 |
+
print("⏳ Envoi de la prédiction...")
|
| 131 |
+
|
| 132 |
+
result = client.predict(
|
| 133 |
+
nombre_participation_pee=nombre_participation_pee,
|
| 134 |
+
nb_formations_suivies=nb_formations_suivies,
|
| 135 |
+
nombre_employee_sous_responsabilite=1,
|
| 136 |
+
distance_domicile_travail=distance_domicile_travail,
|
| 137 |
+
niveau_education=niveau_education,
|
| 138 |
+
domaine_etude=domaine_etude,
|
| 139 |
+
ayant_enfants=ayant_enfants,
|
| 140 |
+
frequence_deplacement=frequence_deplacement,
|
| 141 |
+
annees_depuis_la_derniere_promotion=annees_depuis_promo,
|
| 142 |
+
annes_sous_responsable_actuel=annees_sous_responsable,
|
| 143 |
+
satisfaction_employee_environnement=satisfaction_environnement,
|
| 144 |
+
note_evaluation_precedente=note_eval_precedente,
|
| 145 |
+
niveau_hierarchique_poste=niveau_hierarchique,
|
| 146 |
+
satisfaction_employee_nature_travail=satisfaction_travail,
|
| 147 |
+
satisfaction_employee_equipe=satisfaction_equipe,
|
| 148 |
+
satisfaction_employee_equilibre_pro_perso=satisfaction_equilibre,
|
| 149 |
+
note_evaluation_actuelle=note_eval_actuelle,
|
| 150 |
+
heure_supplementaires=heure_supplementaires,
|
| 151 |
+
augementation_salaire_precedente=augmentation_salaire,
|
| 152 |
+
age=age,
|
| 153 |
+
genre=genre,
|
| 154 |
+
revenu_mensuel=revenu_mensuel,
|
| 155 |
+
statut_marital=statut_marital,
|
| 156 |
+
departement=departement,
|
| 157 |
+
poste=poste,
|
| 158 |
+
nombre_experiences_precedentes=nombre_exp_precedentes,
|
| 159 |
+
nombre_heures_travailless=80,
|
| 160 |
+
annee_experience_totale=annees_exp_totale,
|
| 161 |
+
annees_dans_l_entreprise=annees_entreprise,
|
| 162 |
+
annees_dans_le_poste_actuel=annees_poste,
|
| 163 |
+
api_name="/predict",
|
| 164 |
)
|
|
|
|
|
|
|
| 165 |
|
| 166 |
# ═══════════════════════════════════════════════════════════════
|
| 167 |
# AFFICHAGE DU RÉSULTAT
|
| 168 |
# ═══════════════════════════════════════════════════════════════
|
|
|
|
| 169 |
print("\n" + "═" * 60)
|
| 170 |
+
print("📊 RÉSULTAT DE LA PRÉDICTION")
|
| 171 |
print("═" * 60)
|
| 172 |
|
| 173 |
+
if isinstance(result, str):
|
| 174 |
+
# Extraire les probabilités du Markdown
|
| 175 |
+
prob_depart = re.search(r"Probabilité de départ[^:]*:\s*([\d.]+)%", result)
|
| 176 |
+
prob_maintien = re.search(r"Probabilité de maintien[^:]*:\s*([\d.]+)%", result)
|
| 177 |
+
confiance = re.search(r"Confiance[^:]*:\s*([\d.]+)%", result)
|
| 178 |
+
|
| 179 |
+
# Niveau de risque
|
| 180 |
+
if "RISQUE ÉLEVÉ" in result:
|
| 181 |
+
print("\n🔴 Niveau de risque: ÉLEVÉ")
|
| 182 |
+
elif "RISQUE MOYEN" in result:
|
| 183 |
+
print("\n🟠 Niveau de risque: MOYEN")
|
| 184 |
+
else:
|
| 185 |
+
print("\n🟢 Niveau de risque: FAIBLE")
|
| 186 |
+
|
| 187 |
+
# Probabilités
|
| 188 |
+
if prob_maintien:
|
| 189 |
+
print(f"\n📈 Probabilité de rester: {prob_maintien.group(1)}%")
|
| 190 |
+
if prob_depart:
|
| 191 |
+
print(f"📉 Probabilité de partir: {prob_depart.group(1)}%")
|
| 192 |
+
if confiance:
|
| 193 |
+
print(f"🎯 Confiance du modèle: {confiance.group(1)}%")
|
| 194 |
+
|
| 195 |
+
# Prédiction finale
|
| 196 |
+
print("\n" + "─" * 60)
|
| 197 |
+
if "Départ probable" in result:
|
| 198 |
+
print("🚨 PRÉDICTION FINALE: VA PARTIR")
|
| 199 |
+
else:
|
| 200 |
+
print("✅ PRÉDICTION FINALE: VA RESTER")
|
| 201 |
+
print("─" * 60)
|
| 202 |
else:
|
| 203 |
+
print(f"\n📋 Résultat: {result}")
|
| 204 |
|
| 205 |
+
print("\n✅ Prédiction unitaire terminée avec succès!")
|
| 206 |
|
| 207 |
+
except ConnectionError:
|
| 208 |
+
print("\n❌ Impossible de se connecter à l'API locale.")
|
| 209 |
+
print(" Lancez d'abord: python app.py")
|
| 210 |
+
sys.exit(1)
|
| 211 |
+
except Exception as e:
|
| 212 |
+
print(f"\n❌ Erreur: {e}")
|
| 213 |
+
sys.exit(1)
|
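The result-display block added to `demo_unitaire.py` above extracts percentages from the Markdown returned by the API with `re.search`. A minimal, standalone sketch of that parsing step, run against a hypothetical sample of the Markdown (the exact response format is an assumption here, modeled on the labels used in the demo's patterns):

```python
import re

# Hypothetical sample of the Markdown returned by /predict
result = (
    "## 🔴 RISQUE ÉLEVÉ\n"
    "Probabilité de départ : 83.59%\n"
    "Probabilité de maintien : 16.41%\n"
    "Confiance : 83.59%\n"
    "Conclusion : Départ probable\n"
)

# Same patterns as in the demo: capture the percentage after each label
prob_depart = re.search(r"Probabilité de départ[^:]*:\s*([\d.]+)%", result)
prob_maintien = re.search(r"Probabilité de maintien[^:]*:\s*([\d.]+)%", result)

print(prob_depart.group(1))    # 83.59
print(prob_maintien.group(1))  # 16.41
```

The `[^:]*` part makes the patterns tolerant of extra words or emoji between the label and the colon, which is why the same regexes work on slightly different Markdown layouts.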
exemples/demo_unitaire_hf.py
CHANGED
|
@@ -1,131 +1,210 @@
|
|
| 1 |
#!/usr/bin/env python3
|
| 2 |
"""
|
| 3 |
-
🔮 Prédiction UNITAIRE
|
| 4 |
|
| 5 |
Usage: python demo_unitaire_hf.py
|
| 6 |
-
|
| 7 |
-
- Envoie la requête à la Space HF
|
| 8 |
-
- Affiche la prédiction
|
| 9 |
-
|
| 10 |
-
Option: définir HF_API_URL pour surcharger l'URL par défaut.
|
| 11 |
"""
|
| 12 |
|
| 13 |
import os
|
| 14 |
-
import
|
| 15 |
|
| 16 |
API_URL = os.getenv("HF_API_URL", "https://asi-engineer-oc-p5.hf.space")
|
| 17 |
|
| 18 |
print("╔══════════════════════════════════════════════════════════╗")
|
| 19 |
-
print("║ 🔮
|
| 20 |
-
print("
|
| 21 |
-
print(f"🌐 API: {API_URL}\n")
|
| 22 |
-
|
| 23 |
-
#
|
| 24 |
-
|
| 25 |
-
|
| 26 |
-
|
| 27 |
-
|
| 28 |
-
|
| 29 |
-
|
| 30 |
-
|
| 31 |
-
|
| 32 |
-
|
| 33 |
-
|
| 34 |
-
|
| 35 |
-
frequence_deplacement = input("Fréquence déplacement (Aucun, Occasionnel, Frequent): ")
|
| 36 |
-
annees_depuis_la_derniere_promotion = int(input("Années depuis dernière promotion: "))
|
| 37 |
-
annes_sous_responsable_actuel = int(input("Années sous responsable actuel (0-17): "))
|
| 38 |
-
|
| 39 |
-
# === ÉVALUATION ===
|
| 40 |
-
satisfaction_employee_environnement = int(input("Satisfaction environnement (1-4): "))
|
| 41 |
-
note_evaluation_precedente = int(input("Note évaluation précédente (1-4): "))
|
| 42 |
-
niveau_hierarchique_poste = int(input("Niveau hiérarchique (1-5): "))
|
| 43 |
-
satisfaction_employee_nature_travail = int(input("Satisfaction nature travail (1-4): "))
|
| 44 |
-
satisfaction_employee_equipe = int(input("Satisfaction équipe (1-4): "))
|
| 45 |
-
satisfaction_employee_equilibre_pro_perso = int(
|
| 46 |
-
input("Satisfaction équilibre pro/perso (1-4): ")
|
| 47 |
)
|
| 48 |
-
note_evaluation_actuelle = int(input("Note évaluation actuelle (3-4): "))
|
| 49 |
-
heure_supplementaires = input("Heures supplémentaires? (Oui/Non): ")
|
| 50 |
-
augementation_salaire_precedente = float(input("Augmentation salaire précédente (%): "))
|
| 51 |
-
|
| 52 |
-
# === SIRH ===
|
| 53 |
-
age = int(input("Âge (18-60): "))
|
| 54 |
-
genre = input("Genre (M/F): ").upper()
|
| 55 |
-
revenu_mensuel = float(input("Revenu mensuel (€): "))
|
| 56 |
-
statut_marital = input("Statut marital (Célibataire, Marié(e), Divorcé(e)): ")
|
| 57 |
-
departement = input("Département (Commercial, Consulting, Ressources Humaines): ")
|
| 58 |
-
poste = input("Poste: ")
|
| 59 |
-
nombre_experiences_precedentes = int(input("Nb expériences précédentes (0-9): "))
|
| 60 |
-
annee_experience_totale = int(input("Années expérience totale: "))
|
| 61 |
-
annees_dans_l_entreprise = int(input("Années dans l'entreprise (0-40): "))
|
| 62 |
-
annees_dans_le_poste_actuel = int(input("Années dans le poste actuel (0-18): "))
|
| 63 |
-
|
| 64 |
-
employee_data = {
|
| 65 |
-
"nombre_participation_pee": nombre_participation_pee,
|
| 66 |
-
"nb_formations_suivies": nb_formations_suivies,
|
| 67 |
-
"nombre_employee_sous_responsabilite": 1,
|
| 68 |
-
"distance_domicile_travail": distance_domicile_travail,
|
| 69 |
-
"niveau_education": niveau_education,
|
| 70 |
-
"domaine_etude": domaine_etude,
|
| 71 |
-
"ayant_enfants": ayant_enfants,
|
| 72 |
-
"frequence_deplacement": frequence_deplacement,
|
| 73 |
-
"annees_depuis_la_derniere_promotion": annees_depuis_la_derniere_promotion,
|
| 74 |
-
"annes_sous_responsable_actuel": annes_sous_responsable_actuel,
|
| 75 |
-
"satisfaction_employee_environnement": satisfaction_employee_environnement,
|
| 76 |
-
"note_evaluation_precedente": note_evaluation_precedente,
|
| 77 |
-
"niveau_hierarchique_poste": niveau_hierarchique_poste,
|
| 78 |
-
"satisfaction_employee_nature_travail": satisfaction_employee_nature_travail,
|
| 79 |
-
"satisfaction_employee_equipe": satisfaction_employee_equipe,
|
| 80 |
-
"satisfaction_employee_equilibre_pro_perso": satisfaction_employee_equilibre_pro_perso,
|
| 81 |
-
"note_evaluation_actuelle": note_evaluation_actuelle,
|
| 82 |
-
"heure_supplementaires": heure_supplementaires,
|
| 83 |
-
"augementation_salaire_precedente": augementation_salaire_precedente,
|
| 84 |
-
"age": age,
|
| 85 |
-
"genre": genre,
|
| 86 |
-
"revenu_mensuel": revenu_mensuel,
|
| 87 |
-
"statut_marital": statut_marital,
|
| 88 |
-
"departement": departement,
|
| 89 |
-
"poste": poste,
|
| 90 |
-
"nombre_experiences_precedentes": nombre_experiences_precedentes,
|
| 91 |
-
"nombre_heures_travailless": 80,
|
| 92 |
-
"annee_experience_totale": annee_experience_totale,
|
| 93 |
-
"annees_dans_l_entreprise": annees_dans_l_entreprise,
|
| 94 |
-
"annees_dans_le_poste_actuel": annees_dans_le_poste_actuel,
|
| 95 |
-
}
|
| 96 |
|
| 97 |
-
print("\
|
| 98 |
-
|
| 99 |
-
|
| 100 |
-
|
| 101 |
-
|
| 102 |
-
|
| 103 |
|
| 104 |
try:
|
| 105 |
-
|
| 106 |
-
|
| 107 |
)
|
| 108 |
-
if r.status_code == 404:
|
| 109 |
-
print(
|
| 110 |
-
"\n❌ Endpoint HF introuvable (/predict). Vérifiez que la Space expose l'API FastAPI."
|
| 111 |
-
)
|
| 112 |
-
print(" Sinon, utilisez l'API locale (lancer_api.sh) ou GRADIO.")
|
| 113 |
-
raise SystemExit(1)
|
| 114 |
-
r.raise_for_status()
|
| 115 |
-
result = r.json()
|
| 116 |
|
| 117 |
print("\n" + "═" * 60)
|
| 118 |
-
print("
|
| 119 |
print("═" * 60)
|
| 120 |
-
|
| 121 |
-
|
| 122 |
-
|
| 123 |
-
|
| 124 |
-
|
| 125 |
-
|
| 126 |
-
|
| 127 |
-
|
| 128 |
-
|
| 129 |
-
|
| 130 |
-
|
| 131 |
-
|
| 1 |
#!/usr/bin/env python3
|
| 2 |
"""
|
| 3 |
+
🔮 Prédiction UNITAIRE - API Hugging Face (Gradio)
|
| 4 |
|
| 5 |
Usage: python demo_unitaire_hf.py
|
| 6 |
+
Prérequis: pip install gradio_client
|
| 7 |
"""
|
| 8 |
|
| 9 |
import os
|
| 10 |
+
import re
|
| 11 |
+
import sys
|
| 12 |
+
|
| 13 |
+
try:
|
| 14 |
+
from gradio_client import Client
|
| 15 |
+
except ImportError:
|
| 16 |
+
print("❌ gradio_client non installé. Installez-le avec:")
|
| 17 |
+
print(" pip install gradio_client")
|
| 18 |
+
sys.exit(1)
|
| 19 |
|
| 20 |
+
# ═══════════════════════════════════════════════════════════════
|
| 21 |
+
# CONFIGURATION
|
| 22 |
+
# ═══════════════════════════════════════════════════════════════
|
| 23 |
API_URL = os.getenv("HF_API_URL", "https://asi-engineer-oc-p5.hf.space")
|
| 24 |
|
| 25 |
+
# ═══════════════════════════════════════════════════════════════
|
| 26 |
+
# OPTIONS (menus numérotés)
|
| 27 |
+
# ═══════════════════════════════════════════════════════════════
|
| 28 |
+
DOMAINES = {
|
| 29 |
+
1: "Infra & Cloud",
|
| 30 |
+
2: "Transformation Digitale",
|
| 31 |
+
3: "Marketing",
|
| 32 |
+
4: "Entrepreunariat",
|
| 33 |
+
5: "Ressources Humaines",
|
| 34 |
+
6: "Autre",
|
| 35 |
+
}
|
| 36 |
+
FREQUENCES = {1: "Aucun", 2: "Occasionnel", 3: "Frequent"}
|
| 37 |
+
STATUTS = {1: "Célibataire", 2: "Marié(e)", 3: "Divorcé(e)"}
|
| 38 |
+
DEPARTEMENTS = {1: "Commercial", 2: "Consulting", 3: "Ressources Humaines"}
|
| 39 |
+
POSTES = {
|
| 40 |
+
1: "Cadre Commercial",
|
| 41 |
+
2: "Assistant de Direction",
|
| 42 |
+
3: "Consultant",
|
| 43 |
+
4: "Tech Lead",
|
| 44 |
+
5: "Manager",
|
| 45 |
+
6: "Senior Manager",
|
| 46 |
+
7: "Représentant Commercial",
|
| 47 |
+
8: "Directeur Technique",
|
| 48 |
+
9: "Ressources Humaines",
|
| 49 |
+
}
|
| 50 |
+
|
| 51 |
print("╔══════════════════════════════════════════════════════════╗")
|
| 52 |
+
print("║ 🔮 PRÉDICTION UNITAIRE - API Hugging Face ║")
|
| 53 |
+
print("╚══════════════════════════════════════════════════════════╝")
|
| 54 |
+
print(f"\n🌐 API: {API_URL}\n")
|
| 55 |
+
|
| 56 |
+
# ═══════════════════════════════════════════════════════════════
|
| 57 |
+
# COLLECTE DES DONNÉES
|
| 58 |
+
# ═══════════════════════════════════════════════════════════════
|
| 59 |
+
|
| 60 |
+
print("═" * 60)
|
| 61 |
+
print("📋 DONNÉES SONDAGE")
|
| 62 |
+
print("═" * 60)
|
| 63 |
+
nombre_participation_pee = int(input("Participations PEE [0-3]: "))
|
| 64 |
+
nb_formations_suivies = int(input("Formations suivies [0-6]: "))
|
| 65 |
+
distance_domicile_travail = int(input("Distance domicile-travail km [1-30]: "))
|
| 66 |
+
niveau_education = int(
|
| 67 |
+
input("Niveau éducation [1=Bac, 2=Bac+2, 3=Licence, 4=Master, 5=Doctorat]: ")
|
| 68 |
)
|
| 69 |
|
| 70 |
+
print(f"\nDomaine d'étude: {DOMAINES}")
|
| 71 |
+
domaine_choix = int(input("Choix [1-6]: "))
|
| 72 |
+
domaine_etude = DOMAINES.get(domaine_choix, "Autre")
|
| 73 |
+
|
| 74 |
+
ayant_enfants_choix = int(input("A des enfants? [0=Non, 1=Oui]: "))
|
| 75 |
+
ayant_enfants = "Y" if ayant_enfants_choix == 1 else "N"
|
| 76 |
+
|
| 77 |
+
print(f"\nFréquence déplacement: {FREQUENCES}")
|
| 78 |
+
freq_choix = int(input("Choix [1-3]: "))
|
| 79 |
+
frequence_deplacement = FREQUENCES.get(freq_choix, "Aucun")
|
| 80 |
+
|
| 81 |
+
annees_depuis_promo = int(input("Années depuis dernière promotion [0-15]: "))
|
| 82 |
+
annees_sous_responsable = int(input("Années sous responsable actuel [0-17]: "))
|
| 83 |
+
|
| 84 |
+
print("\n" + "═" * 60)
|
| 85 |
+
print("📊 DONNÉES ÉVALUATION")
|
| 86 |
+
print("═" * 60)
|
| 87 |
+
satisfaction_environnement = int(input("Satisfaction environnement [1-4]: "))
|
| 88 |
+
note_eval_precedente = int(input("Note évaluation précédente [1-4]: "))
|
| 89 |
+
niveau_hierarchique = int(input("Niveau hiérarchique [1-5]: "))
|
| 90 |
+
satisfaction_travail = int(input("Satisfaction nature travail [1-4]: "))
|
| 91 |
+
satisfaction_equipe = int(input("Satisfaction équipe [1-4]: "))
|
| 92 |
+
satisfaction_equilibre = int(input("Satisfaction équilibre pro/perso [1-4]: "))
|
| 93 |
+
note_eval_actuelle = int(input("Note évaluation actuelle [3-4]: "))
|
| 94 |
+
heures_sup_choix = int(input("Heures supplémentaires? [0=Non, 1=Oui]: "))
|
| 95 |
+
heure_supplementaires = "Oui" if heures_sup_choix == 1 else "Non"
|
| 96 |
+
augmentation_salaire = float(input("Augmentation salaire précédente % [0-25]: "))
|
| 97 |
+
|
| 98 |
+
print("\n" + "═" * 60)
|
| 99 |
+
print("💼 DONNÉES RH (SIRH)")
|
| 100 |
+
print("═" * 60)
|
| 101 |
+
age = int(input("Âge [18-60]: "))
|
| 102 |
+
genre_choix = int(input("Genre [1=Homme, 2=Femme]: "))
|
| 103 |
+
genre = "M" if genre_choix == 1 else "F"
|
| 104 |
+
revenu_mensuel = float(input("Revenu mensuel € [1000-20000]: "))
|
| 105 |
+
|
| 106 |
+
print(f"\nStatut marital: {STATUTS}")
|
| 107 |
+
statut_choix = int(input("Choix [1-3]: "))
|
| 108 |
+
statut_marital = STATUTS.get(statut_choix, "Célibataire")
|
| 109 |
+
|
| 110 |
+
print(f"\nDépartement: {DEPARTEMENTS}")
|
| 111 |
+
dept_choix = int(input("Choix [1-3]: "))
|
| 112 |
+
departement = DEPARTEMENTS.get(dept_choix, "Commercial")
|
| 113 |
+
|
| 114 |
+
print(f"\nPoste: {POSTES}")
|
| 115 |
+
poste_choix = int(input("Choix [1-9]: "))
|
| 116 |
+
poste = POSTES.get(poste_choix, "Consultant")
|
| 117 |
+
|
| 118 |
+
nombre_exp_precedentes = int(input("Expériences précédentes [0-9]: "))
|
| 119 |
+
annees_exp_totale = int(input("Années expérience totale [0-40]: "))
|
| 120 |
+
annees_entreprise = int(input("Années dans l'entreprise [0-40]: "))
|
| 121 |
+
annees_poste = int(input("Années dans le poste actuel [0-18]: "))
|
| 122 |
+
|
| 123 |
+
# ═══════════════════════════════════════════════════════════════
|
| 124 |
+
# PRÉDICTION
|
| 125 |
+
# ═══════════════════════════════════════════════════════════════
|
| 126 |
+
print("\n⏳ Connexion à l'API...")
|
| 127 |
|
| 128 |
try:
|
| 129 |
+
client = Client(API_URL)
|
| 130 |
+
print("✅ Connecté")
|
| 131 |
+
print("⏳ Envoi de la prédiction...")
|
| 132 |
+
|
| 133 |
+
result = client.predict(
|
| 134 |
+
nombre_participation_pee=nombre_participation_pee,
|
| 135 |
+
nb_formations_suivies=nb_formations_suivies,
|
| 136 |
+
nombre_employee_sous_responsabilite=1,
|
| 137 |
+
distance_domicile_travail=distance_domicile_travail,
|
| 138 |
+
niveau_education=niveau_education,
|
| 139 |
+
domaine_etude=domaine_etude,
|
| 140 |
+
ayant_enfants=ayant_enfants,
|
| 141 |
+
frequence_deplacement=frequence_deplacement,
|
| 142 |
+
annees_depuis_la_derniere_promotion=annees_depuis_promo,
|
| 143 |
+
annes_sous_responsable_actuel=annees_sous_responsable,
|
| 144 |
+
satisfaction_employee_environnement=satisfaction_environnement,
|
| 145 |
+
note_evaluation_precedente=note_eval_precedente,
|
| 146 |
+
niveau_hierarchique_poste=niveau_hierarchique,
|
| 147 |
+
satisfaction_employee_nature_travail=satisfaction_travail,
|
| 148 |
+
satisfaction_employee_equipe=satisfaction_equipe,
|
| 149 |
+
satisfaction_employee_equilibre_pro_perso=satisfaction_equilibre,
|
| 150 |
+
note_evaluation_actuelle=note_eval_actuelle,
|
| 151 |
+
heure_supplementaires=heure_supplementaires,
|
| 152 |
+
augementation_salaire_precedente=augmentation_salaire,
|
| 153 |
+
age=age,
|
| 154 |
+
genre=genre,
|
| 155 |
+
revenu_mensuel=revenu_mensuel,
|
| 156 |
+
statut_marital=statut_marital,
|
| 157 |
+
departement=departement,
|
| 158 |
+
poste=poste,
|
| 159 |
+
nombre_experiences_precedentes=nombre_exp_precedentes,
|
| 160 |
+
nombre_heures_travailless=80,
|
| 161 |
+
annee_experience_totale=annees_exp_totale,
|
| 162 |
+
annees_dans_l_entreprise=annees_entreprise,
|
| 163 |
+
annees_dans_le_poste_actuel=annees_poste,
|
| 164 |
+
api_name="/predict",
|
| 165 |
)
|
| 166 |
|
| 167 |
+
# ═══════════════════════════════════════════════════════════════
|
| 168 |
+
# AFFICHAGE DU RÉSULTAT
|
| 169 |
+
# ═══════════════════════════════════════════════════════════════
|
| 170 |
print("\n" + "═" * 60)
|
| 171 |
+
print("📊 RÉSULTAT DE LA PRÉDICTION")
|
| 172 |
print("═" * 60)
|
| 173 |
+
|
| 174 |
+
if isinstance(result, str):
|
| 175 |
+
# Extraire les probabilités du Markdown
|
| 176 |
+
prob_depart = re.search(r"Probabilité de départ[^:]*:\s*([\d.]+)%", result)
|
| 177 |
+
prob_maintien = re.search(r"Probabilité de maintien[^:]*:\s*([\d.]+)%", result)
|
| 178 |
+
confiance = re.search(r"Confiance[^:]*:\s*([\d.]+)%", result)
|
| 179 |
+
|
| 180 |
+
# Niveau de risque
|
| 181 |
+
if "RISQUE ÉLEVÉ" in result:
|
| 182 |
+
print("\n🔴 Niveau de risque: ÉLEVÉ")
|
| 183 |
+
elif "RISQUE MOYEN" in result:
|
| 184 |
+
print("\n🟠 Niveau de risque: MOYEN")
|
| 185 |
+
else:
|
| 186 |
+
print("\n🟢 Niveau de risque: FAIBLE")
|
| 187 |
+
|
| 188 |
+
# Probabilités
|
| 189 |
+
if prob_maintien:
|
| 190 |
+
print(f"\n📈 Probabilité de rester: {prob_maintien.group(1)}%")
|
| 191 |
+
if prob_depart:
|
| 192 |
+
print(f"📉 Probabilité de partir: {prob_depart.group(1)}%")
|
| 193 |
+
if confiance:
|
| 194 |
+
print(f"🎯 Confiance du modèle: {confiance.group(1)}%")
|
| 195 |
+
|
| 196 |
+
# Prédiction finale
|
| 197 |
+
print("\n" + "─" * 60)
|
| 198 |
+
if "Départ probable" in result:
|
| 199 |
+
print("🚨 PRÉDICTION FINALE: VA PARTIR")
|
| 200 |
+
else:
|
| 201 |
+
print("✅ PRÉDICTION FINALE: VA RESTER")
|
| 202 |
+
print("─" * 60)
|
| 203 |
+
else:
|
| 204 |
+
print(f"\n📋 Résultat: {result}")
|
| 205 |
+
|
| 206 |
+
print("\n✅ Prédiction unitaire terminée avec succès!")
|
| 207 |
+
|
| 208 |
+
except Exception as e:
|
| 209 |
+
print(f"\n❌ Erreur: {e}")
|
| 210 |
+
sys.exit(1)
|
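The rewritten `demo_unitaire_hf.py` above replaces free-text prompts with numbered menus (`DOMAINES`, `FREQUENCES`, `STATUTS`, ...) and maps each integer choice to the label the API expects, with `dict.get` providing a fallback default. A small sketch of that pattern, using two of the menus copied from the diff (the `choisir` helper is illustrative, not part of the script):

```python
# Menu options copied from the demo; dict.get() falls back to a default label
FREQUENCES = {1: "Aucun", 2: "Occasionnel", 3: "Frequent"}
DEPARTEMENTS = {1: "Commercial", 2: "Consulting", 3: "Ressources Humaines"}

def choisir(options: dict, choix: int, defaut: str) -> str:
    """Return the label for a numbered choice, or a default if out of range."""
    return options.get(choix, defaut)

print(choisir(FREQUENCES, 3, "Aucun"))          # Frequent
print(choisir(DEPARTEMENTS, 99, "Commercial"))  # out-of-range choice -> Commercial
```

This keeps user input constrained to valid categorical values, so the script never sends a misspelled label to `client.predict`.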
exemples/lancer_api.sh
DELETED
|
@@ -1,44 +0,0 @@
|
|
| 1 |
-
#!/bin/bash
|
| 2 |
-
#
|
| 3 |
-
# 🚀 Script de lancement de l'API locale pour la démo
|
| 4 |
-
#
|
| 5 |
-
# Usage: ./lancer_api.sh
|
| 6 |
-
#
|
| 7 |
-
|
| 8 |
-
cd "$(dirname "$0")/.."
|
| 9 |
-
|
| 10 |
-
echo "╔══════════════════════════════════════════════════════════╗"
|
| 11 |
-
echo "║ 🚀 Lancement de l'API Employee Turnover ║"
|
| 12 |
-
echo "╚══════════════════════════════════════════════════════════╝"
|
| 13 |
-
echo ""
|
| 14 |
-
|
| 15 |
-
# Vérifier que poetry est installé
|
| 16 |
-
if ! command -v poetry &> /dev/null; then
|
| 17 |
-
echo "❌ poetry n'est pas installé"
|
| 18 |
-
echo " Installation : pip install poetry"
|
| 19 |
-
exit 1
|
| 20 |
-
fi
|
| 21 |
-
|
| 22 |
-
# Vérifier que le fichier api.py existe
|
| 23 |
-
if [ ! -f "api.py" ]; then
|
| 24 |
-
echo "❌ Fichier api.py introuvable"
|
| 25 |
-
echo " Assurez-vous d'être dans le bon dossier"
|
| 26 |
-
exit 1
|
| 27 |
-
fi
|
| 28 |
-
|
| 29 |
-
echo "✅ Démarrage de l'API sur http://127.0.0.1:7860"
|
| 30 |
-
echo ""
|
| 31 |
-
echo "📖 Documentation disponible sur:"
|
| 32 |
-
echo " - http://127.0.0.1:7860/docs (Swagger)"
|
| 33 |
-
echo " - http://127.0.0.1:7860/redoc (ReDoc)"
|
| 34 |
-
echo ""
|
| 35 |
-
echo "🔮 Interface Gradio (si activée):"
|
| 36 |
-
echo " - http://127.0.0.1:7860/"
|
| 37 |
-
echo ""
|
| 38 |
-
echo "💡 Pour arrêter l'API : Ctrl+C"
|
| 39 |
-
echo ""
|
| 40 |
-
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
| 41 |
-
echo ""
|
| 42 |
-
|
| 43 |
-
# Lancer l'API avec poetry en mode DEBUG (sans API key)
|
| 44 |
-
DEBUG=True poetry run uvicorn api:app --host 127.0.0.1 --port 7860
|
exemples/{predictions_batch_20260111_235739.csv → predictions_batch_20260112_043228.csv}
RENAMED
|
File without changes
|
exemples/predictions_batch_hf_20260112_043238.csv
ADDED
|
@@ -0,0 +1,11 @@
|
| 1 |
+
employee_id,prediction,prediction_code,risk_level,probability_stay,probability_leave
|
| 2 |
+
1,VA PARTIR,1,High,16.41%,83.59%
|
| 3 |
+
2,VA RESTER,0,Low,88.46%,11.54%
|
| 4 |
+
3,VA PARTIR,1,Medium,35.19%,64.81%
|
| 5 |
+
4,VA PARTIR,1,High,24.39%,75.61%
|
| 6 |
+
5,VA PARTIR,1,Medium,32.16%,67.84%
|
| 7 |
+
6,VA RESTER,0,Low,95.30%,4.70%
|
| 8 |
+
7,VA RESTER,0,Low,81.61%,18.39%
|
| 9 |
+
8,VA PARTIR,1,High,20.77%,79.23%
|
| 10 |
+
9,VA RESTER,0,Low,96.22%,3.78%
|
| 11 |
+
10,VA RESTER,0,Low,92.47%,7.53%
|
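The new `predictions_batch_hf_*.csv` above is plain CSV, so it can be post-processed with the standard library alone. A quick sketch that filters predicted leavers, run on two rows copied from the file:

```python
import csv
import io

# Two rows copied from predictions_batch_hf_20260112_043238.csv
data = """employee_id,prediction,prediction_code,risk_level,probability_stay,probability_leave
1,VA PARTIR,1,High,16.41%,83.59%
2,VA RESTER,0,Low,88.46%,11.54%
"""

# DictReader keys each row by the header line
rows = list(csv.DictReader(io.StringIO(data)))
partants = [r["employee_id"] for r in rows if r["prediction_code"] == "1"]
print(partants)  # ['1']
```

Note that the probability columns keep their `%` suffix, so they would need `float(value.rstrip("%"))` before any numeric aggregation.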
mkdocs.yml
CHANGED
|
@@ -149,16 +149,12 @@ extra:
|
|
| 149 |
nav:
|
| 150 |
- Accueil: index.md
|
| 151 |
|
| 152 |
-
-
|
| 153 |
-
-
|
| 154 |
-
-
|
| 155 |
-
-
|
| 156 |
-
- Modèle ML: model.md
|
| 157 |
-
- Entraînement: training.md
|
| 158 |
-
- Déploiement: deployment.md
|
| 159 |
|
| 160 |
- Référence:
|
| 161 |
-
-
|
| 162 |
-
-
|
| 163 |
-
- Archive mission OC: etapes_archive.txt
|
| 164 |
|
| 149 |
nav:
|
| 150 |
- Accueil: index.md
|
| 151 |
|
| 152 |
+
- Documentation:
|
| 153 |
+
- API: api_documentation.md
|
| 154 |
+
- Architecture: architecture.md
|
| 155 |
+
- Déploiement: deployment_guide.md
|
| 156 |
|
| 157 |
- Référence:
|
| 158 |
+
- Base de données: database_setup.md
|
| 159 |
+
- Tests: tests_report.md
|
| 160 |
|
requirements_prod.txt
ADDED
|
@@ -0,0 +1,123 @@
|
| 1 |
+
aiofiles==24.1.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 2 |
+
alembic==1.17.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 3 |
+
annotated-doc==0.0.4 ; python_version >= "3.12" and python_version < "4.0"
|
| 4 |
+
annotated-types==0.7.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 5 |
+
anyio==4.12.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 6 |
+
audioop-lts==0.2.2 ; python_version >= "3.13" and python_version < "4.0"
|
| 7 |
+
blinker==1.9.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 8 |
+
brotli==1.2.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 9 |
+
cachetools==6.2.4 ; python_version >= "3.12" and python_version < "4.0"
|
| 10 |
+
certifi==2025.11.12 ; python_version >= "3.12" and python_version < "4.0"
|
| 11 |
+
cffi==2.0.0 ; python_version >= "3.12" and python_version < "4.0" and platform_python_implementation != "PyPy"
|
| 12 |
+
charset-normalizer==3.4.4 ; python_version >= "3.12" and python_version < "4.0"
|
| 13 |
+
click==8.3.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 14 |
+
cloudpickle==3.1.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 15 |
+
colorama==0.4.6 ; python_version >= "3.12" and python_version < "4.0" and (platform_system == "Windows" or sys_platform == "win32")
|
| 16 |
+
contourpy==1.3.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 17 |
+
cryptography==46.0.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 18 |
+
cycler==0.12.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 19 |
+
databricks-sdk==0.76.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 20 |
+
deprecated==1.3.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 21 |
+
docker==7.1.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 22 |
+
fastapi==0.127.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 23 |
+
ffmpy==1.0.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 24 |
+
filelock==3.20.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 25 |
+
flask-cors==6.0.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 26 |
+
flask==3.1.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 27 |
+
fonttools==4.61.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 28 |
+
fsspec==2025.12.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 29 |
+
gitdb==4.0.12 ; python_version >= "3.12" and python_version < "4.0"
|
| 30 |
+
gitpython==3.1.45 ; python_version >= "3.12" and python_version < "4.0"
|
| 31 |
+
google-auth==2.45.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 32 |
+
gradio-client==2.0.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 33 |
+
gradio==6.2.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 34 |
+
graphene==3.4.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 35 |
+
graphql-core==3.2.7 ; python_version >= "3.12" and python_version < "4.0"
|
| 36 |
+
graphql-relay==3.2.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 37 |
+
greenlet==3.3.0 ; python_version >= "3.12" and python_version < "4.0" and (platform_machine == "aarch64" or platform_machine == "ppc64le" or platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "AMD64" or platform_machine == "win32" or platform_machine == "WIN32")
|
| 38 |
+
groovy==0.1.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 39 |
+
gunicorn==23.0.0 ; python_version >= "3.12" and python_version < "4.0" and platform_system != "Windows"
|
| 40 |
+
h11==0.16.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 41 |
+
hf-xet==1.2.0 ; python_version >= "3.12" and python_version < "4.0" and (platform_machine == "x86_64" or platform_machine == "amd64" or platform_machine == "AMD64" or platform_machine == "arm64" or platform_machine == "aarch64")
|
| 42 |
+
httpcore==1.0.9 ; python_version >= "3.12" and python_version < "4.0"
|
| 43 |
+
httptools==0.7.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 44 |
+
httpx==0.28.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 45 |
+
huey==2.5.5 ; python_version >= "3.12" and python_version < "4.0"
|
| 46 |
+
huggingface-hub==1.2.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 47 |
+
idna==3.11 ; python_version >= "3.12" and python_version < "4.0"
|
| 48 |
+
imbalanced-learn==0.13.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 49 |
+
importlib-metadata==8.7.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 50 |
+
itsdangerous==2.2.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 51 |
+
jinja2==3.1.6 ; python_version >= "3.12" and python_version < "4.0"
|
| 52 |
+
joblib==1.5.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 53 |
+
kiwisolver==1.4.9 ; python_version >= "3.12" and python_version < "4.0"
|
| 54 |
+
limits==5.6.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 55 |
+
mako==1.3.10 ; python_version >= "3.12" and python_version < "4.0"
|
| 56 |
+
markdown-it-py==4.0.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 57 |
+
markupsafe==3.0.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 58 |
+
matplotlib==3.10.8 ; python_version >= "3.12" and python_version < "4.0"
|
| 59 |
+
mdurl==0.1.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 60 |
+
mlflow-skinny==3.8.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 61 |
+
mlflow-tracing==3.8.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 62 |
+
mlflow==3.8.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 63 |
+
numpy==2.4.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 64 |
+
nvidia-nccl-cu12==2.28.9 ; python_version >= "3.12" and python_version < "4.0" and platform_system == "Linux" and platform_machine != "aarch64"
|
| 65 |
+
opentelemetry-api==1.39.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 66 |
+
opentelemetry-proto==1.39.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 67 |
+
opentelemetry-sdk==1.39.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 68 |
+
opentelemetry-semantic-conventions==0.60b1 ; python_version >= "3.12" and python_version < "4.0"
|
| 69 |
+
orjson==3.11.5 ; python_version >= "3.12" and python_version < "4.0"
|
| 70 |
+
packaging==25.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 71 |
+
pandas==2.3.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 72 |
+
pillow==12.0.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 73 |
+
protobuf==6.33.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 74 |
+
psycopg2-binary==2.9.9 ; python_version >= "3.12" and python_version < "4.0"
|
| 75 |
+
pyarrow==22.0.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 76 |
+
pyasn1-modules==0.4.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 77 |
+
pyasn1==0.6.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 78 |
+
pycparser==2.23 ; python_version >= "3.12" and python_version < "4.0" and platform_python_implementation != "PyPy" and implementation_name != "PyPy"
|
| 79 |
+
pydantic-core==2.41.5 ; python_version >= "3.12" and python_version < "4.0"
|
| 80 |
+
pydantic==2.12.5 ; python_version >= "3.12" and python_version < "4.0"
|
| 81 |
+
pydub==0.25.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 82 |
+
pygments==2.19.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 83 |
+
pyparsing==3.3.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 84 |
+
python-dateutil==2.9.0.post0 ; python_version >= "3.12" and python_version < "4.0"
|
| 85 |
+
python-dotenv==1.0.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 86 |
+
python-json-logger==4.0.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 87 |
+
python-multipart==0.0.21 ; python_version >= "3.12" and python_version < "4.0"
|
| 88 |
+
pytz==2025.2 ; python_version >= "3.12" and python_version < "4.0"
|
| 89 |
+
pywin32==311 ; python_version >= "3.12" and python_version < "4.0" and sys_platform == "win32"
|
| 90 |
+
pyyaml==6.0.3 ; python_version >= "3.12" and python_version < "4.0"
|
| 91 |
+
requests==2.32.5 ; python_version >= "3.12" and python_version < "4.0"
|
| 92 |
+
rich==14.2.0 ; python_version >= "3.12" and python_version < "4.0"
|
| 93 |
+
rsa==4.9.1 ; python_version >= "3.12" and python_version < "4.0"
|
| 94 |
+
+safehttpx==0.1.7 ; python_version >= "3.12" and python_version < "4.0"
+scikit-learn==1.6.1 ; python_version >= "3.12" and python_version < "4.0"
+scipy==1.16.3 ; python_version >= "3.12" and python_version < "4.0"
+semantic-version==2.10.0 ; python_version >= "3.12" and python_version < "4.0"
+shellingham==1.5.4 ; python_version >= "3.12" and python_version < "4.0"
+six==1.17.0 ; python_version >= "3.12" and python_version < "4.0"
+sklearn-compat==0.1.5 ; python_version >= "3.12" and python_version < "4.0"
+slowapi==0.1.9 ; python_version >= "3.12" and python_version < "4.0"
+smmap==5.0.2 ; python_version >= "3.12" and python_version < "4.0"
+sqlalchemy==2.0.23 ; python_version >= "3.12" and python_version < "4.0"
+sqlparse==0.5.5 ; python_version >= "3.12" and python_version < "4.0"
+starlette==0.50.0 ; python_version >= "3.12" and python_version < "4.0"
+threadpoolctl==3.6.0 ; python_version >= "3.12" and python_version < "4.0"
+tomlkit==0.13.3 ; python_version >= "3.12" and python_version < "4.0"
+tqdm==4.67.1 ; python_version >= "3.12" and python_version < "4.0"
+typer-slim==0.21.0 ; python_version >= "3.12" and python_version < "4.0"
+typer==0.21.0 ; python_version >= "3.12" and python_version < "4.0"
+typing-extensions==4.15.0 ; python_version >= "3.12" and python_version < "4.0"
+typing-inspection==0.4.2 ; python_version >= "3.12" and python_version < "4.0"
+tzdata==2025.3 ; python_version >= "3.12" and python_version < "4.0"
+urllib3==2.6.2 ; python_version >= "3.12" and python_version < "4.0"
+uvicorn==0.32.1 ; python_version >= "3.12" and python_version < "4.0"
+uvloop==0.22.1 ; python_version >= "3.12" and python_version < "4.0" and sys_platform != "win32" and sys_platform != "cygwin" and platform_python_implementation != "PyPy"
+waitress==3.0.2 ; python_version >= "3.12" and python_version < "4.0" and platform_system == "Windows"
+watchfiles==1.1.1 ; python_version >= "3.12" and python_version < "4.0"
+websockets==15.0.1 ; python_version >= "3.12" and python_version < "4.0"
+werkzeug==3.1.4 ; python_version >= "3.12" and python_version < "4.0"
+wrapt==2.0.1 ; python_version >= "3.12" and python_version < "4.0"
+xgboost==2.1.4 ; python_version >= "3.12" and python_version < "4.0"
+zipp==3.23.0 ; python_version >= "3.12" and python_version < "4.0"
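Each pinned line above carries a PEP 508 environment marker after the `;`. A minimal sketch of splitting such a line into name, version, and marker (ignores extras and hashes; the helper name is illustrative, not part of the project):

```python
def parse_requirement(line: str) -> tuple[str, str, str]:
    """Split a pinned requirement line into (name, version, marker)."""
    spec, _, marker = line.partition(";")
    name, _, version = spec.strip().partition("==")
    return name.strip(), version.strip(), marker.strip()

print(parse_requirement('xgboost==2.1.4 ; python_version >= "3.12" and python_version < "4.0"'))
# → ('xgboost', '2.1.4', 'python_version >= "3.12" and python_version < "4.0"')
```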
src/Dockerfile
CHANGED

@@ -2,34 +2,32 @@ FROM python:3.12-slim
 
 WORKDIR /app
 
-# Installer Poetry
-RUN pip install poetry
-
 # Installer les dépendances système
 RUN apt-get update && apt-get install -y \
     curl \
     && rm -rf /var/lib/apt/lists/*
 
-# Copier …
-COPY …
+# Copier le fichier de dépendances (généré via poetry export)
+COPY requirements_prod.txt .
 
-# …
-
+# Installer les dépendances Python directement via pip
+# Cette approche est plus robuste que poetry install sur HF Spaces
+RUN pip install --no-cache-dir -r requirements_prod.txt
 
-# …
-RUN …
+# Vérifier que les dépendances critiques sont bien installées
+RUN python -c "import slowapi; import fastapi; import gradio; print('✓ All critical dependencies installed')"
 
 # Copier le code de l'application
 COPY app.py .
+COPY api.py .
 COPY db_models.py .
 COPY src/ ./src/
-COPY .env.example .env
 
 # Créer le dossier logs
 RUN mkdir -p logs
 
-# Exposer …
-EXPOSE 7860
+# Exposer les ports (7860 = Gradio, 8000 = FastAPI)
+EXPOSE 7860 8000
 
 # Variables d'environnement par défaut
 ENV DEBUG=false

@@ -41,5 +39,5 @@ ENV PYTHONUNBUFFERED=1
 HEALTHCHECK --interval=30s --timeout=10s --start-period=120s --retries=3 \
     CMD curl -f http://localhost:7860/ || exit 1
 
-# Commande de démarrage
+# Commande de démarrage
 CMD ["python", "app.py"]
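The build-time check above fails the image if a critical import is missing. The same idea can be expressed with `importlib.util.find_spec`, which reports unresolvable modules without raising; a sketch (the module list is shortened to stdlib names plus a fake so it runs anywhere — in the image it would be `["slowapi", "fastapi", "gradio"]`):

```python
import importlib.util

def missing_modules(names: list[str]) -> list[str]:
    """Return the names that cannot be resolved to an importable module."""
    return [n for n in names if importlib.util.find_spec(n) is None]

print(missing_modules(["json", "csv", "no_such_module"]))  # → ['no_such_module']
```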
src/gradio_ui.py
CHANGED

@@ -542,17 +542,26 @@ def create_gradio_interface():
         # Onglet Batch
         with gr.TabItem("📦 Batch"):
             gr.Markdown(
-                "### Prédictions batch à partir de 3 CSV (sondage, évaluation, SIRH)"
+                """### Prédictions batch à partir de 3 CSV (sondage, évaluation, SIRH)
+
+                ⚠️ **Ordre important :** Assurez-vous d'uploader les bons fichiers dans chaque champ.
+                """
             )
             with gr.Column():
                 sondage_file = gr.File(
-                    label="CSV Sondage"
+                    label="📋 CSV Sondage (ex: 02_predict_batch_sondage.csv)",
+                    file_types=[".csv"],
+                    type="filepath",
                 )
                 eval_file = gr.File(
-                    label="CSV Évaluation"
+                    label="📊 CSV Évaluation (ex: 02_predict_batch_eval.csv)",
+                    file_types=[".csv"],
+                    type="filepath",
                 )
                 sirh_file = gr.File(
-                    label="CSV SIRH"
+                    label="👤 CSV SIRH (ex: 02_predict_batch_sirh.csv)",
+                    file_types=[".csv"],
+                    type="filepath",
                 )
             batch_btn = gr.Button("📦 Prédire en batch", variant="primary")
             batch_result = gr.JSON(label="Résultat batch")
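Since the batch tab now warns about upload order, a lightweight guard could inspect each file's header before predicting and reject a swap early. A sketch under assumed column names (the discriminating column shown is hypothetical, borrowed from the API payload fields):

```python
import csv
import io

def has_expected_columns(csv_text: str, expected: set[str]) -> bool:
    """True if the CSV header row contains every expected column name."""
    header = next(csv.reader(io.StringIO(csv_text)), [])
    return expected <= set(header)

# Hypothetical discriminating column for the sondage file.
sample = "employee_id,satisfaction_employee_equipe\n1,3\n"
print(has_expected_columns(sample, {"satisfaction_employee_equipe"}))  # → True
```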
test_deployment.sh
ADDED

@@ -0,0 +1,166 @@
+#!/bin/bash
+# Script de test avant déploiement sur HuggingFace Spaces
+# Vérifie que FastAPI et Gradio fonctionnent correctement ensemble
+
+set -e
+
+echo "=========================================="
+echo "🧪 Test de l'application avant déploiement"
+echo "=========================================="
+
+# Couleurs
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# Fonction de nettoyage
+cleanup() {
+    echo -e "\n${YELLOW}🧹 Nettoyage...${NC}"
+    pkill -f "python app.py" 2>/dev/null || true
+    pkill -f "uvicorn api:app" 2>/dev/null || true
+    sleep 2
+}
+
+# Nettoyer avant de commencer
+cleanup
+
+# Trap pour nettoyer en cas d'interruption
+trap cleanup EXIT INT TERM
+
+echo -e "\n${YELLOW}1️⃣ Démarrage de l'application...${NC}"
+
+# Chercher l'environnement virtuel
+if [ -d ".venv" ]; then
+    PYTHON=".venv/bin/python"
+elif [ -d "venv" ]; then
+    PYTHON="venv/bin/python"
+else
+    PYTHON="python3"
+fi
+
+echo -e "${YELLOW}  Using Python: $PYTHON${NC}"
+$PYTHON app.py > /tmp/app_test.log 2>&1 &
+APP_PID=$!
+
+# Attendre le démarrage
+echo -e "${YELLOW}⏳ Attente du démarrage (20s)...${NC}"
+sleep 20
+
+# Vérifier que le processus tourne
+if ! ps -p $APP_PID > /dev/null; then
+    echo -e "${RED}❌ L'application a crashé au démarrage${NC}"
+    echo -e "\n${YELLOW}Logs:${NC}"
+    tail -30 /tmp/app_test.log
+    exit 1
+fi
+
+echo -e "${GREEN}✅ Application démarrée${NC}"
+
+# Test 1: Health check FastAPI
+echo -e "\n${YELLOW}2️⃣ Test health check FastAPI (port 8000)...${NC}"
+if curl -s -f http://localhost:8000/health > /dev/null; then
+    echo -e "${GREEN}✅ FastAPI répond${NC}"
+    curl -s http://localhost:8000/health | python3 -m json.tool 2>/dev/null || echo "{}"
+else
+    echo -e "${RED}❌ FastAPI ne répond pas${NC}"
+    tail -30 /tmp/app_test.log
+    exit 1
+fi
+
+# Test 2: Gradio home
+echo -e "\n${YELLOW}3️⃣ Test interface Gradio (port 7860)...${NC}"
+if curl -s -f http://localhost:7860/ > /dev/null; then
+    echo -e "${GREEN}✅ Gradio répond${NC}"
+else
+    echo -e "${RED}❌ Gradio ne répond pas${NC}"
+    tail -30 /tmp/app_test.log
+    exit 1
+fi
+
+# Test 3: Prédiction API
+echo -e "\n${YELLOW}4️⃣ Test prédiction via API FastAPI...${NC}"
+
+# Récupérer la clé API depuis .env ou utiliser la clé par défaut
+if [ -f ".env" ]; then
+    API_KEY=$(grep "^API_KEY=" .env | cut -d'=' -f2)
+else
+    API_KEY="dev-key-change-me-in-production"
+fi
+
+RESPONSE=$(curl -s -X POST http://localhost:8000/predict \
+    -H "Content-Type: application/json" \
+    -H "X-API-Key: $API_KEY" \
+    -d '{
+        "nombre_participation_pee": 0,
+        "nb_formations_suivies": 2,
+        "nombre_employee_sous_responsabilite": 1,
+        "distance_domicile_travail": 15,
+        "niveau_education": 3,
+        "domaine_etude": "Infra & Cloud",
+        "ayant_enfants": "Y",
+        "frequence_deplacement": "Occasionnel",
+        "annees_depuis_la_derniere_promotion": 2,
+        "annes_sous_responsable_actuel": 5,
+        "satisfaction_employee_environnement": 3,
+        "note_evaluation_precedente": 4,
+        "niveau_hierarchique_poste": 2,
+        "satisfaction_employee_nature_travail": 3,
+        "satisfaction_employee_equipe": 3,
+        "satisfaction_employee_equilibre_pro_perso": 2,
+        "note_evaluation_actuelle": 4,
+        "heure_supplementaires": "Non",
+        "augementation_salaire_precedente": 5.5,
+        "age": 35,
+        "genre": "M",
+        "revenu_mensuel": 4500.0,
+        "statut_marital": "Marié(e)",
+        "departement": "Commercial",
+        "poste": "Manager",
+        "nombre_experiences_precedentes": 3,
+        "nombre_heures_travailless": 80,
+        "annee_experience_totale": 10,
+        "annees_dans_l_entreprise": 5,
+        "annees_dans_le_poste_actuel": 2
+    }')
+
+if echo "$RESPONSE" | grep -q "prediction"; then
+    echo -e "${GREEN}✅ Prédiction réussie${NC}"
+    echo "$RESPONSE" | python3 -m json.tool 2>/dev/null || echo "$RESPONSE"
+else
+    echo -e "${RED}❌ Erreur lors de la prédiction${NC}"
+    echo "$RESPONSE"
+    exit 1
+fi
+
+# Test 4: Documentation Swagger
+echo -e "\n${YELLOW}5️⃣ Test documentation Swagger...${NC}"
+if curl -s -f http://localhost:8000/docs > /dev/null; then
+    echo -e "${GREEN}✅ Documentation accessible${NC}"
+else
+    echo -e "${RED}❌ Documentation non accessible${NC}"
+    exit 1
+fi
+
+# Résumé final
+echo -e "\n=========================================="
+echo -e "${GREEN}✅ TOUS LES TESTS SONT PASSÉS !${NC}"
+echo -e "=========================================="
+echo ""
+echo "L'application est prête pour le déploiement sur HuggingFace Spaces."
+echo ""
+echo "Prochaines étapes :"
+echo "1. Committez vos changements : git add . && git commit -m 'Deploy FastAPI + Gradio'"
+echo "2. Poussez sur GitHub : git push origin main"
+echo "3. HF Spaces se synchronisera automatiquement"
+echo "4. Vérifiez les logs sur https://huggingface.co/spaces/votre-username/votre-space/logs"
+echo ""
+echo "URLs attendues sur HF Spaces :"
+echo "  - Interface : https://votre-space.hf.space/"
+echo "  - API interne : http://localhost:8000 (non publique)"
+echo ""
+
+# Nettoyer
+cleanup
+
+exit 0
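The script above waits a fixed 20 seconds before probing the ports; on slow cold starts a polling loop with a deadline is usually more robust. A generic Python sketch of the idea (not part of the repository — `wait_until` is an illustrative helper):

```python
import time

def wait_until(check, timeout: float = 20.0, interval: float = 0.5) -> bool:
    """Poll check() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: the check succeeds on the third poll.
attempts = {"n": 0}
def ready() -> bool:
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(ready, timeout=5.0, interval=0.01))  # → True
```

In the shell script, the same pattern would replace `sleep 20` with a loop that curls the health endpoint until it answers or a deadline passes.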