# SENTINEL-CHECK API

**Ensemble-model fake review detection for small businesses.**

## Presidential AI Challenge Submission

THIS IS SOLELY A SUBMISSION TO THE PRESIDENTIAL AI CHALLENGE!

This is the final, deployed iteration of our submission. Any commits after the submission date are generally for maintenance only.

API documentation is available at the bottom of the "App" section of this space: look for the hyperlink button labeled **"use via API"**.
---

## Prediction Meanings

- `"genuine"` - the review is probably authentic
- `"fake"` - the review is probably fake or fraudulent
- `"uncertain"` - the models disagree (agreement below the 67% threshold)
---

## Model Architecture

The API uses an ensemble of three models:

1. **DistilBERT** (`distilbert-base-uncased`) - max 128 tokens
2. **RoBERTa** (`roberta-base`) - max 192 tokens
3. **BERT** (`bert-base-uncased`) - max 256 tokens
**Ensemble Strategy:** equal weighting (0.333 each)

**Decision Threshold:** 0.45 for fake classification

**Uncertainty Threshold:** 0.67 model agreement required
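The decision rule above can be sketched as follows. This is a minimal illustration, not the API's actual code: the function and variable names are mine, and treating agreement as the fraction of models voting the same way is one plausible reading of the thresholds.

```python
def ensemble_predict(probs_fake, fake_threshold=0.45, agreement_threshold=0.67):
    """Combine per-model fake-probabilities using the thresholds above.

    probs_fake: one probability per ensemble member
    (DistilBERT, RoBERTa, BERT).
    """
    # Equal weighting: each model contributes 1/3 to the ensemble score.
    score = sum(probs_fake) / len(probs_fake)

    # Each model "votes" fake when its own probability crosses the threshold.
    votes = [p >= fake_threshold for p in probs_fake]
    majority = max(votes.count(True), votes.count(False))
    agreement = majority / len(votes)

    # Note: with three models, a 2-of-3 split gives agreement of ~0.667,
    # which falls just under 0.67 -- under this reading, only unanimous
    # votes avoid the "uncertain" label.
    if agreement < agreement_threshold:
        return "uncertain", score
    return ("fake" if score >= fake_threshold else "genuine"), score
```

For example, `ensemble_predict([0.9, 0.8, 0.7])` yields `("fake", 0.8)`, while a 2-of-1 split such as `[0.9, 0.9, 0.1]` yields `"uncertain"`.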
Models are loaded from the HuggingFace Hub: https://huggingface.co/codingcoolfun9ed/sentinelcheck-models
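For illustration, loading the ensemble members with the Transformers library might look like the sketch below. Only the base checkpoints and token limits come from this README; the `subfolder=` layout inside the Hub repo is an assumption, so check the repo's file listing for the actual structure.

```python
HUB_REPO = "codingcoolfun9ed/sentinelcheck-models"

# Base checkpoint -> max sequence length, per the architecture list above.
MODEL_SPECS = {
    "distilbert-base-uncased": 128,
    "roberta-base": 192,
    "bert-base-uncased": 256,
}


def load_member(checkpoint):
    """Load one ensemble member's tokenizer and fine-tuned weights.

    The subfolder layout is hypothetical; adjust it to match the
    actual structure of the Hub repo.
    """
    # Imported lazily so the spec table above is usable without
    # transformers installed.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        HUB_REPO, subfolder=checkpoint  # hypothetical repo layout
    )
    model.eval()
    return tokenizer, model
```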
---

## Performance Metrics

- **Test Accuracy:** 78.26%
- **Validation Accuracy:** 84.64%
- **F1 Score:** 0.8531
- **Model Agreement:** 88.4%

The decision threshold was optimized on the validation set. Equal weighting outperformed an accuracy-based weighting approach.
---

## Tech Stack

- **Framework:** Flask + Flask-CORS
- **ML:** PyTorch, Transformers (HuggingFace)
- **Models:** BERT, RoBERTa, DistilBERT
- **Deployment:** Python 3
---

## Text Preprocessing

Reviews are automatically cleaned:

- URL removal
- HTML tag removal
- Punctuation normalization
- Whitespace normalization
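The four cleaning steps above can be sketched with standard-library regexes. The specific patterns are my illustration, not the API's exact implementation:

```python
import re


def clean_review(text: str) -> str:
    """Apply the four cleaning steps listed above (illustrative regexes)."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # URL removal
    text = re.sub(r"<[^>]+>", " ", text)                # HTML tag removal
    text = re.sub(r"([!?.,])\1+", r"\1", text)          # punctuation normalization
    text = re.sub(r"\s+", " ", text).strip()            # whitespace normalization
    return text
```

The backreference `\1` collapses runs of repeated punctuation (`!!!` becomes `!`), and the final pass squeezes the gaps left by the removals into single spaces.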
---

## Author

**codingcoolfun9ed** (Aaban R.) | Submission Date: January 20, 2026

For API documentation and the live demo, visit the app interface and/or click **"use via API"** at the bottom.