misakovhearst committed on
Commit 48c7fed · 1 Parent(s): 6112a6a

Initial deploy
.gitignore ADDED
@@ -0,0 +1,81 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # Virtual environments
+ venv/
+ env/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+ .DS_Store
+
+ # Database
+ *.db
+ *.sqlite
+ *.sqlite3
+ .db-journal
+
+ # Uploads and results
+ backend/uploads/
+ backend/results/
+ backend/.model_cache/
+
+ # Environment
+ .env
+ .env.local
+ .env.*.local
+
+ # Flask
+ instance/
+ .webassets-cache
+
+ # Model cache
+ .model_cache/
+ models/
+ *.bin
+ *.pt
+ *.pth
+
+ # Logs
+ *.log
+ logs/
+
+ # TMP
+ tmp/
+ .tmp/
+
+ # Local data (not for repo/HF Space)
+ submissions.jsonl
+ slop_detect.db
+
+ # HuggingFace model/hub cache
+ .cache/
Dockerfile ADDED
@@ -0,0 +1,36 @@
+ FROM python:3.11-slim
+
+ # System deps needed by pypdf, python-docx, torch
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     gcc \
+     libgomp1 \
+     && rm -rf /var/lib/apt/lists/*
+
+ WORKDIR /app
+
+ # Install Python dependencies (cached as a separate layer)
+ COPY backend/requirements.txt ./requirements.txt
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy source
+ COPY backend/ ./backend/
+ COPY frontend/ ./frontend/
+
+ # Runtime directories
+ RUN mkdir -p uploads results .cache
+
+ # HF Spaces requires port 7860 and runs containers as uid 1000
+ RUN useradd -m -u 1000 user && chown -R user:user /app
+ USER user
+
+ # PYTHONPATH=/app lets `from backend.X import ...` resolve correctly
+ ENV PORT=7860 \
+     HOST=0.0.0.0 \
+     DEBUG=False \
+     PYTHONPATH=/app \
+     HF_HOME=/app/.cache/huggingface \
+     MODEL_CACHE_DIR=/app/.cache/models
+
+ EXPOSE 7860
+
+ CMD ["python", "backend/main.py"]
README.md CHANGED
@@ -1,12 +1,503 @@
  ---
- title: Screencomply Documents
- emoji: 🔥
- colorFrom: red
- colorTo: pink
  sdk: docker
  pinned: false
- license: mit
- short_description: Document AI Detector
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: AI Slop Detector
+ emoji: 🔍
+ colorFrom: purple
+ colorTo: blue
  sdk: docker
+ app_port: 7860
  pinned: false
  ---

+ # 🔍 AI Slop Detector
+
+ A comprehensive Python API and web UI for detecting AI-generated content in PDFs, DOCX files, and raw text. It combines several state-of-the-art detection methods into an ensemble.
+
+ ## Features
+
+ ✨ **Multi-Detector Ensemble**
+ - **RoBERTa Classifier** - Fine-tuned RoBERTa model for AI text detection
+ - **Perplexity Analysis** - Detects statistical anomalies and repetitive patterns
+ - **LLMDet** - Entropy and log-probability based detection
+ - **HuggingFace Classifier** - Generic transformer-based classification
+ - **OUTFOX Statistical** - Word/sentence length and vocabulary analysis
+
+ ✨ **Easy Feature Flags**
+ - Enable/disable each detector with a single config change
+ - Adjust detector weights for ensemble averaging
+ - Environment variable overrides
+
+ ✨ **Multiple File Formats**
+ - PDF documents
+ - DOCX/DOC files
+ - Plain text files
+ - Raw text input
+
+ ✨ **Persistent Storage**
+ - SQLite database (default, configurable)
+ - Upload history with timestamps
+ - Detailed result tracking and statistics
+
+ ✨ **Web UI**
+ - Beautiful, responsive interface
+ - Drag-and-drop file upload
+ - Real-time analysis results
+ - History and statistics views
+
+ ✨ **REST API**
+ - Analyze text and files via HTTP
+ - Get historical results
+ - Query statistics
+ - Full result management
+
+ ## Installation
+
+ ### Prerequisites
+ - Python 3.8+
+ - pip or conda
+
+ ### Setup
+
+ 1. **Clone/Navigate to the project:**
+ ```bash
+ cd slop-detect
+ ```
+
+ 2. **Create a Python virtual environment:**
+ ```bash
+ python -m venv venv
+
+ # On Windows:
+ venv\Scripts\activate
+
+ # On macOS/Linux:
+ source venv/bin/activate
+ ```
+
+ 3. **Install dependencies:**
+ ```bash
+ pip install -r backend/requirements.txt
+ ```
+
+ ## Configuration
+
+ ### Enable/Disable Detectors
+
+ Edit `backend/config/detectors_config.py`:
+
+ ```python
+ ENABLED_DETECTORS: Dict[str, bool] = {
+     "roberta": True,        # Enable RoBERTa
+     "perplexity": True,     # Enable Perplexity
+     "llmdet": True,         # Enable LLMDet
+     "hf_classifier": True,  # Enable HF Classifier
+     "outfox": False,        # Disable OUTFOX
+ }
+ ```
+
+ ### Set Detector Weights
+
+ ```python
+ DETECTOR_WEIGHTS: Dict[str, float] = {
+     "roberta": 0.30,        # 30% weight
+     "perplexity": 0.25,     # 25% weight
+     "llmdet": 0.25,         # 25% weight
+     "hf_classifier": 0.20,  # 20% weight
+     "outfox": 0.00,         # 0% weight (not used)
+ }
+ ```
+
+ ### Environment-based Configuration
+
+ You can also use environment variables to override the config:
+
+ ```bash
+ # Enable/disable detectors
+ export ENABLE_ROBERTA=true
+ export ENABLE_PERPLEXITY=true
+ export ENABLE_LLMDET=true
+ export ENABLE_HF_CLASSIFIER=true
+ export ENABLE_OUTFOX=false
+
+ # Database
+ export DATABASE_URL=sqlite:///slop_detect.db
+ export UPLOAD_FOLDER=./uploads
+
+ # Flask
+ export HOST=0.0.0.0
+ export PORT=5000
+ export DEBUG=False
+ ```
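For illustration, boolean flags of this shape can be parsed with a small helper. This is a sketch; `env_bool` is not a function from this codebase:

```python
import os

def env_bool(name: str, default: bool) -> bool:
    """Read a boolean feature flag from the environment, e.g. ENABLE_ROBERTA=true."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}

# Apply overrides on top of the defaults from detectors_config.py
enabled = {
    "roberta": env_bool("ENABLE_ROBERTA", True),
    "outfox": env_bool("ENABLE_OUTFOX", False),
}
```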
+
+ ## Running the Application
+
+ ### Start the Flask Server
+
+ ```bash
+ cd backend
+ python main.py
+ ```
+
+ The API will be available at `http://localhost:5000`.
+
+ ### API Endpoints
+
+ #### Health Check
+ ```
+ GET /api/health
+ ```
+
+ #### Analyze Text
+ ```
+ POST /api/analyze/text
+ Content-Type: application/json
+
+ {
+   "text": "Your text here...",
+   "filename": "optional_name.txt",
+   "user_id": "optional_user_id"
+ }
+
+ Response:
+ {
+   "status": "success",
+   "result_id": 1,
+   "overall_ai_score": 0.78,
+   "overall_ai_score_percentage": "78.0%",
+   "overall_confidence": "high",
+   "status_label": "Likely AI",
+   "detector_results": {
+     "roberta": {
+       "detector_name": "roberta",
+       "score": 0.85,
+       "confidence": "high",
+       "explanation": "Very strong indicators of AI-generated text..."
+     },
+     ...
+   },
+   "enabled_detectors": ["roberta", "perplexity", "llmdet", "hf_classifier"],
+   "text_stats": {
+     "character_count": 1500,
+     "word_count": 250,
+     "sentence_count": 15,
+     "average_word_length": 4.8
+   }
+ }
+ ```
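A minimal client sketch for this endpoint, using only the standard library (assumes the server from "Start the Flask Server" is running on `localhost:5000`):

```python
import json

API_URL = "http://localhost:5000/api/analyze/text"

def build_payload(text: str, filename: str = "untitled.txt") -> str:
    """Serialize a request body matching the endpoint's expected JSON."""
    return json.dumps({"text": text, "filename": filename})

def summarize(response: dict) -> str:
    """Condense an analysis response into a one-line summary."""
    return f"{response['status_label']}: {response['overall_ai_score_percentage']}"

if __name__ == "__main__":
    from urllib.request import Request, urlopen
    req = Request(
        API_URL,
        data=build_payload("Some text to check...").encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(summarize(json.load(resp)))
```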
+
+ #### Analyze File
+ ```
+ POST /api/analyze/file
+ FormData:
+ - file: <file>
+ - user_id: <optional_user_id>
+
+ Response: (same as analyze/text)
+ ```
+
+ #### Get All Results
+ ```
+ GET /api/results?page=1&limit=10&sort=recent
+
+ Response:
+ {
+   "status": "success",
+   "page": 1,
+   "limit": 10,
+   "total_count": 42,
+   "results": [...]
+ }
+ ```
+
+ #### Get Specific Result
+ ```
+ GET /api/results/{result_id}
+
+ Response:
+ {
+   "status": "success",
+   "result": {
+     "id": 1,
+     "filename": "document.pdf",
+     "overall_ai_score": 0.78,
+     "overall_ai_score_percentage": "78.0%",
+     ...
+   }
+ }
+ ```
+
+ #### Delete Result
+ ```
+ DELETE /api/results/{result_id}
+ ```
+
+ #### Update Result
+ ```
+ PUT /api/results/{result_id}
+ Content-Type: application/json
+
+ {
+   "notes": "Manual review: likely AI",
+   "is_flagged": true
+ }
+ ```
+
+ #### Get Statistics
+ ```
+ GET /api/statistics/summary
+
+ Response:
+ {
+   "status": "success",
+   "summary": {
+     "total_analyses": 42,
+     "average_ai_score": 0.65,
+     "total_text_analyzed": 125000,
+     "likely_human": 15,
+     "suspicious": 12,
+     "likely_ai": 15
+   }
+ }
+ ```
+
+ #### Get Configuration
+ ```
+ GET /api/config
+
+ Response:
+ {
+   "status": "success",
+   "config": {
+     "enabled_detectors": [
+       "roberta", "perplexity", "llmdet", "hf_classifier"
+     ],
+     "aggregation_method": "weighted_average",
+     "detector_weights": {...},
+     "detector_info": {...}
+   }
+ }
+ ```
+
+ ## Web Interface
+
+ Open `http://localhost:5000` in your browser to access the web UI.
+
+ ### Features:
+ - **Upload Section** - Drag-and-drop or click to upload files
+ - **Text Analysis** - Paste text directly
+ - **Results Dashboard** - View detailed analysis results
+ - **History Tab** - See all previous analyses
+ - **Statistics Tab** - View aggregate statistics
+
+ ## How It Works
+
+ ### Detection Process
+
+ 1. **File Parsing** - Extracts text from PDF/DOCX/TXT files
+ 2. **Text Cleaning** - Normalizes whitespace and formatting
+ 3. **Detector Ensemble** - Runs enabled detectors in parallel
+ 4. **Score Aggregation** - Combines detector scores using weighted average, max, or voting
+ 5. **Result Storage** - Saves to database with full metadata
+ 6. **Response** - Returns overall score and per-detector breakdown
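Steps 2-4 can be sketched in a few lines. This is illustrative only, with a toy detector standing in for the real models; none of these names come from the project's actual API:

```python
class WordLengthDetector:
    """Toy stand-in for a real detector: scores by average word length."""
    name = "toy"

    def detect(self, text: str) -> float:
        words = text.split()
        avg = sum(map(len, words)) / max(len(words), 1)
        return min(avg / 10.0, 1.0)  # longer words -> higher "AI" score

def analyze(text: str, detectors, weights) -> dict:
    """Illustrative pipeline: clean -> run detectors -> weighted aggregate."""
    text = " ".join(text.split())                         # step 2: normalize whitespace
    scores = {d.name: d.detect(text) for d in detectors}  # step 3: run each detector
    overall = sum(min(max(s, 0.0), 1.0) * weights.get(n, 0.0)
                  for n, s in scores.items())             # step 4: weighted average
    return {"overall_ai_score": min(overall, 1.0), "detector_results": scores}
```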
+
+ ### Detector Details
+
+ #### RoBERTa Detector
+ - **Model**: roberta-base-openai-detector
+ - **Type**: Transformer-based classification
+ - **Output**: 0-1 probability score
+ - **Speed**: Medium
+
+ #### Perplexity Detector
+ - **Model**: GPT-2
+ - **Method**: Analyzes token probability distributions
+ - **Detects**: Repetitive patterns, unusual word choices
+ - **Output**: 0-1 score based on perplexity, repetition, AI phrases
+
+ #### LLMDet Detector
+ - **Model**: BERT
+ - **Method**: Entropy and log-probability analysis
+ - **Detects**: Predictable sequences, unusual statistical patterns
+ - **Output**: 0-1 score from combined metrics
+
+ #### HF Classifier
+ - **Model**: Configurable (default: BERT)
+ - **Type**: Generic sequence classification
+ - **Output**: 0-1 probability score
+
+ #### OUTFOX Statistical
+ - **Type**: Statistical signature analysis
+ - **Detects**: Unusual word length distributions, sentence structure patterns, vocabulary diversity
+ - **Output**: 0-1 score from multiple statistical metrics
+
+ ### Scoring
+
+ Default aggregation: **Weighted Average**
+ ```
+ Overall Score = Σ (normalized_detector_score × weight)
+ ```
+
+ Each detector's score is normalized to the 0-1 range, then multiplied by its configured weight. The sum is clamped to [0, 1].
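A minimal standalone sketch of this weighted-average rule (not the project's `ensemble.py`):

```python
def aggregate(scores: dict, weights: dict) -> float:
    """Overall Score = Sum(normalized_detector_score * weight), clamped to [0, 1]."""
    total = 0.0
    for name, score in scores.items():
        normalized = min(max(score, 0.0), 1.0)  # clamp each detector score to 0-1
        total += normalized * weights.get(name, 0.0)
    return min(max(total, 0.0), 1.0)
```

With the default weights from the configuration section, scores of 0.85 / 0.6 / 0.7 / 0.5 combine to 0.68.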
+
+ ### Confidence Levels
+
+ - **Very Low** (< 20%) - Almost certainly human-written
+ - **Low** (20-40%) - Probably human-written
+ - **Medium** (40-60%) - Uncertain
+ - **High** (60-80%) - Probably AI-generated
+ - **Very High** (> 80%) - Almost certainly AI-generated
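These bands map to a simple threshold function. A sketch (which side of a boundary a value falls on is an assumption here, not taken from the code):

```python
def confidence_label(score: float) -> str:
    """Map an overall 0-1 score to the confidence bands listed above."""
    percent = score * 100
    if percent < 20:
        return "Very Low"
    elif percent < 40:
        return "Low"
    elif percent < 60:
        return "Medium"
    elif percent < 80:
        return "High"
    return "Very High"
```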
+
+ ## Project Structure
+
+ ```
+ slop-detect/
+ ├── backend/
+ │   ├── config/
+ │   │   ├── settings.py           # App settings
+ │   │   └── detectors_config.py   # Detector configuration (FEATURE FLAGS HERE)
+ │   ├── detectors/
+ │   │   ├── base.py               # Base detector class
+ │   │   ├── roberta.py            # RoBERTa detector
+ │   │   ├── perplexity.py         # Perplexity detector
+ │   │   ├── llmdet.py             # LLMDet detector
+ │   │   ├── hf_classifier.py      # HF classifier
+ │   │   ├── outfox.py             # OUTFOX detector
+ │   │   └── ensemble.py           # Ensemble manager
+ │   ├── database/
+ │   │   ├── models.py             # SQLAlchemy models
+ │   │   └── db.py                 # Database manager
+ │   ├── api/
+ │   │   ├── routes.py             # Flask API routes
+ │   │   └── models.py             # Pydantic request/response models
+ │   ├── utils/
+ │   │   ├── file_parser.py        # PDF/DOCX/TXT parsing
+ │   │   └── highlighter.py        # Text highlighting utilities
+ │   ├── main.py                   # Flask app entry point
+ │   └── requirements.txt          # Python dependencies
+ ├── frontend/
+ │   └── index.html                # Web UI (HTML + CSS + JS)
+ └── README.md                     # This file
+ ```
+
+ ## Customization
+
+ ### Change Detector Weights
+
+ In `backend/config/detectors_config.py`:
+
+ ```python
+ DETECTOR_WEIGHTS: Dict[str, float] = {
+     "roberta": 0.40,  # Increase weight
+     "perplexity": 0.30,
+     "llmdet": 0.20,
+     "hf_classifier": 0.10,
+ }
+ ```
+
+ ### Change Aggregation Method
+
+ In `backend/config/detectors_config.py`:
+
+ ```python
+ AGGREGATION_METHOD = "max"  # Options: weighted_average, max, voting
+ ```
+
+ ### Use Different Models
+
+ In `backend/config/detectors_config.py`:
+
+ ```python
+ ROBERTA_MODEL = "distilbert-base-uncased"
+ PERPLEXITY_MODEL = "gpt2-medium"
+ HF_CLASSIFIER_MODEL = "your-custom-model"
+ ```
+
+ ### Add Custom Detectors
+
+ 1. Create a new file in `backend/detectors/`
+ 2. Inherit from `BaseDetector`
+ 3. Implement the `detect()` method
+ 4. Add to `ensemble.py` initialization
+ 5. Add to `ENABLED_DETECTORS` in config
+
+ Example:
+
+ ```python
+ from detectors.base import BaseDetector, DetectorResult
+
+ class CustomDetector(BaseDetector):
+     def __init__(self):
+         super().__init__(name="custom")
+
+     def detect(self, text: str) -> DetectorResult:
+         # Your detection logic here
+         score = calculate_ai_score(text)
+         return DetectorResult(
+             detector_name=self.name,
+             score=score,
+             explanation="Custom detection result"
+         )
+ ```
+
+ ## Performance Tips
+
+ 1. **Model Caching** - Models are lazy-loaded and cached in memory
+ 2. **Parallel Detection** - Detectors can run in parallel (future enhancement)
+ 3. **Batch Processing** - Configure batch size for GPU processing
+ 4. **Disable Unused Detectors** - Reduce load by disabling detectors you don't need
+
+ ## Troubleshooting
+
+ ### Slow First Run
+ - Models need to be downloaded from Hugging Face Hub
+ - Subsequent runs will use cached models
+ - First model download can take 1-5 minutes
+
+ ### Out of Memory
+ - Reduce batch size in config
+ - Disable memory-intensive detectors
+ - Run on a machine with more RAM
+
+ ### Model Not Found
+ ```
+ transformers.utils.RepositoryNotFoundError: Model not found
+ ```
+ - Model name is incorrect in config
+ - Check Hugging Face Hub for the correct model name
+
+ ### Database Locked
+ ```
+ sqlite3.OperationalError: database is locked
+ ```
+ - Close other connections to the database
+ - Ensure only one Flask instance is running
+ - Delete the `.db-journal` file if present
+
+ ## Future Enhancements
+
+ - [ ] Parallel detector execution
+ - [ ] GPU support optimization
+ - [ ] Custom model fine-tuning
+ - [ ] Batch analysis API
+ - [ ] User authentication/authorization
+ - [ ] Document highlighting with suspicious sections
+ - [ ] Advanced filtering and search
+ - [ ] Export results to PDF/Excel
+ - [ ] API rate limiting
+ - [ ] Webhook notifications
+
+ ## License
+
+ MIT License - feel free to use and modify.
+
+ ## References
+
+ - [LLMDet](https://github.com/TrustedLLM/LLMDet)
+ - [RAID](https://github.com/liamdugan/raid)
+ - [OUTFOX](https://github.com/ryuryukke/OUTFOX)
+ - [AIGTD Survey](https://github.com/Nicozwy/AIGTD-Survey)
+ - [Plagiarism Detection](https://github.com/Kyle6012/plagiarism-detection)
+ - [Hugging Face Transformers](https://huggingface.co/transformers/)
+
+ ## Support
+
+ For issues, questions, or suggestions, please open an issue on the project repository.
backend/__init__.py ADDED
@@ -0,0 +1,6 @@
+ """
+ AI Slop Detection Backend
+ """
+
+ __version__ = "1.0.0"
+ __author__ = "AI Slop Detection Team"
backend/api/__init__.py ADDED
@@ -0,0 +1,3 @@
+ from .routes import api
+
+ __all__ = ["api"]
backend/api/models.py ADDED
@@ -0,0 +1,73 @@
+ from pydantic import BaseModel
+ from typing import Optional, Dict, List, Any
+ from datetime import datetime
+
+ # Request Models
+
+ class AnalyzeTextRequest(BaseModel):
+     """Request model for analyzing raw text"""
+     text: str
+     user_id: Optional[str] = None
+     filename: Optional[str] = "untitled.txt"
+
+ class GetResultRequest(BaseModel):
+     """Request model for getting a specific result"""
+     result_id: int
+
+ class DeleteResultRequest(BaseModel):
+     """Request model for deleting a result"""
+     result_id: int
+
+ class UpdateResultRequest(BaseModel):
+     """Request model for updating a result"""
+     result_id: int
+     notes: Optional[str] = None
+     is_flagged: Optional[bool] = None
+
+ # Response Models
+
+ class DetectorResultResponse(BaseModel):
+     """Response model for single detector result"""
+     detector_name: str
+     score: float
+     confidence: Optional[str] = None
+     explanation: Optional[str] = None
+     suspicious_spans: List[Dict[str, Any]] = []
+     metadata: Dict[str, Any] = {}
+
+ class AnalysisResponse(BaseModel):
+     """Response model for analysis result"""
+     status: str  # success, error
+     message: str
+     result_id: Optional[int] = None
+     file_id: Optional[str] = None
+     overall_ai_score: Optional[float] = None
+     overall_ai_score_percentage: Optional[str] = None
+     overall_confidence: Optional[str] = None
+     status_label: Optional[str] = None
+     detector_results: Optional[Dict[str, DetectorResultResponse]] = None
+     enabled_detectors: Optional[List[str]] = None
+     text_stats: Optional[Dict[str, Any]] = None
+     error_details: Optional[str] = None
+
+ class ResultListResponse(BaseModel):
+     """Response model for list of results"""
+     status: str
+     message: str
+     total_count: int
+     results: List[Dict[str, Any]] = []
+     page: int = 1
+     page_size: int = 10
+
+ class SummaryResponse(BaseModel):
+     """Response model for summary statistics"""
+     status: str
+     message: str
+     summary: Dict[str, Any] = {}
+
+ class StatusResponse(BaseModel):
+     """Response model for system status"""
+     status: str
+     message: str
+     detector_status: Dict[str, Any] = {}
+     config_summary: Optional[str] = None
backend/api/routes.py ADDED
@@ -0,0 +1,528 @@
1
+ from flask import Blueprint, request, jsonify, send_file
2
+ from werkzeug.utils import secure_filename
3
+ from typing import Tuple
4
+ import os
5
+ import uuid
6
+ from datetime import datetime
7
+
8
+ from .models import (
9
+ AnalyzeTextRequest, AnalysisResponse, ResultListResponse,
10
+ SummaryResponse, StatusResponse
11
+ )
12
+ from backend.detectors import DetectorEnsemble
13
+ from backend.utils import FileParser, TextCleaner
14
+ from backend.database import DatabaseManager, Session
15
+ from backend.config.detectors_config import DetectorsConfig
16
+ from backend.config.settings import settings
17
+ from backend.utils.submission_logger import log_submission
18
+
19
+ # Create blueprint
20
+ api = Blueprint('api', __name__, url_prefix='/api')
21
+
22
+ # Global detector ensemble
23
+ _detector_ensemble = None
24
+
25
+ def get_detector_ensemble():
26
+ """Get or initialize detector ensemble"""
27
+ global _detector_ensemble
28
+ if _detector_ensemble is None:
29
+ _detector_ensemble = DetectorEnsemble()
30
+ return _detector_ensemble
31
+
32
+ # ============================================================================
33
+ # Health & Status Endpoints
34
+ # ============================================================================
35
+
36
+ @api.route('/health', methods=['GET'])
37
+ def health_check():
38
+ """Health check endpoint"""
39
+ return jsonify({
40
+ "status": "healthy",
41
+ "timestamp": datetime.utcnow().isoformat()
42
+ }), 200
43
+
44
+ @api.route('/status', methods=['GET'])
45
+ def get_status():
46
+ """Get system and detector status"""
47
+ try:
48
+ ensemble = get_detector_ensemble()
49
+ status_info = ensemble.get_status()
50
+
51
+ return jsonify({
52
+ "status": "ok",
53
+ "detectors": status_info,
54
+ "config": {
55
+ "database_url": settings.DATABASE_URL[:20] + "***",
56
+ "upload_folder": settings.UPLOAD_FOLDER,
57
+ }
58
+ }), 200
59
+
60
+ except Exception as e:
61
+ return jsonify({
62
+ "status": "error",
63
+ "message": str(e)
64
+ }), 500
65
+
66
+ # ============================================================================
67
+ # Analysis Endpoints
68
+ # ============================================================================
69
+
70
+ @api.route('/analyze/text', methods=['POST'])
71
+ def analyze_text():
72
+ """
73
+ Analyze raw text for AI-generated content.
74
+
75
+ POST /api/analyze/text
76
+ {
77
+ "text": "Your text here...",
78
+ "filename": "optional_name.txt"
79
+ }
80
+ """
81
+ try:
82
+ data = request.get_json()
83
+
84
+ if not data or 'text' not in data:
85
+ return jsonify({
86
+ "status": "error",
87
+ "message": "Missing 'text' field in request"
88
+ }), 400
89
+
90
+ text = data.get('text', '').strip()
91
+ filename = data.get('filename', 'untitled.txt')
92
+ user_id = data.get('user_id')
93
+
94
+ # Validate text
95
+ if not text:
96
+ return jsonify({
97
+ "status": "error",
98
+ "message": "Text cannot be empty"
99
+ }), 400
100
+
101
+ if len(text) < 10:
102
+ return jsonify({
103
+ "status": "error",
104
+ "message": "Text must be at least 10 characters"
105
+ }), 400
106
+
107
+ # Clean text
108
+ text = TextCleaner.clean(text)
109
+
110
+ # Get text stats
111
+ text_stats = TextCleaner.get_text_stats(text)
112
+
113
+ # Run detection
114
+ ensemble = get_detector_ensemble()
115
+ ensemble_result = ensemble.detect(text)
116
+
117
+ # Save to database
118
+ detector_results_dict = ensemble_result.to_dict()
119
+
120
+ db_result = DatabaseManager.save_analysis_result(
121
+ filename=filename,
122
+ file_format="raw",
123
+ file_size=len(text),
124
+ text_length=len(text),
125
+ word_count=text_stats['word_count'],
126
+ overall_ai_score=ensemble_result.overall_score,
127
+ overall_confidence=ensemble_result.overall_confidence,
128
+ detector_results=detector_results_dict,
129
+ text_preview=text[:500],
130
+ user_id=user_id,
131
+ )
132
+
133
+ if not db_result:
134
+ return jsonify({
135
+ "status": "error",
136
+ "message": "Failed to save analysis result"
137
+ }), 500
138
+
139
+ status_label = db_result.get_status_label(ensemble_result.overall_score)
140
+ log_submission(
141
+ filename=filename,
142
+ overall_ai_score=ensemble_result.overall_score,
143
+ overall_confidence=ensemble_result.overall_confidence,
144
+ status_label=status_label,
145
+ detector_results=detector_results_dict["detector_results"],
146
+ text_stats=text_stats,
147
+ text_preview=text[:200],
148
+ )
149
+
150
+ return jsonify({
151
+ "status": "success",
152
+ "message": "Text analyzed successfully",
153
+ "result_id": db_result.id,
154
+ "file_id": db_result.file_id,
155
+ "overall_ai_score": round(ensemble_result.overall_score, 4),
156
+ "overall_ai_score_percentage": f"{ensemble_result.overall_score * 100:.1f}%",
157
+ "overall_confidence": ensemble_result.overall_confidence,
158
+ "status_label": status_label,
159
+ "detector_results": detector_results_dict["detector_results"],
160
+ "enabled_detectors": ensemble_result.enabled_detectors,
161
+ "text_stats": text_stats,
162
+ }), 200
163
+
164
+ except Exception as e:
165
+ import traceback
166
+ traceback.print_exc()
167
+ return jsonify({
168
+ "status": "error",
169
+ "message": "Error during analysis",
170
+ "error_details": str(e)
171
+ }), 500
172
+
173
+ @api.route('/analyze/file', methods=['POST'])
174
+ def analyze_file():
175
+ """
176
+ Upload and analyze a file (PDF, DOCX, or TXT).
177
+
178
+ POST /api/analyze/file
179
+ FormData:
180
+ - file: (required) File to upload
181
+ """
182
+ try:
183
+ # Check if file is in request
184
+ if 'file' not in request.files:
185
+ return jsonify({
186
+ "status": "error",
187
+ "message": "No file provided"
188
+ }), 400
189
+
190
+ file = request.files['file']
191
+ user_id = request.form.get('user_id')
192
+
193
+ if not file or not file.filename:
194
+ return jsonify({
195
+ "status": "error",
196
+ "message": "Invalid file"
197
+ }), 400
198
+
199
+ # Validate file format
200
+ filename = secure_filename(file.filename)
201
+ file_ext = os.path.splitext(filename)[1].lower()
202
+
203
+ if file_ext not in {'.pdf', '.docx', '.doc', '.txt'}:
204
+ return jsonify({
205
+ "status": "error",
206
+ "message": f"Unsupported file format: {file_ext}. Supported: PDF, DOCX, TXT"
207
+ }), 400
208
+
209
+ # Save uploaded file temporarily
210
+ temp_filename = f"{uuid.uuid4()}{file_ext}"
211
+ temp_filepath = os.path.join(settings.UPLOAD_FOLDER, temp_filename)
212
+ file.save(temp_filepath)
213
+
214
+ file_size = os.path.getsize(temp_filepath)
215
+
216
+ # Validate file size
217
+ if file_size > settings.MAX_FILE_SIZE:
218
+ os.remove(temp_filepath)
219
+ return jsonify({
220
+ "status": "error",
221
+ "message": f"File too large. Max size: {settings.MAX_FILE_SIZE / (1024*1024):.1f}MB"
222
+ }), 400
223
+
224
+ # Parse file
225
+ text, file_format, parse_error = FileParser.parse_file(temp_filepath)
226
+
227
+ if parse_error:
228
+ os.remove(temp_filepath)
229
+ return jsonify({
230
+ "status": "error",
231
+ "message": f"Error parsing file: {str(parse_error)}"
232
+ }), 400
233
+
234
+ # Validate extracted text
235
+ if not text or len(text) < 10:
236
+ os.remove(temp_filepath)
237
+ return jsonify({
238
+ "status": "error",
239
+ "message": "Could not extract sufficient text from file"
240
+ }), 400
241
+
242
+ # Clean text
243
+ text = TextCleaner.clean(text)
244
+ text_stats = TextCleaner.get_text_stats(text)
245
+
246
+ # Run detection
247
+ ensemble = get_detector_ensemble()
248
+ ensemble_result = ensemble.detect(text)
249
+
250
+ # Save to database
251
+ detector_results_dict = ensemble_result.to_dict()
252
+
253
+ db_result = DatabaseManager.save_analysis_result(
254
+ filename=filename,
255
+ file_format=file_format,
256
+ file_size=file_size,
257
+ text_length=len(text),
258
+ word_count=text_stats['word_count'],
259
+ overall_ai_score=ensemble_result.overall_score,
260
+ overall_confidence=ensemble_result.overall_confidence,
261
+ detector_results=detector_results_dict,
262
+ text_preview=text[:500],
263
+ user_id=user_id,
264
+         )
+
+         # Clean up temp file
+         os.remove(temp_filepath)
+
+         if not db_result:
+             return jsonify({
+                 "status": "error",
+                 "message": "Failed to save analysis result"
+             }), 500
+
+         status_label = db_result.get_status_label(ensemble_result.overall_score)
+         log_submission(
+             filename=filename,
+             overall_ai_score=ensemble_result.overall_score,
+             overall_confidence=ensemble_result.overall_confidence,
+             status_label=status_label,
+             detector_results=detector_results_dict["detector_results"],
+             text_stats=text_stats,
+             text_preview=text[:200],
+         )
+
+         return jsonify({
+             "status": "success",
+             "message": "File analyzed successfully",
+             "result_id": db_result.id,
+             "file_id": db_result.file_id,
+             "filename": filename,
+             "overall_ai_score": round(ensemble_result.overall_score, 4),
+             "overall_ai_score_percentage": f"{ensemble_result.overall_score * 100:.1f}%",
+             "overall_confidence": ensemble_result.overall_confidence,
+             "status_label": status_label,
+             "detector_results": detector_results_dict["detector_results"],
+             "enabled_detectors": ensemble_result.enabled_detectors,
+             "text_stats": text_stats,
+         }), 200
+
+     except Exception as e:
+         import traceback
+         traceback.print_exc()
+         return jsonify({
+             "status": "error",
+             "message": "Error analyzing file",
+             "error_details": str(e)
+         }), 500
+
+ # ============================================================================
+ # Results Endpoints
+ # ============================================================================
+
+ @api.route('/results', methods=['GET'])
+ def list_results():
+     """
+     Get list of all analysis results with pagination.
+
+     GET /api/results?page=1&limit=10&sort=recent
+     """
+     try:
+         page = request.args.get('page', 1, type=int)
+         limit = request.args.get('limit', 10, type=int)
+         sort = request.args.get('sort', 'recent')
+
+         # Validate pagination
+         if page < 1:
+             page = 1
+         if limit < 1 or limit > 100:
+             limit = 10
+
+         # Map sort parameter
+         order_by = "upload_timestamp_desc" if sort == "recent" else "score_desc"
+
+         # Calculate offset
+         offset = (page - 1) * limit
+
+         # Get results
+         results = DatabaseManager.get_all_results(
+             limit=limit,
+             offset=offset,
+             order_by=order_by
+         )
+
+         # Get total count
+         from backend.database import AnalysisResult
+         session = Session.get_session()
+         total_count = session.query(AnalysisResult).count()
+         session.close()
+
+         result_dicts = [result.to_dict() for result in results]
+
+         return jsonify({
+             "status": "success",
+             "message": "Results retrieved",
+             "page": page,
+             "limit": limit,
+             "total_count": total_count,
+             "results": result_dicts
+         }), 200
+
+     except Exception as e:
+         import traceback
+         traceback.print_exc()
+         return jsonify({
+             "status": "error",
+             "message": "Error retrieving results",
+             "error_details": str(e)
+         }), 500
+
+ @api.route('/results/<int:result_id>', methods=['GET'])
+ def get_result(result_id):
+     """Get a specific analysis result"""
+     try:
+         result = DatabaseManager.get_result_by_id(result_id)
+
+         if not result:
+             return jsonify({
+                 "status": "error",
+                 "message": "Result not found"
+             }), 404
+
+         return jsonify({
+             "status": "success",
+             "message": "Result retrieved",
+             "result": result.to_dict()
+         }), 200
+
+     except Exception as e:
+         return jsonify({
+             "status": "error",
+             "message": "Error retrieving result",
+             "error_details": str(e)
+         }), 500
+
+ @api.route('/results/<int:result_id>', methods=['DELETE'])
+ def delete_result(result_id):
+     """Delete an analysis result"""
+     try:
+         success = DatabaseManager.delete_result(result_id)
+
+         if not success:
+             return jsonify({
+                 "status": "error",
+                 "message": "Result not found"
+             }), 404
+
+         return jsonify({
+             "status": "success",
+             "message": "Result deleted"
+         }), 200
+
+     except Exception as e:
+         return jsonify({
+             "status": "error",
+             "message": "Error deleting result",
+             "error_details": str(e)
+         }), 500
+
+ @api.route('/results/<int:result_id>', methods=['PUT'])
+ def update_result(result_id):
+     """Update analysis result (notes, flags)"""
+     try:
+         # Tolerate empty or non-JSON request bodies
+         data = request.get_json() or {}
+
+         updates = {}
+         if 'notes' in data:
+             updates['notes'] = data['notes']
+         if 'is_flagged' in data:
+             updates['is_flagged'] = data['is_flagged']
+
+         if not updates:
+             return jsonify({
+                 "status": "error",
+                 "message": "No fields to update"
+             }), 400
+
+         result = DatabaseManager.update_result(result_id, **updates)
+
+         if not result:
+             return jsonify({
+                 "status": "error",
+                 "message": "Result not found"
+             }), 404
+
+         return jsonify({
+             "status": "success",
+             "message": "Result updated",
+             "result": result.to_dict()
+         }), 200
+
+     except Exception as e:
+         return jsonify({
+             "status": "error",
+             "message": "Error updating result",
+             "error_details": str(e)
+         }), 500
+
+ # ============================================================================
+ # Statistics Endpoints
+ # ============================================================================
+
+ @api.route('/statistics/summary', methods=['GET'])
+ def get_summary():
+     """Get summary statistics"""
+     try:
+         summary = DatabaseManager.get_results_summary()
+
+         return jsonify({
+             "status": "success",
+             "message": "Summary retrieved",
+             "summary": summary
+         }), 200
+
+     except Exception as e:
+         return jsonify({
+             "status": "error",
+             "message": "Error getting summary",
+             "error_details": str(e)
+         }), 500
+
+ @api.route('/config', methods=['GET'])
+ def get_config():
+     """Get detector configuration"""
+     try:
+         config = {
+             "enabled_detectors": DetectorsConfig.get_enabled_detectors(),
+             "aggregation_method": DetectorsConfig.AGGREGATION_METHOD,
+             "detector_weights": DetectorsConfig.DETECTOR_WEIGHTS,
+             "detector_info": {
+                 "roberta": {
+                     "name": "RoBERTa Detector",
+                     "description": "Fine-tuned RoBERTa model for AI text detection",
+                     "model": DetectorsConfig.ROBERTA_MODEL,
+                 },
+                 "perplexity": {
+                     "name": "Perplexity-based Detector",
+                     "description": "Detects AI patterns through perplexity and repetition analysis",
+                     "model": DetectorsConfig.PERPLEXITY_MODEL,
+                 },
+                 "llmdet": {
+                     "name": "LLMDet Detector",
+                     "description": "Combines entropy and classification metrics",
+                 },
+                 "hf_classifier": {
+                     "name": "HuggingFace Classifier",
+                     "description": "Generic HF-based sequence classification",
+                     "model": DetectorsConfig.HF_CLASSIFIER_MODEL,
+                 },
+                 "outfox": {
+                     "name": "OUTFOX Statistical Detector",
+                     "description": "Statistical signature-based detection",
+                 },
+             }
+         }
+
+         return jsonify({
+             "status": "success",
+             "config": config
+         }), 200
+
+     except Exception as e:
+         return jsonify({
+             "status": "error",
+             "message": "Error getting config",
+             "error_details": str(e)
+         }), 500
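The clamping and offset arithmetic in `list_results` can be isolated as a pure function; this is a standalone sketch of the same rules (not part of the commit's module layout — the name `paginate` is illustrative):

```python
def paginate(page: int, limit: int) -> tuple:
    # Same validation as the /results endpoint: page >= 1,
    # limit clamped to [1, 100] with a default of 10 when out of range.
    if page < 1:
        page = 1
    if limit < 1 or limit > 100:
        limit = 10
    offset = (page - 1) * limit
    return page, limit, offset

paginate(3, 25)   # → (3, 25, 50)
paginate(0, 500)  # → (1, 10, 0)
```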
backend/config/__init__.py ADDED
@@ -0,0 +1,4 @@
+ from .settings import Settings
+ from .detectors_config import DetectorsConfig
+
+ __all__ = ["Settings", "DetectorsConfig"]
backend/config/detectors_config.py ADDED
@@ -0,0 +1,115 @@
+ """
+ Detector Configuration - Enable/Disable detectors easily here
+ Toggle any detector by changing True/False
+ """
+ import os
+ from typing import Dict
+
+ class DetectorsConfig:
+     """Configuration for AI detection methods"""
+
+     # ===========================================
+     # FEATURE FLAGS - Toggle detectors here
+     # ===========================================
+     # Set to True to enable, False to disable
+     # Use environment variables to override: ENABLE_<DETECTOR_NAME>=true/false
+
+     ENABLED_DETECTORS: Dict[str, bool] = {
+         # RoBERTa-based detector (fine-tuned on OpenAI detector dataset)
+         "roberta": os.getenv("ENABLE_ROBERTA", "False").lower() == "true",
+
+         # Perplexity-based baseline (detects repetitive patterns)
+         "perplexity": os.getenv("ENABLE_PERPLEXITY", "False").lower() == "true",
+
+         # LLMDet - Detectron perplexity + classifier
+         "llmdet": os.getenv("ENABLE_LLMDET", "False").lower() == "true",
+
+         # Simple HuggingFace classifier for AI detection
+         "hf_classifier": os.getenv("ENABLE_HF_CLASSIFIER", "False").lower() == "true",
+
+         # OUTFOX method (statistical signature detection)
+         "outfox": os.getenv("ENABLE_OUTFOX", "False").lower() == "true",
+
+         # TMR AI text detector
+         "tmr_detector": os.getenv("ENABLE_TMR_DETECTOR", "True").lower() == "true",
+
+         # Pattern heuristics (em-dash index + emoji index)
+         "pattern": os.getenv("ENABLE_PATTERN", "True").lower() == "true",
+     }
+
+     # ===========================================
+     # Detector weights for ensemble averaging
+     # ===========================================
+     DETECTOR_WEIGHTS: Dict[str, float] = {
+         "roberta": 0.30,
+         "perplexity": 0.25,
+         "llmdet": 0.25,
+         "hf_classifier": 0.20,
+         "outfox": 0.00,  # Not in use by default
+         "tmr_detector": 0.80,
+         "pattern": 0.20,
+     }
+
+     # ===========================================
+     # Detector-specific settings
+     # ===========================================
+
+     # RoBERTa detector settings
+     ROBERTA_MODEL = "roberta-base-openai-detector"
+     ROBERTA_THRESHOLD = 0.5
+     ROBERTA_BATCH_SIZE = 16
+
+     # Perplexity settings
+     PERPLEXITY_MODEL = "gpt2"
+     PERPLEXITY_THRESHOLD = 50  # Words with higher perplexity are suspicious
+     PERPLEXITY_WINDOW_SIZE = 5  # tokens to consider
+
+     # LLMDet settings
+     LLMDET_THRESHOLD = 0.5
+     LLMDET_BATCH_SIZE = 32
+
+     # HF Classifier settings
+     HF_CLASSIFIER_MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"  # Placeholder sequence-classification model (sentiment, not an AI-text detector)
+     HF_CLASSIFIER_THRESHOLD = 0.5
+
+     # OUTFOX settings
+     OUTFOX_THRESHOLD = 0.5
+
+     # General
+     MIN_TEXT_LENGTH = 50  # Minimum characters to analyze
+     AGGREGATION_METHOD = "weighted_average"  # Options: weighted_average, max, voting
+
+     @classmethod
+     def get_enabled_detectors(cls) -> Dict[str, bool]:
+         """Return dict of enabled detectors"""
+         return cls.ENABLED_DETECTORS.copy()
+
+     @classmethod
+     def is_detector_enabled(cls, detector_name: str) -> bool:
+         """Check if a specific detector is enabled"""
+         return cls.ENABLED_DETECTORS.get(detector_name.lower(), False)
+
+     @classmethod
+     def get_weight(cls, detector_name: str) -> float:
+         """Get weight for a detector"""
+         return cls.DETECTOR_WEIGHTS.get(detector_name.lower(), 0.0)
+
+     @classmethod
+     def normalize_weights(cls) -> Dict[str, float]:
+         """Normalize weights for enabled detectors only"""
+         enabled_weights = {
+             d: w for d, w in cls.DETECTOR_WEIGHTS.items()
+             if cls.is_detector_enabled(d)
+         }
+         total = sum(enabled_weights.values())
+         if total == 0:
+             return {}
+         return {d: w / total for d, w in enabled_weights.items()}
+
+     @classmethod
+     def summary(cls) -> str:
+         """Return a one-line configuration summary"""
+         enabled = [d for d, is_on in cls.ENABLED_DETECTORS.items() if is_on]
+         return f"Enabled Detectors: {', '.join(enabled)}"
+
+ detectors_config = DetectorsConfig()
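The env-var override and weight-normalization pattern above can be exercised without importing the backend package; this self-contained sketch mirrors the same logic (`flag`, `WEIGHTS`, and `normalize` are illustrative names, with a trimmed weight table):

```python
import os

def flag(name: str, default: str) -> bool:
    # Same flag parsing as DetectorsConfig: ENABLE_<NAME> env vars
    # override hard-coded defaults; anything other than "true" disables.
    return os.getenv(name, default).lower() == "true"

WEIGHTS = {"tmr_detector": 0.80, "pattern": 0.20, "roberta": 0.30}

def normalize(enabled: dict) -> dict:
    # Renormalize weights over the enabled subset so they sum to 1,
    # as in DetectorsConfig.normalize_weights.
    live = {d: w for d, w in WEIGHTS.items() if enabled.get(d)}
    total = sum(live.values())
    return {d: w / total for d, w in live.items()} if total else {}

os.environ["ENABLE_ROBERTA"] = "false"  # env override wins over default
enabled = {
    "tmr_detector": flag("ENABLE_TMR_DETECTOR", "True"),
    "pattern": flag("ENABLE_PATTERN", "True"),
    "roberta": flag("ENABLE_ROBERTA", "False"),
}
norm = normalize(enabled)  # roberta is dropped; remaining weights sum to 1
```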
backend/config/settings.py ADDED
@@ -0,0 +1,32 @@
+ import os
+ from pathlib import Path
+
+ class Settings:
+     """Application Settings"""
+
+     BASE_DIR = Path(__file__).parent.parent
+     DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///slop_detect.db")
+     UPLOAD_FOLDER = os.getenv("UPLOAD_FOLDER", str(BASE_DIR / "uploads"))
+     RESULTS_FOLDER = os.getenv("RESULTS_FOLDER", str(BASE_DIR / "results"))
+
+     # Flask settings
+     DEBUG = os.getenv("DEBUG", "False").lower() == "true"
+     HOST = os.getenv("HOST", "0.0.0.0")
+     PORT = int(os.getenv("PORT", 5000))
+
+     # Model cache
+     MODEL_CACHE_DIR = os.getenv("MODEL_CACHE_DIR", str(BASE_DIR / ".model_cache"))
+
+     # Max file size (50MB)
+     MAX_FILE_SIZE = 50 * 1024 * 1024
+
+     # Batch processing
+     BATCH_SIZE = 32
+
+     def __init__(self):
+         # Create upload, results, and model-cache folders
+         os.makedirs(self.UPLOAD_FOLDER, exist_ok=True)
+         os.makedirs(self.RESULTS_FOLDER, exist_ok=True)
+         os.makedirs(self.MODEL_CACHE_DIR, exist_ok=True)
+
+ settings = Settings()
backend/database/__init__.py ADDED
@@ -0,0 +1,4 @@
+ from .models import AnalysisResult, Session, Base
+ from .db import DatabaseManager
+
+ __all__ = ["AnalysisResult", "Session", "Base", "DatabaseManager"]
backend/database/db.py ADDED
@@ -0,0 +1,241 @@
+ import uuid
+ from datetime import datetime
+ from typing import Optional, List
+ from .models import AnalysisResult, Session
+
+ class DatabaseManager:
+     """Manager for database operations"""
+
+     @staticmethod
+     def save_analysis_result(
+         filename: str,
+         file_format: str,
+         file_size: int,
+         text_length: int,
+         word_count: int,
+         overall_ai_score: float,
+         overall_confidence: str,
+         detector_results: dict,
+         text_preview: str = "",
+         user_id: Optional[str] = None,
+         analysis_status: str = "completed",
+         error_message: Optional[str] = None,
+         report_html_path: Optional[str] = None
+     ) -> Optional[AnalysisResult]:
+         """
+         Save an analysis result to the database.
+
+         Returns:
+             AnalysisResult object or None if save failed
+         """
+         # Acquire the session outside the try block so the finally clause
+         # never references an unbound name if session creation fails.
+         session = Session.get_session()
+         try:
+             # Generate unique file ID
+             file_id = str(uuid.uuid4())
+
+             result = AnalysisResult(
+                 file_id=file_id,
+                 filename=filename,
+                 file_format=file_format,
+                 file_size=file_size,
+                 text_length=text_length,
+                 word_count=word_count,
+                 overall_ai_score=overall_ai_score,
+                 overall_confidence=overall_confidence,
+                 detector_results=detector_results,
+                 text_preview=text_preview[:500],
+                 user_id=user_id,
+                 analysis_status=analysis_status,
+                 error_message=error_message,
+                 report_html_path=report_html_path,
+                 upload_timestamp=datetime.utcnow()
+             )
+
+             session.add(result)
+             session.commit()
+
+             # Refresh to get ID
+             session.refresh(result)
+             return result
+
+         except Exception as e:
+             print(f"Error saving analysis result: {e}")
+             return None
+
+         finally:
+             session.close()
+
+     @staticmethod
+     def get_result_by_id(result_id: int) -> Optional[AnalysisResult]:
+         """Get analysis result by ID"""
+         session = Session.get_session()
+         try:
+             result = session.query(AnalysisResult).filter(
+                 AnalysisResult.id == result_id
+             ).first()
+             return result
+         except Exception as e:
+             print(f"Error retrieving result: {e}")
+             return None
+         finally:
+             session.close()
+
+     @staticmethod
+     def get_result_by_file_id(file_id: str) -> Optional[AnalysisResult]:
+         """Get analysis result by file ID"""
+         session = Session.get_session()
+         try:
+             result = session.query(AnalysisResult).filter(
+                 AnalysisResult.file_id == file_id
+             ).first()
+             return result
+         except Exception as e:
+             print(f"Error retrieving result: {e}")
+             return None
+         finally:
+             session.close()
+
+     @staticmethod
+     def get_all_results(
+         limit: int = 100,
+         offset: int = 0,
+         order_by: str = "upload_timestamp_desc"
+     ) -> List[AnalysisResult]:
+         """
+         Get all analysis results with pagination.
+
+         Args:
+             limit: Number of results to return
+             offset: Number of results to skip
+             order_by: Sort order (upload_timestamp_desc, upload_timestamp_asc, score_desc, score_asc)
+
+         Returns:
+             List of AnalysisResult objects
+         """
+         session = Session.get_session()
+         try:
+             query = session.query(AnalysisResult)
+
+             # Apply ordering
+             if order_by == "upload_timestamp_desc":
+                 query = query.order_by(AnalysisResult.upload_timestamp.desc())
+             elif order_by == "upload_timestamp_asc":
+                 query = query.order_by(AnalysisResult.upload_timestamp.asc())
+             elif order_by == "score_desc":
+                 query = query.order_by(AnalysisResult.overall_ai_score.desc())
+             elif order_by == "score_asc":
+                 query = query.order_by(AnalysisResult.overall_ai_score.asc())
+
+             results = query.limit(limit).offset(offset).all()
+             return results
+
+         except Exception as e:
+             print(f"Error retrieving results: {e}")
+             return []
+
+         finally:
+             session.close()
+
+     @staticmethod
+     def get_results_summary() -> dict:
+         """Get summary statistics of all results"""
+         session = Session.get_session()
+         try:
+             total = session.query(AnalysisResult).count()
+
+             if total == 0:
+                 return {
+                     "total_analyses": 0,
+                     "average_ai_score": 0,
+                     "total_text_analyzed": 0,
+                     "likely_human": 0,
+                     "suspicious": 0,
+                     "likely_ai": 0,
+                 }
+
+             from sqlalchemy import func
+
+             # Calculate averages
+             avg_score = session.query(func.avg(AnalysisResult.overall_ai_score)).scalar() or 0
+             total_text = session.query(func.sum(AnalysisResult.text_length)).scalar() or 0
+
+             # Count by confidence
+             likely_human = session.query(AnalysisResult).filter(
+                 AnalysisResult.overall_ai_score < 0.3
+             ).count()
+
+             suspicious = session.query(AnalysisResult).filter(
+                 (AnalysisResult.overall_ai_score >= 0.3) &
+                 (AnalysisResult.overall_ai_score < 0.7)
+             ).count()
+
+             likely_ai = session.query(AnalysisResult).filter(
+                 AnalysisResult.overall_ai_score >= 0.7
+             ).count()
+
+             return {
+                 "total_analyses": total,
+                 "average_ai_score": round(avg_score, 3),
+                 "total_text_analyzed": total_text,
+                 "likely_human": likely_human,
+                 "suspicious": suspicious,
+                 "likely_ai": likely_ai,
+             }
+
+         except Exception as e:
+             print(f"Error getting summary: {e}")
+             return {}
+
+         finally:
+             session.close()
+
+     @staticmethod
+     def delete_result(result_id: int) -> bool:
+         """Delete an analysis result"""
+         session = Session.get_session()
+         try:
+             result = session.query(AnalysisResult).filter(
+                 AnalysisResult.id == result_id
+             ).first()
+
+             if result:
+                 session.delete(result)
+                 session.commit()
+                 return True
+             return False
+
+         except Exception as e:
+             print(f"Error deleting result: {e}")
+             return False
+
+         finally:
+             session.close()
+
+     @staticmethod
+     def update_result(result_id: int, **kwargs) -> Optional[AnalysisResult]:
+         """Update an analysis result"""
+         session = Session.get_session()
+         try:
+             result = session.query(AnalysisResult).filter(
+                 AnalysisResult.id == result_id
+             ).first()
+
+             if result:
+                 for key, value in kwargs.items():
+                     if hasattr(result, key):
+                         setattr(result, key, value)
+
+                 session.commit()
+                 session.refresh(result)
+
+             return result
+
+         except Exception as e:
+             print(f"Error updating result: {e}")
+             return None
+
+         finally:
+             session.close()
backend/database/models.py ADDED
@@ -0,0 +1,114 @@
+ from sqlalchemy import create_engine, Column, String, Float, DateTime, Integer, JSON, Text, Boolean
+ from sqlalchemy.orm import sessionmaker, declarative_base
+ from datetime import datetime
+ import os
+
+ Base = declarative_base()
+
+ class AnalysisResult(Base):
+     """Database model for storing analysis results"""
+
+     __tablename__ = "analysis_results"
+
+     id = Column(Integer, primary_key=True)
+
+     # File information
+     file_id = Column(String(50), unique=True, nullable=False, index=True)
+     filename = Column(String(255), nullable=False)
+     file_format = Column(String(10), nullable=False)  # pdf, docx, txt, raw
+     file_size = Column(Integer, nullable=True)  # in bytes
+
+     # Upload information
+     upload_timestamp = Column(DateTime, nullable=False, default=datetime.utcnow, index=True)
+     user_id = Column(String(100), nullable=True)  # For future auth integration
+
+     # Text analysis
+     text_preview = Column(String(500), nullable=True)  # First 500 chars
+     text_length = Column(Integer, nullable=False)  # Total character count
+     word_count = Column(Integer, nullable=False)
+
+     # Overall detection results
+     overall_ai_score = Column(Float, nullable=False)  # 0-1
+     overall_confidence = Column(String(20), nullable=False)  # very_low, low, medium, high, very_high
+
+     # Detector results (JSON)
+     detector_results = Column(JSON, nullable=False)  # Full detector results
+
+     # Status
+     analysis_status = Column(String(20), default="completed")  # completed, failed, pending
+     error_message = Column(Text, nullable=True)
+
+     # Report storage
+     report_html_path = Column(String(500), nullable=True)
+
+     # Metadata
+     notes = Column(Text, nullable=True)
+     is_flagged = Column(Boolean, default=False)  # For manual flagging
+
+     def to_dict(self):
+         """Convert to dictionary"""
+         return {
+             "id": self.id,
+             "file_id": self.file_id,
+             "filename": self.filename,
+             "file_format": self.file_format,
+             "file_size": self.file_size,
+             "upload_timestamp": self.upload_timestamp.isoformat() if self.upload_timestamp else None,
+             "text_length": self.text_length,
+             "word_count": self.word_count,
+             "overall_ai_score": self.overall_ai_score,
+             "overall_ai_score_percentage": f"{self.overall_ai_score * 100:.1f}%",
+             "overall_confidence": self.overall_confidence,
+             "detector_results": self.detector_results,
+             "analysis_status": self.analysis_status,
+             "error_message": self.error_message,
+             "is_flagged": self.is_flagged,
+             "notes": self.notes,
+         }
+
+     @staticmethod
+     def get_status_label(score: float) -> str:
+         """Get human-readable status based on AI score"""
+         if score < 0.3:
+             return "Likely Human"
+         elif score < 0.6:
+             return "Suspicious"
+         else:
+             return "Likely AI"
+
+ class Session:
+     """Database session manager"""
+
+     _engine = None
+     _SessionLocal = None
+
+     @classmethod
+     def init(cls):
+         """Initialize database connection"""
+         # Get DATABASE_URL from environment or use SQLite default
+         database_url = os.getenv('DATABASE_URL', 'sqlite:///./analysis_results.db')
+
+         cls._engine = create_engine(
+             database_url,
+             connect_args={"check_same_thread": False} if "sqlite" in database_url else {}
+         )
+
+         # Create tables
+         Base.metadata.create_all(cls._engine)
+
+         # expire_on_commit=False keeps returned objects readable after their
+         # session closes (DatabaseManager closes sessions before callers
+         # touch the results).
+         cls._SessionLocal = sessionmaker(
+             autocommit=False, autoflush=False, expire_on_commit=False, bind=cls._engine
+         )
+
+     @classmethod
+     def get_session(cls):
+         """Get a new database session"""
+         if cls._SessionLocal is None:
+             cls.init()
+         return cls._SessionLocal()
+
+     @classmethod
+     def close(cls):
+         """Close database connection"""
+         if cls._engine:
+             cls._engine.dispose()
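The scoring bands behind `AnalysisResult.get_status_label` (cut-offs at 0.3 and 0.6) are easy to sanity-check in isolation; this standalone copy of the function illustrates them:

```python
def get_status_label(score: float) -> str:
    # Same thresholds as AnalysisResult.get_status_label:
    # [0, 0.3) human, [0.3, 0.6) suspicious, [0.6, 1] AI.
    if score < 0.3:
        return "Likely Human"
    elif score < 0.6:
        return "Suspicious"
    else:
        return "Likely AI"

labels = [get_status_label(s) for s in (0.1, 0.45, 0.9)]
# → ["Likely Human", "Suspicious", "Likely AI"]
```

Note that `get_results_summary` in `db.py` buckets with 0.3/0.7 instead of 0.3/0.6, so summary counts and per-result labels can disagree for scores in [0.6, 0.7).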
backend/detectors/__init__.py ADDED
@@ -0,0 +1,13 @@
+ from .base import BaseDetector, DetectorResult, SuspiciousSpan, DetectionConfidence
+ from .tmr_detector import TMRDetector
+ from .ensemble import DetectorEnsemble, EnsembleResult
+
+ __all__ = [
+     "BaseDetector",
+     "DetectorResult",
+     "SuspiciousSpan",
+     "DetectionConfidence",
+     "TMRDetector",
+     "DetectorEnsemble",
+     "EnsembleResult",
+ ]
backend/detectors/base.py ADDED
@@ -0,0 +1,111 @@
+ from abc import ABC, abstractmethod
+ from dataclasses import dataclass, field
+ from typing import Optional, List, Dict, Any
+ from enum import Enum
+
+ class DetectionConfidence(Enum):
+     """Confidence levels for detection"""
+     VERY_LOW = "very_low"
+     LOW = "low"
+     MEDIUM = "medium"
+     HIGH = "high"
+     VERY_HIGH = "very_high"
+
+ @dataclass
+ class SuspiciousSpan:
+     """Represents a suspicious text span"""
+     text: str
+     start: int
+     end: int
+     reason: str
+     confidence: float
+
+ @dataclass
+ class DetectorResult:
+     """Result from a single detector"""
+     detector_name: str
+     score: float  # 0-1 or 0-100 (standardized to 0-1 later)
+     confidence: Optional[str] = None
+     explanation: Optional[str] = None
+     suspicious_spans: List[SuspiciousSpan] = field(default_factory=list)
+     metadata: Dict[str, Any] = field(default_factory=dict)
+
+     def to_dict(self) -> Dict[str, Any]:
+         """Convert to dictionary"""
+         return {
+             "detector_name": self.detector_name,
+             "score": self.score,
+             "confidence": self.confidence,
+             "explanation": self.explanation,
+             "suspicious_spans": [
+                 {
+                     "text": span.text,
+                     "start": span.start,
+                     "end": span.end,
+                     "reason": span.reason,
+                     "confidence": span.confidence,
+                 }
+                 for span in self.suspicious_spans
+             ],
+             "metadata": self.metadata,
+         }
+
+     def normalize_score(self) -> float:
+         """Ensure score is 0-1"""
+         if self.score > 1:
+             # Assume 0-100 scale
+             return self.score / 100.0
+         return self.score
+
+ class BaseDetector(ABC):
+     """Base class for all AI detection methods"""
+
+     def __init__(self, name: str, model_name: Optional[str] = None):
+         self.name = name
+         self.model_name = model_name
+         self.model = None
+         self.tokenizer = None
+         self._is_loaded = False
+
+     @abstractmethod
+     def detect(self, text: str) -> DetectorResult:
+         """
+         Detect if text is AI-generated
+
+         Args:
+             text: Input text to analyze
+
+         Returns:
+             DetectorResult with score, explanation, and optional flagged spans
+         """
+         pass
+
+     def load_model(self):
+         """Load model resources (override in subclasses)"""
+         self._is_loaded = True
+
+     def is_loaded(self) -> bool:
+         """Check if model is loaded"""
+         return self._is_loaded
+
+     def cleanup(self):
+         """Clean up resources"""
+         if self.model:
+             del self.model
+         if self.tokenizer:
+             del self.tokenizer
+         self._is_loaded = False
+
+     @staticmethod
+     def get_confidence_level(score: float) -> str:
+         """Convert numeric score to confidence level"""
+         if score < 0.2:
+             return DetectionConfidence.VERY_LOW.value
+         elif score < 0.4:
+             return DetectionConfidence.LOW.value
+         elif score < 0.6:
+             return DetectionConfidence.MEDIUM.value
+         elif score < 0.8:
+             return DetectionConfidence.HIGH.value
+         else:
+             return DetectionConfidence.VERY_HIGH.value
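The score normalization and the 0.2-wide confidence bands above combine as follows; this is a dependency-free sketch of the same two rules (function names here are illustrative, not the module's API):

```python
def normalize_score(score: float) -> float:
    # Mirrors DetectorResult.normalize_score: values above 1 are
    # assumed to be on a 0-100 scale and divided down.
    return score / 100.0 if score > 1 else score

def confidence_level(score: float) -> str:
    # Same bands as BaseDetector.get_confidence_level.
    for bound, label in [(0.2, "very_low"), (0.4, "low"),
                         (0.6, "medium"), (0.8, "high")]:
        if score < bound:
            return label
    return "very_high"

confidence_level(normalize_score(73))  # 73 → 0.73 → "high"
```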
backend/detectors/ensemble.py ADDED
@@ -0,0 +1,205 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from typing import Dict, List, Optional, Tuple
2
+ import numpy as np
3
+ import math
4
+ from dataclasses import dataclass
5
+ from .base import DetectorResult, BaseDetector
6
+ from .tmr_detector import TMRDetector
7
+ from .pattern_detector import PatternDetector
8
+ from backend.config.detectors_config import DetectorsConfig
9
+
10
+ @dataclass
11
+ class EnsembleResult:
12
+ """Result from ensemble of detectors"""
13
+ overall_score: float # 0-1
14
+ overall_confidence: str
15
+ detector_results: Dict[str, DetectorResult]
16
+ weighted_scores: Dict[str, float]
17
+ enabled_detectors: List[str]
18
+
19
+ def to_dict(self) -> Dict:
20
+ """Convert to dictionary for JSON serialization"""
21
+ return {
22
+ "overall_score": round(self.overall_score, 4),
23
+ "overall_confidence": self.overall_confidence,
24
+ "overall_score_percentage": f"{self.overall_score * 100:.1f}%",
25
+ "enabled_detectors": self.enabled_detectors,
26
+ "detector_results": {
27
+ name: result.to_dict()
28
+ for name, result in self.detector_results.items()
29
+ },
30
+ "weighted_scores": {
31
+ name: round(score, 4)
32
+ for name, score in self.weighted_scores.items()
33
+ }
34
+ }
35
+
36
+ class DetectorEnsemble:
37
+ """
38
+ Manages multiple AI detectors and combines their results.
39
+ Easily enable/disable detectors via config.
40
+ """
41
+
42
+ def __init__(self):
43
+ self.detectors: Dict[str, any] = {}
44
+ self.loaded_detectors: set = set()
45
+ self._initialize_detectors()
46
+
47
+ def _initialize_detectors(self):
48
+ """Initialize all available detectors"""
49
+ detector_classes = {
50
+ "tmr_detector": TMRDetector,
51
+ "pattern": PatternDetector,
52
+ }
53
+
54
+ for name, detector_class in detector_classes.items():
55
+ try:
56
+ self.detectors[name] = detector_class()
57
+ except Exception as e:
58
+ print(f"Warning: Could not initialize {name} detector: {e}")
59
+
60
+ def load_detector(self, detector_name: str) -> bool:
61
+ """
62
+ Lazy load a detector when needed.
63
+ Returns True if successful, False otherwise.
64
+ """
65
+ if detector_name in self.loaded_detectors:
66
+ return True
67
+
68
+ if detector_name not in self.detectors:
69
+ return False
70
+
71
+ try:
72
+ self.detectors[detector_name].load_model()
73
+ self.loaded_detectors.add(detector_name)
74
+ return True
75
+ except Exception as e:
76
+ print(f"Error loading detector {detector_name}: {e}")
77
+ return False
78
+
79
+ def get_enabled_detectors(self) -> List[str]:
80
+ """Get list of enabled detectors"""
81
+ return [
82
+ name for name in self.detectors.keys()
83
+ if DetectorsConfig.is_detector_enabled(name)
84
+ ]
85
+
86
+ def detect(self, text: str) -> EnsembleResult:
87
+ """
88
+ Run all enabled detectors and combine results.
89
+
90
+ Args:
91
+         text: Input text to analyze
+
+         Returns:
+             EnsembleResult with combined scores and per-detector results
+         """
+         enabled_detectors = self.get_enabled_detectors()
+
+         if not enabled_detectors:
+             raise ValueError("No detectors are enabled!")
+
+         results: Dict[str, DetectorResult] = {}
+         weighted_scores: Dict[str, float] = {}
+
+         # Run each enabled detector
+         for detector_name in enabled_detectors:
+             # Skip if detector not initialized
+             if detector_name not in self.detectors:
+                 continue
+
+             # Load detector if not already loaded
+             if not self.load_detector(detector_name):
+                 print(f"Warning: Could not load detector {detector_name}")
+                 continue
+
+             try:
+                 result = self.detectors[detector_name].detect(text)
+                 results[detector_name] = result
+
+                 # Normalize score to 0-1
+                 normalized_score = result.normalize_score()
+
+                 # Treat NaN scores as "undecided"
+                 if math.isnan(normalized_score):
+                     normalized_score = 0.5
+
+                 # Apply weight
+                 weight = DetectorsConfig.get_weight(detector_name)
+                 weighted_scores[detector_name] = normalized_score * weight
+
+             except Exception as e:
+                 print(f"Error running detector {detector_name}: {e}")
+
+         # Calculate overall score using the selected aggregation method
+         overall_score = self._aggregate_scores(results, weighted_scores, enabled_detectors)
+
+         # Determine confidence level
+         confidence = BaseDetector.get_confidence_level(overall_score)
+
+         return EnsembleResult(
+             overall_score=overall_score,
+             overall_confidence=confidence,
+             detector_results=results,
+             weighted_scores=weighted_scores,
+             enabled_detectors=enabled_detectors
+         )
+
+     def _aggregate_scores(
+         self,
+         results: Dict[str, DetectorResult],
+         weighted_scores: Dict[str, float],
+         enabled_detectors: List[str]
+     ) -> float:
+         """
+         Aggregate detector scores based on the configured method.
+
+         Args:
+             results: DetectorResult objects from each detector
+             weighted_scores: Pre-calculated weighted scores
+             enabled_detectors: List of enabled detector names
+
+         Returns:
+             Overall AI detection score (0-1)
+         """
+         method = DetectorsConfig.AGGREGATION_METHOD
+
+         if method == "weighted_average":
+             # Weighted average, renormalized over the detectors that
+             # actually produced a score so that a failed or skipped
+             # detector does not drag the overall score toward 0.
+             total_weight = sum(
+                 DetectorsConfig.get_weight(name) for name in weighted_scores
+             )
+             if total_weight == 0:
+                 return 0.5
+             total = sum(weighted_scores.values()) / total_weight
+             return total if not math.isnan(total) else 0.5
+
+         elif method == "max":
+             # Use maximum score (most confident detector)
+             scores = [result.normalize_score() for result in results.values()]
+             max_score = max(scores) if scores else 0.5
+             return max_score if not math.isnan(max_score) else 0.5
+
+         elif method == "voting":
+             # Simple voting: the fraction of detectors that say "AI"
+             threshold = 0.5
+             votes = sum(
+                 1 for result in results.values()
+                 if result.normalize_score() > threshold
+             )
+             return votes / len(results) if results else 0.5
+
+         else:
+             # Unknown method: fall back to the weighted average
+             total_weight = sum(
+                 DetectorsConfig.get_weight(name) for name in weighted_scores
+             )
+             return sum(weighted_scores.values()) / total_weight if total_weight else 0.5
+
+     def cleanup(self):
+         """Clean up all detector resources"""
+         for detector in self.detectors.values():
+             try:
+                 detector.cleanup()
+             except Exception:
+                 # Best-effort cleanup; one failing detector should not
+                 # prevent the others from being released.
+                 pass
+
+     def get_status(self) -> dict:
+         """Get status of all detectors"""
+         return {
+             "total_detectors": len(self.detectors),
+             "enabled_detectors": self.get_enabled_detectors(),
+             "loaded_detectors": list(self.loaded_detectors),
+             "config_summary": DetectorsConfig.summary()
+         }
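The three aggregation branches in `_aggregate_scores` reduce to simple arithmetic over per-detector scores. A standalone sketch of the same math, where plain dicts stand in for `DetectorResult` and `DetectorsConfig` (an assumption for illustration only, not the project's API):

```python
import math

def aggregate(scores, weights, method="weighted_average"):
    """scores: detector name -> normalized score in [0, 1];
    weights: detector name -> weight. Mirrors the three branches above."""
    if method == "weighted_average":
        # Renormalize over the detectors that actually produced a score
        total_weight = sum(weights[name] for name in scores)
        if total_weight == 0:
            return 0.5
        total = sum(scores[name] * weights[name] for name in scores) / total_weight
        return total if not math.isnan(total) else 0.5
    if method == "max":
        return max(scores.values()) if scores else 0.5
    if method == "voting":
        votes = sum(1 for s in scores.values() if s > 0.5)
        return votes / len(scores) if scores else 0.5
    raise ValueError(f"unknown method: {method}")

scores = {"tmr_detector": 0.9, "pattern": 0.2}
weights = {"tmr_detector": 0.7, "pattern": 0.3}
print(aggregate(scores, weights, "weighted_average"))  # ~0.69
print(aggregate(scores, weights, "max"))               # 0.9
print(aggregate(scores, weights, "voting"))            # 0.5 (1 of 2 votes)
```

With weights that sum to 1 and all detectors succeeding, the renormalization is a no-op; it only matters when a detector fails and its weight would otherwise be silently lost.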
backend/detectors/pattern_detector.py ADDED
@@ -0,0 +1,92 @@
+ import re
+
+ from .base import BaseDetector, DetectorResult
+
+ # Thresholds at which the index reaches a score of 1.0
+ _EM_DASH_RATE_MAX = 0.03  # 3 em-dashes per 100 words → fully "AI"
+ _EMOJI_RATE_MAX = 0.05    # 5 emojis per 100 words → fully "AI"
+
+ # Weight split between the two signals
+ _EM_DASH_WEIGHT = 0.70
+ _EMOJI_WEIGHT = 0.30
+
+ # Same ranges as the emoji regex in utils.file_parser; duplicated here
+ # (rather than imported) to avoid a circular import, and compiled once
+ # at module load instead of on every detect() call.
+ _EMOJI_RE = re.compile(
+     "[\U0001F600-\U0001F64F"
+     "\U0001F300-\U0001F5FF"
+     "\U0001F680-\U0001F6FF"
+     "\U0001F1E0-\U0001F1FF"
+     "\U0001F900-\U0001F9FF"
+     "\U0001FA00-\U0001FA6F"
+     "\U0001FA70-\U0001FAFF"
+     "]+",
+     flags=re.UNICODE,
+ )
+
+
+ class PatternDetector(BaseDetector):
+     """
+     Heuristic detector based on stylometric patterns known to be
+     over-represented in AI-generated text:
+
+     - EM Dash Index : em-dashes (U+2014) per word
+     - Emoji Index   : emojis per word
+
+     No model download required; loads instantly.
+     """
+
+     def __init__(self):
+         super().__init__(name="pattern", model_name=None)
+
+     def load_model(self):
+         self._is_loaded = True
+
+     def detect(self, text: str) -> DetectorResult:
+         if not self._is_loaded:
+             self.load_model()
+
+         words = text.split()
+         word_count = max(len(words), 1)
+
+         em_dash_count = text.count('\u2014')
+         emoji_count = sum(len(m) for m in _EMOJI_RE.findall(text))
+
+         em_dash_rate = em_dash_count / word_count
+         emoji_rate = emoji_count / word_count
+
+         em_dash_index = min(1.0, em_dash_rate / _EM_DASH_RATE_MAX)
+         emoji_index = min(1.0, emoji_rate / _EMOJI_RATE_MAX)
+
+         combined_score = (_EM_DASH_WEIGHT * em_dash_index
+                           + _EMOJI_WEIGHT * emoji_index)
+
+         confidence = self.get_confidence_level(combined_score)
+         em_dash_pct = f"{em_dash_rate * 100:.2f}%"
+         emoji_pct = f"{emoji_rate * 100:.2f}%"
+
+         explanation = (
+             f"Pattern signals: "
+             f"EM Dash index {em_dash_index:.2f} ({em_dash_count} dashes, {em_dash_pct} of words); "
+             f"Emoji index {emoji_index:.2f} ({emoji_count} emojis, {emoji_pct} of words)"
+         )
+
+         return DetectorResult(
+             detector_name=self.name,
+             score=combined_score,
+             confidence=confidence,
+             explanation=explanation,
+             metadata={
+                 "model": "Heuristic (no ML)",
+                 "prediction": "AI-generated" if combined_score >= 0.5 else "Human-written",
+                 "ai_probability": combined_score,
+                 "human_probability": 1.0 - combined_score,
+                 "em_dash_count": em_dash_count,
+                 "em_dash_rate": round(em_dash_rate, 4),
+                 "em_dash_index": round(em_dash_index, 4),
+                 "emoji_count": emoji_count,
+                 "emoji_rate": round(emoji_rate, 4),
+                 "emoji_index": round(emoji_index, 4),
+                 "word_count": word_count,
+             },
+         )
+
+     def cleanup(self):
+         self._is_loaded = False
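The index arithmetic above can be sanity-checked by hand: a text of 100 words containing exactly 3 em-dashes sits right at `_EM_DASH_RATE_MAX`, so the em-dash index saturates at 1.0 and the combined score equals the em-dash weight. A standalone sketch of that calculation (emoji counting omitted for brevity, so its term is zero):

```python
EM_DASH_RATE_MAX = 0.03
EM_DASH_WEIGHT = 0.70

def pattern_score(text):
    """Em-dash component of the pattern score (emoji term left out)."""
    word_count = max(len(text.split()), 1)
    em_dash_rate = text.count('\u2014') / word_count
    em_dash_index = min(1.0, em_dash_rate / EM_DASH_RATE_MAX)
    return EM_DASH_WEIGHT * em_dash_index

# 100 words with 3 em-dashes hits the saturation threshold exactly
text = " ".join(["word"] * 97 + ["a\u2014b", "c\u2014d", "e\u2014f"])
print(pattern_score(text))  # 0.7 (index saturated at 1.0)
```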
backend/detectors/tmr_detector.py ADDED
@@ -0,0 +1,106 @@
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ from .base import BaseDetector, DetectorResult
+
+
+ class TMRDetector(BaseDetector):
+     """
+     Text detector using the Oxidane/tmr-ai-text-detector model,
+     a fine-tuned classifier for detecting AI-generated text.
+     """
+
+     def __init__(self):
+         super().__init__(
+             name="tmr_detector",
+             model_name="Oxidane/tmr-ai-text-detector"
+         )
+         self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+     def load_model(self):
+         """Load the TMR model and tokenizer"""
+         if self._is_loaded:
+             return
+
+         try:
+             print(f"Loading tokenizer from: {self.model_name}")
+             self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
+
+             print(f"Loading model from: {self.model_name}")
+             self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name)
+             self.model.to(self.device)
+             self.model.eval()
+             self._is_loaded = True
+             print("TMR model and tokenizer loaded successfully.")
+         except Exception as e:
+             print(f"Error loading TMR model or tokenizer: {e}")
+             raise
+
+     def detect(self, text: str) -> DetectorResult:
+         """
+         Detect if text is AI-generated using the TMR model.
+
+         Args:
+             text: Input text to analyze
+
+         Returns:
+             DetectorResult with AI detection score
+         """
+         if not self._is_loaded:
+             self.load_model()
+
+         if len(text) < 10:
+             return DetectorResult(
+                 detector_name=self.name,
+                 score=0.5,
+                 explanation="Text too short for reliable detection",
+                 confidence=self.get_confidence_level(0.5)
+             )
+
+         try:
+             # Tokenize the text (truncated to the model's 512-token limit)
+             inputs = self.tokenizer(text, return_tensors="pt", truncation=True, max_length=512, padding=True)
+             inputs = {k: v.to(self.device) for k, v in inputs.items()}
+
+             # Perform inference
+             with torch.no_grad():
+                 outputs = self.model(**inputs)
+                 logits = outputs.logits
+                 probs = torch.softmax(logits, dim=-1)
+
+             # Probability that the text is AI-generated (class 1)
+             ai_probability = probs[0][1].item()
+
+             # Binary classification
+             is_ai = ai_probability > 0.5
+             prediction = "AI-generated" if is_ai else "Human-written"
+
+             confidence = self.get_confidence_level(ai_probability)
+
+             explanation = f"TMR model predicts: {prediction} with probability {ai_probability:.4f}"
+
+             return DetectorResult(
+                 detector_name=self.name,
+                 score=ai_probability,
+                 confidence=confidence,
+                 explanation=explanation,
+                 metadata={
+                     "model": self.model_name,
+                     "prediction": prediction,
+                     "ai_probability": ai_probability,
+                     "human_probability": probs[0][0].item(),
+                 }
+             )
+
+         except Exception as e:
+             return DetectorResult(
+                 detector_name=self.name,
+                 score=0.5,
+                 explanation=f"Error during TMR detection: {str(e)}",
+                 confidence=self.get_confidence_level(0.5)
+             )
+
+     def cleanup(self):
+         """Clean up model resources"""
+         super().cleanup()
+         if torch.cuda.is_available():
+             torch.cuda.empty_cache()
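The `torch.softmax` call over the two logits is what turns the raw model output into `ai_probability`; reading `probs[0][1]` as the AI class is this detector's assumption about the checkpoint's label order. The same computation in plain Python, with hypothetical logit values:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)  # subtract the max so exp() never overflows
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits: class 0 = human, class 1 = AI
human_logit, ai_logit = -1.2, 2.3
probs = softmax([human_logit, ai_logit])
ai_probability = probs[1]
print(round(ai_probability, 4))
```

For the two-class case this is equivalent to a sigmoid over the logit difference, which is why only the gap between the two logits matters, not their absolute values.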
backend/main.py ADDED
@@ -0,0 +1,88 @@
+ #!/usr/bin/env python3
+ """
+ Main Flask application for the AI Slop Detection API
+ """
+
+ import os
+ import sys
+ from datetime import datetime, timezone
+
+ from flask import Flask, send_from_directory
+ from flask_cors import CORS
+
+ # Add backend to path
+ backend_dir = os.path.dirname(os.path.abspath(__file__))
+ frontend_dir = os.path.join(os.path.dirname(backend_dir), 'frontend')
+ sys.path.insert(0, backend_dir)
+
+ from config.settings import settings
+ from config.detectors_config import DetectorsConfig
+ from database import Session
+ from api import api
+
+
+ def create_app():
+     """Create and configure the Flask application"""
+
+     app = Flask(__name__, static_folder=frontend_dir, static_url_path='/')
+
+     # Enable CORS for the API routes
+     CORS(app, resources={
+         r"/api/*": {
+             "origins": "*",
+             "methods": ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
+             "allow_headers": ["Content-Type"]
+         }
+     })
+
+     # Initialize database
+     Session.init()
+
+     # Register blueprints
+     app.register_blueprint(api)
+
+     # Serve static files
+     @app.route('/')
+     def index():
+         """Serve index.html"""
+         return send_from_directory(frontend_dir, 'index.html')
+
+     @app.route('/<path:filename>')
+     def serve_static(filename):
+         """Serve static files from the frontend directory"""
+         return send_from_directory(frontend_dir, filename)
+
+     # Error handlers
+     @app.errorhandler(404)
+     def not_found(error):
+         return {"status": "error", "message": "Not found"}, 404
+
+     @app.errorhandler(500)
+     def internal_error(error):
+         return {"status": "error", "message": "Internal server error"}, 500
+
+     # Startup logging
+     with app.app_context():
+         print(f"\n{'='*60}")
+         print("AI Slop Detection API Started")
+         print(f"{'='*60}")
+         print(f"Timestamp: {datetime.now(timezone.utc).isoformat()}")
+         print(f"Database: {settings.DATABASE_URL[:30]}...")
+         print(f"Upload Folder: {settings.UPLOAD_FOLDER}")
+         print("\nDetector Configuration:")
+         print(f"{DetectorsConfig.summary()}")
+         print("\nEnabled Detectors:")
+         for detector, enabled in DetectorsConfig.get_enabled_detectors().items():
+             if enabled:
+                 weight = DetectorsConfig.get_weight(detector)
+                 print(f"  ✓ {detector.upper()} (weight: {weight})")
+         print(f"\nAggregation Method: {DetectorsConfig.AGGREGATION_METHOD}")
+         print(f"{'='*60}\n")
+
+     return app
+
+
+ if __name__ == '__main__':
+     app = create_app()
+     app.run(
+         host=settings.HOST,
+         port=settings.PORT,
+         debug=settings.DEBUG
+     )
backend/requirements.txt ADDED
@@ -0,0 +1,12 @@
+ flask>=3.0.0
+ flask-cors>=4.0.0
+ transformers>=4.30.0
+ torch>=2.0.0
+ numpy>=1.21.0
+ scipy>=1.7.0
+ pypdf>=4.0.0
+ python-docx>=0.8.0
+ requests>=2.28.0
+ pydantic>=2.0.0
+ sqlalchemy>=2.0.0
+ pydantic-settings>=2.0.0
backend/utils/__init__.py ADDED
@@ -0,0 +1,4 @@
+ from .file_parser import FileParser, TextCleaner
+ from .highlighter import TextHighlighter, Highlight
+
+ __all__ = ["FileParser", "TextCleaner", "TextHighlighter", "Highlight"]
backend/utils/file_parser.py ADDED
@@ -0,0 +1,161 @@
+ import re
+ from typing import Tuple, Optional
+ from pathlib import Path
+
+ from pypdf import PdfReader
+ from docx import Document
+
+ _EMOJI_RE = re.compile(
+     "[\U0001F600-\U0001F64F"  # emoticons
+     "\U0001F300-\U0001F5FF"  # misc symbols & pictographs
+     "\U0001F680-\U0001F6FF"  # transport & map symbols
+     "\U0001F1E0-\U0001F1FF"  # regional indicator letters (flags)
+     "\U0001F900-\U0001F9FF"  # supplemental symbols & pictographs
+     "\U0001FA00-\U0001FA6F"  # chess symbols / extended pictographic
+     "\U0001FA70-\U0001FAFF"  # symbols & pictographs extended-A
+     "]+",
+     flags=re.UNICODE,
+ )
+
+
+ class FileParser:
+     """
+     Parse multiple file formats and extract text.
+     Supports: PDF, DOCX, TXT, and raw text input.
+     Legacy .doc files are accepted but routed through python-docx,
+     which only reliably reads .docx; parsing may fail and the error
+     is returned to the caller.
+     """
+
+     SUPPORTED_FORMATS = {".pdf", ".docx", ".doc", ".txt"}
+
+     @staticmethod
+     def parse_file(file_path: str) -> Tuple[str, str, Optional[Exception]]:
+         """
+         Parse a file and extract text.
+
+         Args:
+             file_path: Path to the file
+
+         Returns:
+             Tuple of (text, format, error)
+             - text: Extracted text content
+             - format: File format (pdf, docx, txt)
+             - error: Exception if parsing failed, None if successful
+         """
+         file_extension = Path(file_path).suffix.lower()
+
+         if file_extension not in FileParser.SUPPORTED_FORMATS:
+             error = ValueError(f"Unsupported file format: {file_extension}")
+             return "", "", error
+
+         if file_extension == ".pdf":
+             return FileParser.parse_pdf(file_path)
+         elif file_extension in {".docx", ".doc"}:
+             return FileParser.parse_docx(file_path)
+         elif file_extension == ".txt":
+             return FileParser.parse_txt(file_path)
+
+         return "", "", ValueError("Unknown error")
+
+     @staticmethod
+     def parse_pdf(file_path: str) -> Tuple[str, str, Optional[Exception]]:
+         """Extract text from a PDF file"""
+         try:
+             text = ""
+             with open(file_path, 'rb') as pdf_file:
+                 pdf_reader = PdfReader(pdf_file)
+
+                 # Extract text from all pages
+                 for page in pdf_reader.pages:
+                     text += page.extract_text() + "\n"
+
+             return text.strip(), "pdf", None
+
+         except Exception as e:
+             return "", "pdf", e
+
+     @staticmethod
+     def parse_docx(file_path: str) -> Tuple[str, str, Optional[Exception]]:
+         """Extract text from a DOCX file"""
+         try:
+             doc = Document(file_path)
+             text = ""
+
+             # Extract text from all paragraphs
+             for paragraph in doc.paragraphs:
+                 text += paragraph.text + "\n"
+
+             # Also extract text from tables if present
+             for table in doc.tables:
+                 for row in table.rows:
+                     for cell in row.cells:
+                         text += cell.text + "\n"
+
+             return text.strip(), "docx", None
+
+         except Exception as e:
+             return "", "docx", e
+
+     @staticmethod
+     def parse_txt(file_path: str) -> Tuple[str, str, Optional[Exception]]:
+         """Extract text from a plain text file"""
+         try:
+             with open(file_path, 'r', encoding='utf-8') as txt_file:
+                 text = txt_file.read()
+
+             return text.strip(), "txt", None
+
+         except UnicodeDecodeError:
+             # Fall back to latin-1, which accepts any byte sequence
+             try:
+                 with open(file_path, 'r', encoding='latin-1') as txt_file:
+                     text = txt_file.read()
+                 return text.strip(), "txt", None
+             except Exception as e:
+                 return "", "txt", e
+
+         except Exception as e:
+             return "", "txt", e
+
+     @staticmethod
+     def parse_raw_text(text: str) -> Tuple[str, str, Optional[Exception]]:
+         """Process raw text input"""
+         try:
+             cleaned_text = text.strip()
+             if not cleaned_text:
+                 return "", "raw", ValueError("Empty text provided")
+             return cleaned_text, "raw", None
+         except Exception as e:
+             return "", "raw", e
+
+
+ class TextCleaner:
+     """Clean and normalize extracted text"""
+
+     @staticmethod
+     def clean(text: str) -> str:
+         """
+         Clean and normalize text: collapses all runs of whitespace
+         (including line breaks) into single spaces.
+         """
+         return ' '.join(text.split())
+
+     @staticmethod
+     def get_text_stats(text: str) -> dict:
+         """Get statistics about the text"""
+         words = text.split()
+         # Naive sentence split on "."; empty fragments are ignored
+         sentences = [s for s in text.split('.') if s.strip()]
+
+         return {
+             "character_count": len(text),
+             "word_count": len(words),
+             "sentence_count": len(sentences),
+             "average_word_length": len(text) / len(words) if words else 0,
+             "average_sentence_length": len(words) / len(sentences) if sentences else 0,
+             "emoji_count": sum(len(m) for m in _EMOJI_RE.findall(text)),
+             "em_dash_count": text.count('\u2014'),
+             "arrow_count": text.count('\u2192'),
+         }
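The counting logic in `get_text_stats` is easy to verify on a small input; a standalone sketch of the same approach (sentence splitting on `.` only, as above, so abbreviations and other punctuation are deliberately not handled):

```python
def text_stats(text):
    """Subset of the stats above: words, sentences, em-dashes."""
    words = text.split()
    # Split on "." and drop empty fragments (e.g. after a trailing dot)
    sentences = [s for s in text.split('.') if s.strip()]
    return {
        "character_count": len(text),
        "word_count": len(words),
        "sentence_count": len(sentences),
        "average_sentence_length": len(words) / len(sentences) if sentences else 0,
        "em_dash_count": text.count('\u2014'),
    }

stats = text_stats("One two three. Four five.")
print(stats["word_count"], stats["sentence_count"])  # 5 2
```

Note that filtering empty fragments before taking the length is what keeps the sentence count and the average-sentence-length denominator consistent with each other.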
backend/utils/highlighter.py ADDED
@@ -0,0 +1,207 @@
+ from typing import List
+ from dataclasses import dataclass
+
+
+ @dataclass
+ class Highlight:
+     """Represents a highlighted section of text"""
+     start: int
+     end: int
+     text: str
+     reason: str
+     confidence: float
+     color: str = "yellow"
+     source_detector: str = ""
+
+
+ class TextHighlighter:
+     """Utility for highlighting suspicious text sections"""
+
+     # Color mapping for confidence levels
+     CONFIDENCE_COLORS = {
+         "very_low": "#90EE90",   # Light green
+         "low": "#FFE4B5",        # Moccasin
+         "medium": "#FFD700",     # Gold
+         "high": "#FF6347",       # Tomato
+         "very_high": "#DC143C",  # Crimson
+     }
+
+     @staticmethod
+     def get_color_for_confidence(confidence: float) -> str:
+         """Get the color for a confidence score"""
+         if confidence < 0.2:
+             return TextHighlighter.CONFIDENCE_COLORS["very_low"]
+         elif confidence < 0.4:
+             return TextHighlighter.CONFIDENCE_COLORS["low"]
+         elif confidence < 0.6:
+             return TextHighlighter.CONFIDENCE_COLORS["medium"]
+         elif confidence < 0.8:
+             return TextHighlighter.CONFIDENCE_COLORS["high"]
+         else:
+             return TextHighlighter.CONFIDENCE_COLORS["very_high"]
+
+     @staticmethod
+     def create_html_report(
+         text: str,
+         highlights: List[Highlight],
+         title: str = "AI Detection Report"
+     ) -> str:
+         """
+         Create an HTML report with highlighted text.
+         Assumes highlights do not overlap.
+
+         Args:
+             text: Original text
+             highlights: List of highlighted sections
+             title: Report title
+
+         Returns:
+             HTML string
+         """
+         # Sort highlights by position
+         sorted_highlights = sorted(highlights, key=lambda h: h.start)
+
+         # Build highlighted text
+         html_text = ""
+         last_pos = 0
+
+         for highlight in sorted_highlights:
+             # Add text before the highlight
+             if highlight.start > last_pos:
+                 html_text += TextHighlighter._escape_html(
+                     text[last_pos:highlight.start]
+                 )
+
+             # Add the highlighted section. The reason is interpolated
+             # into an HTML attribute, so it must be escaped as well.
+             color = TextHighlighter.get_color_for_confidence(highlight.confidence)
+             html_text += (
+                 f'<span style="background-color: {color}; '
+                 f'padding: 2px 4px; border-radius: 3px;" '
+                 f'title="{TextHighlighter._escape_html(highlight.reason)}">'
+                 f'{TextHighlighter._escape_html(highlight.text)}</span>'
+             )
+
+             last_pos = highlight.end
+
+         # Add the remaining text
+         if last_pos < len(text):
+             html_text += TextHighlighter._escape_html(text[last_pos:])
+
+         # Build the full HTML document (the title is escaped too, since
+         # it is interpolated into markup)
+         safe_title = TextHighlighter._escape_html(title)
+         html = f"""
+ <!DOCTYPE html>
+ <html>
+ <head>
+     <meta charset="UTF-8">
+     <title>{safe_title}</title>
+     <style>
+         body {{
+             font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+             margin: 20px;
+             background-color: #f5f5f5;
+         }}
+         .report-container {{
+             background-color: white;
+             padding: 20px;
+             border-radius: 8px;
+             box-shadow: 0 2px 4px rgba(0,0,0,0.1);
+             max-width: 900px;
+             margin: 0 auto;
+         }}
+         h1 {{
+             color: #333;
+             border-bottom: 3px solid #007bff;
+             padding-bottom: 10px;
+         }}
+         .highlighted-text {{
+             line-height: 1.8;
+             margin: 20px 0;
+             padding: 15px;
+             background-color: #fafafa;
+             border-left: 4px solid #007bff;
+             white-space: pre-wrap;
+             word-wrap: break-word;
+         }}
+         .legend {{
+             display: flex;
+             gap: 20px;
+             margin: 20px 0;
+             flex-wrap: wrap;
+         }}
+         .legend-item {{
+             display: flex;
+             align-items: center;
+             gap: 8px;
+         }}
+         .legend-color {{
+             width: 20px;
+             height: 20px;
+             border-radius: 3px;
+         }}
+     </style>
+ </head>
+ <body>
+     <div class="report-container">
+         <h1>{safe_title}</h1>
+
+         <div class="legend">
+             <div class="legend-item">
+                 <div class="legend-color" style="background-color: #90EE90;"></div>
+                 <span>Very Low Confidence (&lt;20%)</span>
+             </div>
+             <div class="legend-item">
+                 <div class="legend-color" style="background-color: #FFE4B5;"></div>
+                 <span>Low Confidence (20-40%)</span>
+             </div>
+             <div class="legend-item">
+                 <div class="legend-color" style="background-color: #FFD700;"></div>
+                 <span>Medium Confidence (40-60%)</span>
+             </div>
+             <div class="legend-item">
+                 <div class="legend-color" style="background-color: #FF6347;"></div>
+                 <span>High Confidence (60-80%)</span>
+             </div>
+             <div class="legend-item">
+                 <div class="legend-color" style="background-color: #DC143C;"></div>
+                 <span>Very High Confidence (&gt;80%)</span>
+             </div>
+         </div>
+
+         <div class="highlighted-text">
+             {html_text}
+         </div>
+     </div>
+ </body>
+ </html>
+ """
+
+         return html
+
+     @staticmethod
+     def _escape_html(text: str) -> str:
+         """Escape HTML special characters"""
+         return (text
+                 .replace("&", "&amp;")
+                 .replace("<", "&lt;")
+                 .replace(">", "&gt;")
+                 .replace('"', "&quot;")
+                 .replace("'", "&#39;"))
+
+     @staticmethod
+     def create_text_report(
+         text: str,
+         highlights: List[Highlight]
+     ) -> str:
+         """
+         Create a plain text report with highlighted sections marked with [[ ]].
+         Assumes highlights do not overlap.
+         """
+         # Apply highlights in reverse start order so that the offsets of
+         # not-yet-processed highlights remain valid.
+         sorted_highlights = sorted(highlights, key=lambda h: h.start, reverse=True)
+
+         result = text
+         for highlight in sorted_highlights:
+             marked_text = f"[[{highlight.text}]]"
+             result = (
+                 result[:highlight.start] +
+                 marked_text +
+                 result[highlight.end:]
+             )
+
+         return result
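`create_text_report` applies highlights in reverse start order so that earlier offsets stay valid while later spans are rewritten. A standalone sketch of the same trick, with `(start, end, text)` tuples standing in for the `Highlight` dataclass:

```python
def mark(text, highlights):
    """Wrap each (start, end, snippet) span in [[ ]] markers.
    Processing from the end of the string backwards means inserting
    markers never shifts the offsets of spans still to be processed."""
    for start, end, snippet in sorted(highlights, key=lambda h: h[0], reverse=True):
        text = text[:start] + f"[[{snippet}]]" + text[end:]
    return text

text = "The quick brown fox jumps"
highlights = [(4, 9, "quick"), (16, 19, "fox")]
print(mark(text, highlights))  # The [[quick]] brown [[fox]] jumps
```

Processing in forward order would work too, but only by tracking a running offset; the reverse pass avoids that bookkeeping entirely.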
backend/utils/submission_logger.py ADDED
@@ -0,0 +1,49 @@
+ import json
+ import os
+ from datetime import datetime, timezone
+
+ # Log file lives at the project root (next to run.bat / run.sh)
+ _LOG_PATH = os.path.join(
+     os.path.dirname(__file__),  # backend/utils/
+     "..", "..",                 # -> project root
+     "submissions.jsonl"
+ )
+ _LOG_PATH = os.path.normpath(_LOG_PATH)
+
+
+ def log_submission(
+     filename: str,
+     overall_ai_score: float,
+     overall_confidence: str,
+     status_label: str,
+     detector_results: dict,
+     text_stats: dict,
+     text_preview: str = "",
+ ) -> None:
+     """
+     Append one JSON line to submissions.jsonl for every analysis.
+     Each record is emitted in a single write() call on an append-mode
+     handle; short appends like this are effectively atomic on most
+     platforms, but no explicit lock is taken.
+     """
+     record = {
+         "timestamp": datetime.now(timezone.utc).isoformat(),
+         "filename": filename,
+         "overall_ai_score": round(overall_ai_score, 4),
+         "overall_confidence": overall_confidence,
+         "status_label": status_label,
+         "text_preview": text_preview[:200],
+         "text_stats": text_stats,
+         "detectors": {
+             name: {
+                 "score": round(res.get("score", 0), 4),
+                 "confidence": res.get("confidence"),
+                 "prediction": res.get("metadata", {}).get("prediction"),
+                 "ai_probability": round(res.get("metadata", {}).get("ai_probability", res.get("score", 0)), 4),
+                 "human_probability": round(res.get("metadata", {}).get("human_probability", 1 - res.get("score", 0)), 4),
+             }
+             for name, res in detector_results.items()
+         },
+     }
+
+     line = json.dumps(record, ensure_ascii=False) + "\n"
+     with open(_LOG_PATH, "a", encoding="utf-8") as f:
+         f.write(line)
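JSON Lines keeps the log append-only: one self-contained JSON object per line, so a reader can process records one at a time without parsing the whole file. A minimal round-trip sketch (a temporary file stands in for the real `submissions.jsonl`, and the record fields are illustrative):

```python
import json
import os
import tempfile

# Hypothetical record with a subset of the fields logged above
record = {
    "filename": "essay.txt",
    "overall_ai_score": 0.7312,
    "status_label": "suspicious",
}

path = os.path.join(tempfile.mkdtemp(), "submissions.jsonl")

# Write: one json.dumps() per line, appended
with open(path, "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Read back: one json.loads() per line
with open(path, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
print(rows[0]["overall_ai_score"])  # 0.7312
```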
frontend/index.html ADDED
@@ -0,0 +1,1099 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta charset="UTF-8">
5
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
6
+ <title>AI Slop Detector - Analyze AI-Generated Content</title>
7
+ <style>
8
+ :root {
9
+ --primary-color: #007bff;
10
+ --success-color: #28a745;
11
+ --danger-color: #dc3545;
12
+ --warning-color: #ffc107;
13
+ --info-color: #17a2b8;
14
+ --dark-color: #343a40;
15
+ --light-color: #f8f9fa;
16
+ --border-color: #dee2e6;
17
+ --shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
18
+ --shadow-lg: 0 4px 16px rgba(0, 0, 0, 0.15);
19
+ }
20
+
21
+ * {
22
+ margin: 0;
23
+ padding: 0;
24
+ box-sizing: border-box;
25
+ }
26
+
27
+ body {
28
+ font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
29
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
30
+ min-height: 100vh;
31
+ padding: 20px;
32
+ color: #333;
33
+ }
34
+
35
+ .container {
36
+ max-width: 1200px;
37
+ margin: 0 auto;
38
+ }
39
+
40
+ header {
41
+ text-align: center;
42
+ color: white;
43
+ margin-bottom: 40px;
44
+ animation: slideDown 0.6s ease;
45
+ }
46
+
47
+ header h1 {
48
+ font-size: 2.5rem;
49
+ margin-bottom: 10px;
50
+ font-weight: 700;
51
+ }
52
+
53
+ header p {
54
+ font-size: 1.1rem;
55
+ opacity: 0.9;
56
+ }
57
+
58
+ .content {
59
+ display: grid;
60
+ grid-template-columns: 1fr 1fr;
61
+ gap: 30px;
62
+ margin-bottom: 40px;
63
+ }
64
+
65
+ @media (max-width: 900px) {
66
+ .content {
67
+ grid-template-columns: 1fr;
68
+ }
69
+ }
70
+
71
+ .card {
72
+ background: white;
73
+ border-radius: 12px;
74
+ padding: 30px;
75
+ box-shadow: var(--shadow-lg);
76
+ animation: fadeIn 0.6s ease 0.2s backwards;
77
+ }
78
+
79
+ .card h2 {
80
+ font-size: 1.5rem;
81
+ margin-bottom: 20px;
82
+ color: var(--dark-color);
83
+ border-bottom: 3px solid var(--primary-color);
84
+ padding-bottom: 15px;
85
+ }
86
+
87
+ /* Upload Section */
88
+ .upload-area {
89
+ border: 3px dashed var(--primary-color);
90
+ border-radius: 8px;
91
+ padding: 40px 20px;
92
+ text-align: center;
93
+ cursor: pointer;
94
+ transition: all 0.3s ease;
95
+ background: linear-gradient(135deg, rgba(0, 123, 255, 0.05) 0%, rgba(118, 75, 162, 0.05) 100%);
96
+ }
97
+
98
+ .upload-area:hover,
99
+ .upload-area.dragover {
100
+ border-color: var(--danger-color);
101
+ background: linear-gradient(135deg, rgba(220, 53, 69, 0.1) 0%, rgba(118, 75, 162, 0.1) 100%);
102
+ transform: translateY(-2px);
103
+ }
104
+
105
+ .upload-area p {
106
+ color: #666;
107
+ margin: 10px 0;
108
+ font-size: 0.95rem;
109
+ }
110
+
111
+ .upload-icon {
112
+ font-size: 2.5rem;
113
+ margin-bottom: 10px;
114
+ }
115
+
116
+ .file-input {
117
+ display: none;
118
+ }
119
+
120
+ .input-group {
121
+             margin-bottom: 20px;
+         }
+ 
+         label {
+             display: block;
+             margin-bottom: 8px;
+             font-weight: 600;
+             color: var(--dark-color);
+         }
+ 
+         textarea,
+         input[type="text"],
+         input[type="file"] {
+             width: 100%;
+             padding: 12px;
+             border: 1px solid var(--border-color);
+             border-radius: 6px;
+             font-family: 'Segoe UI', sans-serif;
+             font-size: 0.95rem;
+             transition: border-color 0.3s ease;
+         }
+ 
+         textarea {
+             resize: vertical;
+             min-height: 150px;
+             font-family: 'Courier New', monospace;
+         }
+ 
+         textarea:focus,
+         input[type="text"]:focus {
+             outline: none;
+             border-color: var(--primary-color);
+             box-shadow: 0 0 0 3px rgba(0, 123, 255, 0.1);
+         }
+ 
+         .button {
+             background: var(--primary-color);
+             color: white;
+             padding: 12px 24px;
+             border: none;
+             border-radius: 6px;
+             font-size: 1rem;
+             font-weight: 600;
+             cursor: pointer;
+             transition: all 0.3s ease;
+             display: inline-block;
+             text-align: center;
+         }
+ 
+         .button:hover {
+             background: #0056b3;
+             transform: translateY(-2px);
+             box-shadow: var(--shadow-lg);
+         }
+ 
+         .button:active {
+             transform: translateY(0);
+         }
+ 
+         .button.secondary {
+             background: var(--dark-color);
+         }
+ 
+         .button.secondary:hover {
+             background: #212529;
+         }
+ 
+         .button:disabled {
+             opacity: 0.5;
+             cursor: not-allowed;
+             transform: none;
+         }
+ 
+         /* Results Section */
+         .results-container {
+             margin-top: 30px;
+         }
+ 
+         .result-item {
+             background: var(--light-color);
+             border-left: 5px solid var(--primary-color);
+             padding: 15px;
+             margin-bottom: 15px;
+             border-radius: 6px;
+             cursor: pointer;
+             transition: all 0.3s ease;
+         }
+ 
+         .result-item:hover {
+             transform: translateX(5px);
+             box-shadow: var(--shadow);
+         }
+ 
+         .result-header {
+             display: flex;
+             justify-content: space-between;
+             align-items: center;
+             margin-bottom: 10px;
+         }
+ 
+         .result-filename {
+             font-weight: 600;
+             color: var(--dark-color);
+         }
+ 
+         .score-badge {
+             display: inline-block;
+             padding: 6px 12px;
+             border-radius: 20px;
+             font-weight: 600;
+             font-size: 0.9rem;
+             color: white;
+         }
+ 
+         .score-human {
+             background: var(--success-color);
+         }
+ 
+         .score-suspicious {
+             background: var(--warning-color);
+             color: #333;
+         }
+ 
+         .score-ai {
+             background: var(--danger-color);
+         }
+ 
+         .result-meta {
+             font-size: 0.85rem;
+             color: #666;
+             display: flex;
+             gap: 20px;
+             flex-wrap: wrap;
+         }
+ 
+         /* Analysis Results Display */
+         .analysis-result {
+             background: white;
+             border-radius: 12px;
+             padding: 30px;
+             box-shadow: var(--shadow-lg);
+             margin-top: 20px;
+             animation: slideUp 0.5s ease;
+         }
+ 
+         .overall-score {
+             text-align: center;
+             padding: 30px;
+             background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+             border-radius: 8px;
+             color: white;
+             margin-bottom: 30px;
+         }
+ 
+         .overall-score h3 {
+             font-size: 1.2rem;
+             margin-bottom: 10px;
+             opacity: 0.9;
+         }
+ 
+         .score-percentage {
+             font-size: 3rem;
+             font-weight: 700;
+             margin: 10px 0;
+         }
+ 
+         .score-confidence {
+             font-size: 1.1rem;
+             opacity: 0.95;
+             text-transform: uppercase;
+             letter-spacing: 1px;
+         }
+ 
+         .detector-results {
+             display: grid;
+             grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
+             gap: 20px;
+             margin: 20px 0;
+         }
+ 
+         .detector-card {
+             background: var(--light-color);
+             border: 2px solid var(--border-color);
+             border-radius: 8px;
+             padding: 20px;
+             transition: all 0.3s ease;
+         }
+ 
+         .detector-card:hover {
+             border-color: var(--primary-color);
+             box-shadow: var(--shadow-lg);
+             transform: translateY(-3px);
+         }
+ 
+         .detector-name {
+             font-weight: 600;
+             font-size: 1.1rem;
+             margin-bottom: 4px;
+             color: var(--dark-color);
+         }
+ 
+         .detector-model {
+             font-size: 0.78rem;
+             color: #999;
+             margin-bottom: 14px;
+             font-style: italic;
+         }
+ 
+         .detector-score-row {
+             display: flex;
+             align-items: baseline;
+             gap: 10px;
+             margin-bottom: 6px;
+         }
+ 
+         .detector-score {
+             font-size: 2rem;
+             font-weight: 700;
+             line-height: 1;
+         }
+ 
+         .detector-prediction-badge {
+             display: inline-block;
+             padding: 3px 10px;
+             border-radius: 12px;
+             font-size: 0.78rem;
+             font-weight: 600;
+             text-transform: uppercase;
+             letter-spacing: 0.5px;
+         }
+ 
+         .prediction-ai {
+             background: rgba(220, 53, 69, 0.12);
+             color: #c82333;
+             border: 1px solid rgba(220, 53, 69, 0.3);
+         }
+ 
+         .prediction-human {
+             background: rgba(40, 167, 69, 0.12);
+             color: #1e7e34;
+             border: 1px solid rgba(40, 167, 69, 0.3);
+         }
+ 
+         .prob-bar-wrap {
+             margin: 12px 0 10px;
+         }
+ 
+         .prob-bar-labels {
+             display: flex;
+             justify-content: space-between;
+             font-size: 0.78rem;
+             color: #666;
+             margin-bottom: 4px;
+         }
+ 
+         .prob-bar-track {
+             height: 10px;
+             border-radius: 5px;
+             background: #e9ecef;
+             overflow: hidden;
+             display: flex;
+         }
+ 
+         .prob-bar-ai {
+             height: 100%;
+             background: linear-gradient(90deg, #f8645a, #dc3545);
+             transition: width 0.6s ease;
+             border-radius: 5px 0 0 5px;
+         }
+ 
+         .prob-bar-human {
+             height: 100%;
+             background: linear-gradient(90deg, #28a745, #20c963);
+             transition: width 0.6s ease;
+             border-radius: 0 5px 5px 0;
+         }
+ 
+         .prob-values {
+             display: flex;
+             justify-content: space-between;
+             margin-top: 4px;
+             font-size: 0.82rem;
+             font-weight: 600;
+         }
+ 
+         .prob-ai-val { color: #dc3545; }
+         .prob-human-val { color: #28a745; }
+ 
+         .detector-confidence {
+             font-size: 0.82rem;
+             color: #777;
+             margin-top: 10px;
+         }
+ 
+         .confidence-dot {
+             display: inline-block;
+             width: 8px;
+             height: 8px;
+             border-radius: 50%;
+             margin-right: 5px;
+             background: #aaa;
+         }
+ 
+         .conf-very_low .confidence-dot { background: #adb5bd; }
+         .conf-low .confidence-dot { background: #6c757d; }
+         .conf-medium .confidence-dot { background: #ffc107; }
+         .conf-high .confidence-dot { background: #fd7e14; }
+         .conf-very_high .confidence-dot { background: #dc3545; }
+ 
+         .detector-explanation {
+             font-size: 0.85rem;
+             color: #666;
+             line-height: 1.5;
+             margin-top: 10px;
+             padding-top: 10px;
+             border-top: 1px solid var(--border-color);
+         }
+ 
+         .stats-grid {
+             display: grid;
+             grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
+             gap: 15px;
+             margin: 20px 0;
+         }
+ 
+         .stat-box {
+             background: var(--light-color);
+             padding: 15px;
+             border-radius: 8px;
+             text-align: center;
+             border: 1px solid var(--border-color);
+         }
+ 
+         .stat-label {
+             font-size: 0.85rem;
+             color: #666;
+             font-weight: 500;
+         }
+ 
+         .stat-value {
+             font-size: 1.8rem;
+             font-weight: 700;
+             color: var(--primary-color);
+             margin-top: 8px;
+         }
+ 
+         /* Tabs */
+         .tabs {
+             display: flex;
+             gap: 10px;
+             border-bottom: 2px solid var(--border-color);
+             margin: 20px 0;
+         }
+ 
+         .tab-button {
+             background: none;
+             border: none;
+             padding: 10px 20px;
+             cursor: pointer;
+             font-weight: 600;
+             color: #666;
+             border-bottom: 3px solid transparent;
+             transition: all 0.3s ease;
+         }
+ 
+         .tab-button.active {
+             color: var(--primary-color);
+             border-bottom-color: var(--primary-color);
+         }
+ 
+         .tab-content {
+             display: none;
+         }
+ 
+         .tab-content.active {
+             display: block;
+             animation: fadeIn 0.3s ease;
+         }
+ 
+         /* Status Messages */
+         .alert {
+             padding: 15px;
+             border-radius: 6px;
+             margin-bottom: 20px;
+             animation: slideDown 0.3s ease;
+         }
+ 
+         .alert-success {
+             background: rgba(40, 167, 69, 0.1);
+             color: #155724;
+             border-left: 4px solid var(--success-color);
+         }
+ 
+         .alert-error {
+             background: rgba(220, 53, 69, 0.1);
+             color: #721c24;
+             border-left: 4px solid var(--danger-color);
+         }
+ 
+         .alert-info {
+             background: rgba(23, 162, 184, 0.1);
+             color: #0c5460;
+             border-left: 4px solid var(--info-color);
+         }
+ 
+         .loading {
+             display: inline-block;
+             width: 20px;
+             height: 20px;
+             border: 3px solid rgba(255, 255, 255, 0.3);
+             border-radius: 50%;
+             border-top-color: white;
+             animation: spin 1s linear infinite;
+         }
+ 
+         .spinner {
+             text-align: center;
+             padding: 40px;
+         }
+ 
+         @keyframes spin {
+             to { transform: rotate(360deg); }
+         }
+ 
+         @keyframes slideDown {
+             from {
+                 opacity: 0;
+                 transform: translateY(-20px);
+             }
+             to {
+                 opacity: 1;
+                 transform: translateY(0);
+             }
+         }
+ 
+         @keyframes slideUp {
+             from {
+                 opacity: 0;
+                 transform: translateY(20px);
+             }
+             to {
+                 opacity: 1;
+                 transform: translateY(0);
+             }
+         }
+ 
+         @keyframes fadeIn {
+             from { opacity: 0; }
+             to { opacity: 1; }
+         }
+ 
+         /* Empty State */
+         .empty-state {
+             text-align: center;
+             padding: 60px 20px;
+             color: #999;
+         }
+ 
+         .empty-state-icon {
+             font-size: 3rem;
+             margin-bottom: 20px;
+         }
+ 
+         .empty-state p {
+             font-size: 1.1rem;
+             margin-bottom: 20px;
+         }
+ 
+         /* Footer */
+         footer {
+             text-align: center;
+             color: white;
+             opacity: 0.8;
+             margin-top: 40px;
+             padding: 20px;
+         }
+     </style>
+ </head>
+ <body>
+     <div class="container">
+         <header>
+             <h1>πŸ” AI Slop Detector</h1>
+             <p>Analyze documents and text for AI-generated content</p>
+         </header>
+ 
+         <div class="content">
+             <!-- Upload Section -->
+             <div class="card" id="uploadCard">
+                 <h2>πŸ“€ Upload File</h2>
+                 <div class="upload-area" id="uploadArea">
+                     <div class="upload-icon">πŸ“„</div>
+                     <p><strong>Click or drag to upload</strong></p>
+                     <p style="font-size: 0.85rem;">Supported: PDF, DOCX, TXT (Max 50MB)</p>
+                 </div>
+                 <input type="file" id="fileInput" class="file-input" accept=".pdf,.docx,.doc,.txt">
+ 
+                 <div style="margin-top: 20px;">
+                     <div class="input-group">
+                         <label for="fileName">Optional: Custom Name</label>
+                         <input type="text" id="fileName" placeholder="e.g., document.pdf">
+                     </div>
+                     <button class="button" style="width: 100%;" id="uploadButton" disabled>
+                         Upload & Analyze
+                     </button>
+                 </div>
+             </div>
+ 
+             <!-- Text Analysis Section -->
+             <div class="card" id="textCard">
+                 <h2>✏️ Analyze Text</h2>
+                 <div class="input-group">
+                     <label for="rawText">Paste your text here</label>
+                     <textarea id="rawText" placeholder="Enter or paste text to analyze..."></textarea>
+                 </div>
+                 <button class="button" style="width: 100%;" id="analyzeButton">
+                     Analyze Text
+                 </button>
+             </div>
+         </div>
+ 
+         <!-- Loading Indicator -->
+         <div id="loadingIndicator" style="display: none;" class="spinner">
+             <div class="loading"></div>
+             <p style="margin-top: 10px; color: white;">Analyzing...</p>
+         </div>
+ 
+         <!-- Alert Messages -->
+         <div id="alertContainer"></div>
+ 
+         <!-- Results Section -->
+         <div id="resultsSection" style="display: none;">
+             <div class="card">
+                 <h2>πŸ“Š Analysis Results</h2>
+ 
+                 <div class="tabs">
+                     <button class="tab-button active" data-tab="current">Current Analysis</button>
+                     <button class="tab-button" data-tab="history">History</button>
+                     <button class="tab-button" data-tab="statistics">Statistics</button>
+                 </div>
+ 
+                 <!-- Current Analysis Tab -->
+                 <div class="tab-content active" id="currentTab">
+                     <div id="analysisResult"></div>
+                 </div>
+ 
+                 <!-- History Tab -->
+                 <div class="tab-content" id="historyTab">
+                     <div id="historyList"></div>
+                 </div>
+ 
+                 <!-- Statistics Tab -->
+                 <div class="tab-content" id="statisticsTab">
+                     <div id="statistics"></div>
+                 </div>
+             </div>
+         </div>
+ 
+         <footer>
+             <p>πŸ›‘οΈ AI Slop Detector - Powered by Advanced AI Detection Algorithms</p>
+             <p style="font-size: 0.85rem; margin-top: 10px;">Multi-detector ensemble for accurate AI-generated content detection</p>
+         </footer>
+     </div>
+ 
+     <script>
+         // API Base URL
+         const API_BASE = '/api';
+ 
+         // State
+         let currentResult = null;
+         let selectedFile = null;
+ 
+         // ============================================================================
+         // File Upload Handler
+         // ============================================================================
+ 
+         const uploadArea = document.getElementById('uploadArea');
+         const fileInput = document.getElementById('fileInput');
+         const uploadButton = document.getElementById('uploadButton');
+         const fileName = document.getElementById('fileName');
+ 
+         // Click to upload
+         uploadArea.addEventListener('click', () => fileInput.click());
+ 
+         // Drag and drop
+         uploadArea.addEventListener('dragover', (e) => {
+             e.preventDefault();
+             uploadArea.classList.add('dragover');
+         });
+ 
+         uploadArea.addEventListener('dragleave', () => {
+             uploadArea.classList.remove('dragover');
+         });
+ 
+         uploadArea.addEventListener('drop', (e) => {
+             e.preventDefault();
+             uploadArea.classList.remove('dragover');
+             handleFileSelect(e.dataTransfer.files);
+         });
+ 
+         // File input change
+         fileInput.addEventListener('change', (e) => {
+             handleFileSelect(e.target.files);
+         });
+ 
+         function handleFileSelect(files) {
+             if (files.length === 0) return;
+ 
+             selectedFile = files[0];
+             uploadArea.innerHTML = `<div style="color: #666;">βœ“ ${selectedFile.name} selected</div>`;
+             uploadButton.disabled = false;
+         }
+ 
+         // Upload and analyze file
+         uploadButton.addEventListener('click', async () => {
+             if (!selectedFile) return;
+ 
+             showLoading(true);
+             hideAlert();
+ 
+             try {
+                 const formData = new FormData();
+                 formData.append('file', selectedFile);
+ 
+                 const response = await fetch(`${API_BASE}/analyze/file`, {
+                     method: 'POST',
+                     body: formData
+                 });
+ 
+                 const data = await response.json();
+ 
+                 if (response.ok) {
+                     currentResult = data;
+                     displayAnalysisResult(data);
+                     showAlert('Analysis completed successfully!', 'success');
+                     loadHistory();
+                     loadStatistics();
+                 } else {
+                     showAlert(data.message || 'Analysis failed', 'error');
+                 }
+             } catch (error) {
+                 showAlert(`Error: ${error.message}`, 'error');
+             } finally {
+                 showLoading(false);
+             }
+         });
+ 
+         // ============================================================================
+         // Text Analysis Handler
+         // ============================================================================
+ 
+         const analyzeButton = document.getElementById('analyzeButton');
+         const rawText = document.getElementById('rawText');
+ 
+         analyzeButton.addEventListener('click', async () => {
+             const text = rawText.value.trim();
+ 
+             if (!text) {
+                 showAlert('Please enter some text to analyze', 'error');
+                 return;
+             }
+ 
+             if (text.length < 10) {
+                 showAlert('Text must be at least 10 characters', 'error');
+                 return;
+             }
+ 
+             showLoading(true);
+             hideAlert();
+ 
+             try {
+                 const response = await fetch(`${API_BASE}/analyze/text`, {
+                     method: 'POST',
+                     headers: {
+                         'Content-Type': 'application/json',
+                     },
+                     body: JSON.stringify({ text: text })
+                 });
+ 
+                 const data = await response.json();
+ 
+                 if (response.ok) {
+                     currentResult = data;
+                     displayAnalysisResult(data);
+                     showAlert('Analysis completed successfully!', 'success');
+                     loadHistory();
+                     loadStatistics();
+                 } else {
+                     showAlert(data.message || 'Analysis failed', 'error');
+                 }
+             } catch (error) {
+                 showAlert(`Error: ${error.message}`, 'error');
+             } finally {
+                 showLoading(false);
+             }
+         });
+ 
+         // ============================================================================
+         // Display Functions
+         // ============================================================================
+ 
+         function displayAnalysisResult(result) {
+             document.getElementById('resultsSection').style.display = 'block';
+ 
+             const html = `
+                 <div class="analysis-result">
+                     <div class="overall-score">
+                         <h3>Overall AI Likelihood</h3>
+                         <div class="score-percentage">${result.overall_ai_score_percentage}</div>
+                         <div class="score-confidence">${result.overall_confidence}</div>
+                         <div style="margin-top: 15px; font-size: 1rem;">${result.status_label}</div>
+                     </div>
+ 
+                     <h3 style="margin-bottom: 15px;">πŸ“ˆ Detector Results</h3>
+                     <div class="detector-results">
+                         ${Object.entries(result.detector_results).map(([name, detector]) => {
+                             const meta = detector.metadata || {};
+                             const aiProb = meta.ai_probability != null ? meta.ai_probability : detector.score;
+                             const humanProb = meta.human_probability != null ? meta.human_probability : (1 - detector.score);
+                             const prediction = meta.prediction || null;
+                             const isAI = prediction ? prediction.toLowerCase().includes('ai') : detector.score >= 0.5;
+                             const scoreColor = detector.score >= 0.7 ? '#dc3545' : detector.score >= 0.3 ? '#fd7e14' : '#28a745';
+                             const conf = detector.confidence || 'n/a';
+                             return `
+                             <div class="detector-card">
+                                 <div class="detector-name">${formatDetectorName(name)}</div>
+                                 <div class="detector-model">${meta.model || 'N/A'}</div>
+ 
+                                 <div class="detector-score-row">
+                                     <div class="detector-score" style="color:${scoreColor}">${(detector.score * 100).toFixed(1)}%</div>
+                                     ${prediction ? `<span class="detector-prediction-badge ${isAI ? 'prediction-ai' : 'prediction-human'}">${prediction}</span>` : ''}
+                                 </div>
+ 
+                                 <div class="prob-bar-wrap">
+                                     <div class="prob-bar-labels">
+                                         <span>AI</span>
+                                         <span>Human</span>
+                                     </div>
+                                     <div class="prob-bar-track">
+                                         <div class="prob-bar-ai" style="width:${(aiProb * 100).toFixed(1)}%"></div>
+                                         <div class="prob-bar-human" style="width:${(humanProb * 100).toFixed(1)}%"></div>
+                                     </div>
+                                     <div class="prob-values">
+                                         <span class="prob-ai-val">${(aiProb * 100).toFixed(1)}%</span>
+                                         <span class="prob-human-val">${(humanProb * 100).toFixed(1)}%</span>
+                                     </div>
+                                 </div>
+ 
+                                 <div class="detector-confidence conf-${conf}">
+                                     <span class="confidence-dot"></span>Confidence: ${conf.replace('_', ' ')}
+                                 </div>
+                                 <div class="detector-explanation">${detector.explanation || 'No details'}</div>
+                             </div>`;
+                         }).join('')}
+                     </div>
+ 
+                     <h3 style="margin-top: 30px; margin-bottom: 15px;">πŸ“Š Text Statistics</h3>
+                     <div class="stats-grid">
+                         <div class="stat-box">
+                             <div class="stat-label">Characters</div>
+                             <div class="stat-value">${result.text_stats.character_count.toLocaleString()}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Words</div>
+                             <div class="stat-value">${result.text_stats.word_count.toLocaleString()}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Sentences</div>
+                             <div class="stat-value">${result.text_stats.sentence_count}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Avg Word Length</div>
+                             <div class="stat-value">${result.text_stats.average_word_length.toFixed(1)}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Emojis</div>
+                             <div class="stat-value">${result.text_stats.emoji_count ?? 0}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Em Dashes (β€”)</div>
+                             <div class="stat-value">${result.text_stats.em_dash_count ?? 0}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Arrows (β†’)</div>
+                             <div class="stat-value">${result.text_stats.arrow_count ?? 0}</div>
+                         </div>
+                     </div>
+ 
+                     <div style="margin-top: 20px;">
+                         <strong>Detectors Used:</strong> ${result.enabled_detectors.join(', ')}
+                     </div>
+                 </div>
+             `;
+ 
+             document.getElementById('analysisResult').innerHTML = html;
+             document.querySelector('[data-tab="current"]').click();
+         }
+ 
+         function formatDetectorName(name) {
+             const names = {
+                 'roberta': 'RoBERTa Detector',
+                 'perplexity': 'Perplexity Analysis',
+                 'llmdet': 'LLMDet Detector',
+                 'hf_classifier': 'HF Classifier',
+                 'outfox': 'OUTFOX Statistical',
+                 'tmr_detector': 'TMR (Target Mining RoBERTa)',
+                 'pattern': 'Pattern Signals (EM Dash + Emoji)',
+             };
+             return names[name] || name.toUpperCase();
+         }
+ 
+         async function loadHistory() {
+             try {
+                 const response = await fetch(`${API_BASE}/results?limit=10&sort=recent`);
+                 const data = await response.json();
+ 
+                 if (!data.results || data.results.length === 0) {
+                     document.getElementById('historyList').innerHTML = `
+                         <div class="empty-state">
+                             <div class="empty-state-icon">πŸ“‚</div>
+                             <p>No previous analyses yet</p>
+                         </div>
+                     `;
+                     return;
+                 }
+ 
+                 const html = data.results.map(result => `
+                     <div class="result-item" onclick="viewResult(${result.id})">
+                         <div class="result-header">
+                             <span class="result-filename">${result.filename}</span>
+                             <span class="score-badge ${getScoreBadgeClass(result.overall_ai_score)}">
+                                 ${(result.overall_ai_score * 100).toFixed(1)}%
+                             </span>
+                         </div>
+                         <div class="result-meta">
+                             <span>πŸ“… ${new Date(result.upload_timestamp).toLocaleDateString()}</span>
+                             <span>πŸ“ ${result.word_count} words</span>
+                         </div>
+                     </div>
+                 `).join('');
+ 
+                 document.getElementById('historyList').innerHTML = html;
+             } catch (error) {
+                 console.error('Error loading history:', error);
+             }
+         }
+ 
+         async function loadStatistics() {
+             try {
+                 const response = await fetch(`${API_BASE}/statistics/summary`);
+                 const data = await response.json();
+ 
+                 if (!data.summary) {
+                     return;
+                 }
+ 
+                 const stats = data.summary;
+                 const html = `
+                     <div class="stats-grid" style="margin-bottom: 20px;">
+                         <div class="stat-box">
+                             <div class="stat-label">Total Analyses</div>
+                             <div class="stat-value">${stats.total_analyses}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Average AI Score</div>
+                             <div class="stat-value">${(stats.average_ai_score * 100).toFixed(1)}%</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Text Analyzed</div>
+                             <div class="stat-value">${(stats.total_text_analyzed / 1000).toFixed(0)}K chars</div>
+                         </div>
+                     </div>
+                     <div class="stats-grid">
+                         <div class="stat-box">
+                             <div class="stat-label">Likely Human</div>
+                             <div class="stat-value" style="color: #28a745;">${stats.likely_human}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Suspicious</div>
+                             <div class="stat-value" style="color: #ffc107;">${stats.suspicious}</div>
+                         </div>
+                         <div class="stat-box">
+                             <div class="stat-label">Likely AI</div>
+                             <div class="stat-value" style="color: #dc3545;">${stats.likely_ai}</div>
+                         </div>
+                     </div>
+                 `;
+ 
+                 document.getElementById('statistics').innerHTML = html;
+             } catch (error) {
+                 console.error('Error loading statistics:', error);
+             }
+         }
+ 
+         function getScoreBadgeClass(score) {
+             if (score < 0.3) return 'score-human';
+             if (score < 0.7) return 'score-suspicious';
+             return 'score-ai';
+         }
+ 
+         async function viewResult(resultId) {
+             try {
+                 const response = await fetch(`${API_BASE}/results/${resultId}`);
+                 const data = await response.json();
+ 
+                 if (response.ok && data.result) {
+                     currentResult = data.result;
+                     displayAnalysisResult({
+                         ...data.result,
+                         detector_results: data.result.detector_results || {},
+                         enabled_detectors: [],
+                         overall_ai_score_percentage: `${(data.result.overall_ai_score * 100).toFixed(1)}%`,
+                         status_label: 'View from history',
+                         text_stats: {
+                             character_count: data.result.text_length,
+                             word_count: data.result.word_count,
+                             sentence_count: 0,
+                             average_word_length: 0
+                         }
+                     });
+                 }
+             } catch (error) {
+                 console.error('Error viewing result:', error);
+             }
+         }
+ 
+         // ============================================================================
+         // Utility Functions
+         // ============================================================================
+ 
+         function showLoading(show) {
+             document.getElementById('loadingIndicator').style.display = show ? 'block' : 'none';
+             uploadButton.disabled = show;
+             analyzeButton.disabled = show;
+         }
+ 
+         function showAlert(message, type = 'info') {
+             const alertHTML = `
+                 <div class="alert alert-${type}">
+                     ${message}
+                 </div>
+             `;
+             const container = document.getElementById('alertContainer');
+             container.innerHTML = alertHTML;
+ 
+             // Auto-dismiss after 5 seconds
+             setTimeout(() => hideAlert(), 5000);
+         }
+ 
+         function hideAlert() {
+             document.getElementById('alertContainer').innerHTML = '';
+         }
+ 
+         // Tab switching
+         document.querySelectorAll('.tab-button').forEach(button => {
+             button.addEventListener('click', (e) => {
+                 const tabName = e.target.dataset.tab;
+ 
+                 // Hide all tabs
+                 document.querySelectorAll('.tab-content').forEach(tab => {
+                     tab.classList.remove('active');
+                 });
+ 
+                 // Remove active class from all buttons
+                 document.querySelectorAll('.tab-button').forEach(btn => {
+                     btn.classList.remove('active');
+                 });
+ 
+                 // Show selected tab
+                 document.getElementById(tabName + 'Tab').classList.add('active');
+                 e.target.classList.add('active');
+             });
+         });
+ 
+         // Load initial data
+         loadHistory();
+         loadStatistics();
+     </script>
+ </body>
+ </html>