# CEFR Auto-Grader Web App - Quick Start Guide

## Application Status

✅ **RUNNING** - Fully functional

## Quick Access

- **Web Interface**: http://localhost:5000
- **LAN Access**: http://192.168.1.11:5000

## Starting the Application

If the app is not running, start it from the project root:

```bash
cd /home/fwl/src/textmining
source .venv/bin/activate
python web_app/app.py
```

Or run it in the background:

```bash
nohup python web_app/app.py > web_app/flask.log 2>&1 &
```

## Model Information

- **Architecture**: Metric Proto K3
- **Base Model**: KB/bert-base-swedish-cased
- **Device**: CUDA (GPU)
- **Performance**: 84.1% macro F1, 87.3% accuracy

## Testing Examples

| Sentence | Predicted Level | Confidence |
|----------|-----------------|------------|
| "Hej." | A1 | 98.9% |
| "Jag heter Anna." | A1 | 98.9% |
| "Jag studerar svenska." | A1 | 99.1% |
| "Den komplexa algoritmen..." | B2 | 99.0% |
| "Det metodologiska ramverket..." | C1 | 99.1% |

## Features

- 📝 Large text input area
- 🔍 Automatic sentence segmentation
- 🎨 Color-coded CEFR levels (A1-C2)
- 📊 Statistics dashboard
- 📈 Level distribution visualization
- 📋 Detailed results table
- ⚡ Real-time processing

## Files

- `app.py` - Flask application
- `model.py` - Model loading & inference
- `templates/index.html` - Web interface
- `static/css/style.css` - Styling
- `static/js/app.js` - Frontend logic

## Troubleshooting

If all predictions come back as the same level:

1. Check that the model loaded: `grep "Loading model" web_app/flask.log`
2. Verify the model checkpoint exists: `ls runs/metric-proto-k3/metric_proto.pt`
3. Restart from the project root: `cd /home/fwl/src/textmining`

## API Endpoint

```bash
curl -X POST http://localhost:5000/assess \
  -H "Content-Type: application/json" \
  -d '{"text": "Jag heter Anna."}'
```
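The `/assess` endpoint can also be called from Python instead of `curl`. Below is a minimal sketch using only the standard library; it assumes the endpoint returns JSON containing a list of per-sentence results, and the `"level"` field name in `level_distribution` is a hypothetical example, not confirmed by the source.

```python
import json
from collections import Counter
from urllib.request import Request, urlopen


def assess(text, url="http://localhost:5000/assess"):
    """POST text to the /assess endpoint and return the parsed JSON response."""
    req = Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


def level_distribution(results):
    """Count predicted CEFR levels across per-sentence results.

    Assumes each result is a dict with a "level" key (hypothetical field name);
    adapt to whatever shape the running server actually returns.
    """
    return Counter(r["level"] for r in results)
```

For example, if the server returned three sentence results with levels A1, A1, and B2, `level_distribution` would count two A1 sentences and one B2 sentence, mirroring the level distribution shown in the web interface.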