
CEFR Auto-Grader Web App - Quick Start Guide

Application Status

✅ RUNNING - Fully functional

Quick Access

Starting the Application

If the app is not running, start it from the project root:

cd /home/fwl/src/textmining
source .venv/bin/activate
python web_app/app.py

Or run it in the background:

nohup python web_app/app.py > web_app/flask.log 2>&1 &

Model Information

  • Architecture: Metric Proto K3
  • Base Model: KB/bert-base-swedish-cased
  • Device: CUDA (GPU)
  • Performance: 84.1% macro F1, 87.3% accuracy
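Macro F1 averages the per-class F1 scores, so it weights rare CEFR levels as heavily as common ones, which is why it is reported alongside plain accuracy. A minimal pure-Python illustration of the metric (not the project's evaluation code, which presumably uses a library implementation):

```python
def macro_f1(y_true, y_pred, labels):
    """Average the per-label F1 scores over all labels."""
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

With an imbalanced test set, this number can differ noticeably from accuracy, since every level contributes equally to the average.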

Testing Examples

Sentence                          Predicted Level   Confidence
"Hej."                            A1                98.9%
"Jag heter Anna."                 A1                98.9%
"Jag studerar svenska."           A1                99.1%
"Den komplexa algoritmen..."      B2                99.0%
"Det metodologiska ramverket..."  C1                99.1%

Features

  • πŸ“ Large text input area
  • πŸ” Automatic sentence segmentation
  • 🎨 Color-coded CEFR levels (A1-C2)
  • πŸ“Š Statistics dashboard
  • πŸ“ˆ Level distribution visualization
  • πŸ“‹ Detailed results table
  • ⚑ Real-time processing
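The sentence segmentation step splits the pasted text before each sentence is graded individually. The app's actual segmenter is not shown here; a naive regex sketch conveys the idea:

```python
import re

def split_sentences(text):
    """Split text into sentences on ., !, or ? followed by whitespace.

    Illustration only: the app may use a proper tokenizer that handles
    abbreviations and decimals, which this regex does not.
    """
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]
```

For example, `split_sentences("Hej. Jag heter Anna.")` yields the two sentences that would each receive their own CEFR prediction.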

Files

  • app.py - Flask application
  • model.py - Model loading & inference
  • templates/index.html - Web interface
  • static/css/style.css - Styling
  • static/js/app.js - Frontend logic

Troubleshooting

If predictions all come back as the same level:

  1. Check that the model loaded: grep "Loading model" web_app/flask.log
  2. Verify the model path: ls runs/metric-proto-k3/metric_proto.pt
  3. Restart from the project root: cd /home/fwl/src/textmining && python web_app/app.py
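The checkpoint check in step 2 can also be done from Python, which is handy when debugging model.py interactively; the path below is the one from the checklist, assumed relative to the project root:

```python
from pathlib import Path

def model_checkpoint_ok(root="."):
    """Return True if the expected checkpoint file exists under root."""
    ckpt = Path(root) / "runs" / "metric-proto-k3" / "metric_proto.pt"
    return ckpt.is_file()
```

If this returns False while the Flask process is running, the app was likely started from the wrong working directory.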

API Endpoint

curl -X POST http://localhost:5000/assess \
  -H "Content-Type: application/json" \
  -d '{"text": "Jag heter Anna."}'