---
title: STOP
sdk: docker
app_port: 7860
colorFrom: red
colorTo: indigo
description: >-
  STOP/NOT_STOP text classification using Linear SVM deployed with FastAPI and
  Docker.
---
# STOP Classifier API
This Hugging Face Space hosts a low-latency text classification service deployed with Docker and FastAPI.
The service uses a lightweight Linear Support Vector Machine (SVM) trained on TF-IDF text features to classify messages as either intending to end communication (STOP) or not (NOT_STOP). As measured in the training script, the SVM provides millisecond-level inference, which makes it well suited to a low-latency API.
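The training code itself is not part of this Space, but the pipeline described above can be sketched with scikit-learn. The example texts, labels, and output file names below are illustrative assumptions that mirror the `checkpoint/` layout, not the actual training data or script.

```python
# Hypothetical training sketch for the pipeline described above
# (TF-IDF features into a Linear SVM). The example data, labels, and
# output file names are illustrative assumptions mirroring checkpoint/.
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "do not ever text me again",            # STOP
    "please discontinue all contact",       # STOP
    "stop messaging me",                    # STOP
    "I will stop by your office tomorrow",  # NOT_STOP
    "see you at lunch",                     # NOT_STOP
    "can we reschedule the meeting",        # NOT_STOP
]
labels = ["STOP", "STOP", "STOP", "NOT_STOP", "NOT_STOP", "NOT_STOP"]

# Fit the vectorizer, then the linear SVM on the resulting features.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
clf = LinearSVC()
clf.fit(features, labels)

# Persist both artifacts, matching the checkpoint/ layout below.
with open("tfidf_vectorizer.pkl", "wb") as f:
    pickle.dump(vectorizer, f)
with open("svm_stop_classifier.pkl", "wb") as f:
    pickle.dump(clf, f)

print(clf.predict(vectorizer.transform(["please discontinue all contact"]))[0])
```

At serving time the two pickles are loaded once at startup, so each request only pays for one sparse matrix-vector product, which is where the millisecond-level latency comes from.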
## Project Structure
The deployment uses the following structure:
```
.
├── app.py
├── Dockerfile
├── requirements.txt
├── README.md
└── checkpoint/
    ├── tfidf_vectorizer.pkl
    └── svm_stop_classifier.pkl
```
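For reference, a minimal `Dockerfile` consistent with the metadata above (`sdk: docker`, `app_port: 7860`) could look like the following. The base image, dependency handling, and `uvicorn` invocation are assumptions, not the Space's actual configuration.

```dockerfile
# Hypothetical Dockerfile sketch; base image and versions are assumptions.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first to leverage Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and model checkpoints.
COPY app.py .
COPY checkpoint/ checkpoint/

# Hugging Face Spaces routes traffic to app_port (7860 here).
EXPOSE 7860
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
```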
## API Endpoints
The FastAPI application exposes three endpoints: a health check and two prediction routes.
### 1. Health Check (GET)

- Path: `/`
- Method: `GET`
- Description: A simple endpoint to confirm the service is running and the models are loaded.
### 2. Single Prediction (GET)

- Path: `/predict?text=<your_text>`
- Method: `GET`
- Description: Classifies a single text string passed as a query parameter. Suitable for quick, individual queries.
- Example Query: `/predict?text=please%20discontinue%20all%20contact`
### 3. Batch Prediction (POST)

- Path: `/predict`
- Method: `POST`
- Description: Classifies a list of text strings in a single request. This is the recommended approach for high-throughput, low-latency production use cases due to reduced per-request overhead.
- Request Body (JSON):

```json
{
  "texts": [
    "do not ever text me again",
    "I will stop by your office tomorrow"
  ]
}
```