---
title: STOP
sdk: docker
app_port: 7860
colorFrom: red
colorTo: indigo
description: STOP/NOT_STOP text classification using Linear SVM deployed with FastAPI and Docker.
---
# STOP Classifier API
This Hugging Face Space hosts a low-latency text classification service deployed with Docker and FastAPI.
The service uses a highly efficient Linear Support Vector Machine (SVM) model trained on text features extracted via TF-IDF to classify messages as either intending to end communication (`STOP`) or not (`NOT_STOP`). As confirmed by the training script, the SVM model provides millisecond-level inference, which is ideal for the required low-latency API.
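The pipeline described above can be sketched in a few lines of scikit-learn. This is a toy illustration only: the real training data, labels, and hyperparameters are not part of this Space, and the example below is an assumption about the general shape of the pipeline (TF-IDF features fed into a `LinearSVC`).

```python
# Toy sketch of a TF-IDF + Linear SVM STOP classifier.
# The training texts and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "do not ever text me again",
    "please discontinue all contact",
    "stop messaging me",
    "I will stop by your office tomorrow",
    "see you at lunch",
    "thanks, talk soon",
]
labels = ["STOP", "STOP", "STOP", "NOT_STOP", "NOT_STOP", "NOT_STOP"]

# Fit the vectorizer and the linear SVM on the toy data.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LinearSVC()
model.fit(X, labels)

# Inference is a single sparse matrix-vector product, hence the
# millisecond-level latency mentioned above.
print(model.predict(vectorizer.transform(["never contact me again"])))
```

At serving time the fitted `vectorizer` and `model` would be pickled (as in `checkpoint/` below) and loaded once at startup, so each request pays only the transform-and-predict cost.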
## Project Structure
The deployment uses the following structure:
```
.
├── app.py
├── Dockerfile
├── requirements.txt
├── README.md
└── checkpoint/
    ├── tfidf_vectorizer.pkl
    └── svm_stop_classifier.pkl
```
## API Endpoints
The FastAPI application provides a health check and two prediction endpoints:
### 1. Health Check (GET)
* **Path:** `/`
* **Method:** `GET`
* **Description:** A simple endpoint to confirm the service is running and the models are loaded.
### 2. Single Prediction (GET)
* **Path:** `/predict?text=<your_text>`
* **Method:** `GET`
* **Description:** Classifies a single text string passed as a query parameter. This is suitable for quick, individual queries.
* **Example Query:** `/predict?text=please%20discontinue%20all%20contact`
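Query parameters must be URL-encoded, which the standard library handles for you. In the sketch below, the host is a placeholder for your Space's actual URL:

```python
# Build the single-prediction URL client-side with the standard library.
from urllib.parse import quote

base = "https://<your-space>.hf.space"  # placeholder host
text = "please discontinue all contact"
url = f"{base}/predict?text={quote(text)}"
print(url)  # spaces are encoded as %20, matching the example query above
```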
### 3. Batch Prediction (POST)
* **Path:** `/predict`
* **Method:** `POST`
* **Description:** Classifies a list of text strings in a single request. This is the recommended approach for high-throughput, low-latency production use cases due to reduced overhead.
* **Request Body (JSON):**
```json
{
"texts": [
"do not ever text me again",
"I will stop by your office tomorrow"
]
}
```
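A batch request can be sent with nothing beyond the standard library. The host below is a placeholder, and the shape of the response depends on the deployed app:

```python
# Construct (and optionally send) the batch POST request.
import json
from urllib import request

payload = {
    "texts": [
        "do not ever text me again",
        "I will stop by your office tomorrow",
    ]
}
body = json.dumps(payload).encode("utf-8")
req = request.Request(
    "https://<your-space>.hf.space/predict",  # placeholder host
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment to actually call the deployed service:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```

Batching amortizes connection and framing overhead across many texts, which is why it is the recommended path for high-throughput use.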