---
title: STOP
sdk: docker
app_port: 7860
colorFrom: red
colorTo: indigo
description: >-
  STOP/NOT_STOP text classification using Linear SVM deployed with FastAPI and
  Docker.
---

# STOP Classifier API

This Hugging Face Space hosts a low-latency text classification service deployed with Docker and FastAPI.

The service uses a lightweight Linear Support Vector Machine (SVM) trained on TF-IDF text features to classify messages as either intending to end communication (STOP) or not (NOT_STOP). As measured by the training script, the SVM provides millisecond-level inference, which makes it well suited to a low-latency API.
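As a rough sketch, loading the two serialized artifacts and running a single prediction might look like the following (file names are taken from the project structure below; the exact label values returned by `predict` depend on how the training script encoded them):

```python
import pickle

# Load the serialized TF-IDF vectorizer and Linear SVM from the
# checkpoint directory (file names as in the project structure below).
with open("checkpoint/tfidf_vectorizer.pkl", "rb") as f:
    vectorizer = pickle.load(f)
with open("checkpoint/svm_stop_classifier.pkl", "rb") as f:
    classifier = pickle.load(f)

def classify(text: str) -> str:
    """Vectorize one message and return the predicted label."""
    features = vectorizer.transform([text])  # sparse TF-IDF row vector
    return classifier.predict(features)[0]   # assumed: "STOP" or "NOT_STOP"

print(classify("please discontinue all contact"))
```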

## Project Structure

The deployment uses the following structure:

```
.
├── app.py
├── Dockerfile
├── requirements.txt
├── README.md
└── checkpoint/
    ├── tfidf_vectorizer.pkl
    └── svm_stop_classifier.pkl
```

## API Endpoints

The FastAPI application exposes a health check and two prediction endpoints:

### 1. Health Check (GET)

- **Path:** `/`
- **Method:** `GET`
- **Description:** A simple endpoint to confirm the service is running and the models are loaded.
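For example, a quick liveness probe from Python with the `requests` library (the base URL below is a placeholder; substitute your Space's actual URL):

```python
import requests

# Placeholder Space URL; replace with the real endpoint.
BASE_URL = "https://<your-space>.hf.space"

resp = requests.get(f"{BASE_URL}/")
resp.raise_for_status()  # a non-2xx status means the service is unhealthy
print(resp.json())
```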

### 2. Single Prediction (GET)

- **Path:** `/predict?text=<your_text>`
- **Method:** `GET`
- **Description:** Classifies a single text string passed as a query parameter. This is suitable for quick, individual queries.
- **Example query:** `/predict?text=please%20discontinue%20all%20contact`
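A minimal client call, again with a placeholder base URL (the response shape in the comment is an assumption; the actual schema is defined in `app.py`):

```python
import requests

BASE_URL = "https://<your-space>.hf.space"  # placeholder Space URL

# requests URL-encodes the query parameter automatically.
resp = requests.get(f"{BASE_URL}/predict",
                    params={"text": "please discontinue all contact"})
print(resp.json())  # e.g. {"label": "STOP"} -- actual schema set by app.py
```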

### 3. Batch Prediction (POST)

- **Path:** `/predict`
- **Method:** `POST`
- **Description:** Classifies a list of text strings in a single request. This is the recommended approach for high-throughput, low-latency production use cases due to reduced per-request overhead.
- **Request body (JSON):**

```json
{
  "texts": [
    "do not ever text me again",
    "I will stop by your office tomorrow"
  ]
}
```
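A corresponding client sketch (placeholder base URL; the response format shown in the comment is an assumption and is defined by `app.py`):

```python
import requests

BASE_URL = "https://<your-space>.hf.space"  # placeholder Space URL

payload = {
    "texts": [
        "do not ever text me again",
        "I will stop by your office tomorrow",
    ]
}
resp = requests.post(f"{BASE_URL}/predict", json=payload)
print(resp.json())  # e.g. a list of labels, one per input text
```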