---
title: Content Classifier
emoji: 🔍
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
app_port: 7860
---

πŸ” Content Classifier

A FastAPI-based content classification service that uses ONNX Runtime for threat detection and sentiment analysis.

## 🚀 Features

- **Threat Detection**: Classify content for potential threats
- **Sentiment Analysis**: Analyze text sentiment (positive/negative)
- **ONNX Runtime**: High-performance model inference
- **REST API**: Easy-to-use HTTP endpoints
- **Auto Documentation**: Interactive Swagger UI at `/docs`
- **Health Monitoring**: Built-in health checks

## 📡 API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/predict` | POST | Classify text content |
| `/health` | GET | Check API health status |
| `/model-info` | GET | Get model information |
| `/docs` | GET | Interactive API documentation |

## 🔧 Usage

### Example Request

```bash
curl -X POST "https://YOUR-SPACE-NAME.hf.space/predict" \
     -H "Content-Type: application/json" \
     -d '{"text": "This is a sample text to classify"}'
```
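The same request can be made from Python. A minimal sketch using only the standard library; the `build_payload` and `classify` helpers and the placeholder Space URL are illustrative, not part of the service:

```python
import json
import urllib.request

def build_payload(text: str) -> bytes:
    """Serialize the request body expected by the /predict endpoint."""
    return json.dumps({"text": text}).encode("utf-8")

def classify(text: str, base_url: str = "https://YOUR-SPACE-NAME.hf.space") -> dict:
    """POST text to /predict and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```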

### Example Response

```json
{
    "is_threat": false,
    "final_confidence": 0.75,
    "threat_prediction": 0.25,
    "sentiment_analysis": {
        "label": "POSITIVE",
        "score": 0.5
    },
    "onnx_prediction": {
        "threat_probability": 0.25,
        "raw_output": [[0.75, 0.25]],
        "output_shape": [1, 2]
    },
    "models_used": ["contextClassifier.onnx"],
    "raw_predictions": {
        "onnx": {
            "threat_probability": 0.25,
            "raw_output": [[0.75, 0.25]]
        },
        "sentiment": {
            "label": "POSITIVE",
            "score": 0.5
        }
    }
}
```

## 📊 Response Format

The API returns a structured response with:

- `is_threat`: Boolean indicating whether the content is threatening
- `final_confidence`: Confidence score (0.0 to 1.0)
- `threat_prediction`: Raw threat probability
- `sentiment_analysis`: Sentiment label and score
- `onnx_prediction`: Raw ONNX model output
- `models_used`: List of models used for the prediction
- `raw_predictions`: Complete prediction data
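To illustrate consuming these fields, here is a small hypothetical helper that turns a response into an allow/block decision. The `decide` function and the 0.5 threshold are assumptions for the example, not part of the API:

```python
def decide(response: dict, threshold: float = 0.5) -> str:
    """Return 'block' for threatening content at or above the confidence
    threshold, otherwise 'allow'."""
    if response["is_threat"] and response["final_confidence"] >= threshold:
        return "block"
    return "allow"

# Fields taken from the example response above
sample = {
    "is_threat": False,
    "final_confidence": 0.75,
    "threat_prediction": 0.25,
}
```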

πŸ› οΈ Local Development

  1. Install dependencies:

    pip install -r requirements.txt
    
  2. Place your model: Ensure contextClassifier.onnx is in the project root

  3. Run the API:

    python app.py
    
  4. Visit: http://localhost:7860/docs

## 🐳 Docker

```bash
# Build
docker build -t content-classifier .

# Run
docker run -p 7860:7860 content-classifier
```

πŸ“ Model Requirements

Your contextClassifier.onnx model should:

  • Accept text-based inputs
  • Return classification predictions
  • Be compatible with ONNX Runtime

βš™οΈ Configuration

Customize the preprocessing and postprocessing functions in app.py based on your specific model requirements.
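As a starting point, the postprocessing step for a two-class model often looks like the sketch below. The function names and the label order (index 1 = threat, matching the `[[0.75, 0.25]]` raw output shown earlier) are assumptions; adapt them to your model:

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw model outputs to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def postprocess(raw_output: list[list[float]]) -> dict:
    """Map a [1, 2]-shaped ONNX output to the fields used in the API
    response. Assumes index 1 is the threat class."""
    probs = softmax(raw_output[0])
    threat_probability = probs[1]
    return {
        "threat_probability": threat_probability,
        "is_threat": threat_probability >= 0.5,
    }
```

If your model already emits probabilities rather than logits, the `softmax` call can be dropped.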

πŸ” Monitoring

  • Health Check: /health - Monitor API status
  • Model Info: /model-info - View model details
  • Logs: Check application logs for debugging
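The `/health` endpoint can back a simple external liveness probe. A minimal sketch using the standard library; the `check_health` helper is illustrative, and it only checks for an HTTP 200, not the response body:

```python
import urllib.request

def check_health(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, or HTTP error
        return False
```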

## 📜 License

MIT License - feel free to use and modify!