
# Sentiment Analysis API

A production-ready REST API for analyzing text sentiment using transformer models, containerized with Docker.


## Features

- Fast sentiment analysis using the DistilBERT transformer model
- RESTful API with automatic interactive documentation
- Containerized with Docker for consistent deployment
- Health check endpoints for production monitoring
- Input validation with Pydantic models
- Sub-100ms inference after model warm-up
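
The API validates input with Pydantic; as a minimal stdlib-only sketch of the same kind of checks (the non-empty rule and the 5,000-character cap here are illustrative assumptions, not taken from the project):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the project's Pydantic request model.
# Shows equivalent validation using only the standard library.
MAX_TEXT_LENGTH = 5_000  # assumed limit, not from the project


@dataclass
class AnalyzeRequest:
    text: str

    def __post_init__(self):
        if not isinstance(self.text, str) or not self.text.strip():
            raise ValueError("text must be a non-empty string")
        if len(self.text) > MAX_TEXT_LENGTH:
            raise ValueError(f"text exceeds {MAX_TEXT_LENGTH} characters")
```

In the real app, FastAPI runs this kind of validation automatically and returns a 422 response when the request body fails it.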

## Tech Stack

- **Backend:** FastAPI (async Python web framework)
- **ML model:** DistilBERT via HuggingFace Transformers
- **Containerization:** Docker
- **API documentation:** auto-generated with Swagger UI

## Quick Start

### Prerequisites

- Docker Desktop installed
- 4 GB RAM minimum

### Run with Docker

```bash
# Build the image
docker build -t sentiment-api:v1.0 .

# Run the container
docker run -d -p 8000:8000 --name sentiment-api sentiment-api:v1.0

# View logs
docker logs sentiment-api
```

### Access the API

Once the container is running:

- API root: http://localhost:8000
- Interactive docs (Swagger UI): http://localhost:8000/docs

## 🔌 API Usage

### Example Request

```bash
curl -X POST "http://localhost:8000/analyze" \
  -H "Content-Type: application/json" \
  -d '{"text": "I love this product!"}'
```
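
The same request can be made from Python using only the standard library; the `urlopen` call is left commented out because it needs the container running on `localhost:8000`:

```python
import json
import urllib.request

# Build the same POST request as the curl example above.
payload = json.dumps({"text": "I love this product!"}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8000/analyze",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires the API container to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```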

### Example Response

```json
{
  "text": "I love this product!",
  "sentiment": "POSITIVE",
  "confidence": 0.9998,
  "processing_time_ms": 45
}
```

### API Endpoints

| Method | Endpoint   | Description                      |
|--------|------------|----------------------------------|
| GET    | `/`        | Health check with version info   |
| POST   | `/analyze` | Analyze text sentiment           |
| GET    | `/health`  | Kubernetes-style health endpoint |
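
A rough sketch of what the `/analyze` handler returns. The real service scores text with DistilBERT via a HuggingFace pipeline; this stub uses a hard-coded word list purely to illustrate the response shape (field names match the example response above, everything else is an assumption):

```python
import time

# Stub scorer standing in for the DistilBERT pipeline the real API uses.
POSITIVE_WORDS = {"love", "great", "excellent", "good"}


def analyze(text: str) -> dict:
    """Return a response shaped like the API's /analyze endpoint."""
    start = time.perf_counter()
    words = {w.strip("!.,?").lower() for w in text.split()}
    hits = len(words & POSITIVE_WORDS)
    sentiment = "POSITIVE" if hits else "NEGATIVE"
    # Fabricated confidence for illustration; the real model emits a
    # softmax probability over the POSITIVE/NEGATIVE labels.
    confidence = min(0.99, 0.5 + 0.25 * hits) if hits else 0.75
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "text": text,
        "sentiment": sentiment,
        "confidence": round(confidence, 4),
        "processing_time_ms": round(elapsed_ms, 2),
    }
```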

## Project Structure

```
sentiment-api/
├── src/
│   └── main.py          # FastAPI application
├── tests/               # Unit tests
├── Dockerfile           # Container definition
├── .dockerignore        # Docker build exclusions
├── requirements.txt     # Python dependencies
└── README.md
```

## Development

### Local Setup (without Docker)

```bash
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the dev server with auto-reload
uvicorn src.main:app --reload
```

### Run Tests

```bash
pytest tests/ -v
```
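
The project's actual tests in `tests/` are not reproduced in this README. A plausible shape for one, written here as self-contained pytest-style functions against a hypothetical stub so the contract from the example response is explicit:

```python
# Hypothetical, self-contained pytest-style tests illustrating the
# documented response contract; the project's real tests exercise
# the app in src/main.py instead of this stub.
def analyze_stub(text: str) -> dict:
    """Stand-in for the API's analyze logic."""
    return {
        "text": text,
        "sentiment": "POSITIVE",
        "confidence": 0.9998,
        "processing_time_ms": 45,
    }


def test_response_has_documented_fields():
    resp = analyze_stub("I love this product!")
    assert set(resp) == {"text", "sentiment", "confidence",
                         "processing_time_ms"}


def test_confidence_is_a_probability():
    resp = analyze_stub("I love this product!")
    assert 0.0 <= resp["confidence"] <= 1.0
```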

## Docker Details

- **Base image:** `python:3.11-slim`
- **Image size:** ~1.2 GB (includes PyTorch + Transformers)
- **Health check:** configured for production monitoring
- Multi-stage build optimized for faster rebuilds
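
The Dockerfile itself is not reproduced here; a multi-stage layout consistent with the details above might look like the following sketch (paths, the health-check command, and the uvicorn invocation are assumptions, not copied from the project):

```dockerfile
# Hypothetical sketch of a multi-stage build matching the details above;
# the project's real Dockerfile may differ.

# Stage 1: install dependencies into an isolated prefix
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: slim runtime image
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY src/ ./src/
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=5s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Keeping `requirements.txt` in its own earlier layer means source-only changes reuse the cached dependency install, which is what makes rebuilds fast.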

## Performance

- First request: 30-60 s (model download)
- Subsequent requests: < 100 ms
- Memory usage: ~500 MB (model held in RAM)
- Concurrent requests: 10-20 (CPU-bound)

## Future Enhancements

- GPU support for faster inference
- Model caching layer (Redis)
- Rate limiting
- Authentication
- Batch inference endpoint
- Multiple model support
- Kubernetes deployment configs

## License

MIT License - feel free to use for your projects!

## Author

Syed Arfan

## Acknowledgments