---
title: Iris Classification
emoji: πΊ
colorFrom: green
colorTo: purple
sdk: docker
app_port: 8000
---
# FastAPI ML Deployment Tutorial

This repository demonstrates how to serve and deploy a machine learning application using FastAPI and Docker. We use the classic Iris dataset to keep the ML part simple and focus on the deployment mechanics.
## Project Structure

```
.
├── app/
│   ├── __init__.py
│   ├── main.py          # FastAPI application
│   └── model.py         # Model loading and prediction logic
├── model_training/
│   └── train.py         # Script to train and save the model
├── models/              # Directory to store the saved model artifact
├── requirements.txt     # Python dependencies
├── Dockerfile           # Container definition
└── README.md            # This tutorial
```
## Prerequisites

- Python 3.9+
- Docker (optional, for containerization)
## Step 1: Set Up the Environment

1. Clone the repository:
   ```bash
   git clone <repository-url>
   cd ml-deploy-app
   ```
2. Create and activate a virtual environment:
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. Install the dependencies:
   ```bash
   pip install -r requirements.txt
   ```
## Step 2: Train the Model

Run the training script to generate the model artifact (`models/iris_model.joblib`):

```bash
python model_training/train.py
```

You should see output indicating that the model was saved successfully.
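For reference, a minimal training script for this setup might look like the sketch below. It assumes scikit-learn's bundled Iris dataset, a `LogisticRegression` classifier, and `joblib` for serialization; the repository's actual `model_training/train.py` may differ in model choice and details.

```python
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the Iris dataset bundled with scikit-learn.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple classifier; any scikit-learn estimator would work here.
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.3f}")

# Save the artifact where the API expects to find it.
out_path = Path("models") / "iris_model.joblib"
out_path.parent.mkdir(exist_ok=True)
joblib.dump(model, out_path)
print(f"Model saved to {out_path}")
```

The key point for deployment is the last step: the API only needs the serialized artifact, not the training code.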
## Step 3: Run the API Locally

Start the FastAPI server with Uvicorn:

```bash
uvicorn app.main:app --reload
```

The API will be available at `http://127.0.0.1:8000`.

### Interactive Documentation

Visit `http://127.0.0.1:8000/docs` for the Swagger UI, where you can test the `/predict` endpoint directly from the browser.
**Example Request Body:**

```json
{
  "sepal_length": 5.1,
  "sepal_width": 3.5,
  "petal_length": 1.4,
  "petal_width": 0.2
}
```
## Step 4: Run with Docker

1. Build the Docker image:
   ```bash
   docker build -t iris-app .
   ```
2. Run the container:
   ```bash
   docker run -p 8000:8000 iris-app
   ```

The API will be accessible at `http://127.0.0.1:8000` (with interactive docs at `http://127.0.0.1:8000/docs`).
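For reference, a `Dockerfile` for this layout typically looks something like the sketch below. The repository's actual file may differ, for example in base image or in whether the model is trained at build time instead of copied in.

```dockerfile
FROM python:3.9-slim

WORKDIR /code

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the trained model artifact.
COPY app/ app/
COPY models/ models/

# Serve on the port declared in the Space metadata (app_port: 8000).
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Note that Uvicorn binds to `0.0.0.0` inside the container (not `127.0.0.1`) so that the published port is reachable from the host.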
## Next Steps

- **Hugging Face Spaces**: Deploy to Hugging Face Spaces by adding YAML metadata (like the block at the top of this file) to the `README.md` and pushing the code to a Docker-SDK Space.
- **Cloud Deployment**: The same Docker container can be deployed to AWS ECS, Google Cloud Run, or Azure Container Apps.
## Deployed Endpoint

You can test the deployed API on Hugging Face Spaces:

```bash
curl -X POST "https://nipun-ml-deploy-app.hf.space/predict" \
  -H "Content-Type: application/json" \
  -d '{"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2}'
```