nomandiu9 committed on
Commit 0f43e0e · 0 Parent(s):

Initial commit: Sentiment analysis app (models on HuggingFace)

Files changed (10)
  1. .gitignore +26 -0
  2. DEPLOY.md +110 -0
  3. Dockerfile +17 -0
  4. Procfile +1 -0
  5. README.md +56 -0
  6. START_HERE.md +49 -0
  7. app.py +161 -0
  8. index.html +185 -0
  9. requirements.txt +8 -0
  10. vercel.json +15 -0
.gitignore ADDED
@@ -0,0 +1,26 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # Virtual environments
+ venv/
+ env/
+ ENV/
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Testing
+ .pytest_cache/
+ .coverage
+
+ # Logs
+ *.log
DEPLOY.md ADDED
@@ -0,0 +1,110 @@
+ # 🚀 Deployment Guide - Using HuggingFace Models
+
+ ## ✅ Perfect Setup!
+
+ You've uploaded your models to HuggingFace at: **`anis80/anisproject`**
+
+ The app is now configured to download models from there automatically!
+
+ ---
+
+ ## 📤 Deploy to Render (Recommended - Free)
+
+ ### Step 1: Push to GitHub
+
+ ```powershell
+ cd e:\anis
+
+ # Add GitHub remote (replace YOUR_USERNAME)
+ git remote add origin https://github.com/YOUR_USERNAME/sentiment-analysis-backend.git
+ git branch -M main
+
+ # Push to GitHub
+ git push -u origin main
+ ```
+
+ ### Step 2: Deploy to Render
+
+ 1. Go to: **https://render.com**
+ 2. Sign up with GitHub
+ 3. Create **"New Web Service"**
+ 4. Connect your `sentiment-analysis-backend` repository
+ 5. Configure:
+    - **Name**: `sentiment-analysis-api`
+    - **Runtime**: `Python 3`
+    - **Build Command**: `pip install -r requirements.txt`
+    - **Start Command**: `uvicorn app:app --host 0.0.0.0 --port $PORT`
+    - **Instance Type**: **Free**
+ 6. Click **"Create Web Service"**
+
+ ### Step 3: Wait for Deployment (3-5 minutes)
+
+ Your API will be live at: `https://sentiment-analysis-api-XXXX.onrender.com`
+
+ ---
+
+ ## 🌐 Alternative: Deploy to HuggingFace Spaces
+
+ Since your models are already on HuggingFace, you can also deploy there:
+
+ ### Create Space
+
+ 1. Go to: https://huggingface.co/new-space
+ 2. Name: `sentiment-analysis-api`
+ 3. SDK: **Docker**
+ 4. Hardware: **CPU basic (Free)**
+
+ ### Push Code
+
+ ```powershell
+ cd e:\anis
+
+ # Add HuggingFace remote
+ git remote add hf https://huggingface.co/spaces/anis80/sentiment-analysis-api
+ git push hf main
+ ```
+
+ ---
+
+ ## ✅ Benefits of This Approach
+
+ - ✅ **No large files in repository** (only ~10KB of code)
+ - ✅ **Models downloaded on startup** from HuggingFace
+ - ✅ **Works on any platform** (Render, HuggingFace, Railway, etc.)
+ - ✅ **Easy model updates** - just update on HuggingFace
+ - ✅ **Free deployment** on both platforms
+
+ ---
+
+ ## 🧪 Test Locally First
+
+ ```powershell
+ cd e:\anis
+ pip install -r requirements.txt
+ python app.py
+ ```
+
+ Visit: http://localhost:7860/docs
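Beyond the Swagger UI, you can exercise the API from a script. The sketch below uses only the Python standard library; `build_request` and `predict` are illustrative helpers, not part of the app, and they assume the server from `app.py` is running on its default port:

```python
import json
import urllib.request


def build_request(text: str, base_url: str = "http://localhost:7860") -> urllib.request.Request:
    """Build the JSON POST request the /predict endpoint expects."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/predict",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def predict(text: str, base_url: str = "http://localhost:7860") -> str:
    """Send text to the API and return the predicted sentiment."""
    with urllib.request.urlopen(build_request(text, base_url)) as resp:
        return json.load(resp)["predicted_sentiment"]
```

With the server up, `predict("I love this product!")` should return a label such as `"positive"`.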
+
+ ---
+
+ ## 📝 Update Frontend
+
+ Once deployed, update the `predictApi` and `statusApi` constants near the top of the `<script>` block in `index.html`:
+
+ ```javascript
+ const predictApi = 'https://YOUR_DEPLOYMENT_URL/predict';
+ const statusApi = 'https://YOUR_DEPLOYMENT_URL/status';
+ ```
+
+ Then deploy the frontend to Vercel!
+
+ ---
+
+ ## 💰 Cost: $0/month 🎉
+
+ Both Render and HuggingFace offer free tiers perfect for this project!
+
+ ---
+
+ **Ready to deploy?** Choose Render or HuggingFace and follow the steps above! 🚀
Dockerfile ADDED
@@ -0,0 +1,17 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Copy requirements first for better caching
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy the application (models are downloaded from HuggingFace at startup,
+ # so no model files are baked into the image)
+ COPY app.py .
+
+ # Expose port 7860 (HuggingFace Spaces default)
+ EXPOSE 7860
+
+ # Run the application
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
Procfile ADDED
@@ -0,0 +1 @@
+ web: uvicorn app:app --host 0.0.0.0 --port $PORT
README.md ADDED
@@ -0,0 +1,56 @@
+ # Sentiment Analysis API
+
+ A FastAPI-based sentiment analysis service using ensemble machine learning models (KNN, Random Forest, Extra Trees).
+
+ ## Features
+
+ - **Fast predictions** using pre-trained ML models
+ - **RESTful API** with automatic documentation
+ - **CORS enabled** for web frontend integration
+ - **Ensemble learning** for improved accuracy
+
+ ## API Endpoints
+
+ ### `GET /`
+ Health check and API information
+
+ ### `GET /status`
+ Returns model status and readiness
+
+ ### `POST /predict`
+ Analyzes sentiment of input text
+
+ **Request body:**
+ ```json
+ {
+     "text": "Your text here"
+ }
+ ```
+
+ **Response:**
+ ```json
+ {
+     "predicted_sentiment": "positive",
+     "input_text": "Your text here"
+ }
+ ```
+
+ ## Models
+
+ This API uses three pre-trained models:
+ - **Label Encoder**: Encodes sentiment labels
+ - **TF-IDF Vectorizer**: Converts text to numerical features
+ - **Voting Classifier**: Ensemble of KNN, Random Forest, and Extra Trees
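At prediction time the three artifacts chain together: vectorizer → classifier → label decoder. Below is a minimal sketch of that pipeline using tiny stand-in models trained on four toy sentences; the real artifacts are the joblib files hosted on `anis80/anisproject`:

```python
# Toy stand-ins for the three artifacts: label encoder, TF-IDF vectorizer,
# and a KNN + Random Forest + Extra Trees hard-voting ensemble.
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder

texts = ["i love this", "great product", "i hate this", "terrible quality"]
labels = ["positive", "positive", "negative", "negative"]

label_encoder = LabelEncoder()
y = label_encoder.fit_transform(labels)          # labels -> integers

tfidf_vectorizer = TfidfVectorizer()
X = tfidf_vectorizer.fit_transform(texts)        # text -> sparse features

voting_classifier = VotingClassifier([
    ("knn", KNeighborsClassifier(n_neighbors=1)),
    ("rf", RandomForestClassifier(n_estimators=10, random_state=0)),
    ("et", ExtraTreesClassifier(n_estimators=10, random_state=0)),
])
voting_classifier.fit(X, y)

# The same three steps the API performs per request:
features = tfidf_vectorizer.transform(["great product"])
pred = voting_classifier.predict(features)
print(label_encoder.inverse_transform(pred)[0])  # -> positive
```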
+
+ ## Local Development
+
+ ```bash
+ pip install -r requirements.txt
+ python app.py
+ ```
+
+ The API will be available at `http://localhost:7860`
+
+ ## Documentation
+
+ Interactive API documentation is available at `/docs` (Swagger UI) and `/redoc` (ReDoc)
START_HERE.md ADDED
@@ -0,0 +1,49 @@
+ # 🎯 Quick Deploy - 3 Simple Steps
+
+ ## ✅ Your Setup
+ - Models: `anis80/anisproject` on HuggingFace ✅
+ - Backend: Ready to deploy (downloads models automatically)
+ - Frontend: Ready for Vercel
+
+ ---
+
+ ## 🚀 Deploy in 3 Steps
+
+ ### Step 1: Push to GitHub (2 min)
+ ```powershell
+ cd e:\anis
+ git remote add origin https://github.com/YOUR_USERNAME/sentiment-backend.git
+ git branch -M main
+ git push -u origin main
+ ```
+
+ ### Step 2: Deploy Backend (3 min)
+
+ **Option A - Render (Recommended):**
+ 1. https://render.com → Sign up with GitHub
+ 2. New Web Service → Select your repo
+ 3. Start: `uvicorn app:app --host 0.0.0.0 --port $PORT`
+ 4. Plan: Free → Deploy
+
+ **Option B - HuggingFace:**
+ ```powershell
+ git remote add hf https://huggingface.co/spaces/anis80/sentiment-api
+ git push hf main
+ ```
+
+ ### Step 3: Deploy Frontend (2 min)
+ 1. Update the `predictApi` and `statusApi` URLs in `index.html` with your backend URL
+ 2. Push to GitHub
+ 3. https://vercel.com → Import repo → Deploy
+
+ ---
+
+ ## ✅ Test
+ - Backend: `https://YOUR_URL/docs`
+ - Frontend: Enter text → Click Analyze → See result!
+
+ ---
+
+ **Total Time: ~7 minutes | Cost: $0/month** 🎉
+
+ See `DEPLOY.md` for detailed instructions.
app.py ADDED
@@ -0,0 +1,161 @@
+ from fastapi import FastAPI, HTTPException
+ from fastapi.middleware.cors import CORSMiddleware
+ from pydantic import BaseModel
+ import joblib
+ import os
+ from pathlib import Path
+ from huggingface_hub import hf_hub_download
+
+ # Initialize FastAPI app
+ app = FastAPI(
+     title="Sentiment Analysis API",
+     description="API for sentiment analysis using ensemble ML models",
+     version="1.0.0"
+ )
+
+ # Configure CORS - allow all origins for the Vercel frontend
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],  # In production, replace with your Vercel domain
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Define request/response models
+ class TextInput(BaseModel):
+     text: str
+
+ class PredictionResponse(BaseModel):
+     predicted_sentiment: str
+     input_text: str
+
+ class StatusResponse(BaseModel):
+     status: str
+     model_name: str
+     message: str
+
+ # Global variables for models
+ label_encoder = None
+ tfidf_vectorizer = None
+ voting_classifier = None
+ MODEL_NAME = "Voting Classifier (KNN + RF + ET)"
+
+ # HuggingFace Model Hub configuration
+ REPO_ID = "anis80/anisproject"  # Your HuggingFace model repository
+ MODEL_FILES = {
+     "label_encoder": "label_encoder.joblib",
+     "tfidf_vectorizer": "tfidf_vectorizer.joblib",
+     "voting_classifier": "voting_classifier_knn_rf_et-001.joblib"
+ }
+
+ def download_model_from_hub(filename: str) -> str:
+     """Download a model file from the HuggingFace Model Hub."""
+     try:
+         print(f"📥 Downloading {filename} from HuggingFace Model Hub...")
+         file_path = hf_hub_download(
+             repo_id=REPO_ID,
+             filename=filename,
+             cache_dir="./model_cache"
+         )
+         print(f"✅ Downloaded {filename}")
+         return file_path
+     except Exception as e:
+         print(f"❌ Error downloading {filename}: {str(e)}")
+         raise
+
+ # Load models on startup
+ @app.on_event("startup")
+ async def load_models():
+     global label_encoder, tfidf_vectorizer, voting_classifier
+
+     try:
+         print(f"🚀 Starting model loading from HuggingFace: {REPO_ID}")
+
+         # Download and load each model
+         label_encoder_path = download_model_from_hub(MODEL_FILES["label_encoder"])
+         label_encoder = joblib.load(label_encoder_path)
+
+         tfidf_path = download_model_from_hub(MODEL_FILES["tfidf_vectorizer"])
+         tfidf_vectorizer = joblib.load(tfidf_path)
+
+         classifier_path = download_model_from_hub(MODEL_FILES["voting_classifier"])
+         voting_classifier = joblib.load(classifier_path)
+
+         print("✅ All models loaded successfully from HuggingFace Model Hub!")
+
+     except Exception as e:
+         print(f"❌ Error loading models: {str(e)}")
+         print(f"⚠️ Make sure models are uploaded to: https://huggingface.co/{REPO_ID}")
+         raise
+
+ # Health check endpoint
+ @app.get("/")
+ async def root():
+     return {
+         "message": "Sentiment Analysis API is running",
+         "model_source": f"HuggingFace: {REPO_ID}",
+         "endpoints": {
+             "predict": "/predict",
+             "status": "/status",
+             "docs": "/docs"
+         }
+     }
+
+ # Status endpoint
+ @app.get("/status", response_model=StatusResponse)
+ async def get_status():
+     if voting_classifier is None:
+         raise HTTPException(status_code=503, detail="Models not loaded")
+
+     return StatusResponse(
+         status="ready",
+         model_name=MODEL_NAME,
+         message=f"All models loaded from {REPO_ID}"
+     )
+
+ # Prediction endpoint
+ @app.post("/predict", response_model=PredictionResponse)
+ async def predict_sentiment(input_data: TextInput):
+     try:
+         # Validate models are loaded (identity check, not equality)
+         if any(m is None for m in (label_encoder, tfidf_vectorizer, voting_classifier)):
+             raise HTTPException(
+                 status_code=503,
+                 detail="Models not loaded. Please try again later."
+             )
+
+         # Validate input
+         if not input_data.text or not input_data.text.strip():
+             raise HTTPException(
+                 status_code=400,
+                 detail="Text input cannot be empty"
+             )
+
+         # Transform the text into TF-IDF features
+         text_tfidf = tfidf_vectorizer.transform([input_data.text])
+
+         # Make prediction
+         prediction = voting_classifier.predict(text_tfidf)
+
+         # Decode the prediction back to a sentiment label
+         sentiment = label_encoder.inverse_transform(prediction)[0]
+
+         return PredictionResponse(
+             predicted_sentiment=sentiment,
+             input_text=input_data.text
+         )
+
+     except HTTPException:
+         raise
+     except Exception as e:
+         print(f"Prediction error: {str(e)}")
+         raise HTTPException(
+             status_code=500,
+             detail=f"Prediction failed: {str(e)}"
+         )
+
+ # For local testing
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=7860)
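The `voting_classifier.predict` call above resolves disagreement between KNN, Random Forest, and Extra Trees by majority (hard) vote. Stripped of scikit-learn, the idea is just a sketch like this (`hard_vote` is an illustrative helper, not part of the app):

```python
from collections import Counter

def hard_vote(predictions):
    """Return the most common label among the per-model predictions for one input."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model outputs for a single input text:
print(hard_vote(["positive", "positive", "neutral"]))  # positive
```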
index.html ADDED
@@ -0,0 +1,185 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>Sentiment Analysis</title>
+     <script src="https://cdn.tailwindcss.com"></script>
+ </head>
+ <body class="bg-gray-900 text-white min-h-screen flex items-center justify-center font-sans p-4">
+
+ <div class="container mx-auto max-w-2xl w-full p-8 bg-gray-800 rounded-2xl shadow-2xl border border-gray-700">
+
+     <h1 class="text-4xl font-bold text-center mb-2 text-cyan-400">
+         Sentiment Analysis Engine
+     </h1>
+     <p id="modelName" class="text-center text-gray-400 mb-8 h-6">
+         (Loading model...)
+     </p>
+
+     <div id="errorDiv" class="hidden p-3 mb-4 bg-red-800 border border-red-600 text-red-100 rounded-lg">
+         <span id="errorMessage"></span>
+     </div>
+
+     <div class="mb-6 relative">
+         <label for="textInput" class="block text-lg font-medium text-gray-300 mb-2">
+             Enter your text:
+         </label>
+         <textarea id="textInput" rows="5"
+             class="w-full p-4 bg-gray-700 border border-gray-600 rounded-lg text-white text-base focus:ring-2 focus:ring-cyan-500 focus:outline-none transition duration-200"
+             placeholder="Type something... (e.g., 'I love this product!')"></textarea>
+     </div>
+
+     <div class="flex flex-col sm:flex-row gap-4">
+         <button id="analyzeButton"
+             class="w-full bg-cyan-600 hover:bg-cyan-500 text-white text-lg font-bold py-3 px-6 rounded-lg shadow-lg transition duration-300 ease-in-out transform hover:scale-105 flex items-center justify-center">
+             <svg id="spinner" class="animate-spin -ml-1 mr-3 h-5 w-5 text-white hidden" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
+                 <circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
+                 <path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
+             </svg>
+             <span id="buttonText">Analyze Sentiment</span>
+         </button>
+         <button id="clearButton"
+             class="w-full sm:w-auto bg-gray-600 hover:bg-gray-500 text-white font-bold py-3 px-6 rounded-lg transition duration-300">
+             Clear
+         </button>
+     </div>
+
+     <div id="result" class="mt-8 p-6 bg-gray-700 rounded-lg hidden border border-gray-600">
+         <h2 class="text-2xl font-semibold mb-4 text-gray-200">Analysis Result:</h2>
+         <div id="sentiment" class="text-4xl font-extrabold text-center flex items-center justify-center gap-4">
+             <span id="sentimentIcon"></span>
+             <span id="sentimentText"></span>
+         </div>
+     </div>
+ </div>
+
+ <script>
+     // Select all DOM elements
+     const textInput = document.getElementById('textInput');
+     const analyzeButton = document.getElementById('analyzeButton');
+     const clearButton = document.getElementById('clearButton');
+     const resultDiv = document.getElementById('result');
+     const sentimentText = document.getElementById('sentimentText');
+     const sentimentIcon = document.getElementById('sentimentIcon');
+     const modelNameEl = document.getElementById('modelName');
+     const errorDiv = document.getElementById('errorDiv');
+     const errorMessage = document.getElementById('errorMessage');
+     const spinner = document.getElementById('spinner');
+     const buttonText = document.getElementById('buttonText');
+
+     // API endpoints
+     const predictApi = '/predict';
+     const statusApi = '/status';
+
+     // Icon SVGs
+     const icons = {
+         positive: `<svg xmlns="http://www.w3.org/2000/svg" class="h-10 w-10 text-green-400" viewBox="0 0 20 20" fill="currentColor"><path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" clip-rule="evenodd" /></svg>`,
+         negative: `<svg xmlns="http://www.w3.org/2000/svg" class="h-10 w-10 text-red-400" viewBox="0 0 20 20" fill="currentColor"><path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zM8.707 7.293a1 1 0 00-1.414 1.414L8.586 10l-1.293 1.293a1 1 0 101.414 1.414L10 11.414l1.293 1.293a1 1 0 001.414-1.414L11.414 10l1.293-1.293a1 1 0 00-1.414-1.414L10 8.586 8.707 7.293z" clip-rule="evenodd" /></svg>`,
+         neutral: `<svg xmlns="http://www.w3.org/2000/svg" class="h-10 w-10 text-yellow-400" viewBox="0 0 20 20" fill="currentColor"><path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zM7 10a1 1 0 11-2 0 1 1 0 012 0zm7 0a1 1 0 11-2 0 1 1 0 012 0z" clip-rule="evenodd" /></svg>`,
+         other: `<svg xmlns="http://www.w3.org/2000/svg" class="h-10 w-10 text-blue-400" viewBox="0 0 20 20" fill="currentColor"><path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7-4a1 1 0 11-2 0 1 1 0 012 0zM9 9a1 1 0 000 2v3a1 1 0 001 1h1a1 1 0 100-2v-3a1 1 0 00-1-1H9z" clip-rule="evenodd" /></svg>`
+     };
+
+     // Fetch model status on page load
+     window.addEventListener('load', async () => {
+         try {
+             const response = await fetch(statusApi);
+             if (!response.ok) throw new Error('Status check failed');
+             const data = await response.json();
+             modelNameEl.textContent = `(Active Model: ${data.model_name || 'N/A'})`;
+         } catch (error) {
+             modelNameEl.textContent = '(Could not load model status)';
+             console.error('Error fetching status:', error);
+         }
+     });
+
+     // Analyze button click event
+     analyzeButton.addEventListener('click', async () => {
+         const text = textInput.value;
+         if (!text) {
+             showError('Please enter some text to analyze.');
+             return;
+         }
+
+         setLoading(true);
+
+         try {
+             const response = await fetch(predictApi, {
+                 method: 'POST',
+                 headers: { 'Content-Type': 'application/json' },
+                 body: JSON.stringify({ text: text })
+             });
+
+             if (!response.ok) {
+                 throw new Error('Network response was not ok');
+             }
+
+             const data = await response.json();
+
+             if (data.predicted_sentiment) {
+                 displayResult(data.predicted_sentiment);
+             } else if (data.error) {
+                 showError('Error from server: ' + data.error);
+             }
+
+         } catch (error) {
+             console.error('Error:', error);
+             showError('Could not connect to the analysis server. Is it running?');
+         } finally {
+             setLoading(false);
+         }
+     });
+
+     // Clear button click event
+     clearButton.addEventListener('click', () => {
+         textInput.value = '';
+         resultDiv.classList.add('hidden');
+         errorDiv.classList.add('hidden');
+     });
+
+     // Manage the loading state
+     function setLoading(isLoading) {
+         analyzeButton.disabled = isLoading;
+         if (isLoading) {
+             spinner.classList.remove('hidden');
+             buttonText.textContent = 'Analyzing...';
+         } else {
+             spinner.classList.add('hidden');
+             buttonText.textContent = 'Analyze Sentiment';
+         }
+     }
+
+     // Show error messages
+     function showError(message) {
+         errorMessage.textContent = message;
+         errorDiv.classList.remove('hidden');
+         resultDiv.classList.add('hidden');
+     }
+
+     // Display results
+     function displayResult(sentiment) {
+         sentimentText.textContent = sentiment;
+         const sentimentKey = sentiment.toLowerCase();
+         let iconSvg = icons.other; // Default
+         let colorClass = 'text-blue-400';
+
+         if (sentimentKey.includes('positive')) {
+             iconSvg = icons.positive;
+             colorClass = 'text-green-400';
+         } else if (sentimentKey.includes('negative')) {
+             iconSvg = icons.negative;
+             colorClass = 'text-red-400';
+         } else if (sentimentKey.includes('neutral')) {
+             iconSvg = icons.neutral;
+             colorClass = 'text-yellow-400';
+         }
+
+         sentimentIcon.innerHTML = iconSvg;
+         sentimentText.className = colorClass; // Only set color class
+
+         resultDiv.classList.remove('hidden');
+         errorDiv.classList.add('hidden');
+     }
+ </script>
+ </body>
+ </html>
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ fastapi==0.109.0
+ uvicorn[standard]==0.27.0
+ pydantic==2.5.3
+ joblib==1.3.2
+ scikit-learn==1.4.0
+ numpy==1.26.3
+ scipy==1.11.4
+ huggingface-hub==0.20.3
vercel.json ADDED
@@ -0,0 +1,15 @@
+ {
+     "version": 2,
+     "builds": [
+         {
+             "src": "index.html",
+             "use": "@vercel/static"
+         }
+     ],
+     "routes": [
+         {
+             "src": "/(.*)",
+             "dest": "/index.html"
+         }
+     ]
+ }