---
title: TweetSentiment
emoji: πŸ†
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 5.18.0
app_file: app.py
pinned: false
short_description: TweetSentiment
model_link: https://huggingface.co/ktr008/sentiment
---

# **Fine-Tuned Sentiment Analysis Deployment Guide**  

This guide explains how to **fine-tune, save, upload, and deploy** a sentiment analysis model using **Hugging Face Transformers, Gradio, and Hugging Face Spaces**.  

---

## **1. Prerequisites**  
Before proceeding, ensure you have the following installed:  

### **Install Required Libraries**
```bash
pip install gradio transformers torch scipy numpy
```

If you're using **TensorFlow-based models**, also install:
```bash
pip install tensorflow
```

### **Hugging Face Authentication**
Login to Hugging Face CLI:
```bash
huggingface-cli login
```
(You'll need an **access token** from [Hugging Face](https://huggingface.co/settings/tokens).)
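If you prefer to authenticate from Python (e.g., in a notebook or a CI job), a minimal sketch that reads the token from an environment variable — the `HF_TOKEN` variable name and the `hf_login_from_env` helper are assumptions for this sketch, not something the CLI requires:

```python
import os

def hf_login_from_env(var: str = "HF_TOKEN") -> bool:
    """Log in to the Hugging Face Hub using a token stored in `var`.

    Returns True if a token was found and used, False otherwise (in which
    case fall back to `huggingface-cli login`).
    """
    token = os.environ.get(var)
    if not token:
        return False
    # Imported lazily so the snippet degrades gracefully without the library.
    from huggingface_hub import login
    login(token=token)
    return True
```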

---

## **2. Fine-Tune Your Sentiment Analysis Model**
### **Training a Custom Sentiment Model**
If you haven't already fine-tuned a model, you can do so using `Trainer` from Hugging Face:  

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments, AutoTokenizer
from datasets import load_dataset

# Load dataset
dataset = load_dataset("imdb")  # Example dataset (binary: 0=negative, 1=positive)

# Load tokenizer and model
# Note: this checkpoint is 3-way (0=negative, 1=neutral, 2=positive), so for a
# faithful fine-tune either remap the imdb labels or pick a 3-class dataset.
model_name = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Tokenize dataset
def preprocess(examples):
    return tokenizer(examples["text"], truncation=True, padding="max_length")

tokenized_datasets = dataset.map(preprocess, batched=True)

# Training Arguments
training_args = TrainingArguments(
    output_dir="./fine_tuned_sentiment_model",
    evaluation_strategy="epoch",  # renamed to eval_strategy in newer transformers releases
    save_strategy="epoch",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
    logging_dir="./logs",
)

# Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)

# Train Model
trainer.train()

# Save Model
model.save_pretrained("./fine_tuned_sentiment_model")
tokenizer.save_pretrained("./fine_tuned_sentiment_model")
```
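The cardiffnlp checkpoints use the label order 0=negative, 1=neutral, 2=positive. Before wiring the model into an app, it's worth sanity-checking how raw logits turn into ranked percentages. A minimal NumPy sketch with made-up logits (no model download required; the `id2label` dict here mirrors the model config):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Label order used by the cardiffnlp sentiment checkpoints.
id2label = {0: "negative", 1: "neutral", 2: "positive"}

# Made-up logits standing in for one forward pass of the model.
logits = np.array([-1.2, 0.3, 2.1])
scores = softmax(logits)

# Rank classes from most to least probable, as the Gradio app does later.
ranking = np.argsort(scores)[::-1]
result = {id2label[i]: round(float(scores[i]) * 100, 2) for i in ranking}
print(result)  # highest-probability label listed first
```

The same ranking-and-rounding pattern appears in the deployed app's prediction function.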

---

## **3. Upload Model to Hugging Face Hub**
Once you've fine-tuned your model, upload it to **Hugging Face Model Hub**:  

### **1. Install `huggingface_hub`**
```bash
pip install huggingface_hub
```

### **2. Push Model to Hugging Face**
```python
from huggingface_hub import notebook_login
from transformers import AutoModelForSequenceClassification, AutoTokenizer

notebook_login()  # Authenticate

# Define model name
repo_name = "your-username/sentiment-analysis-model"

# Load fine-tuned model
model = AutoModelForSequenceClassification.from_pretrained("./fine_tuned_sentiment_model")
tokenizer = AutoTokenizer.from_pretrained("./fine_tuned_sentiment_model")

# Push model to Hugging Face Hub
model.push_to_hub(repo_name)
tokenizer.push_to_hub(repo_name)
```

Your fine-tuned model is now available at **https://huggingface.co/your-username/sentiment-analysis-model**.

---

## **4. Deploy Sentiment Model Using Gradio**
To create a **Gradio-based web interface**, follow these steps:

### **1. Create `app.py`**
Save the following script as `app.py`:

```python
import gradio as gr
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
from scipy.special import softmax

# Load fine-tuned model from Hugging Face Hub
MODEL_NAME = "your-username/sentiment-analysis-model"  # Replace with your model repo
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
config = AutoConfig.from_pretrained(MODEL_NAME)

# Preprocess function
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

# Sentiment Prediction Function
def predict_sentiment(text):
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    output = model(**encoded_input)
    scores = output.logits[0].detach().numpy()  # raw logits for the single input
    scores = softmax(scores)

    # Get sentiment labels and scores
    ranking = np.argsort(scores)[::-1]
    result = {config.id2label[ranking[i]]: round(float(scores[ranking[i]]) * 100, 2) for i in range(scores.shape[0])}
    return result

# Gradio Interface
interface = gr.Interface(
    fn=predict_sentiment,
    inputs=gr.Textbox(lines=3, placeholder="Enter text..."),
    outputs=gr.Label(),
    title="Fine-Tuned Sentiment Analysis",
    description="Enter a sentence to analyze its sentiment (Positive, Neutral, Negative).",
)

# Launch the app
interface.launch()
```
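The `preprocess` helper above is pure string manipulation, so it can be unit-tested without loading the model. A self-contained copy of the same logic:

```python
def preprocess(text: str) -> str:
    """Mask user mentions and URLs the way the Twitter-RoBERTa models expect."""
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

print(preprocess("@alice check https://example.com now"))
# -> "@user check http now"
```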

---

## **5. Upload to Hugging Face Spaces**
### **1. Create a Hugging Face Space**
- Go to [Hugging Face Spaces](https://huggingface.co/spaces).  
- Click **Create new Space**.  
- Choose **Gradio** as the SDK.  
- Set the repository name (e.g., `sentiment-analysis-app`).  
- Click **Create Space**.  

### **2. Upload Files**
- Upload `app.py` to the Space repository.
- Create and upload a `requirements.txt` file with:
  ```
  gradio
  transformers
  torch
  scipy
  numpy
  ```

### **3. Deploy the Model**
Once the files are uploaded, Hugging Face will **automatically install dependencies** and **launch the app**. You can access it via the **public URL** provided by Hugging Face.

---

## **6. Testing & Sharing**
Once deployed, test the model by entering different texts and checking the predicted sentiment. Share the **public Hugging Face Space link** so others can use it.

---

## **7. Summary**
### βœ… **Fine-tune a sentiment analysis model**  
### βœ… **Upload it to Hugging Face Model Hub**  
### βœ… **Deploy it using Gradio & Hugging Face Spaces**  
### βœ… **Make it publicly accessible for users**  

πŸš€ **Your fine-tuned sentiment analysis model is now LIVE!** πŸŽ‰