---
title: TweetSentiment
emoji: 🏆
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 5.18.0
app_file: app.py
pinned: false
short_description: TweetSentiment
model_link: https://huggingface.co/ktr008/sentiment
---

# Fine-Tuned Sentiment Analysis Deployment Guide

This guide explains how to fine-tune, save, upload, and deploy a sentiment analysis model using Hugging Face Transformers, Gradio, and Hugging Face Spaces.


## 1. Prerequisites

Before proceeding, ensure you have the following installed:

### Install Required Libraries

```bash
pip install gradio transformers torch scipy numpy
```

If you're using TensorFlow-based models, also install:

```bash
pip install tensorflow
```

### Hugging Face Authentication

Log in to the Hugging Face CLI:

```bash
huggingface-cli login
```

(You'll need an access token from Hugging Face.)
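In non-interactive environments (scripts, CI), the token can also be passed directly instead of typing it at the prompt. A minimal sketch, assuming the token is stored in an `HF_TOKEN` environment variable (a name chosen here for illustration):

```bash
# Non-interactive login; HF_TOKEN is a hypothetical variable holding your access token
huggingface-cli login --token "$HF_TOKEN"
```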


## 2. Fine-Tune Your Sentiment Analysis Model

### Training a Custom Sentiment Model

If you haven't already fine-tuned a model, you can do so using Trainer from Hugging Face:

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments, AutoTokenizer
from datasets import load_dataset

# Load dataset (IMDB is binary — positive/negative — and is used here only as
# an example; for three-way sentiment, pick a dataset with three classes)
dataset = load_dataset("imdb")

# Load tokenizer and model (this checkpoint already has a 3-class sentiment head)
model_name = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Tokenize dataset
def preprocess(examples):
    return tokenizer(examples["text"], truncation=True, padding="max_length")

tokenized_datasets = dataset.map(preprocess, batched=True)

# Training arguments
training_args = TrainingArguments(
    output_dir="./fine_tuned_sentiment_model",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
    logging_dir="./logs",
)

# Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)

# Train the model
trainer.train()

# Save the model and tokenizer
model.save_pretrained("./fine_tuned_sentiment_model")
tokenizer.save_pretrained("./fine_tuned_sentiment_model")
```
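The Gradio app in section 4 reads class names from `config.id2label`, so it helps to attach an explicit label mapping before saving. A minimal sketch, assuming the common three-way sentiment labels (negative/neutral/positive — verify the order against how your dataset encodes its labels):

```python
# Hypothetical mapping for a 3-class sentiment head; the index order must
# match how the training labels are encoded.
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {label: idx for idx, label in id2label.items()}

# Pass the mapping when loading the model so it is written into config.json:
# model = AutoModelForSequenceClassification.from_pretrained(
#     model_name, num_labels=3, id2label=id2label, label2id=label2id
# )
print(label2id)  # {'negative': 0, 'neutral': 1, 'positive': 2}
```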

## 3. Upload Model to Hugging Face Hub

Once you've fine-tuned your model, upload it to Hugging Face Model Hub:

### 1. Install huggingface_hub

```bash
pip install huggingface_hub
```

### 2. Push Model to Hugging Face

```python
from huggingface_hub import notebook_login
from transformers import AutoModelForSequenceClassification, AutoTokenizer

notebook_login()  # Authenticate (use huggingface_hub.login() outside notebooks)

# Define the target repository (replace with your username)
repo_name = "your-username/sentiment-analysis-model"

# Load the fine-tuned model
model = AutoModelForSequenceClassification.from_pretrained("./fine_tuned_sentiment_model")
tokenizer = AutoTokenizer.from_pretrained("./fine_tuned_sentiment_model")

# Push model and tokenizer to the Hugging Face Hub
model.push_to_hub(repo_name)
tokenizer.push_to_hub(repo_name)
```

Your fine-tuned model is now available at `https://huggingface.co/your-username/sentiment-analysis-model`.


## 4. Deploy Sentiment Model Using Gradio

To create a Gradio-based web interface, follow these steps:

### 1. Create `app.py`

Save the following script as `app.py`:

```python
import gradio as gr
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
from scipy.special import softmax

# Load fine-tuned model from the Hugging Face Hub
MODEL_NAME = "your-username/sentiment-analysis-model"  # Replace with your model repo
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
config = AutoConfig.from_pretrained(MODEL_NAME)

# Mask user mentions and links, matching the preprocessing used for
# the Twitter-RoBERTa sentiment models
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

# Sentiment prediction function
def predict_sentiment(text):
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    output = model(**encoded_input)
    scores = output.logits[0].detach().numpy()
    scores = softmax(scores)

    # Rank labels from most to least likely and report percentages
    ranking = np.argsort(scores)[::-1]
    result = {
        config.id2label[int(ranking[i])]: round(float(scores[ranking[i]]) * 100, 2)
        for i in range(scores.shape[0])
    }
    return result

# Gradio interface
interface = gr.Interface(
    fn=predict_sentiment,
    inputs=gr.Textbox(lines=3, placeholder="Enter text..."),
    outputs=gr.Label(),
    title="Fine-Tuned Sentiment Analysis",
    description="Enter a sentence to analyze its sentiment (Positive, Neutral, Negative).",
)

# Launch the app
interface.launch()
```
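The post-processing inside `predict_sentiment` (softmax over the logits, then labels ranked by probability) can be exercised without downloading the model. A stdlib-only sketch with made-up logits and a hypothetical label map:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def rank_labels(logits, id2label):
    """Return {label: percentage} ordered from most to least likely."""
    scores = softmax(logits)
    ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return {id2label[i]: round(scores[i] * 100, 2) for i in ranking}

# Made-up logits in which class 2 is strongly favored
id2label = {0: "negative", 1: "neutral", 2: "positive"}
result = rank_labels([0.1, 1.0, 2.0], id2label)
print(result)  # "positive" comes first with the highest percentage
```

The dict preserves insertion order, so the Gradio `Label` component shows the most likely class on top, exactly as in the app above.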

## 5. Upload to Hugging Face Spaces

### 1. Create a Hugging Face Space

- Go to [Hugging Face Spaces](https://huggingface.co/spaces).
- Click **Create new Space**.
- Choose **Gradio** as the SDK.
- Set the repository name (e.g., `sentiment-analysis-app`).
- Click **Create Space**.

### 2. Upload Files

- Upload `app.py` to the Space repository.
- Create and upload a `requirements.txt` file containing:

  ```
  gradio
  transformers
  torch
  scipy
  numpy
  ```

### 3. Deploy the Model

Once the files are uploaded, Hugging Face will automatically install dependencies and launch the app. You can access it via the public URL provided by Hugging Face.


## 6. Testing & Sharing

Once deployed, test the model by entering different texts and checking the predicted sentiment. Share the public Hugging Face Space link with others to let them use it.


## 7. Summary

- ✅ Fine-tune a sentiment analysis model
- ✅ Upload it to the Hugging Face Model Hub
- ✅ Deploy it using Gradio & Hugging Face Spaces
- ✅ Make it publicly accessible for users

🚀 Your fine-tuned sentiment analysis model is now LIVE! 🎉