---
license: apache-2.0
title: CustomerFeedbackClassification
sdk: gradio
sdk_version: 5.49.1
---

# BERT Sentiment Classification

A fine-tuned BERT model for customer feedback sentiment analysis, deployed as a Gradio web application.

## 🚀 Features

- Real-time sentiment analysis using a fine-tuned BERT model
- Interactive web interface built with Gradio
- Confidence score visualization with bar charts
- Support for 3 sentiment classes: Positive 😊, Negative 😞, Neutral 😐
- Professional UI with examples and detailed results
- Model flexibility: works with fine-tuned or base BERT models

## 🧠 Model Details

- Base Model: bert-base-uncased (Google's BERT)
- Task: Multi-class sentiment classification
- Classes: 3 (positive, negative, neutral)
- Training: Fine-tuned on a customer feedback dataset
- Architecture: BERT encoder + classification head
- Performance: ~85-90% accuracy on validation data
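
This encoder-plus-head layout can be sketched with the Transformers library. To keep the sketch runnable without downloading weights, it builds a tiny untrained `BertConfig` instead of the real bert-base-uncased checkpoint the app uses; the shapes and the 3-class head are the point here, not the weights.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny untrained config standing in for bert-base-uncased, so this runs
# offline; the deployed app loads the fine-tuned checkpoint instead.
config = BertConfig(
    vocab_size=100, hidden_size=32, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=64,
    num_labels=3,  # positive / negative / neutral
)
model = BertForSequenceClassification(config)
model.eval()

input_ids = torch.tensor([[1, 5, 9, 2]])      # toy token ids
attention_mask = torch.ones_like(input_ids)   # all tokens are real (no padding)
with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits

print(logits.shape)  # torch.Size([1, 3]) — one score per sentiment class
```

`BertForSequenceClassification` is exactly "BERT encoder + classification head": it feeds the pooled encoder output through a dropout layer and a linear layer sized to `num_labels`.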

## 🔧 Technical Specifications

- Framework: PyTorch + Transformers
- Interface: Gradio
- Model Size: ~109M parameters
- Max Sequence Length: 128 tokens
- Batch Processing: Optimized for real-time inference

## 📦 Dependencies

The application requires the following Python packages:

```
torch>=1.9.0
transformers>=4.20.0
gradio>=3.40.0
pandas>=1.3.0
numpy>=1.21.0
scikit-learn>=1.0.0
```

## 🚀 Usage

1. Enter text in the input box
2. Click "Analyze Sentiment" to get predictions
3. View results, including:
   - Predicted sentiment with emoji
   - Confidence percentage
   - Detailed probability breakdown
   - Visual confidence chart
## 💡 Example Inputs

Try these sample texts to see the model in action:

- "This product exceeded all my expectations! Outstanding quality."
- "I'm completely disappointed with this purchase."
- "The product is decent. It works as described but nothing extraordinary."
- "Best purchase I've made this year! Highly recommend."
- "The product I received was damaged. Unacceptable."

πŸ” How It Works

  1. Text Processing: Input text is tokenized using BERT tokenizer
  2. Encoding: BERT encoder processes tokens with self-attention mechanisms
  3. Classification: A classification head outputs probability scores for each sentiment class
  4. Prediction: The class with the highest probability is selected as the final prediction

πŸ—οΈ Architecture

Input Text β†’ BERT Tokenizer β†’ BERT Encoder β†’ Classification Head β†’ Softmax β†’ Prediction
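
The softmax → prediction tail of this pipeline can be sketched with plain PyTorch on a dummy logit vector (the logit values and the label order are illustrative assumptions, not taken from the trained model):

```python
import torch
import torch.nn.functional as F

LABELS = ["negative", "neutral", "positive"]  # assumed label order

# Dummy logits standing in for the classification head's raw output
logits = torch.tensor([[-1.2, 0.3, 2.1]])

probs = F.softmax(logits, dim=-1)          # normalize logits into probabilities
pred = LABELS[int(probs.argmax(dim=-1))]   # highest-probability class wins

print(pred)                # positive
print(float(probs.sum()))  # softmax guarantees the probabilities sum to 1.0
```

The softmax step is what makes the per-class scores interpretable as the confidence percentages shown in the UI.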

## 📊 Model Performance

- Accuracy: ~85-90% on the validation dataset
- Response Time: <2 seconds per prediction
- Confidence Scores: Clear differentiation between sentiment classes
- Robustness: Handles various text lengths and styles

## 🌐 Deployment

This application is designed for deployment on:

- Hugging Face Spaces (recommended: free and permanent)
- Google Colab (for development and testing)
- Local environments (with the dependencies above installed)
- Cloud platforms (AWS, GCP, Azure)

## 🔧 Model Files

The application supports multiple model formats:

- `sentiment_pipeline.pkl`: a pickled pipeline bundling the model and tokenizer
- `bert_sentiment_model/`: a directory in the Hugging Face format
- If neither is found, the app falls back to the base BERT model
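
The fallback decision can be sketched as a small helper (the directory name matches the file listed above; the function name is illustrative):

```python
import os

def resolve_checkpoint(finetuned_dir: str = "bert_sentiment_model") -> str:
    """Prefer the fine-tuned checkpoint directory when it exists,
    otherwise fall back to the base BERT model name."""
    return finetuned_dir if os.path.isdir(finetuned_dir) else "bert-base-uncased"

# The app would pass the resolved name to AutoTokenizer.from_pretrained and
# AutoModelForSequenceClassification.from_pretrained (adding num_labels=3
# when starting from the base model, which ships without a 3-class head).
print(resolve_checkpoint())
```

Keeping the resolution logic separate from the loading call makes the fallback easy to test without downloading any weights.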

πŸ“ License

This project is open source and available under the Apache 2.0 License.

## 🤝 Contributing

Contributions, issues, and feature requests are welcome!

## 📧 Contact

For questions or support, please open an issue in the repository.


Built with ❤️ using BERT, PyTorch, and Gradio