---
license: apache-2.0
title: CustomerFeedbackClassification
sdk: gradio
sdk_version: 5.49.1
---
# BERT Sentiment Classification
A fine-tuned BERT model for customer feedback sentiment analysis, deployed as a Gradio web application.
## Features
- Real-time sentiment analysis using fine-tuned BERT model
- Interactive web interface built with Gradio
- Confidence score visualization with bar charts
- Support for 3 sentiment classes: positive, negative, neutral
- Professional UI with examples and detailed results
- Model flexibility - works with fine-tuned or base BERT models
## Model Details
- Base Model: bert-base-uncased (Google's BERT)
- Task: Multi-class sentiment classification
- Classes: 3 (positive, negative, neutral)
- Training: Fine-tuned on customer feedback dataset
- Architecture: BERT encoder + classification head
- Performance: ~85-90% accuracy on validation data
## Technical Specifications
- Framework: PyTorch + Transformers
- Interface: Gradio
- Model Size: ~109M parameters
- Max Sequence Length: 128 tokens
- Batch Processing: Optimized for real-time inference
## Dependencies
The application requires the following Python packages:

```txt
torch>=1.9.0
transformers>=4.20.0
gradio>=3.40.0
pandas>=1.3.0
numpy>=1.21.0
scikit-learn>=1.0.0
```
## Usage
1. Enter text in the input box
2. Click "Analyze Sentiment" to get predictions
3. View the results, including:
   - Predicted sentiment with emoji
   - Confidence percentage
   - Detailed probability breakdown
   - Visual confidence chart
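As a rough illustration of step 3, the helper below shows one way the detailed results (top class, confidence percentage, per-class breakdown) could be assembled from the model's probabilities. `format_result` is a hypothetical sketch, not the app's actual code.

```python
def format_result(probs):
    """Assemble display strings from a class -> probability mapping.

    Illustrative only; the app's real helper may differ.
    """
    top = max(probs, key=probs.get)                        # predicted sentiment
    confidence = f"{probs[top] * 100:.1f}%"                # confidence percentage
    breakdown = {cls: f"{p * 100:.1f}%" for cls, p in probs.items()}
    return top, confidence, breakdown
```

For example, `format_result({"positive": 0.8, "negative": 0.15, "neutral": 0.05})` returns `("positive", "80.0%", ...)`.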
## Example Inputs
Try these sample texts to see the model in action:
- "This product exceeded all my expectations! Outstanding quality."
- "I'm completely disappointed with this purchase."
- "The product is decent. It works as described but nothing extraordinary."
- "Best purchase I've made this year! Highly recommend."
- "The product I received was damaged. Unacceptable."
## How It Works

1. **Text processing:** input text is tokenized with the BERT tokenizer
2. **Encoding:** the BERT encoder processes the tokens with self-attention
3. **Classification:** a classification head outputs a score for each sentiment class
4. **Prediction:** the class with the highest probability is selected as the final prediction
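The final two steps can be sketched in a few lines. This is a minimal illustration of the softmax-then-argmax math on toy logits, not the app's actual inference code; the class order in `LABELS` is an assumption and must match the order used at training time.

```python
import numpy as np

# Assumed class order; must match the label mapping used during fine-tuning.
LABELS = ["positive", "negative", "neutral"]

def predict_from_logits(logits):
    """Turn the classification head's raw scores into a prediction.

    Softmax converts the raw logits into probabilities summing to 1;
    argmax picks the most probable class.
    """
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs = exp / exp.sum()
    return LABELS[int(np.argmax(probs))], probs
```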
## Architecture

```text
Input Text → BERT Tokenizer → BERT Encoder → Classification Head → Softmax → Prediction
```
## Model Performance
- Accuracy: ~85-90% on validation dataset
- Response Time: <2 seconds per prediction
- Confidence Scores: Clear differentiation between sentiment classes
- Robustness: Handles various text lengths and styles
## Deployment
This application is designed for deployment on:
- Hugging Face Spaces (recommended - free & permanent)
- Google Colab (for development and testing)
- Local environments (with proper dependencies)
- Cloud platforms (AWS, GCP, Azure)
## Model Files

The application supports multiple model formats:

- `sentiment_pipeline.pkl` - complete pipeline with model and tokenizer
- `bert_sentiment_model/` - Hugging Face format directory
- Fallback to the base BERT model if no fine-tuned model is available
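The fallback order above can be sketched as a small resolver. This is a hypothetical illustration under the stated file-name assumptions; `resolve_model_source` is not part of the app's actual code, and the real loader would then open the pickle or call the Transformers `from_pretrained` API on the chosen path.

```python
import os

def resolve_model_source(pipeline_path="sentiment_pipeline.pkl",
                         model_dir="bert_sentiment_model",
                         base_model="bert-base-uncased"):
    """Pick a model source in priority order: pickle, HF directory, base model."""
    if os.path.isfile(pipeline_path):
        return ("pickle", pipeline_path)    # complete pickled pipeline
    if os.path.isdir(model_dir):
        return ("hf_dir", model_dir)        # fine-tuned weights in HF format
    return ("hub", base_model)              # fallback: base BERT from the Hub
```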
## License
This project is open source and available under the Apache 2.0 License.
## Contributing
Contributions, issues, and feature requests are welcome!
## Contact
For questions or support, please open an issue in the repository.
Built with ❤️ using BERT, PyTorch, and Gradio