---
library_name: transformers
license: mit
---

# Model Card for `goatley/sentiment-final-model`

This model is a fine-tuned **DistilBERT** model for **binary sentiment classification** (positive/negative) of English text reviews.
It was developed as part of an advanced NLP dashboard project demonstrating applied skills in deep learning, NLP engineering, and full-stack app deployment.
## Model Details

### Model Description

- **Developed by:** Keith Goatley
- **License:** MIT
- **Model type:** DistilBERT-based sequence classification (binary)
- **Language(s):** English
- **Fine-tuned from:** `distilbert-base-uncased`
- **Library:** Hugging Face Transformers v4
- **Framework:** PyTorch
### Model Sources

- **Repository:** [GitHub Repository](https://github.com/Keithgoatley/sentiment-analysis-app)
- **Demo:** [Hugging Face Space (when deployed)](https://huggingface.co/spaces/goatley/sentiment-analysis-dashboard)
## Uses

### Direct Use

- Classifying short text reviews (e.g., Amazon product reviews) into **positive** or **negative** sentiment.
### Downstream Use

- Embedding inside sentiment-driven recommendation engines
- As a component of multi-task NLP dashboards
- Fine-tuning for domain-specific sentiment (e.g., medical, finance, hospitality reviews)
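Further domain-specific fine-tuning can be done with the `Trainer` API. A minimal sketch, assuming a pre-tokenized dataset with `input_ids`, `attention_mask`, and `labels` columns; the helper name and output directory are illustrative, and the hyperparameters mirror the ones reported under Training Details below:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)


def fine_tune_on_domain(train_dataset, output_dir="domain-sentiment"):
    """Continue fine-tuning this checkpoint on a domain-specific dataset.

    `train_dataset` is assumed to already be tokenized with this
    model's tokenizer (illustrative sketch, not the original script).
    """
    tokenizer = AutoTokenizer.from_pretrained("goatley/sentiment-final-model")
    model = AutoModelForSequenceClassification.from_pretrained(
        "goatley/sentiment-final-model", num_labels=2
    )
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=5e-5,
    )
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    trainer.save_model(output_dir)
    return tokenizer, model
```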
### Out-of-Scope Use

- Not designed for languages other than English.
- Not suited for emotion detection beyond binary sentiment.
## Bias, Risks, and Limitations

This model was fine-tuned on Amazon reviews, which may carry biases toward product-related expressions and cultural language patterns.
Users should be cautious when applying the model outside typical e-commerce datasets.
### Recommendations

For more robust domain generalization:

- Further fine-tuning on task-specific datasets is advised.
## How to Get Started with the Model

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="goatley/sentiment-final-model",
    tokenizer="goatley/sentiment-final-model",
)

classifier(["I love this!", "This was awful."])
```
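For callers that need raw class probabilities rather than the pipeline's post-processed labels, the lower-level API can be used. A sketch assuming the default PyTorch backend; the helper name is illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def predict_proba(texts, model_id="goatley/sentiment-final-model"):
    """Return per-class softmax probabilities for a batch of texts."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1)
```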
## Training Details

### Training Data

- Subset of the Amazon Reviews dataset
- Balanced 50/50 split of positive and negative reviews
- Approximately 5,000 examples used for fine-tuning

### Training Procedure

- Fine-tuned for 3 epochs
- Learning rate scheduling with warmup
- Optimizer: AdamW
- Batch size: 16
- Device: CPU-based training (GitHub Codespaces)
#### Training Hyperparameters

- Learning rate: 5e-5
- Optimizer: AdamW
- Max sequence length: 512
- Epochs: 3
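The learning-rate schedule with warmup noted above can be sketched in plain Python. Only the 5e-5 peak rate comes from this card; the warmup-step count and total-step count here are illustrative:

```python
def linear_warmup_lr(step, total_steps, peak_lr=5e-5, warmup_steps=100):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps            # ramp up
    remaining = total_steps - warmup_steps
    return peak_lr * (total_steps - step) / remaining   # decay


# The rate climbs during warmup and falls afterwards:
# step 0 -> 0.0, step 100 -> 5e-5, step 1000 -> 0.0
```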
## Evaluation

### Testing Data

Held-out test split from the Amazon Reviews dataset.

### Metrics

| Metric        | Score |
| ------------- | ----- |
| Test Accuracy | 85%   |

Evaluation was performed using basic classification metrics (accuracy, precision, recall, F1-score).
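The metrics above reduce to simple counts over the confusion matrix. A minimal sketch with made-up labels and predictions (not this model's actual outputs):

```python
def binary_metrics(labels, preds):
    """Accuracy, precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    accuracy = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}


# Illustrative only:
scores = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0], [1, 0, 1, 0, 0, 1, 1, 0])
# -> all four metrics equal 0.75 for this toy example
```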
## Environmental Impact

- **Hardware Type:** CPU (GitHub Codespaces)
- **Hours Used:** ~2 hours
- **Cloud Provider:** GitHub (Microsoft Azure backend)
- **Compute Region:** North America
- **Carbon Emitted:** Negligible (very small dataset + CPU-only fine-tuning)
## Technical Specifications

### Model Architecture and Objective

- Architecture: DistilBERT Transformer encoder
- Task objective: sequence classification with 2 labels (positive, negative)

### Compute Infrastructure

- Training performed on GitHub Codespaces virtual machines
- No GPUs were used

### Software Environment

- Hugging Face `transformers==4.51.3`
- `datasets==3.5.0`
- PyTorch `torch==2.6.0`
## Citation

If you use this model or find it helpful, please cite:

**APA:**

Goatley, K. (2025). *Sentiment Analysis Fine-Tuned DistilBERT Model* [Model]. Hugging Face. https://huggingface.co/goatley/sentiment-final-model

**BibTeX:**

    @misc{goatley2025sentiment,
      author       = {Keith Goatley},
      title        = {Sentiment Analysis Fine-Tuned DistilBERT Model},
      year         = {2025},
      publisher    = {Hugging Face},
      howpublished = {\url{https://huggingface.co/goatley/sentiment-final-model}}
    }
## Model Card Authors

Keith Goatley

## Contact

For questions or inquiries, please contact via:

- GitHub: https://github.com/Keithgoatley
- Hugging Face: https://huggingface.co/goatley