---
license: apache-2.0
language:
- en
---
# DistilBERT for Sarcasm Detection

This is a [DistilBERT](https://huggingface.co/distilbert-base-uncased) model fine-tuned on the **News Headlines Dataset for Sarcasm Detection**.

## Dataset
- **Source:** [News Headlines Dataset for Sarcasm Detection](https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detection)
- **Task:** Binary classification (`0 = Not Sarcastic`, `1 = Sarcastic`)
- **Size:** ~28,000 headlines (a loading sketch follows below)

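For orientation, here is a minimal sketch of loading the data; the file name `Sarcasm_Headlines_Dataset.json` and the `headline`/`is_sarcastic` field names are assumptions based on the public Kaggle dataset, not details published with this model:

```python
import pandas as pd

# File name and field names are assumptions based on the public Kaggle dataset.
df = pd.read_json("Sarcasm_Headlines_Dataset.json", lines=True)

print(df[["headline", "is_sarcastic"]].head())
print(df["is_sarcastic"].value_counts())  # 0 = not sarcastic, 1 = sarcastic
```
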
## Model Training
- Framework: Hugging Face Transformers
- Tokenizer: `distilbert-base-uncased`
- Training epochs: 3
- Optimizer: AdamW
- Batch size: 16 (a fine-tuning sketch with these settings follows below)

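The exact training script was not published, so the following is only a sketch consistent with the settings above; the data file, field names, and the 90/10 train/validation split are assumptions:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Build a dataset from the Kaggle JSON-lines file (see the Dataset section).
ds = Dataset.from_json("Sarcasm_Headlines_Dataset.json")
ds = ds.rename_column("is_sarcastic", "labels")
ds = ds.map(lambda batch: tokenizer(batch["headline"], truncation=True), batched=True)
split = ds.train_test_split(test_size=0.1)  # the 90/10 split is an assumption

args = TrainingArguments(
    output_dir="sarcasm_model",
    num_train_epochs=3,              # epochs listed above
    per_device_train_batch_size=16,  # batch size listed above
    optim="adamw_torch",             # AdamW optimizer listed above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```
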
## Performance
| Model | Accuracy |
|--------------|----------|
| **DistilBERT (ours)** | **93.1%** |
| GRU | 85.3% |
| LSTM | 84.6% |
| Logistic Regression | 83.4% |
| SVM | 82.9% |
| Naive Bayes | 82.7% |

## Usage

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub
classifier = pipeline("text-classification", model="YamenRM/sarcasm_model")

# Example
text = "Oh great, another Monday morning meeting!"
print(classifier(text))
```
### Output

```
[{'label': 'SARCASTIC', 'score': 0.93}]
```

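To see the scores for both labels rather than only the top prediction, the pipeline's standard `top_k` argument can be used (the name of the non-sarcastic label depends on this model's config and is not shown here):

```python
from transformers import pipeline

# top_k=None returns scores for every label, not just the highest-scoring one
classifier = pipeline("text-classification", model="YamenRM/sarcasm_model", top_k=None)
print(classifier("Oh great, another Monday morning meeting!"))
```
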
## Author

Trained and uploaded by **YamenRM**.