# DistilBERT Fine-Tuned for Sequence Classification
|
|
## Model Overview
This is a fine-tuned version of DistilBERT for sequence classification. It was trained on user-submitted stories from the r/AmItheAsshole subreddit and classifies each story according to the community's verdict.
|
|
- **Base Model**: [DistilBERT](https://huggingface.co/distilbert-base-uncased)
- **Fine-Tuned For**: Sequence classification (e.g., sentiment analysis, AITA-style categorization)
- **Dataset**: [Reddit-AITA-2018-to-2022](https://huggingface.co/datasets/MattBoraske/Reddit-AITA-2018-to-2022)
- **Task**: Sequence classification with predefined labels
|
|
## Model Details
- **Architecture**: Transformer-based model (DistilBERT)
- **Input Format**: Text sequences
- **Output Format**: Classification labels with confidence scores
- **Labels**:
  - `LABEL_0`: The Asshole
  - `LABEL_1`: Not the Asshole
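The confidence scores reported alongside these labels come from a softmax over the model's two output logits. The sketch below illustrates that computation in plain Python; the logit values are hypothetical, chosen only to show the shape of the output.

```python
import math

def softmax(logits):
    """Convert raw two-class logits into confidence scores that sum to 1."""
    shifted = [x - max(logits) for x in logits]  # subtract max for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for (LABEL_0 = "The Asshole", LABEL_1 = "Not the Asshole")
scores = softmax([-1.2, 2.3])
print({"LABEL_0": round(scores[0], 3), "LABEL_1": round(scores[1], 3)})
```

The reported label is simply the one with the higher score; for a binary classifier the two scores always sum to 1.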
|
|
## Intended Use
This model is intended to provide automated assessments of user-submitted textual scenarios, assigning one of the two AITA-style verdicts defined above. It is designed for binary classification tasks only.
|
|
### Example Usage
|
|
```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub.
# Replace "your-username/your-model-name" with the actual model ID.
classifier = pipeline(
    "text-classification",
    model="your-username/your-model-name",
)

text = "I did not invite my friend to my wedding. AITA?"
result = classifier(text)
print(result)  # a list like [{'label': ..., 'score': ...}]
```
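The pipeline returns raw label IDs rather than readable verdicts. A small helper (hypothetical, using the label mapping from the Model Details section) can turn a prediction into a human-readable string:

```python
# Mapping from the model's raw labels to the verdicts listed in Model Details.
ID2LABEL = {
    "LABEL_0": "The Asshole",
    "LABEL_1": "Not the Asshole",
}

def readable_verdict(prediction: dict) -> str:
    """Convert a pipeline prediction like {'label': 'LABEL_1', 'score': 0.87}
    into a human-readable verdict with a confidence percentage."""
    label = ID2LABEL[prediction["label"]]
    return f"{label} ({prediction['score']:.1%} confidence)"

print(readable_verdict({"label": "LABEL_1", "score": 0.87}))
```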