Instructions to use MidhunKanadan/SentimentBERT-AIWriting with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MidhunKanadan/SentimentBERT-AIWriting with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="MidhunKanadan/SentimentBERT-AIWriting")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MidhunKanadan/SentimentBERT-AIWriting")
model = AutoModelForSequenceClassification.from_pretrained("MidhunKanadan/SentimentBERT-AIWriting")
```

- Notebooks
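When loading the model directly rather than through a pipeline, you post-process the raw logits yourself. The sketch below shows that step in isolation, using dummy logits in place of a real forward pass; the label order is an assumption and should be confirmed against the `id2label` mapping in the model's `config.json`:

```python
import torch
import torch.nn.functional as F

# Assumed label order -- verify against the model's config.json (id2label).
labels = ["negative", "neutral", "positive"]

# Dummy logits standing in for model(**tokenizer(text, return_tensors="pt")).logits
logits = torch.tensor([[-1.2, 0.3, 2.1]])

probs = F.softmax(logits, dim=-1)             # convert logits to probabilities
pred = labels[int(probs.argmax(dim=-1))]      # pick the highest-scoring class

print(pred, round(float(probs.max()), 3))
```

The pipeline helper performs this same softmax-and-argmax step internally and returns the label and score directly.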
- Google Colab
- Kaggle
README.md

---
license: apache-2.0
---

# SentimentBERT-AIWriting

This model is a fine-tuned version of `bert-base-uncased` for sentiment classification, particularly tailored for AI-assisted argumentative writing. It classifies text into three categories: positive, negative, and neutral. The model was trained on a diverse dataset of statements collected from various domains to ensure robustness and accuracy across different contexts.

## Model Description