Instructions for using IDA-SERICS/PropagandaDetection with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use IDA-SERICS/PropagandaDetection with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="IDA-SERICS/PropagandaDetection")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IDA-SERICS/PropagandaDetection")
model = AutoModelForSequenceClassification.from_pretrained("IDA-SERICS/PropagandaDetection")
```
- Notebooks
- Google Colab
- Kaggle
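The `pipeline` call shown above handles tokenization and turns the model's output logits into class probabilities with a softmax before returning a label and score. A minimal, self-contained sketch of that post-processing step (the logit values and label names here are hypothetical illustrations, not taken from the model's config):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a two-class sequence-classification head
logits = [-1.2, 2.3]
probs = softmax(logits)

# Pick the higher-probability class (label names are placeholders)
label = "propaganda" if probs[1] > probs[0] else "non-propaganda"
```

This is what `text-classification` pipelines do internally; with the real model, the mapping from class index to label name comes from the model config's `id2label`.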
Commit 51ed51a by claudio sstt (parent 2c5f930): Update README.md
# PropagandaDetection

The model is a Transformer network based on a pre-trained DistilBERT model.

The pre-trained model is fine-tuned on the SemEval 2023 Task 3 training dataset for the propaganda detection task. Fine-tuning of distilbert-base-uncased uses the following hyperparameters: batch size 16, learning rate 2e-5, AdamW optimizer, and 4 epochs. On test data, the model reaches an accuracy of around 90%.
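Assuming the Hugging Face `Trainer` API, the reported hyperparameters map onto `transformers.TrainingArguments` keyword arguments roughly as follows. This is a sketch, not the authors' training script; the `output_dir` name is a placeholder:

```python
def finetune_config():
    """Hyperparameters reported above for fine-tuning distilbert-base-uncased
    on the SemEval 2023 Task 3 training set, expressed as keyword arguments
    for transformers.TrainingArguments."""
    return {
        "output_dir": "propaganda-detection-distilbert",  # hypothetical name
        "per_device_train_batch_size": 16,
        "learning_rate": 2e-5,
        "num_train_epochs": 4,
        # AdamW (the PyTorch implementation) is the optimizer named above
        "optim": "adamw_torch",
    }
```

These kwargs would be passed as `TrainingArguments(**finetune_config())` together with the tokenized training dataset when constructing a `Trainer`.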
## References