How to use IDA-SERICS/PropagandaDetection with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IDA-SERICS/PropagandaDetection")
model = AutoModelForSequenceClassification.from_pretrained("IDA-SERICS/PropagandaDetection")
```
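Once loaded, the tokenizer and model can be used for single-sentence classification along these lines. This is a sketch only: the example sentence is invented, and the label names are not documented here, so they are read from the checkpoint's `id2label` config rather than assumed.

```python
# Minimal inference sketch for IDA-SERICS/PropagandaDetection.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "IDA-SERICS/PropagandaDetection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical input text, for illustration only.
text = "Our glorious leader alone can save the nation from ruin."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the class logits; label names come from the checkpoint's config.
probs = torch.softmax(logits, dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, probs.tolist())
```

The `truncation=True` flag keeps inputs within DistilBERT's maximum sequence length; for batches of sentences, pass a list of strings with `padding=True` as well.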
PropagandaDetection
The model is a Transformer network based on a pre-trained DistilBERT model, fine-tuned on the SemEval 2023 Task 3 training dataset for the propaganda detection task.
Hyperparameters:
Batch size = 16; learning rate = 2e-5; AdamW optimizer; epochs = 4.
Accuracy = 90% on the SemEval 2023 test set.
References
@inproceedings{bangerter2023unisa,
title={Unisa at SemEval-2023 task 3: a shap-based method for propaganda detection},
author={Bangerter, Micaela and Fenza, Giuseppe and Gallo, Mariacristina and Loia, Vincenzo and Volpe, Alberto and De Maio, Carmen and Stanzione, Claudio},
booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
pages={885--891},
year={2023}
}
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="IDA-SERICS/PropagandaDetection")
```