Instructions for using 1024m/SMM4H-Task3-BartL-2A30 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use 1024m/SMM4H-Task3-BartL-2A30 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="1024m/SMM4H-Task3-BartL-2A30")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("1024m/SMM4H-Task3-BartL-2A30")
model = AutoModelForSequenceClassification.from_pretrained("1024m/SMM4H-Task3-BartL-2A30")
```
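As a quick sanity check that the model loads and runs, here is a minimal usage sketch covering both routes above. The input sentence is a made-up example, and the label names are not documented here; the actual labels come from the fine-tuned model's config (`model.config.id2label`).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "1024m/SMM4H-Task3-BartL-2A30"

# Pipeline route: tokenization, inference, and label mapping in one call.
pipe = pipeline("text-classification", model=model_id)
text = "Started the new medication last week and the headaches are gone."  # illustrative input
print(pipe(text))  # e.g. [{'label': ..., 'score': ...}] -- labels depend on the model config

# Direct route: explicit control over tokenization and logits.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```

The pipeline is the simpler option for one-off predictions; loading the model directly is useful when you need raw logits, batching, or custom pre/post-processing.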
- Notebooks
  - Google Colab
  - Kaggle