Instructions for using Cameron/BERT-eec-emotion with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Cameron/BERT-eec-emotion with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Cameron/BERT-eec-emotion")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Cameron/BERT-eec-emotion")
model = AutoModelForSequenceClassification.from_pretrained("Cameron/BERT-eec-emotion")
```

- Notebooks
- Google Colab
- Kaggle
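When you load the model directly (rather than through the pipeline), `AutoModelForSequenceClassification` returns raw logits, which the pipeline would normally convert to probabilities via softmax and map to label names through `model.config.id2label`. The sketch below shows that post-processing step in isolation; the logit values and the four-class size are placeholders, not the model's actual output.

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a sequence-classification head (placeholder
# values; real label names come from model.config.id2label).
logits = [2.1, -0.3, 0.4, -1.2]
probs = softmax(logits)
predicted = probs.index(max(probs))  # index of the highest-probability class
```

This is the same conversion the `text-classification` pipeline applies internally before returning its `{"label": ..., "score": ...}` dictionaries.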