Tags: Text Classification · Transformers · TensorBoard · Safetensors · bert · Generated from Trainer · sentiment-analysis · text-embeddings-inference
Instructions for using DerivedFunction01/bert-imdb with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use DerivedFunction01/bert-imdb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="DerivedFunction01/bert-imdb")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DerivedFunction01/bert-imdb")
model = AutoModelForSequenceClassification.from_pretrained("DerivedFunction01/bert-imdb")
```

- Notebooks
- Google Colab
- Kaggle
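The pipeline above returns a label and a confidence score computed from the model's raw logits via a softmax. A minimal sketch of that post-processing step, assuming a two-class sequence-classification head and hypothetical label names `NEGATIVE`/`POSITIVE` (the real mapping lives in the model's `config.id2label`):

```python
import math

def postprocess(logits, id2label):
    """Convert raw classifier logits to a (label, score) pair, mirroring
    what the text-classification pipeline does internally."""
    # Numerically stable softmax over the class logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The predicted label is the argmax; the score is its probability.
    best = max(range(len(probs)), key=lambda i: probs[i])
    return {"label": id2label[best], "score": probs[best]}

# Hypothetical label mapping for illustration; check model.config.id2label.
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
print(postprocess([-1.2, 2.3], id2label))
```

This is only a sketch of the default single-label behavior; the actual pipeline also handles tokenization, batching, and multi-label heads.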