Tags: Zero-Shot Classification · Transformers · sentence-transformers · PyTorch · JAX · ONNX · Safetensors · OpenVINO · English · roberta · text-classification
Instructions to use cross-encoder/nli-distilroberta-base with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- sentence-transformers
How to use cross-encoder/nli-distilroberta-base with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cross-encoder/nli-distilroberta-base")
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

- Transformers
How to use cross-encoder/nli-distilroberta-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-classification", model="cross-encoder/nli-distilroberta-base")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/nli-distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/nli-distilroberta-base")
```

- Notebooks
- Google Colab
- Kaggle
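The Transformers snippet above only constructs the zero-shot pipeline; a minimal sketch of actually calling it follows. The input sentence and candidate labels are illustrative assumptions, not part of the original instructions.

```python
from transformers import pipeline

# Build the zero-shot classifier backed by this NLI cross-encoder.
pipe = pipeline("zero-shot-classification", model="cross-encoder/nli-distilroberta-base")

# Hypothetical example input: classify a sentence against candidate labels.
result = pipe(
    "One day I will see the world.",
    candidate_labels=["travel", "cooking", "programming"],
)
# result is a dict with "sequence", "labels" (sorted by score), and "scores".
print(result["labels"])
print(result["scores"])
```

In the default single-label mode, the scores are softmax-normalized over the candidate labels, so they sum to 1.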
Push tokenizer again
#4
by tomaarsen (HF Staff)
In this PR, I'm repushing the tokenizer with a newer version of transformers, with the goal of also generating the tokenizer.json used by the tokenizers-backed fast tokenizers. There should be no change in the tokenizer's behavior; the only difference is that Transformers and Sentence Transformers can now use the fast tokenizer directly, without having to convert it from the slow tokenizer. This should make loading the model faster.
- Tom Aarsen
tomaarsen changed pull request status to merged
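The effect this PR describes can be checked by loading the tokenizer and confirming that a tokenizers-backed fast tokenizer comes back directly; a minimal sketch, assuming the merged tokenizer.json is present in the repo:

```python
from transformers import AutoTokenizer

# With tokenizer.json in the repo, AutoTokenizer should return a fast
# (tokenizers-backed) tokenizer without converting from the slow one.
tokenizer = AutoTokenizer.from_pretrained("cross-encoder/nli-distilroberta-base")
print(type(tokenizer).__name__, tokenizer.is_fast)
```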