Tags: Zero-Shot Classification · sentence-transformers · PyTorch · ONNX · Safetensors · Transformers · English · deberta-v2 · text-classification
Instructions for using cross-encoder/nli-deberta-v3-large with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use cross-encoder/nli-deberta-v3-large with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cross-encoder/nli-deberta-v3-large")
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
- Transformers
How to use cross-encoder/nli-deberta-v3-large with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-classification", model="cross-encoder/nli-deberta-v3-large")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/nli-deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/nli-deberta-v3-large")
```
- Notebooks
- Google Colab
- Kaggle
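Behind the zero-shot pipeline above, each candidate label is turned into a hypothesis (typically "This example is {label}."), the NLI model scores each (premise, hypothesis) pair, and entailment probabilities are compared across labels. A minimal sketch of that scoring step, using made-up placeholder logits rather than real outputs of this model:

```python
import math

# Hypothetical NLI logits per candidate label, in the order
# [contradiction, entailment, neutral]. These are placeholder
# values for illustration, NOT real model outputs.
logits_per_label = {
    "sports": [-2.1, 3.4, 0.2],
    "politics": [1.8, -1.5, 0.9],
}

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# The zero-shot score for a label is its entailment probability (index 1).
scores = {label: softmax(logits)[1] for label, logits in logits_per_label.items()}
best = max(scores, key=scores.get)
print(best)  # → sports
```

The actual pipeline batches all label hypotheses through the model and normalizes scores across labels, but the entailment-probability comparison is the core idea.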
Push tokenizer again
#4
opened by tomaarsen (HF Staff)
In this PR, I'm re-pushing the tokenizer with a newer version of transformers, with the goal of also generating the tokenizer.json used by the tokenizers-backed fast tokenizers. There should not be any changes in the performance of the tokenizer; the only difference is that Transformers and Sentence Transformers can now use the fast tokenizer directly, without having to convert it from the slow tokenizer. This should make loading the model faster.
- Tom Aarsen
tomaarsen changed pull request status to merged
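With the tokenizer.json in place, whether the fast, tokenizers-backed path is actually used can be checked directly. A minimal sketch, assuming the transformers library is installed and the Hub is reachable (the `is_fast` attribute is a standard tokenizer property):

```python
from transformers import AutoTokenizer

# With tokenizer.json in the repo, AutoTokenizer loads the fast
# tokenizer directly instead of converting from the slow one.
tok = AutoTokenizer.from_pretrained("cross-encoder/nli-deberta-v3-large")
print(tok.is_fast)
```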