**Training data:** [stanfordnlp/snli](https://huggingface.co/datasets/stanfordnlp/snli)
A lightweight BERT-style encoder trained from scratch on SNLI entailment pairs using an in-batch contrastive loss and mean pooling.

## How to use `arjunsah21/embedder-snli` with Transformers

Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("feature-extraction", model="arjunsah21/embedder-snli")
```

Or load the model directly:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("arjunsah21/embedder-snli")
model = AutoModel.from_pretrained("arjunsah21/embedder-snli")
```
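The model was trained with an in-batch contrastive loss. The card does not document the exact formulation, so the following is only a sketch of the standard in-batch InfoNCE objective; the function name and `temperature=0.05` are illustrative assumptions, not the training configuration:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(premise_emb, hypothesis_emb, temperature=0.05):
    # Normalize so dot products are cosine similarities.
    a = F.normalize(premise_emb, dim=-1)
    b = F.normalize(hypothesis_emb, dim=-1)
    # Similarity matrix: logits[i, j] = sim(premise_i, hypothesis_j) / T.
    logits = a @ b.T / temperature
    # The hypothesis at the same batch index is the positive pair;
    # every other hypothesis in the batch serves as a negative.
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

loss = in_batch_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```

Using the rest of the batch as negatives avoids mining explicit negative pairs, which is what makes this loss cheap to compute at scale.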
**Model details:**

- **Dataset:** SNLI (loaded with the `datasets` library)
- **Special tokens:** `[PAD]`, `[UNK]`, `[CLS]`, `[SEP]`, `[MASK]`
- **Architecture:** `BertModel` (trained from scratch)

```python
from transformers import BertModel, BertTokenizerFast

model_id = "your-username/embedder-snli"
tokenizer = BertTokenizerFast.from_pretrained(model_id)
model = BertModel.from_pretrained(model_id)
```
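Since the card states the embedder uses mean pooling, the raw `last_hidden_state` returned by the model still needs a pooling step to produce one vector per sentence. A minimal sketch of masked mean pooling — the helper name and the dummy tensor shapes are illustrative, not part of the released code:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padded positions, then average over real tokens only.
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# Dummy batch: 2 sequences, 4 positions, hidden size 3; the second sequence
# ends in one padding position that must be excluded from the average.
hidden = torch.arange(24, dtype=torch.float32).reshape(2, 4, 3)
mask = torch.tensor([[1, 1, 1, 1], [1, 1, 1, 0]])
pooled = mean_pool(hidden, mask)  # shape (2, 3)
```

With the real model, pass `outputs.last_hidden_state` and the tokenizer's `attention_mask` to the same helper to obtain sentence embeddings.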
## Citation

```bibtex
@inproceedings{bowman2015snli,
  title     = {A large annotated corpus for learning natural language inference},
  author    = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
  booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  year      = {2015}
}
```