Tags: Text Classification · Transformers · PyTorch · bert · Generated from Trainer · Eval Results (legacy) · text-embeddings-inference
Instructions to use philschmid/bert-mini-sst2-distilled with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use philschmid/bert-mini-sst2-distilled with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="philschmid/bert-mini-sst2-distilled")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("philschmid/bert-mini-sst2-distilled")
model = AutoModelForSequenceClassification.from_pretrained("philschmid/bert-mini-sst2-distilled")
```

- Notebooks
- Google Colab
- Kaggle
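As a quick sanity check, the pipeline loaded above can be called directly on a sentence. This is a minimal sketch; the exact label strings returned (e.g. "positive"/"negative") depend on this model's config and are not confirmed by the page, so only the output structure is asserted here.

```python
from transformers import pipeline

# Load the distilled SST-2 sentiment classifier
pipe = pipeline("text-classification", model="philschmid/bert-mini-sst2-distilled")

# The pipeline returns a list of dicts, one per input,
# each with a "label" string and a "score" in [0, 1]
result = pipe("a charming and affecting film")
print(result)
```

The same `pipe` object also accepts a list of strings, returning one prediction dict per input.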
Adding ONNX file of this model
#4
by xingfudezhongzi - opened
Beep boop I am the ONNX export bot 🤖🏎️. On behalf of xingfudezhongzi, I would like to add to this repository the model converted to ONNX.
What is ONNX? It stands for "Open Neural Network Exchange", and is the most commonly used open standard for machine learning interoperability. You can find out more at onnx.ai!
The exported ONNX model can then be consumed by various backends such as TensorRT or TVM, or simply be used in a few lines with 🤗 Optimum through ONNX Runtime; check out how here!