Text Classification
Transformers
PyTorch
bert
Generated from Trainer
Eval Results (legacy)
text-embeddings-inference
Instructions to use fxmarty/tiny-bert-sst2-distilled-clone with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use fxmarty/tiny-bert-sst2-distilled-clone with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="fxmarty/tiny-bert-sst2-distilled-clone")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-bert-sst2-distilled-clone")
model = AutoModelForSequenceClassification.from_pretrained("fxmarty/tiny-bert-sst2-distilled-clone")
```
- Notebooks
- Google Colab
- Kaggle
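Continuing the Transformers snippet above, a minimal sketch of calling the pipeline on a sample input; the example sentence and the printed output shape are illustrative only:

```python
from transformers import pipeline

# Load the text-classification pipeline for this checkpoint
pipe = pipeline("text-classification", model="fxmarty/tiny-bert-sst2-distilled-clone")

# Illustrative input: this is a distilled BERT fine-tuned on SST-2 (sentiment),
# so it returns a label with a confidence score.
result = pipe("This movie was surprisingly good!")
print(result)  # e.g. [{'label': ..., 'score': ...}]
```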
Adding ONNX file of this model
#16 · opened by fxmarty
Beep boop I am the ONNX export bot 🤖🏎️. On behalf of fxmarty, I would like to add to this repository the model converted to ONNX.
What is ONNX? It stands for "Open Neural Network Exchange", and is the most commonly used open standard for machine learning interoperability. You can find out more at onnx.ai!
The exported ONNX model can then be consumed by various backends such as TensorRT or TVM, or simply be used in a few lines with 🤗 Optimum through ONNX Runtime; check out how here!
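As a minimal sketch of the Optimum + ONNX Runtime path, assuming a recent version of `optimum[onnxruntime]` is installed; the `export=True` flag converts the PyTorch weights on the fly and can be dropped once the ONNX file from this PR is merged into the repository:

```python
# Requires: pip install optimum[onnxruntime]
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "fxmarty/tiny-bert-sst2-distilled-clone"

# export=True converts the checkpoint to ONNX on the fly;
# once the ONNX file is part of the repo, it can be loaded directly.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Run inference through ONNX Runtime via the familiar pipeline API
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(pipe("I really enjoyed this movie!"))
```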