Deepchecks/parrot_fluency_model_onnx
This model is an ONNX-optimized version of the original parrot_fluency_model. It has been tailored specifically for GPUs and may perform differently when run on CPUs.
Dependencies
Please install the following dependency before you begin working with the model:
pip install optimum[onnxruntime-gpu]
How to use
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline
# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
# prepare the pipeline and generate inferences
user_inputs = ['Natural language processing is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence.',
'Pass on what you have learned. Strength, mastery, hmm… but weakness, folly, failure, also. Yes, failure, most of all. The greatest teacher, failure is.',
'Whispering dreams, forgotten desires, chaotic thoughts, dance with words, meaning elusive, swirling amidst.']
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=0, accelerator="ort")  # device=0 selects the first GPU
res = pipe(user_inputs, batch_size=64, truncation="only_first")
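Each entry in the result follows the standard text-classification pipeline output shape: a dict with a "label" and a "score". A small post-processing sketch, using hypothetical label names and scores for illustration (the actual values depend on the model's outputs):

```python
# Hypothetical pipeline results for illustration; real labels and scores
# come from the pipeline call above.
res = [
    {"label": "LABEL_1", "score": 0.98},
    {"label": "LABEL_0", "score": 0.55},
    {"label": "LABEL_1", "score": 0.91},
]

# Keep only high-confidence predictions
confident = [r for r in res if r["score"] >= 0.9]
print(len(confident))  # 2
```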