Instructions to use mbartolo/electra-large-synqa with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mbartolo/electra-large-synqa with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="mbartolo/electra-large-synqa")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("mbartolo/electra-large-synqa")
model = AutoModelForQuestionAnswering.from_pretrained("mbartolo/electra-large-synqa")
```

- Notebooks
- Google Colab
- Kaggle
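As a quick sketch of how the question-answering pipeline above is called: it takes `question` and `context` keyword arguments and returns a dict with the extracted `answer`, a confidence `score`, and the `start`/`end` character offsets of the span in the context. The question and context strings below are illustrative, not from the model card.

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="mbartolo/electra-large-synqa")

# Any SQuAD-style (question, context) pair works; this one is illustrative.
result = pipe(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)

# result is a dict like {"score": ..., "start": ..., "end": ..., "answer": ...}
print(result["answer"])
```

The returned `start` and `end` indices let you locate the answer span directly in the original context string.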