Instructions for using facebook/w2v-bert-2.0 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use facebook/w2v-bert-2.0 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="facebook/w2v-bert-2.0")

# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("facebook/w2v-bert-2.0")
model = AutoModel.from_pretrained("facebook/w2v-bert-2.0")
```
- Notebooks
- Google Colab
- Kaggle
Any quantization possible?
#18
by supercharge19 - opened
Can quantized versions be made available, or are these models difficult to quantize?
I don't really know, to be honest; I think it should probably work with out-of-the-box tools.
Most deep learning models can be quantized; however, they don't always yield good-quality outputs afterwards, at least not across all outputs (e.g. multilingual ones). That is why ONNX models suck.
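For what it's worth, PyTorch's dynamic quantization is one such out-of-the-box tool. A minimal sketch, shown on a stand-in linear stack so it runs without downloading the checkpoint; the same `quantize_dynamic` call would apply to a model loaded via `AutoModel.from_pretrained`:

```python
import torch
import torch.nn as nn

# Stand-in for a loaded transformer: dynamic quantization targets nn.Linear
# layers, which account for most of a model like w2v-bert-2.0's weights.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# Convert Linear weights to int8; activations are quantized on the fly at runtime.
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 1024)
with torch.no_grad():
    y = qmodel(x)
print(y.shape)  # same output shape as the float model
```

Whether int8 weights degrade multilingual output quality, as discussed above, is exactly the kind of thing that would need per-language evaluation.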
@anzorq sorry mate, not yet.