Instructions for using facebook/xlm-v-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use facebook/xlm-v-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="facebook/xlm-v-base")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-v-base")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-v-base")
```
Loading XLM-V as a feature extractor and tokenizer
#2
by chao0619 - opened

With XLM-RoBERTa, we use the following code:
```python
from transformers import XLMRobertaModel, XLMRobertaTokenizer

extractor = XLMRobertaModel.from_pretrained(roberta_path)
tokenizer = XLMRobertaTokenizer.from_pretrained(roberta_path)
```
Then we can get word embeddings from it.

How can I do the same with the XLM-V model? Although I have changed the path to the XLM-V checkpoint, a warning is thrown:

> Some weights were not initialized from the checkpoint. You should probably train this model on a downstream task to be able to use it for predictions and inference.
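For pure feature extraction, that warning is typically harmless: when a base encoder class such as `XLMRobertaModel` is loaded from a checkpoint that also contains a masked-LM head, the head weights are the ones reported as unused or uninitialized, while the encoder itself is loaded, so `outputs.last_hidden_state` still gives usable token embeddings. Below is a minimal sketch of turning token embeddings into sentence embeddings via attention-mask-aware mean pooling. The NumPy arrays stand in for a real `last_hidden_state` and `attention_mask` (their values, and the helper name `mean_pool`, are illustrative assumptions, not from this thread):

```python
import numpy as np

# Stand-ins for real model outputs: batch of 2 sequences, 5 tokens, hidden size 4.
# In practice, last_hidden_state would come from something like
#   outputs = extractor(**tokenizer(texts, return_tensors="pt", padding=True))
last_hidden_state = np.arange(2 * 5 * 4, dtype=np.float64).reshape(2, 5, 4)
attention_mask = np.array([[1, 1, 1, 0, 0],   # sequence 0 has 2 padding tokens
                           [1, 1, 1, 1, 1]])  # sequence 1 has none

def mean_pool(hidden, mask):
    """Average token embeddings over each sequence, ignoring padding positions."""
    mask = mask[..., None].astype(hidden.dtype)   # (batch, seq, 1)
    summed = (hidden * mask).sum(axis=1)          # (batch, hidden)
    counts = mask.sum(axis=1)                     # (batch, 1), number of real tokens
    return summed / counts

sentence_embeddings = mean_pool(last_hidden_state, attention_mask)
print(sentence_embeddings.shape)  # (2, 4): one vector per sequence
```

The same pooling works unchanged for XLM-RoBERTa and XLM-V, since both expose `last_hidden_state` with shape `(batch, seq, hidden)`.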