Instructions for using facebook/w2v-bert-2.0 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use facebook/w2v-bert-2.0 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="facebook/w2v-bert-2.0")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("facebook/w2v-bert-2.0")
model = AutoModel.from_pretrained("facebook/w2v-bert-2.0")
```

- Notebooks
- Google Colab
- Kaggle
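The snippets above only load the model. As a minimal end-to-end sketch (assuming the model expects 16 kHz mono audio, and using random samples in place of a real recording), you can run the processor and model together to extract frame-level features:

```python
import numpy as np
import torch
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("facebook/w2v-bert-2.0")
model = AutoModel.from_pretrained("facebook/w2v-bert-2.0")

# One second of dummy audio at 16 kHz; replace with a real waveform.
audio = np.random.randn(16000).astype(np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs)

# Features per (downsampled) audio frame: (batch, frames, hidden_size)
print(out.last_hidden_state.shape)
```

The output is a sequence of contextual embeddings, one per downsampled frame, which is what the `feature-extraction` pipeline returns as nested lists.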
How to fine-tune w2v-bert-2.0 with multiple GPUs?
#25
by kssmmm - opened
I implemented fine-tuning on a single GPU by following the steps in the blog, but training this way is obviously very slow. However, when I set CUDA_VISIBLE_DEVICES to two GPUs, the following problem occurred:
The first warning doesn't seem to have any impact; it also appeared when I was training on a single GPU. The second warning appears only during multi-GPU training and seems to be the cause of the final failure. Do you have any solutions?
