Instructions for using kaesve/BERT_patent_reference_extraction with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use kaesve/BERT_patent_reference_extraction with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="kaesve/BERT_patent_reference_extraction")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("kaesve/BERT_patent_reference_extraction")
model = AutoModelForMaskedLM.from_pretrained("kaesve/BERT_patent_reference_extraction")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: `3987497d1a4ec094ac1f5f3982717ab58f02e2418110b5ea041ea43cf3939441`
- Size of remote file: 433 MB
- SHA256: `e845dc69d7f7d09490342e0e219b29659d965bb6941348da84f837841a92a52b`
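After downloading the model weights, the published SHA256 digest above can be checked locally. Below is a minimal sketch using Python's standard `hashlib`; the file path is a placeholder for wherever the weights were saved, not a filename stated by this page.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest published for the remote file on this page.
expected = "e845dc69d7f7d09490342e0e219b29659d965bb6941348da84f837841a92a52b"

# Placeholder path: substitute the actual location of the downloaded file.
# assert sha256_of_file("path/to/downloaded_model_file") == expected
```

Chunked reading keeps memory use constant, which matters for a 433 MB file.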
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.