Instructions for using onlplab/alephbert-base with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use onlplab/alephbert-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="onlplab/alephbert-base")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("onlplab/alephbert-base")
model = AutoModelForMaskedLM.from_pretrained("onlplab/alephbert-base")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
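The Transformers snippet above can be extended into a minimal end-to-end fill-mask run. This is a sketch, not official model documentation: the Hebrew example sentence is an assumption chosen for illustration, and AlephBERT is assumed to use the standard BERT `[MASK]` token.

```python
# Minimal fill-mask sketch for AlephBERT (a Hebrew BERT model).
# The example sentence is an illustrative assumption, not from the model card.
from transformers import pipeline

pipe = pipeline("fill-mask", model="onlplab/alephbert-base")

# "עברית היא שפה [MASK]" ≈ "Hebrew is a [MASK] language"
results = pipe("עברית היא שפה [MASK]")

# Each result carries the filled token and its score.
for r in results[:3]:
    print(r["token_str"], round(r["score"], 3))
```

The pipeline downloads the ~500 MB checkpoint on first use and caches it locally; subsequent calls reuse the cache.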
- Xet hash: a01b24c113a6a2db357c27c3e683246d13ca88dad9e77fc9e73882b6b27f3046
- Size of remote file: 504 MB
- SHA256: 871f8c6c2a25e331d284b9787def5cf354802f0fea1d1677a712ebc672c8926c
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.