Instructions to use JunxiongWang/BiGS_128 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use JunxiongWang/BiGS_128 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="JunxiongWang/BiGS_128")

# Load model directly
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("JunxiongWang/BiGS_128", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
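As a minimal usage sketch, the `fill-mask` pipeline above can be called on a sentence containing the tokenizer's mask token. This assumes network access to download the checkpoint and that the model's tokenizer defines a `mask_token` (the example sentence is illustrative, not from the model card):

```python
from transformers import pipeline

# Load the fill-mask pipeline for this checkpoint (downloads weights on first use)
pipe = pipeline("fill-mask", model="JunxiongWang/BiGS_128")

# Build a masked sentence using the tokenizer's own mask token
masked = f"Paris is the capital of {pipe.tokenizer.mask_token}."

# Each prediction is a dict with the filled token and its score
preds = pipe(masked)
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```

Each entry in `preds` contains the candidate token (`token_str`), its probability (`score`), and the fully filled sequence (`sequence`).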