Instructions to use hfl/rbt4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use hfl/rbt4 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="hfl/rbt4")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("hfl/rbt4")
model = AutoModelForMaskedLM.from_pretrained("hfl/rbt4")
```

- Notebooks
- Google Colab
- Kaggle
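The snippets above only load the model. A minimal sketch of actually running the fill-mask pipeline is shown below; the Chinese example sentence is an assumption chosen for illustration (hfl/rbt4 is a Chinese masked language model), and the printed fields follow the standard Transformers fill-mask output format.

```python
from transformers import pipeline

# hfl/rbt4 is a masked-LM checkpoint, so the fill-mask pipeline applies.
pipe = pipeline("fill-mask", model="hfl/rbt4")

# Example sentence is an assumption for illustration; [MASK] is the mask
# token used by BERT-style tokenizers such as this one.
results = pipe("中国的首都是[MASK]京。")

# Each result is a dict with a predicted token and its score.
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

Each entry in `results` carries the filled-in token (`token_str`), its probability (`score`), and the completed sentence (`sequence`), sorted from most to least likely.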
Commit History
- allow flax (0d7172c)
- add fast tokenizer config (cde4e10) by hfl-rc
- Update config.json (69663d0)
- update info (639a3e7) by ymcui
- First version of the rbt4 model and tokenizer. (de0b1a2) by ymcui