How to use indiejoseph/bert-base-cantonese with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="indiejoseph/bert-base-cantonese")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("indiejoseph/bert-base-cantonese")
model = AutoModelForMaskedLM.from_pretrained("indiejoseph/bert-base-cantonese")
```
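As a quick smoke test, the pipeline can be called on a masked Cantonese sentence. A minimal sketch; the example sentence and printed fields are illustrative assumptions, not from the model card:

```python
# Predict the masked character; the pipeline returns the top
# candidates with scores. The input sentence is an illustrative
# example, not taken from the model card.
for pred in pipe("香港嘅天氣真係好[MASK]。", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```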
bert-base-cantonese
This model continues the pre-training of bert-base-chinese on a Cantonese Common Crawl dataset of 198M tokens.
Model description
The vocabulary has been extended with 500 additional Chinese characters that are very common in Cantonese, such as 冧, 噉, 麪, 笪, 冚, and 乸.
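One way to confirm that these extended characters are covered by the tokenizer is to check that each maps to its own vocabulary entry rather than the unknown token. A minimal sketch; the loop and its output are illustrative, not from the model card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("indiejoseph/bert-base-cantonese")

# Each extended Cantonese character should encode to a single,
# non-[UNK] token id if it is in the extended vocabulary.
for ch in ["冧", "噉", "麪", "笪", "冚", "乸"]:
    ids = tokenizer.encode(ch, add_special_tokens=False)
    print(ch, ids, tokenizer.convert_ids_to_tokens(ids))
```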
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto Transformers TrainingArguments follows the list):
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
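A hedged sketch of how the listed hyperparameters map onto Hugging Face TrainingArguments; the output directory and any settings not in the list above are assumptions, not from the model card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-cantonese",  # assumed name, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,     # 24 x 8 = 192 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

The total train batch size of 192 follows from the per-device batch size of 24 multiplied by 8 gradient accumulation steps.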
Training results
Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1