## How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="kykim/bert-kor-base")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("kykim/bert-kor-base")
model = AutoModelForMaskedLM.from_pretrained("kykim/bert-kor-base")
```
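With the tokenizer and masked-LM head loaded directly, you can fill a `[MASK]` token by hand instead of going through the pipeline. A minimal sketch follows; the Korean example sentence is an assumption for illustration, and the actual top predictions depend on the model weights.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("kykim/bert-kor-base")
model = AutoModelForMaskedLM.from_pretrained("kykim/bert-kor-base")
model.eval()

# Illustrative sentence (an assumption, not from the model card).
text = "한국어 모델을 [MASK]합니다."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and take the 5 highest-scoring vocabulary tokens.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
top5_tokens = tokenizer.convert_ids_to_tokens(top5_ids)
print(top5_tokens)
```

This is what the `fill-mask` pipeline does internally: score every vocabulary token at the masked position and rank them.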
## BERT base model for Korean

  • Trained on a 70GB Korean text dataset with a 42,000-token lower-cased subword vocabulary
  • Model performance figures and other Korean language models are available on GitHub
```python
from transformers import BertTokenizerFast, BertModel

tokenizer_bert = BertTokenizerFast.from_pretrained("kykim/bert-kor-base")
model_bert = BertModel.from_pretrained("kykim/bert-kor-base")
```
Mask token: `[MASK]`
