xglm_subword_regularization / tokenizer_config.json
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "sep_token": "</s>",
  "cls_token": "<s>",
  "unk_token": "<unk>",
  "pad_token": "<pad>",
  "sp_model_kwargs": {},
  "special_tokens_map_file": "hf_models/xglm-564M/special_tokens_map.json",
  "additional_special_tokens": [
    "<madeupword0>",
    "<madeupword1>",
    "<madeupword2>",
    "<madeupword3>",
    "<madeupword4>",
    "<madeupword5>",
    "<madeupword6>"
  ],
  "tokenizer_file": null,
  "tokenizer_class": "XGLMTokenizer"
}
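As a minimal sketch, the fields in this config can be inspected with Python's standard `json` module. Note how XGLM reuses `</s>` as both `eos_token` and `sep_token`, and `<s>` as both `bos_token` and `cls_token`; the seven `<madeupwordN>` entries are placeholder special tokens. (In practice the `transformers` library consumes this file when instantiating `XGLMTokenizer`; that step is assumed and not shown here.)

```python
import json

# Verbatim contents of this tokenizer_config.json.
config_text = """
{"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>",
 "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>",
 "sp_model_kwargs": {},
 "special_tokens_map_file": "hf_models/xglm-564M/special_tokens_map.json",
 "additional_special_tokens": ["<madeupword0>", "<madeupword1>",
   "<madeupword2>", "<madeupword3>", "<madeupword4>", "<madeupword5>",
   "<madeupword6>"],
 "tokenizer_file": null, "tokenizer_class": "XGLMTokenizer"}
"""

config = json.loads(config_text)

# Collect the core special tokens (keys ending in "_token").
special = {k: v for k, v in config.items() if k.endswith("_token")}

print(config["tokenizer_class"])                  # XGLMTokenizer
print(special["bos_token"], special["cls_token"])  # <s> <s>
print(special["eos_token"], special["sep_token"])  # </s> </s>
print(len(config["additional_special_tokens"]))    # 7
```

An empty `sp_model_kwargs` means the underlying SentencePiece model is loaded with its defaults; subword-regularization sampling options (e.g. `enable_sampling`, `alpha`) would appear in that dict if they were set at tokenizer creation time.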