mrsteyk committed
Commit 890a863 · 1 parent: 3826aaa

Upload 6 files

added_tokens.json ADDED
@@ -0,0 +1,26 @@
+ {
+ " ": 50257,
+ " ": 50258,
+ " ": 50259,
+ " ": 50260,
+ " ": 50261,
+ " ": 50262,
+ " ": 50263,
+ " ": 50264,
+ " ": 50265,
+ " ": 50266,
+ " ": 50267,
+ " ": 50268,
+ " ": 50269,
+ " ": 50270,
+ " ": 50271,
+ " ": 50272,
+ " ": 50273,
+ " ": 50274,
+ " ": 50275,
+ " ": 50276,
+ " ": 50277,
+ " ": 50278,
+ " ": 50279,
+ " ": 50280
+ }
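
These 24 added tokens (IDs 50257-50280) extend the stock GPT-2 vocabulary, which ends at ID 50256. The keys all render above as a single space, which suggests whitespace-run tokens that the diff viewer collapsed; the exact strings are only visible in the raw file. A minimal sketch of how such a file is produced with the `transformers` library, using placeholder whitespace tokens rather than the actual keys:

```python
from transformers import GPT2Tokenizer

# Start from the stock GPT-2 tokenizer: 50257 entries, IDs 0-50256.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Placeholder whitespace-run tokens -- the real keys in added_tokens.json
# did not survive rendering, so these strings are illustrative only.
new_tokens = [" " * n for n in range(2, 26)]  # 24 tokens -> IDs 50257-50280
assert tokenizer.add_tokens(new_tokens) == 24

# save_pretrained() writes added_tokens.json next to vocab.json/merges.txt.
tokenizer.save_pretrained("./gpt2-extended")  # hypothetical output dir
```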
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+ "bos_token": "<|endoftext|>",
+ "eos_token": "<|endoftext|>",
+ "unk_token": "<|endoftext|>"
+ }
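
As with stock GPT-2, a single `<|endoftext|>` token fills the BOS, EOS, and UNK roles. A quick sanity check after loading, reusing the output directory from the sketch above (the path is an assumption):

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("./gpt2-extended")  # hypothetical path

# All three special roles resolve to the same string and ID (50256 in GPT-2).
print(tok.bos_token, tok.eos_token, tok.unk_token)  # <|endoftext|> three times
print(tok.bos_token_id == tok.eos_token_id == tok.unk_token_id)  # True
```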
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "add_prefix_space": false,
+ "bos_token": "<|endoftext|>",
+ "eos_token": "<|endoftext|>",
+ "model_max_length": 1024,
+ "name_or_path": "gpt2",
+ "special_tokens_map_file": null,
+ "tokenizer_class": "GPT2Tokenizer",
+ "unk_token": "<|endoftext|>"
+ }
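
tokenizer_config.json pins the behavior down: `name_or_path: gpt2` and `tokenizer_class: GPT2Tokenizer` identify the base tokenizer, `model_max_length: 1024` matches GPT-2's context window, and `add_prefix_space: false` means text is tokenized as-is, with no implied leading space. A short illustration of the last two settings (same assumed path as above):

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("./gpt2-extended")  # hypothetical path

print(tok.model_max_length)  # 1024, read from tokenizer_config.json

# add_prefix_space=False: a leading space must be present in the text itself.
# In GPT-2's byte-level BPE, "G-dot" (Ġ) marks a leading space in a token.
print(tok.tokenize("hello"))   # ['hello']
print(tok.tokenize(" hello"))  # ['Ġhello']
```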
vocab.json ADDED
The diff for this file is too large to render. See raw diff
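
vocab.json and merges.txt together define the byte-level BPE that tokenizer.json bundles into a single fast-tokenizer file: vocab.json maps token strings to IDs, and merges.txt lists merge rules in priority order. A sketch for inspecting the raw files (file names as in this commit):

```python
import json

with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)
print(len(vocab))  # 50257 for the GPT-2 base; added tokens live in added_tokens.json

with open("merges.txt", encoding="utf-8") as f:
    merges = [line.rstrip("\n") for line in f if not line.startswith("#")]
print(merges[0])  # highest-priority merge; "Ġ t" in stock GPT-2
```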