kohbanye / SmilesTokenizer_PubChem_1M
Tags: PyTorch · roberta
Repository: 334 MB · 3 contributors · History: 12 commits
Latest commit: de5c2cf by kohbanye, 7 months ago: "fix: correct vocab size in config.json and update token IDs in tokenizer.json"
File                      Size        Last commit message                                                              Updated
.gitattributes            690 Bytes   initial commit                                                                   over 4 years ago
README.md                 205 Bytes   update README                                                                    7 months ago
added_tokens.json         25 Bytes    duplicate smiles-tokenizer 1m model                                              over 4 years ago
config.json               664 Bytes   fix: correct vocab size in config.json and update token IDs in tokenizer.json   7 months ago
merges.txt                52 Bytes    duplicate smiles-tokenizer 1m model                                              over 4 years ago
pytorch_model.bin         334 MB      duplicate smiles-tokenizer 1m model                                              over 4 years ago
special_tokens_map.json   420 Bytes   duplicate smiles-tokenizer 1m model                                              over 4 years ago
tokenizer.json            15.4 kB     fix: correct vocab size in config.json and update token IDs in tokenizer.json   7 months ago
tokenizer_config.json     1.27 kB     duplicate smiles-tokenizer 1m model                                              over 4 years ago
vocab.json                6.96 kB     duplicate smiles-tokenizer 1m model                                              over 4 years ago

pytorch_model.bin is a pickle-based checkpoint. Detected Pickle imports (4): collections.OrderedDict, torch.FloatStorage, torch.LongStorage, torch._utils._rebuild_tensor_v2.