
kohbanye/SmilesTokenizer_PubChem_1M

PyTorch
roberta
334 MB · 3 contributors · 12 commits
Latest commit by kohbanye: "fix: correct vocab size in config.json and update token IDs in tokenizer.json" (de5c2cf, 7 months ago)
  • .gitattributes
    690 Bytes
    initial commit over 4 years ago
  • README.md
    205 Bytes
    update README 7 months ago
  • added_tokens.json
    25 Bytes
    duplicate smiles-tokenizer 1m model over 4 years ago
  • config.json
    664 Bytes
    fix: correct vocab size in config.json and update token IDs in tokenizer.json 7 months ago
  • merges.txt
    52 Bytes
    duplicate smiles-tokenizer 1m model over 4 years ago
  • pytorch_model.bin
    334 MB
    Detected pickle imports (4): collections.OrderedDict, torch.FloatStorage, torch.LongStorage, torch._utils._rebuild_tensor_v2
    duplicate smiles-tokenizer 1m model over 4 years ago
  • special_tokens_map.json
    420 Bytes
    duplicate smiles-tokenizer 1m model over 4 years ago
  • tokenizer.json
    15.4 kB
    fix: correct vocab size in config.json and update token IDs in tokenizer.json 7 months ago
  • tokenizer_config.json
    1.27 kB
    duplicate smiles-tokenizer 1m model over 4 years ago
  • vocab.json
    6.96 kB
    duplicate smiles-tokenizer 1m model over 4 years ago
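The files above (vocab.json, merges.txt, tokenizer.json, config.json, pytorch_model.bin) form a standard RoBERTa-style checkpoint, so the repo would normally be loaded with transformers' AutoTokenizer/AutoModel. As background, SMILES tokenizers are commonly built on an atom-level regex rather than generic whitespace splitting. The sketch below illustrates that general technique only; the regex and tokens are a common convention from the SMILES-modeling literature, not the actual vocabulary of kohbanye/SmilesTokenizer_PubChem_1M.

```python
import re

# A minimal sketch of atom-level SMILES tokenization. Multi-character atoms
# (Br, Cl, bracket atoms like [nH]) must be matched before single characters,
# which is why they appear first in the alternation. This is an illustrative
# assumption about how such tokenizers work, not this checkpoint's exact rules.
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into atom/bond/ring-closure tokens."""
    return SMILES_PATTERN.findall(smiles)

# Aspirin as an example input:
print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
# → ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c',
#    '1', 'C', '(', '=', 'O', ')', 'O']
```

A tokenizer like this is lossless: joining the tokens reproduces the original SMILES string, which makes it convenient for round-tripping model inputs and outputs.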