---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: aa_seqs
    dtype: string
  splits:
  - name: train
    num_bytes: 61101706188
    num_examples: 9920628
  download_size: 5540646354
  dataset_size: 61101706188
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
Roughly 10 million examples sampled at random from the UniRef50 representative sequences (October 2023 release), with computed [selfies](https://github.com/aspuru-guzik-group/selfies) strings. The selfies strings are stored as input IDs from a custom selfies tokenizer. A BERT tokenizer with this vocabulary has been uploaded to the files of this dataset repository.

You can access the tokenizer like this:

```python
import os
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo_path = 'Synthyra/ProteinSelfies'
local_path = 'ProteinSelfies'
files = ['special_tokens_map.json', 'tokenizer_config.json', 'vocab.txt']
os.makedirs(local_path, exist_ok=True)

# Download the tokenizer files from this dataset repository
for file in files:
    hf_hub_download(
        repo_id=repo_path,
        filename=file,
        repo_type='dataset',
        local_dir=local_path
    )

# Load the downloaded files as a standard transformers tokenizer
tokenizer = AutoTokenizer.from_pretrained(local_path)
```
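
Once the tokenizer is loaded, you can stream examples and decode the stored `input_ids` back into a selfies string. This is a minimal sketch assuming the default `train` split and the `tokenizer` from the snippet above:

```python
from datasets import load_dataset

# Stream the dataset to avoid downloading all shards up front (~5.5 GB)
ds = load_dataset('Synthyra/ProteinSelfies', split='train', streaming=True)

example = next(iter(ds))
print(example['aa_seqs'])                            # original amino acid sequence
print(tokenizer.decode(example['input_ids'])[:200])  # decoded selfies tokens as text
```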

Intended for atom-wise protein language modeling.
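
For masked-language-model pretraining on these token IDs, one possible setup uses `DataCollatorForLanguageModeling` from `transformers`. This is only a sketch, assuming the tokenizer defines the usual BERT `[PAD]`/`[MASK]` special tokens and reusing `ds` and `tokenizer` from the snippets above:

```python
from transformers import DataCollatorForLanguageModeling

# Standard BERT-style masking: 15% of selfies tokens are masked per example
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)

# Collate a few streamed examples, truncating to 512 tokens for illustration
batch = [{'input_ids': ex['input_ids'][:512]} for _, ex in zip(range(4), ds)]
inputs = collator(batch)
print(inputs['input_ids'].shape, inputs['labels'].shape)
```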