ZhiyuanChen committed · verified · Commit d90b00f · 1 Parent(s): 78423c5

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,277 @@
---
language: rna
tags:
  - Biology
  - RNA
license: agpl-3.0
datasets:
  - multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
  - example_title: "HIV-1"
    text: "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU"
    output:
      - label: "<eos>"
        score: 0.19942431151866913
      - label: "I"
        score: 0.1465310901403427
      - label: "*"
        score: 0.1448192000389099
      - label: "<unk>"
        score: 0.14174020290374756
      - label: "<cls>"
        score: 0.13194777071475983
  - example_title: "microRNA-21"
    text: "UAGC<mask>UAUCAGACUGAUGUUG"
    output:
      - label: "<eos>"
        score: 0.19946657121181488
      - label: "I"
        score: 0.14641942083835602
      - label: "*"
        score: 0.14452320337295532
      - label: "<unk>"
        score: 0.14180712401866913
      - label: "<cls>"
        score: 0.13223469257354736
---

# ncRNABert

Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.

## Disclaimer

This is an UNOFFICIAL implementation of ncRNABert.

The OFFICIAL repository of ncRNABert is at [wangleiofficial/ncRNABert](https://github.com/wangleiofficial/ncRNABert).

> [!WARNING]
> The MultiMolecule team is aware of a potential risk in reproducing the results of ncRNABert.
>
> ncRNABert applies `softmax` along the `-2` dimension when computing the attention probabilities. This makes the output of `attention_probs @ value_layer` unreliable when the input sequences are not of the same length (i.e., contain padding tokens).
> MultiMolecule applies a workaround to ensure that the attention masks are applied correctly, but this may lead to results that differ from the original implementation.

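The issue is easy to see in isolation. Below is a minimal sketch (ours, not ncRNABert's actual code; the tensor shapes and the `-1e9` mask value are illustrative): with the standard `softmax` over the last dimension, masking padded keys is sufficient, whereas with `softmax` over dimension `-2`, padded *query* rows still enter each column's normalization and change the outputs at the real positions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8
q, k, v = (torch.randn(5, d) for _ in range(3))  # a length-5 sequence
scores = q @ k.T / d**0.5
ref_rows = F.softmax(scores, dim=-1) @ v  # standard attention, no padding
ref_cols = F.softmax(scores, dim=-2) @ v  # ncRNABert-style, no padding

# The same sequence padded to length 7, with the usual key-side mask.
qp, kp, vp = (torch.cat([t, torch.randn(2, d)]) for t in (q, k, v))
scores_p = qp @ kp.T / d**0.5
scores_p[:, 5:] = -1e9  # mask out padded key positions

rows_p = (F.softmax(scores_p, dim=-1) @ vp)[:5]
cols_p = (F.softmax(scores_p, dim=-2) @ vp)[:5]

print(torch.allclose(ref_rows, rows_p, atol=1e-6))  # True: rows unaffected by padding
print(torch.allclose(ref_cols, cols_p, atol=1e-6))  # False: padded queries leak in
```
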
> [!CAUTION]
> The MultiMolecule team is aware of a potential risk in reproducing the results of ncRNABert.
>
> The original implementation of ncRNABert does not prepend `<bos>` (`<cls>`) and append `<eos>` tokens to the input sequence.
> This should not affect the performance of the model in most cases, but it can lead to unexpected behavior in some cases.
>
> Please set `bos_token=None, eos_token=None` in the tokenizer and set `bos_token_id=None, eos_token_id=None` in the model configuration if you want the exact behavior of the original implementation.

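For example, the overrides described above can be passed straight through `from_pretrained` (a sketch; forwarding such keyword arguments is standard Hugging Face behavior):

```python
from multimolecule import RnaTokenizer, NcRnaBertModel

# Match the original implementation: no <cls>/<eos> wrapping.
tokenizer = RnaTokenizer.from_pretrained(
    "multimolecule/ncrnabert", bos_token=None, eos_token=None
)
model = NcRnaBertModel.from_pretrained(
    "multimolecule/ncrnabert", bos_token_id=None, eos_token_id=None
)
```
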
> [!TIP]
> The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.

**The team releasing ncRNABert did not write a model card for this model, so this model card has been written by the MultiMolecule team.**

## Model Details

ncRNABert is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

### Variants

- **[multimolecule/ncrnabert](https://huggingface.co/multimolecule/ncrnabert)**: The ncRNABert model pre-trained on single nucleotide data.
- **[multimolecule/ncrnabert-3mer](https://huggingface.co/multimolecule/ncrnabert-3mer)**: The ncRNABert model pre-trained on 3-mer data.

### Model Specification

<table>
<thead>
  <tr>
    <th>Variants</th>
    <th>Num Layers</th>
    <th>Hidden Size</th>
    <th>Num Heads</th>
    <th>Intermediate Size</th>
    <th>Num Parameters (M)</th>
    <th>FLOPs (G)</th>
    <th>MACs (G)</th>
    <th>Max Num Tokens</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>ncRNABert</td>
    <td rowspan="2">24</td>
    <td rowspan="2">1024</td>
    <td rowspan="2">16</td>
    <td rowspan="2">4096</td>
    <td rowspan="2">303.31</td>
    <td rowspan="2">78.96</td>
    <td rowspan="2">39.46</td>
    <td rowspan="2">512</td>
  </tr>
  <tr>
    <td>ncRNABert-3mer</td>
  </tr>
</tbody>
</table>

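These hyperparameters can be cross-checked against the shipped `config.json` (a sketch; it assumes the `multimolecule` import registers the configuration with `AutoConfig`, as the fill-mask pipeline example below relies on for models):

```python
import multimolecule  # registers ncRNABert classes with transformers
from transformers import AutoConfig

config = AutoConfig.from_pretrained("multimolecule/ncrnabert")
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
# Expected from config.json: 24 1024 16
```
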
### Links

- **Code**: [multimolecule.ncrnabert](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/ncrnabert)
- **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral)
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased)
- **Original Repository**: [wangleiofficial/ncRNABert](https://github.com/wangleiofficial/ncRNABert)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

#### Masked Language Modeling

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="multimolecule/ncrnabert")
>>> unmasker("gguc<mask>cucugguuagaccagaucugagccu")
[{'score': 0.19942431151866913,
  'token': 2,
  'token_str': '<eos>',
  'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.1465310901403427,
  'token': 25,
  'token_str': 'I',
  'sequence': 'G G U C I C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.1448192000389099,
  'token': 23,
  'token_str': '*',
  'sequence': 'G G U C * C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.14174020290374756,
  'token': 3,
  'token_str': '<unk>',
  'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.13194777071475983,
  'token': 1,
  'token_str': '<cls>',
  'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'}]
```

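Note that `RnaTokenizer` converts `T` to `U` by default (`replace_T_with_U` is `true` in the shipped tokenizer config), so the same call also accepts DNA-alphabet input:

```python
>>> unmasker("ggtc<mask>ctctggttagaccagatctgagcct")  # T is tokenized as U
```
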
### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, NcRnaBertModel


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertModel.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")

output = model(**input)
```

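The output follows the usual Transformers conventions; for example, a fixed-size sequence embedding can be pooled from the final hidden states (a sketch; mean pooling is one common choice, not a prescribed API):

```python
# output.last_hidden_state has shape (batch, seq_len, hidden_size) = (1, L, 1024)
embedding = output.last_hidden_state.mean(dim=1)  # (1, 1024) per-sequence embedding
```
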
#### Sequence Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, NcRnaBertForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertForSequencePrediction.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])

output = model(**input, labels=label)
```

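When `labels` is supplied, the returned output carries both the loss and the raw predictions (standard Transformers head outputs; the attribute names below are the usual ones, assumed here):

```python
print(output.loss)          # scalar training loss
print(output.logits.shape)  # (batch_size, num_labels)
```
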
#### Token Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, NcRnaBertForTokenPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertForTokenPrediction.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))

output = model(**input, labels=label)
```

#### Contact Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, NcRnaBertForContactPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ncrnabert")
model = NcRnaBertForContactPrediction.from_pretrained("multimolecule/ncrnabert")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
```

## Training Details

ncRNABert used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

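In equation form (notation ours; this is the standard MLM objective), with $\mathcal{M}$ the set of masked positions in a sequence $x$, the model minimizes the cross-entropy over the masked tokens only:

$$
\mathcal{L}_{\mathrm{MLM}} = -\mathbb{E}_{x,\,\mathcal{M}} \left[ \sum_{i \in \mathcal{M}} \log p_\theta\left(x_i \mid x_{\setminus \mathcal{M}}\right) \right]
$$
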
### Training Data

The ncRNABert model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral).
RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.

### Training Procedure

#### Preprocessing

ncRNABert used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT, as sketched after this list:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

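A minimal sketch of this procedure (ours, not ncRNABert's training code; the token-id ranges follow the shipped `vocab.txt`, where ids 0-5 are special tokens and ids 6-25 are sequence symbols):

```python
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int = 4, vocab_size: int = 26):
    """BERT-style 80/10/10 masking over 15% of non-special tokens."""
    labels = input_ids.clone()
    special = input_ids < 6  # ids 0-5 are <pad>/<cls>/<eos>/<unk>/<mask>/<null>
    selected = ~special & (torch.rand(input_ids.shape) < 0.15)
    labels[~selected] = -100  # loss is computed on selected positions only

    corrupted = input_ids.clone()
    roll = torch.rand(input_ids.shape)
    corrupted[selected & (roll < 0.8)] = mask_token_id    # 80%: replace with <mask>
    random_ids = torch.randint(6, vocab_size, input_ids.shape)
    use_random = selected & (roll >= 0.8) & (roll < 0.9)  # 10%: random token
    corrupted[use_random] = random_ids[use_random]        # (may equal the original here)
    # Remaining 10%: left unchanged.
    return corrupted, labels
```
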
## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
config.json ADDED
@@ -0,0 +1,34 @@
{
  "architectures": [
    "NcRnaBertForPreTraining"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head": null,
  "hidden_act": "gelu",
  "hidden_dropout": 0.0,
  "hidden_size": 1024,
  "id2label": null,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "label2id": null,
  "layer_norm_eps": 1e-12,
  "lm_head": null,
  "mask_token_id": 4,
  "max_position_embeddings": 512,
  "model_type": "ncrnabert",
  "nmers": null,
  "null_token_id": 5,
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "num_labels": 1,
  "pad_token_id": 0,
  "position_embedding_type": "rotary",
  "torch_dtype": "float32",
  "transformers_version": "4.50.0",
  "type_vocab_size": 2,
  "unk_token_id": 3,
  "use_cache": true,
  "vocab_size": 26
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33ed75d61a04a236971b1eda7a03028e115062ff3b66c06e0fcae19fab7b721f
size 1213308032
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33053b6b7d5e0be9a55a2889df390e9150d2343c6516e74f532fec5fb2afd910
size 1213380458
special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
{
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "cls_token": "<cls>",
  "eos_token": "<eos>",
  "mask_token": "<mask>",
  "pad_token": "<pad>",
  "sep_token": "<eos>",
  "unk_token": "<unk>"
}
tokenizer_config.json ADDED
@@ -0,0 +1,69 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<cls>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<eos>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "<mask>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "5": {
      "content": "<null>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<cls>",
  "codon": false,
  "eos_token": "<eos>",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 512,
  "nmers": 1,
  "pad_token": "<pad>",
  "replace_T_with_U": true,
  "sep_token": "<eos>",
  "tokenizer_class": "RnaTokenizer",
  "unk_token": "<unk>"
}
vocab.txt ADDED
@@ -0,0 +1,26 @@
<pad>
<cls>
<eos>
<unk>
<mask>
<null>
A
C
G
U
N
R
Y
S
W
K
M
B
D
H
V
.
X
*
-
I