# MolGen-large-opt

MolGen-large-opt was introduced in the paper ["Molecular Language Model as Multi-task Generator"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen).

## Model description

MolGen-large-opt is the fine-tuned version of MolGen-large, the first pre-trained model that produces only chemically valid molecules.
With a training corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms.
Specifically, MolGen-large employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder.
Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen-large-opt can generate molecules with desired properties, making it a valuable tool for molecular optimization.
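
Because the model consumes and emits SELFIES rather than SMILES, inputs generally need a conversion step first. Here is a minimal sketch, assuming the open-source `selfies` Python package (an assumption; it is not bundled with this model):

```python
>>> import selfies as sf

>>> # SMILES -> SELFIES: the robust string representation MolGen is trained on
>>> sf.encoder("CCO")  # ethanol
'[C][C][O]'
```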



## Intended uses

You can use the fine-tuned model for molecule optimization on downstream tasks. See the [repository](https://github.com/zjunlp/MolGen) for fine-tuning details on a task that interests you.

### How to use

Molecule optimization example:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large-opt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large-opt")

>>> # the input molecule, given in SELFIES representation
>>> sf_input = tokenizer("[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]", return_tensors="pt")
>>> # beam search over candidate optimizations
>>> molecules = model.generate(input_ids=sf_input["input_ids"],
...                            attention_mask=sf_input["attention_mask"],
...                            max_length=35,
...                            min_length=5,
...                            num_return_sequences=5,
...                            num_beams=5)
>>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ", "") for g in molecules]
>>> sf_output
['[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]']
```
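
The generated sequences are themselves SELFIES strings. To inspect them as conventional SMILES, you can decode them, a minimal sketch again assuming the `selfies` package:

```python
>>> import selfies as sf

>>> # SELFIES -> SMILES for each beam-search candidate
>>> smiles = [sf.decoder(s) for s in sf_output]
```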