doammii committed on
Commit 3c14934 · verified · 1 Parent(s): 2f1efad

Update README.md

Files changed (1):
  1. README.md +5 -12

README.md CHANGED
@@ -8,19 +8,14 @@ tags:
  - molecular generation
  - machine learning
  - transformers
- - generation
  - de novo molecular design
  ---
- # Llamol
-
- <p align="center">
- <img src="assets/llamol.png" width="300" height="300" alt="LLamol">
- </p>
 
+ # Llamol
  This is the official repository for the paper ["LLamol: A Dynamic Multi-Conditional Generative Transformer for De Novo Molecular Design"](https://arxiv.org/abs/2311.14407).
+ ※ Uploaded with permission from the author of the paper
  In this repository are the weights for LLamol (`out/llama2-M-Full-RSS.pt`) and the dataset OrganiX13.
 
- Image made with [Hotspot.ai](https://hotpot.ai/art-generator)
  ## Installation
  Install using Mamba to be fast: https://mamba.readthedocs.io/en/latest/micromamba-installation.html
 
@@ -31,20 +26,20 @@ $ micromamba env create -f torch2-env.yaml
  $ micromamba activate torch2-llamol
  $ python sample.py
  ```
+
  # Download and preprocess the OrganiX13 dataset:
- If you want to train with the full 13 Million dataset do the following steps. These are *not* necessary if you just want to use the model for inference:
+ If you want to train with the full 13 million molecule dataset, follow the steps below.
+ These are *not* necessary if you just want to use the model for inference:
  1. Download and preprocess the OPV dataset by running `/data/opv/prepare_opv.py`
  2. Download and preprocess the ZINC dataset by running `/data/zinc/zinc_complete/run_download.py` followed by `/data/zinc/convert_to_parquet.py` (we recommend at least 16 GB of RAM for this)
  3. Download and preprocess the QM9/ZINC250k/CEP datasets by running `/data/qm9_zinc250k_cep/convert_to_parquet.py`
-
  4. Run `data/combine_all.py` to combine the datasets into `data/OrganiX13.parquet` (this can take a while, especially the ZINC part; in total it took ~2 hours on my laptop, which has 16 GB of RAM and an Intel i7 10th Gen)
  5. Run `preprocess_dataset.py`, which should create the file `.cache/processed_dataset_None.pkl`
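Conceptually, the combine step in the list above concatenates the per-source SMILES tables into one file. The sketch below illustrates that idea over plain Python lists; whether `data/combine_all.py` also de-duplicates is an assumption, and `combine_sources` is a hypothetical helper (the real script operates on parquet files):

```python
def combine_sources(*sources):
    # Concatenate SMILES lists from several sources, dropping duplicates
    # and keeping the first occurrence -- a stand-in for the core idea
    # of data/combine_all.py (de-duplication is assumed, not documented).
    seen, combined = set(), []
    for source in sources:
        for smiles in source:
            if smiles not in seen:
                seen.add(smiles)
                combined.append(smiles)
    return combined

print(combine_sources(["CCO", "c1ccccc1"], ["CCO", "CC(=O)O"]))
```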
 
  Now you can use that file in training by specifying it under the `processed_dataset_ckpt` key of the training .yaml files.
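For example, a hypothetical excerpt of a `config/train` .yaml file; only the `processed_dataset_ckpt` key and the cache path are taken from this README, the surrounding structure is illustrative:

```yaml
# Hypothetical training-config excerpt; only processed_dataset_ckpt
# is documented in this README.
processed_dataset_ckpt: .cache/processed_dataset_None.pkl
```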
 
 
-
  # Interactive Demo
 
  After installation you can play around with the model using the `demonstrator.ipynb` file. Just run all cells and scroll down to the last cell.
@@ -118,8 +113,6 @@ The different methods are implemented in the `fragment_creator.py` file.
  Each of the models was trained with its respective configuration in the `config/train` folder.
 
  # Thanks
-
-
  - [Karpathy](https://github.com/karpathy/llama2.c) for the implementation of the Llama 2 architecture and training code
 
  - [DeepChem](https://github.com/deepchem/deepchem) for the SmilesTokenizer
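The SmilesTokenizer credited above is regex-based; the sketch below illustrates the general idea with a simplified, hypothetical pattern (not DeepChem's exact regex):

```python
import re

# Simplified SMILES token pattern: bracket atoms, two-letter elements,
# chirality marks, ring-bond labels, then single characters.
# Hypothetical pattern for illustration only -- not DeepChem's regex.
SMILES_PATTERN = re.compile(
    r"\[[^\]]+\]|Br|Cl|Si|@@|%\d{2}|[A-Za-z]|\d|[=#\-\+\(\)/\\\.]"
)

def tokenize(smiles: str) -> list[str]:
    # Split a SMILES string into tokens by scanning for pattern matches.
    return SMILES_PATTERN.findall(smiles)

print(tokenize("CC(=O)Oc1ccccc1"))
```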