---
license: apache-2.0
datasets:
- jheuschkel/cds-dataset
language:
- en
pipeline_tag: fill-mask
tags:
- codon
- biology
- synthetic
- dna
- mrna
- optimization
- codon-optimization
- codon-embedding
- codon-representation
- codon-language-model
- codon-language
misc:
- codon
---
# Model Card for SynCodonLM

- This repository contains code to use the model and to reproduce the results of the preprint [**Advancing Codon Language Modeling with Synonymous Codon Constrained Masking**](https://doi.org/10.1101/2025.08.19.671089).
- Unlike other codon language models, SynCodonLM was trained with logit-level control: logits for non-synonymous codons are masked out, which allows the model to learn codon-specific patterns disentangled from protein-level semantics.
- [The pre-training dataset of 66 million CDS is available on Hugging Face here.](https://huggingface.co/datasets/jheuschkel/cds-dataset)
---
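As a rough illustration of the logit-level constraint described above (a minimal sketch, not the actual training code), masked codons can be restricted to synonymous options by setting the logits of all non-synonymous codons to negative infinity before the softmax. The codon-table subset and function names below are hypothetical:

```python
import math

# Tiny subset of the standard codon table (illustration only).
CODON_TO_AA = {
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine
    "TGG": "W",                                       # tryptophan
}
CODONS = list(CODON_TO_AA)

def constrained_softmax(logits, target_aa):
    """Mask non-synonymous codons, then softmax over the remainder."""
    masked = [
        logit if CODON_TO_AA[codon] == target_aa else float("-inf")
        for codon, logit in zip(CODONS, logits)
    ]
    exps = [math.exp(x) if x != float("-inf") else 0.0 for x in masked]
    total = sum(exps)
    return [e / total for e in exps]

probs = constrained_softmax([1.0, 2.0, 0.5, 0.1, 3.0], target_aa="A")
# "TGG" (tryptophan) gets probability 0; the four alanine codons sum to 1.
```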
## Installation

```bash
git clone https://github.com/Boehringer-Ingelheim/SynCodonLM.git
cd SynCodonLM
pip install -r requirements.txt  # may not be necessary depending on your env :)
```
---
# Usage
#### SynCodonLM uses token-type IDs to add species-specific codon context to its predictions.
###### Before use, find the token type ID (`species_token_type`) for your species of interest [here](https://github.com/Boehringer-Ingelheim/SynCodonLM/blob/master/SynCodonLM/species_token_type.py)!
###### Or use the list of model organisms below.
---
## Embedding a Coding DNA Sequence
```python
from SynCodonLM import CodonEmbeddings

model = CodonEmbeddings()  # loads the model & tokenizer using the built-in functions

seq = 'ATGTCCACCGGGCGGTGA'

mean_pooled_embedding = model.get_mean_embedding(seq, species_token_type=67)  # E. coli
# returns --> tensor of shape [768]

raw_output = model.get_raw_embeddings(seq, species_token_type=67)  # E. coli
raw_embedding_final_layer = raw_output.hidden_states[-1]  # treat this like a typical Hugging Face dictionary-based model output!
# returns --> tensor of shape [batch size (1), sequence length, 768]
```
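For reference, a mean-pooled embedding like the one above can be computed from the final hidden layer directly. The sketch below uses a dummy array in place of `raw_output.hidden_states[-1]` and an assumed 0/1 padding mask; it illustrates attention-masked mean pooling, not the actual SynCodonLM internals:

```python
import numpy as np

def mean_pool(hidden, mask):
    """Average token embeddings over non-padding positions.

    hidden: [batch, seq_len, dim]; mask: [batch, seq_len] of 0/1.
    """
    mask = mask[..., None].astype(hidden.dtype)   # [batch, seq_len, 1]
    summed = (hidden * mask).sum(axis=1)          # [batch, dim]
    counts = np.clip(mask.sum(axis=1), 1, None)   # avoid divide-by-zero
    return summed / counts

# Dummy data standing in for raw_output.hidden_states[-1]
hidden = np.ones((1, 6, 768))
mask = np.array([[1, 1, 1, 1, 0, 0]])  # last two positions are padding
pooled = mean_pool(hidden, mask)
print(pooled.shape)  # (1, 768)
```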
## Codon Optimizing a Protein Sequence
###### This has not yet been rigorously evaluated, although we can confidently say it will generate natural-looking coding DNA sequences.
```python
from SynCodonLM import CodonOptimizer

optimizer = CodonOptimizer()  # loads the model & tokenizer using the built-in functions

result = optimizer.optimize(
    protein_sequence="MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTLVTTFSYGVQCFSRYPDHMKRHDFFKSAMPEGYVQERTIFFKDDGNYKTRAEVKFEGDTLVNRIELKGIDFKEDGNILGHKLEYNYNSHNVYIMADKQKNGIKVNFKIRHNIEDGSVQLADHYQQNTPIGDGPVLLPDNHYLSTQSALSKDPNEKRDHMVLLEFVTAAGITLGMDELYK",  # GFP
    species_token_type=67,  # E. coli
    deterministic=True,  # True by default
)
codon_optimized_sequence = result.sequence
```

## Citation
If you use this work, please cite:
```bibtex
@article{Heuschkel2025.08.19.671089,
  author = {Heuschkel, James and Kingsley, Laura and Pefaur, Noah and Nixon, Andrew and Cramer, Steven},
  title = {Advancing Codon Language Modeling with Synonymous Codon Constrained Masking},
  elocation-id = {2025.08.19.671089},
  year = {2025},
  doi = {10.1101/2025.08.19.671089},
  publisher = {Cold Spring Harbor Laboratory},
  abstract = {Codon language models offer a promising framework for modeling protein-coding DNA sequences, yet current approaches often conflate codon usage with amino acid semantics, limiting their ability to capture DNA-level biology. We introduce SynCodonLM, a codon language model that enforces a biologically grounded constraint: masked codons are only predicted from synonymous options, guided by the known protein sequence. This design disentangles codon-level from protein-level semantics, enabling the model to learn nucleotide-specific patterns. The constraint is implemented by masking non-synonymous codons from the prediction space prior to softmax. Unlike existing models, which cluster codons by amino acid identity, SynCodonLM clusters by nucleotide properties, revealing structure aligned with DNA-level biology. Furthermore, SynCodonLM outperforms existing models on 6 of 7 benchmarks sensitive to DNA-level features, including mRNA and protein expression. Our approach advances domain-specific representation learning and opens avenues for sequence design in synthetic biology, as well as deeper insights into diverse bioprocesses.},
  URL = {https://www.biorxiv.org/content/early/2025/08/24/2025.08.19.671089},
  eprint = {https://www.biorxiv.org/content/early/2025/08/24/2025.08.19.671089.full.pdf},
  journal = {bioRxiv}
}
```

----
#### Model Organism Species Token-Type IDs
| Organism | Token-Type ID |
|-------------------------|----------------|
| *E. coli* | 67 |
| *S. cerevisiae* | 108 |
| *C. elegans* | 187 |
| *D. melanogaster* | 178 |
| *D. rerio* | 468 |
| *M. musculus* | 321 |
| *A. thaliana* | 266 |
| *H. sapiens* | 317 |
| *C. griseus* | 394 |
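If you use these IDs often, a small lookup dict (a hypothetical convenience mapping of the table above, not part of the SynCodonLM package) keeps calls readable:

```python
# Hypothetical mapping of the model-organism table above (not part of SynCodonLM).
SPECIES_TOKEN_TYPE = {
    "E. coli": 67,
    "S. cerevisiae": 108,
    "C. elegans": 187,
    "D. melanogaster": 178,
    "D. rerio": 468,
    "M. musculus": 321,
    "A. thaliana": 266,
    "H. sapiens": 317,
    "C. griseus": 394,
}
print(SPECIES_TOKEN_TYPE["E. coli"])  # 67
```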