---
license: mit
language: en
tags:
  - peptide
  - biology
  - drug-discovery
  - HELM
  - helm-notation
  - cyclic-peptide
  - peptide-language-model
pipeline_tag: fill-mask
widget:
  - text: "PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$"
---

# HELM-BERT

A language model for peptide representation learning using **HELM (Hierarchical Editing Language for Macromolecules)** notation.

[![GitHub](https://img.shields.io/badge/GitHub-clinfo%2FHELM--BERT-black?logo=github)](https://github.com/clinfo/HELM-BERT)

## Model Description

HELM-BERT builds on the DeBERTa architecture, adapted for peptide sequences in HELM notation:

- **Disentangled Attention**: Decomposes attention into content-content and content-position terms
- **Enhanced Mask Decoder (EMD)**: Injects absolute position embeddings at the decoder stage
- **Span Masking**: Contiguous token masking with span lengths drawn from a geometric distribution (see the sketch after this list)
- **nGiE**: n-gram Induced Encoding layer (1D convolution, kernel size 3)
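
For intuition, here is a minimal sketch of span masking with geometrically distributed span lengths (SpanBERT-style selection); the function and all hyperparameter values (`mask_ratio`, `p`, `max_span`) are illustrative assumptions, not HELM-BERT's published settings.

```python
import numpy as np

def sample_span_mask(num_tokens: int, mask_ratio: float = 0.15,
                     p: float = 0.2, max_span: int = 10) -> set:
    """Pick contiguous spans to mask until roughly `mask_ratio` of the
    tokens are covered. Span lengths are drawn from a geometric
    distribution and clipped at `max_span`. All values are assumed."""
    rng = np.random.default_rng()
    budget = max(1, int(num_tokens * mask_ratio))
    masked = set()
    while len(masked) < budget:  # may overshoot slightly on the last span
        span_len = min(int(rng.geometric(p)), max_span, num_tokens)
        start = int(rng.integers(0, num_tokens - span_len + 1))
        masked.update(range(start, start + span_len))
    return masked

print(sorted(sample_span_mask(50)))
```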

## Model Specifications

| Parameter | Value |
|-----------|-------|
| Parameters | 54.8M |
| Hidden size | 768 |
| Layers | 6 |
| Attention heads | 12 |
| Vocab size | 78 |
| Max token length | 512 |
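
The table can be sanity-checked against the downloaded configuration; the field names below are the standard Hugging Face DeBERTa config attributes, assumed (not verified) to apply to this checkpoint.

```python
from transformers import AutoConfig

# Compare the published specs with the model config.
config = AutoConfig.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
print(config.hidden_size)          # expected: 768
print(config.num_hidden_layers)    # expected: 6
print(config.num_attention_heads)  # expected: 12
print(config.vocab_size)           # expected: 78
```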

## How to Use

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Flansma/helm-bert", trust_remote_code=True)

# Cyclosporine A
inputs = tokenizer("PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # (batch_size, seq_len, hidden_size)
```
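
`last_hidden_state` holds per-token embeddings. To reduce them to a single fixed-size peptide vector, one common option (an assumption here, not a documented recommendation for this model) is attention-mask-weighted mean pooling, continuing from the example above:

```python
# Mask-weighted mean pooling over tokens -> one vector per sequence.
# Reuses `inputs` and `embeddings` from the snippet above.
mask = inputs["attention_mask"].unsqueeze(-1).to(embeddings.dtype)
peptide_embedding = (embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(peptide_embedding.shape)  # torch.Size([1, 768])
```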

## Training Data

Pretrained on deduplicated peptide sequences from:
- **ChEMBL**: Bioactive molecules database
- **CycPeptMPDB**: Cyclic peptide membrane permeability database
- **Propedia**: Protein-peptide interaction database

## Citation

```bibtex
@article{lee2025helmbert,
  title={HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction},
  author={Seungeon Lee and Takuto Koyama and Itsuki Maeda and Shigeyuki Matsumoto and Yasushi Okuno},
  journal={arXiv preprint arXiv:2512.23175},
  year={2025},
  url={https://arxiv.org/abs/2512.23175}
}
```

## License

MIT License