Update README

README.md CHANGED

@@ -18,6 +18,8 @@ widget:
 
 A language model for peptide representation learning using **HELM (Hierarchical Editing Language for Macromolecules)** notation.
 
+[](https://github.com/clinfo/HELM-BERT)
+
 ## Model Description
 
 HELM-BERT is built upon the DeBERTa architecture, designed for peptide sequences in HELM notation:
@@ -27,8 +29,6 @@ HELM-BERT is built upon the DeBERTa architecture, designed for peptide sequences
 - **Span Masking**: Contiguous token masking with geometric distribution
 - **nGiE**: n-gram Induced Encoding layer (1D convolution, kernel size 3)
 
-Please check the [official repository](https://github.com/clinfo/HELM-BERT) for more implementation details and updates.
-
 ## Model Specifications
 
 | Parameter | Value |
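For context on what the model consumes: inputs are peptides written as HELM strings rather than plain amino-acid sequences. A minimal, hypothetical feature-extraction sketch follows; the Hub id `clinfo/HELM-BERT` is a placeholder inferred from the repository name and is not confirmed by this commit.

```python
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint id -- see the official repository for the
# actual location of the released weights.
model_name = "clinfo/HELM-BERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# A simple linear tripeptide (Ala-Gly-Cys) in HELM notation.
helm = "PEPTIDE1{A.G.C}$$$$"
inputs = tokenizer(helm, return_tensors="pt")
embeddings = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
```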
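The **Span Masking** bullet in the diff is terse, so here is a minimal sketch of the general technique it names: contiguous token masking with geometrically distributed span lengths, as popularized by SpanBERT. The masking ratio, geometric parameter `p`, and span cap below are illustrative values, not hyperparameters taken from HELM-BERT.

```python
import numpy as np

def sample_span_mask(seq_len, mask_ratio=0.15, p=0.2, max_span=10, rng=None):
    """Return a boolean mask with contiguous spans whose lengths follow a
    truncated Geometric(p) distribution. All defaults are illustrative."""
    rng = rng or np.random.default_rng()
    mask = np.zeros(seq_len, dtype=bool)
    budget = int(seq_len * mask_ratio)
    while budget > 0:
        # Span length ~ Geometric(p), truncated at max_span and the budget.
        span = int(min(rng.geometric(p), max_span, budget))
        start = int(rng.integers(0, seq_len - span + 1))
        mask[start:start + span] = True
        budget -= span  # approximate: overlapping spans are not re-counted
    return mask
```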
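Likewise for the **nGiE** bullet: in DeBERTa-v2, the n-gram induced encoding is a 1D convolution over the token axis whose output is merged with the hidden states of an early transformer layer. The PyTorch sketch below assumes a hidden size and a simplified residual merge; the exact combination used in HELM-BERT is not specified in this commit.

```python
import torch.nn as nn

class NGramInducedEncoding(nn.Module):
    """nGiE-style layer: a kernel-size-3 1D convolution mixes each token
    embedding with its immediate neighbors (trigram context)."""

    def __init__(self, hidden_size=768, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(hidden_size, hidden_size, kernel_size,
                              padding=(kernel_size - 1) // 2)
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states):            # (batch, seq_len, hidden)
        x = hidden_states.transpose(1, 2)         # Conv1d wants (B, C, L)
        x = self.conv(x).transpose(1, 2)          # back to (B, L, H)
        # Simplified merge: residual add, then LayerNorm (an assumption).
        return self.norm(self.act(x) + hidden_states)
```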