Commit 0ab43a9 (parent: ffb1723): Update README.md
ClinicalMobileBERT is the result of training the [BioMobileBERT](https://huggingface.co/nlpie/bio-mobilebert) model in a continual learning scenario for 3 epochs using a total batch size of 192 on the MIMIC-III notes dataset.
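For quick reference, the model can be loaded with the `transformers` library. The sketch below assumes the checkpoint is published under the `nlpie/clinical-mobilebert` repository id, which is an assumption based on the naming of the linked `nlpie/bio-mobilebert` checkpoint; substitute the actual id if it differs.

```python
# Minimal usage sketch. The repository id "nlpie/clinical-mobilebert" is an
# assumption; substitute the actual id of this model on the Hugging Face Hub.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpie/clinical-mobilebert")
model = AutoModelForMaskedLM.from_pretrained("nlpie/clinical-mobilebert")

# Fill-mask on a clinical-style sentence.
text = "The patient was admitted with acute renal [MASK]."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits

# Decode the top prediction for the masked position.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_idx].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```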
# Initialisation
We initialise our model with the pre-trained checkpoints of the [BioMobileBERT](https://huggingface.co/nlpie/bio-mobilebert) model available on the Hugging Face Hub.
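A minimal sketch of this initialisation step; the `nlpie/bio-mobilebert` id is taken from the link above, and the continual pre-training loop itself (3 epochs, total batch size 192 on MIMIC-III notes) is left out:

```python
# Continual-learning initialisation sketch: start MLM pre-training from the
# public BioMobileBERT weights rather than from scratch. The repo id below is
# taken from the link in this section; the training loop itself is omitted.
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpie/bio-mobilebert")
model = AutoModelForMaskedLM.from_pretrained("nlpie/bio-mobilebert")

# From here, continue masked-language-model training on the MIMIC-III notes
# (3 epochs, total batch size 192, as described above).
```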
# Architecture
MobileBERT uses a 128-dimensional embedding layer followed by 1D convolutions to up-project its output to the hidden dimension expected by the transformer blocks. For each of these blocks, MobileBERT uses a linear down-projection at the beginning of the block and an up-projection at its end, followed by a residual connection originating from the input of the block before down-projection. Because of these linear projections, MobileBERT can reduce the hidden size, and hence the computational cost, of the multi-head attention and feed-forward blocks. Each block additionally incorporates up to four stacked feed-forward sub-blocks to enhance its representation learning capabilities. Thanks to the strategically placed linear projections, the 24-layer MobileBERT used in this work has around 25M parameters.
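To make the projection scheme concrete, here is an illustrative PyTorch sketch of a single bottleneck block, not the reference MobileBERT implementation: the input is down-projected before the attention and feed-forward layers, up-projected afterwards, and the residual is taken from the input before down-projection. The 512/128 sizes follow the standard MobileBERT configuration; layer norms and the stacked feed-forward details are simplified.

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """Illustrative sketch of a MobileBERT-style bottleneck transformer block.

    The block input is linearly down-projected to a narrow internal width,
    attention and feed-forward layers run at that reduced width, and the
    result is up-projected back, with a residual connection taken from the
    input *before* the down-projection. Sizes follow the standard MobileBERT
    configuration (512 hidden, 128 internal); other details are simplified.
    """

    def __init__(self, hidden: int = 512, inner: int = 128, heads: int = 4):
        super().__init__()
        self.down = nn.Linear(hidden, inner)      # bottleneck entry
        self.attn = nn.MultiheadAttention(inner, heads, batch_first=True)
        self.ffn = nn.Sequential(                 # one of up to four stacked FFNs
            nn.Linear(inner, 4 * inner), nn.ReLU(), nn.Linear(4 * inner, inner)
        )
        self.up = nn.Linear(inner, hidden)        # bottleneck exit
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.down(x)                          # attention/FFN run at width 128
        h = h + self.attn(h, h, h, need_weights=False)[0]
        h = h + self.ffn(h)
        return self.norm(x + self.up(h))          # residual from pre-projection input

block = BottleneckBlock()
out = block(torch.randn(2, 16, 512))              # (batch, seq_len, hidden) -> same shape
```

Because the expensive attention and feed-forward computation happens at the 128-wide internal size rather than the 512-wide block interface, stacking 24 such blocks stays within roughly 25M parameters.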