---
license: mit
datasets:
- wikipedia
language:
- en
metrics:
- glue
---
# Model Card for SzegedAI/bert-medium-mlsm

<!-- Provide a quick summary of what the model is/does. -->

This medium-sized BERT model was created using the [Masked Latent Semantic Modeling](https://github.com/szegedai/MLSM) (MLSM) pre-training objective, a sample-efficient alternative to classic Masked Language Modeling (MLM).
During MLSM, the objective is to recover the latent semantic profile of the masked tokens, as opposed to recovering their exact identity.
The contextualized latent semantic profile during pre-training is determined by performing sparse coding of the hidden representations of an already pre-trained model (a base-sized BERT model in this particular case).
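
The intent of the objective can be illustrated with a short, self-contained sketch. This is only a conceptual illustration, not the released MLSM code: the dictionary of latent "semantic atoms", the top-k projection standing in for proper sparse coding, and all function names below are assumptions.

```python
# Conceptual sketch of the MLSM idea (NOT the authors' implementation).
# A frozen teacher's hidden states are projected onto a hypothetical dictionary to
# obtain a sparse latent semantic distribution per token; the student is trained to
# predict that distribution at masked positions instead of the exact token identity.
import torch
import torch.nn.functional as F

def latent_semantic_profile(teacher_hidden, dictionary, k=16):
    """Map teacher hidden states to a sparse distribution over dictionary atoms
    (a crude top-k stand-in for real sparse coding)."""
    scores = teacher_hidden @ dictionary.T                  # (batch, seq, n_atoms)
    topk = scores.topk(k, dim=-1)
    sparse = torch.zeros_like(scores).scatter(-1, topk.indices, topk.values.clamp(min=0.0))
    return sparse / sparse.sum(dim=-1, keepdim=True).clamp(min=1e-9)

def mlsm_loss(student_logits, teacher_hidden, dictionary, mask):
    """KL divergence between the student's predicted latent profile and the
    teacher-derived target, computed only at masked positions (boolean mask)."""
    targets = latent_semantic_profile(teacher_hidden, dictionary)
    log_probs = F.log_softmax(student_logits, dim=-1)
    kl = F.kl_div(log_probs, targets, reduction="none").sum(dim=-1)
    return kl[mask].mean()
```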

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** SzegedAI
- **Model type:** transformer encoder
- **Language:** English
- **License:** MIT

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/szegedai/MLSM](https://github.com/szegedai/MLSM)
- **Paper:** [Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling](https://underline.io/events/395/posters/15279/poster/78046-masked-latent-semantic-modeling-an-efficient-pre-training-alternative-to-masked-language-modeling?tab=abstract+%26+voting)

## How to Get Started with the Model

The pre-trained model can be used in the usual manner. For instance, to fine-tune it on a particular sequence classification task, load it as follows:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('SzegedAI/bert-medium-mlsm')
model = AutoModelForSequenceClassification.from_pretrained('SzegedAI/bert-medium-mlsm')
```
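
As a quick sanity check, the loaded model can be run on a single example. The sentence below is illustrative only, and the two-label classification head is randomly initialized until the model has been fine-tuned:

```python
# Smoke test: tokenize one sentence and run a forward pass.
# The classification head is freshly initialized, so the logits are not yet meaningful.
inputs = tokenizer("Masked Latent Semantic Modeling is a sample-efficient objective.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]) with the default two-label head
```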

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was pre-trained using a 2022 English Wikipedia dump pre-processed with [wiki-bert-pipeline](https://github.com/spyysalo/wiki-bert-pipeline).

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing

#### Training Hyperparameters

Pre-training was conducted with a batch size of 32 sequences and gradient accumulation over 32 batches, resulting in an effective batch size of 1024.
A total of 300,000 update steps were performed using the AdamW optimizer with linear learning rate scheduling and a peak learning rate of 1e-04.
A maximum sequence length of 128 tokens was used for the first 90% of pre-training, while for the final 10% the maximum sequence length was increased to 512 tokens.

- **Training regime:** fp32
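
As a rough illustration (not the authors' training script), these settings map onto a Hugging Face `TrainingArguments` configuration as sketched below; the output directory name is a placeholder, and the sequence-length schedule has to be handled in the data pipeline rather than here:

```python
# Rough mapping of the reported pre-training hyperparameters onto
# transformers.TrainingArguments; illustrative only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-medium-mlsm-pretraining",  # placeholder name
    per_device_train_batch_size=32,
    gradient_accumulation_steps=32,             # 32 x 32 = effective batch size of 1024
    max_steps=300_000,
    learning_rate=1e-4,                         # peak learning rate, decayed linearly
    lr_scheduler_type="linear",
    fp16=False,                                 # fp32 training regime
)
# The maximum sequence length (128 tokens for the first 90% of steps,
# 512 tokens for the final 10%) is controlled in the data pipeline, not here.
```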

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

The model was evaluated on the GLUE tasks and on CoNLL2003 for named entity recognition.

### Results

The evaluation results below were obtained by fine-tuning the model on a wide range of tasks.
For each task, 10 fine-tuning runs were performed, differing only in the random initialization of the task-specific classification head.
The average and the standard deviation over these runs are reported for each task.

| Dataset | Metric | Avg. | Std. |
|---|---|---|---|
| CoLA | Matthews correlation | 0.403 | 0.012 |
| CoNLL2003 | F1 | 0.926 | 0.003 |
| MNLI (matched) | Accuracy | 0.798 | 0.001 |
| MNLI (mismatched) | Accuracy | 0.808 | 0.002 |
| MRPC | Accuracy | 0.786 | 0.020 |
| MRPC | F1 | 0.851 | 0.013 |
| QNLI | Accuracy | 0.870 | 0.004 |
| QQP | Accuracy | 0.892 | 0.001 |
| QQP | F1 | 0.855 | 0.001 |
| RTE | Accuracy | 0.571 | 0.011 |
| SST2 | Accuracy | 0.905 | 0.004 |
| STSB | Pearson correlation | 0.818 | 0.024 |
| STSB | Spearman correlation | 0.820 | 0.021 |
| WiC | Accuracy | 0.639 | 0.007 |
| Average | --- | 0.7815 | --- |
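
For illustration, the repeated fine-tuning protocol described above could be reproduced along the following lines. This is a hedged sketch, not the evaluation script used for the numbers above: SST-2 is picked as an example task, the hyperparameters are assumptions, and changing the seed here also reshuffles the training data rather than only re-initializing the classification head.

```python
# Sketch of repeated fine-tuning with different seeds on one GLUE task (SST-2).
import numpy as np
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("SzegedAI/bert-medium-mlsm")
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

accuracies = []
for seed in range(10):
    model = AutoModelForSequenceClassification.from_pretrained(
        "SzegedAI/bert-medium-mlsm", num_labels=2)
    args = TrainingArguments(output_dir=f"sst2-seed{seed}", seed=seed,
                             num_train_epochs=3, per_device_train_batch_size=32,
                             learning_rate=2e-5)
    trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                      train_dataset=encoded["train"])
    trainer.train()
    preds = trainer.predict(encoded["validation"])
    accuracies.append((np.argmax(preds.predictions, -1) == preds.label_ids).mean())

print(f"accuracy: {np.mean(accuracies):.3f} +/- {np.std(accuracies):.3f}")
```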

#### Summary

This model was more sample-efficient and reached practically the same average performance as a base-sized language model with 2.5 times as many parameters
that was pre-trained using the classical MLM objective.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** RTX A6000
- **Hours used:** 300
- **Carbon Emitted:** 42 kg CO2 eq.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

The pre-training objective is introduced in the ACL Findings paper _Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling_.

**BibTeX:**