cglez committed · verified
Commit 8df3d3b · Parent(s): cd1d97c

Update README.md

Files changed (1):
  1. README.md +50 -32

README.md CHANGED
@@ -2,72 +2,90 @@
  library_name: transformers
  language: en
  license: apache-2.0
- datasets: []
- tags: []
  ---

- # Model Card for <Model>

- A pretrained BERT using <Dataset>.

  ## Model Details

- ### Model Description

- A MLM-only pretrained BERT-base using <Dataset>.

  - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
  - **Funded by:** [ERC](https://erc.europa.eu)
- - **Model type:** MLM pretrained BERT
- - **Language(s) (NLP):** English
- - **License:** Apache license 2.0
- - **Pretrained from model:** [BERT base model (uncased)](https://huggingface.co/google-bert/bert-base-uncased)

- ### Model Checkpoints

- [More Information Needed]

- ### Model Sources

- - **Paper:** [More Information Needed]

- ## Uses

- See <https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations>.

- ### Checkpoint Use
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- See <https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias>.

  ## Training Details

- See <https://huggingface.co/google-bert/bert-base-uncased#training-procedure>.

  ### Training Data

- [More Information Needed]
-
- #### Preprocessing [optional]
-
- [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** fp16
  - **Batch size:** 32
  - **Gradient accumulation steps:** 3

  ## Environmental Impact

  - **Hardware Type:** NVIDIA Tesla V100 PCIE 32GB
- - **Hours used:** [More Information Needed]
  - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
  - **Compute Region:** EU
- - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->

  ## Citation

  library_name: transformers
  language: en
  license: apache-2.0
+ datasets:
+ - stanfordnlp/sentiment140
+ base_model:
+ - google-bert/bert-base-uncased
  ---

+ # Model Card: BERT-DAPT-Sentiment140

+ A domain-adapted BERT-base model, further pre-trained on text from the Sentiment140 dataset.

  ## Model Details

+ ### Description

+ This model is based on the [BERT base (uncased)](https://huggingface.co/google-bert/bert-base-uncased)
+ architecture and was further pre-trained (domain-adapted) on the text of the Sentiment140 dataset, excluding its test split.
+ Only the masked language modeling (MLM) objective was used during domain adaptation (see the usage sketch below).

  - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
  - **Funded by:** [ERC](https://erc.europa.eu)
+ - **Architecture:** BERT-base
+ - **Language:** English
+ - **License:** Apache 2.0
+ - **Base model:** [BERT base model (uncased)](https://huggingface.co/google-bert/bert-base-uncased)

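+ As a quick illustration of the MLM objective, the adapted model can be queried through the
+ `fill-mask` pipeline. This is a minimal sketch: `<model-name>` is a placeholder for this
+ repository's id, and the example sentence is invented.
+
+ ```python
+ from transformers import pipeline
+
+ # Load the domain-adapted checkpoint together with its MLM head.
+ unmasker = pipeline("fill-mask", model="<model-name>")
+
+ # The model fills in the [MASK] token using Twitter-style context.
+ print(unmasker("this movie was absolutely [MASK]!"))
+ ```
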
+ ### Checkpoints

+ Intermediate checkpoints from the pre-training process are available and can be accessed using specific tags,
+ which correspond to training epochs and steps:

+ | Epoch | Step | Epoch tag | Step tag |
+ |---|---|---|---|
+ | 1 | 15000 | `epoch-1` | `step-15000` |
+ | 2 | 30000 | `epoch-2` | `step-30000` |
+ | 3 | 45000 | `epoch-3` | `step-45000` |
+ | 5 | 75000 | `epoch-5` | `step-75000` |
+ | 10 | 150000 | `epoch-10` | `step-150000` |
+ | 15 | 225000 | `epoch-15` | `step-225000` |
+ | 20 | 300000 | `epoch-20` | `step-300000` |
+ | 25 | 375000 | `epoch-25` | `step-375000` |

+ To load the model from a specific intermediate checkpoint, use the `revision` parameter with the corresponding tag:

+ ```python
+ from transformers import AutoModelForMaskedLM
+
+ model = AutoModelForMaskedLM.from_pretrained("<model-name>", revision="<checkpoint-tag>")
+ ```
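
+ For example, to contrast an early and a late checkpoint (a sketch using two tags from the table above;
+ `<model-name>` is again a placeholder, and the tokenizer is assumed to be shared across revisions, as is
+ usual for continued pre-training):
+
+ ```python
+ from transformers import AutoModelForMaskedLM, AutoTokenizer
+
+ # Assumption: the tokenizer does not change during continued pre-training.
+ tokenizer = AutoTokenizer.from_pretrained("<model-name>")
+
+ # Tags taken from the checkpoint table.
+ early = AutoModelForMaskedLM.from_pretrained("<model-name>", revision="epoch-1")
+ late = AutoModelForMaskedLM.from_pretrained("<model-name>", revision="epoch-25")
+ ```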

+ ### Sources

+ - **Paper:** [Information pending]

  ## Training Details

+ For more details on the training procedure, please refer to the base model's documentation:
+ [Training procedure](https://huggingface.co/google-bert/bert-base-uncased#training-procedure).

  ### Training Data

+ All texts from the Sentiment140 dataset, excluding the test partition, were used for domain adaptation.

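+ A sketch of how that selection could be reproduced with the `datasets` library (the split and column
+ names follow the Hub copy of `stanfordnlp/sentiment140`; the card itself does not spell them out):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load only the train split, mirroring "excluding the test partition".
+ ds = load_dataset("stanfordnlp/sentiment140", split="train")
+
+ # The raw tweet text used for MLM domain adaptation.
+ texts = ds["text"]
+ ```
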
  #### Training Hyperparameters

+ - **Precision:** fp16
  - **Batch size:** 32
  - **Gradient accumulation steps:** 3

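+ These three settings map onto `transformers` training configuration roughly as follows (an illustrative
+ sketch; the remaining arguments, such as learning rate and number of epochs, are not reported in this card):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # fp16 precision, per-device batch size 32, gradient accumulation 3:
+ # the effective batch size is 32 * 3 = 96 per device.
+ args = TrainingArguments(
+     output_dir="bert-dapt-sentiment140",  # hypothetical output path
+     per_device_train_batch_size=32,
+     gradient_accumulation_steps=3,
+     fp16=True,
+ )
+ ```
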
+ ## Uses
+
+ For typical use cases and limitations, please refer to the base model's guidance:
+ [Intended uses & limitations](https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations).
+
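+ As with the base model, the typical downstream use is fine-tuning on a labeled task. A minimal sketch
+ for binary sentiment classification (`<model-name>` is a placeholder, and the label count is illustrative):
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+
+ # Replace the MLM head with a freshly initialized classification head.
+ model = AutoModelForSequenceClassification.from_pretrained("<model-name>", num_labels=2)
+ ```
+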
+ ## Bias, Risks, and Limitations
+
+ This model inherits potential risks and limitations from the base model. Refer to:
+ [Limitations and bias](https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias).
+
  ## Environmental Impact

  - **Hardware Type:** NVIDIA Tesla V100 PCIE 32GB
+ - **Runtime:** 37 h
  - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
  - **Compute Region:** EU
+ - **Carbon Emitted:** 6.88 kg CO2 eq.

  ## Citation