cglez committed · verified
Commit db049f5 · Parent(s): 51f9d23

Update README.md

Files changed (1):
  1. README.md +54 -26
README.md CHANGED
@@ -2,66 +2,94 @@
  library_name: transformers
  language: en
  license: mit
- datasets: []
- tags: []
  ---

- # Model Card for <Model>

- A pretrained GPT2 using <Dataset>.

  ## Model Details

- ### Model Description

- A pretrained GPT2 using <Dataset>.

  - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
  - **Funded by:** [ERC](https://erc.europa.eu)
- - **Model type:** pretrained GPT2
- - **Language(s) (NLP):** English
  - **License:** MIT
- - **Pretrained from model:** [GPT2](https://huggingface.co/openai-community/gpt2)

- ### Model Checkpoints

- [More Information Needed]
-
- ### Model Sources

- - **Paper:** [More Information Needed]

- ## Intended Uses & Limitations

- See <https://huggingface.co/openai-community/gpt2#intended-uses--limitations>.

- ### Loading Checkpoints

- [More Information Needed]

  ## Training Details

- ### Training Data
-
- [More Information Needed]

- #### Preprocessing [optional]

- [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** fp16
  - **Batch size:** 8
  - **Gradient accumulation steps:** 12

  ## Environmental Impact

  - **Hardware Type:** NVIDIA A100 PCIE 40GB
- - **Hours used:** [More Information Needed]
  - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
  - **Compute Region:** EU
- - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->

  ## Citation
 
  library_name: transformers
  language: en
  license: mit
+ datasets:
+ - community-datasets/ohsumed
+ base_model:
+ - openai-community/gpt2
  ---

+ # Model Card: GPT-2-DAPT-Ohsumed

+ A domain-adapted GPT-2, further pre-trained on text from the Ohsumed dataset.

  ## Model Details

+ ### Description

+ This model is based on the [GPT-2](https://huggingface.co/openai-community/gpt2)
+ architecture and was further pre-trained (domain-adapted) on the text of the Ohsumed dataset, excluding its test split.

  - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
  - **Funded by:** [ERC](https://erc.europa.eu)
+ - **Architecture:** GPT-2
+ - **Language:** English
  - **License:** MIT
+ - **Base model:** [GPT-2](https://huggingface.co/openai-community/gpt2)

+ ### Checkpoints

+ Intermediate checkpoints from the pre-training process are available and can be accessed using tags
+ that correspond to training epochs and steps:

+ | Epoch | Step | Epoch tag | Step tag |
+ |---|---|---|---|
+ | 1 | 97 | epoch-1 | step-97 |
+ | 5 | 489 | epoch-5 | step-489 |
+ | 10 | 978 | epoch-10 | step-978 |
+ | 20 | 1956 | epoch-20 | step-1956 |
+ | 40 | 3913 | epoch-40 | step-3913 |
+ | 60 | 5870 | epoch-60 | step-5870 |
+ | 80 | 7826 | epoch-80 | step-7826 |
+ | 100 | 9783 | epoch-100 | step-9783 |
+ | 120 | 11740 | epoch-120 | step-11740 |
+ | 140 | 13696 | epoch-140 | step-13696 |
+ | 160 | 15653 | epoch-160 | step-15653 |
+ | 180 | 17610 | epoch-180 | step-17610 |
+ | 198 | 19400 | epoch-198 | step-19400 |

+ To load the model from a specific intermediate checkpoint, use the `revision` parameter with the corresponding tag:

+ ```python
+ from transformers import AutoModelForCausalLM
+
+ # GPT-2 is a causal language model, so AutoModelForCausalLM is the right auto class
+ model = AutoModelForCausalLM.from_pretrained("<model-name>", revision="<checkpoint-tag>")
+ ```
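
+ To enumerate the available checkpoint tags programmatically, the `huggingface_hub` client can list a repository's Git refs. A minimal sketch; the repository name below is a placeholder:

+ ```python
+ from huggingface_hub import list_repo_refs
+
+ # Each intermediate checkpoint is published as a Git tag (epoch-*/step-*)
+ refs = list_repo_refs("<model-name>")
+ print(sorted(tag.name for tag in refs.tags))
+ ```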

+ ### Sources

+ - **Paper:** [Information pending]

  ## Training Details

+ For more details on the training procedure, please refer to the base model's documentation:
+ [Training procedure](https://huggingface.co/openai-community/gpt2#training-procedure).

+ ### Training Data

+ All texts from the Ohsumed dataset, excluding the test partition.
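
+ For illustration only, a corpus along these lines could be assembled as follows; the split names and the `abstract` text field are assumptions about the dataset layout, not a record of the exact preprocessing used:

+ ```python
+ from datasets import load_dataset
+
+ # Load Ohsumed (assumes "train" and "test" splits are exposed)
+ ds = load_dataset("community-datasets/ohsumed")
+
+ # Keep every split except the held-out test partition
+ corpus = [ex["abstract"] for split in ds if split != "test" for ex in ds[split]]
+ ```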

  #### Training Hyperparameters

+ - **Precision:** fp16
  - **Batch size:** 8
  - **Gradient accumulation steps:** 12 (effective batch size 8 × 12 = 96; see the sketch below)
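
+ A minimal sketch of how these settings map onto a `transformers` training configuration; `output_dir` and any values not listed above are illustrative placeholders:

+ ```python
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="gpt2-dapt-ohsumed",   # placeholder path
+     fp16=True,                        # fp16 mixed-precision training
+     per_device_train_batch_size=8,    # batch size 8
+     gradient_accumulation_steps=12,   # effective batch size 8 * 12 = 96
+ )
+ ```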

+ ## Uses
+
+ For typical use cases and limitations, please refer to the base model's guidance:
+ [Intended uses & limitations](https://huggingface.co/openai-community/gpt2#intended-uses--limitations).
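+
+ As with the base model, the most direct use is text generation. A minimal example; the repository name is a placeholder:
+
+ ```python
+ from transformers import pipeline
+
+ # Text generation with the domain-adapted checkpoint
+ generator = pipeline("text-generation", model="<model-name>")
+ print(generator("The patient presented with", max_new_tokens=30))
+ ```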
+
+ ## Bias, Risks, and Limitations
+
+ This model inherits potential risks and limitations from its base model. Refer to:
+ [Limitations and bias](https://huggingface.co/openai-community/gpt2#limitations-and-bias).
+
  ## Environmental Impact

  - **Hardware Type:** NVIDIA A100 PCIE 40GB
+ - **Runtime:** 35 h
  - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
  - **Compute Region:** EU
+ - **Carbon Emitted:** 5.42 kg CO2 eq.
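+
+ For context, assuming the A100 PCIE 40GB's nominal 250 W TDP, 35 h of runtime corresponds to roughly 0.25 kW × 35 h ≈ 8.75 kWh of GPU energy; this is an estimate, not a measured figure.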

  ## Citation