JeffreyLau committed e70c573 (1 parent: c6e1966): Update README.md

Files changed: README.md (+154, −0)
---
license: apache-2.0
---
# SSCI-BERT: A pretrained language model for social scientific text

<img src="sscibert-logo/logo.png" alt="logo" style="zoom: 67%;" />

## Introduction

Research on social science texts needs the support of natural language processing tools.

Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model dedicated to the automatic processing of social science research texts.

We used the abstracts of social science research articles as the training set. Based on the BERT deep language model framework, we built the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).

We designed four downstream text classification tasks on different social science article corpora to verify the performance of the models.

- SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set used in the experiments contains a total of `503910614 words`.
- Following the idea of domain-adaptive pretraining, `SSCI-BERT` and `SSCI-SciBERT` continue pretraining the BERT and SciBERT models, respectively, on a large corpus of scientific abstracts, yielding pre-trained models for the automatic processing of social science research texts.

## News

- 2022-03-24: SSCI-BERT and SSCI-SciBERT are released for the first time.

## How to use

### Huggingface Transformers

The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can directly fetch the SSCI-BERT and SSCI-SciBERT models online.

- SSCI-BERT

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
```

- SSCI-SciBERT

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
```
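Note that `AutoModel` returns per-token hidden states rather than a single sentence vector. One common way to get a fixed-size embedding for a whole abstract is masked mean pooling over `last_hidden_state`. The sketch below demonstrates the pooling arithmetic on fake NumPy activations; with the real model, `hidden` would be `outputs.last_hidden_state` and `mask` the tokenizer's `attention_mask`:

```python
import numpy as np

# Simulated last_hidden_state: batch of 1, sequence length 4, hidden size 3.
hidden = np.arange(12, dtype=float).reshape(1, 4, 3)
# attention_mask: the last position is padding and must not affect the mean.
mask = np.array([[1, 1, 1, 0]], dtype=float)

# Zero out padded positions, then average over the real tokens only.
summed = (hidden * mask[:, :, None]).sum(axis=1)
embedding = summed / mask.sum(axis=1, keepdims=True)

print(embedding)  # [[3. 4. 5.]] -- the mean of the three real token vectors
```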

### Download Models

- The models are provided in `PyTorch` format.

### From Huggingface

- Download directly from Huggingface's official website:
- [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2)
- [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2)
- [KM4STfulltext/SSCI-BERT-e4](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4)
- [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)

### From Google Drive

We have also put the models on Google Drive for users.

| Model | Dataset (years) | Base Model |
| ------------------------------------------------------------ | --------------- | ---------------------- |
| [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased |
| [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased |

## Evaluation & Results

- We used SSCI-BERT and SSCI-SciBERT to perform text classification on different social science research corpora. The experimental results are as follows. The relevant datasets are available for download in the **Verification task datasets** folder of this project.

#### JCR Title Classify Dataset

| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 28.43 | 22.06 | 21.86 |
| Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 |
| SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 |
| SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 |
| SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 |
| SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 |
| Support | 2300 | 2300 | 2300 |

#### JCR Abstract Classify Dataset

| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 48.59 | 42.8 | 42.82 |
| Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 |
| SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 |
| SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 |
| SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 |
| SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 |
| Support | 2200 | 2200 | 2200 |

#### JCR Mixed Titles and Abstracts Dataset

| **Model** | **accuracy** | **macro avg** | **weighted avg** |
| ---------------------- | ------------ | ------------- | ---------------- |
| Bert-base-cased | 58.24 | 57.27 | 57.25 |
| Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 |
| SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 |
| SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 |
| SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 |
| SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 |
| Support | 4500 | 4500 | 4500 |

#### SSCI Abstract Structural Function Recognition (Classify Dataset)

| | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support |
| ------------ | --------------- | ------------ | ------------ | ------- |
| B | 63.77 | 64.29 | 64.63 | 224 |
| P | 53.66 | 57.14 | 57.99 | 95 |
| M | 87.63 | 88.43 | 89.06 | 323 |
| R | 86.81 | 88.28 | **88.47** | 419 |
| C | 78.32 | 79.82 | 78.95 | 316 |
| accuracy | 79.59 | 80.9 | 80.97 | 1377 |
| macro avg | 74.04 | 75.59 | 75.82 | 1377 |
| weighted avg | 79.02 | 80.32 | 80.44 | 1377 |

| | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** |
| ------------ | -------------------------- | ------------------- | ------------------- | ----------- |
| B | 69.98 | **70.95** | **70.95** | 224 |
| P | 58.89 | **60.12** | 58.96 | 95 |
| M | 89.37 | **90.12** | 88.11 | 323 |
| R | 87.66 | 88.07 | 87.44 | 419 |
| C | 80.7 | 82.61 | **82.94** | 316 |
| accuracy | 81.63 | **82.72** | 82.06 | 1377 |
| macro avg | 77.32 | **78.37** | 77.68 | 1377 |
| weighted avg | 81.6 | **82.58** | 81.92 | 1377 |
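In these tables, `macro avg` is the unweighted mean of the per-class scores, while `weighted avg` weights each class by its support. Both can be reproduced from the Scibert-scivocab-cased column of the structural function recognition results:

```python
# Per-class scores for Scibert-scivocab-cased (classes B, P, M, R, C)
# and their supports, taken from the table above.
scores = [69.98, 58.89, 89.37, 87.66, 80.70]
support = [224, 95, 323, 419, 316]

macro = sum(scores) / len(scores)
weighted = sum(f * s for f, s in zip(scores, support)) / sum(support)

print(round(macro, 2))     # 77.32, as reported in the table
print(round(weighted, 2))  # 81.6, as reported in the table
```

The gap between the two reflects class imbalance: the weighted average is pulled toward the large, high-scoring classes (M, R), while the macro average is dragged down by the small P class.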

## Cited

- If our content is helpful for your research work, please cite our research in your article.
- Until our paper is published, you can cite this repository instead: (https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT)

## Disclaimer

- The experimental results in this report only reflect performance under a specific dataset and hyperparameter combination and do not characterize each model in general. Results may vary with random seeds and computing hardware.
- **Users may use the models freely within the scope of the license, but we are not responsible for direct or indirect losses caused by using the contents of this project.**

## Acknowledgment

- SSCI-BERT was trained based on [BERT-base-cased](https://github.com/google-research/bert).
- SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert).