  - text: "Total net sales decreased [X]% or $[X.X] billion during [MASK] compared to [XXXX]."
  - text: "During [MASK], the Company repurchased $[XX.X] billion of its common stock and paid dividend equivalents of $[XX.X] billion."
  - text: "During 2019, the Company repurchased $[MASK] billion of its common stock and paid [MASK] equivalents of $[XX.X] billion."
---

# SEC-BERT

<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="sec-bert-logo" width="400"/>

SEC-BERT is a family of BERT models for the financial domain, intended to assist financial NLP research and FinTech applications.<br>
SEC-BERT consists of the following models:
* [SEC-BERT-BASE](https://huggingface.co/nlpaueb/sec-bert-base)
* [SEC-BERT-NUM](https://huggingface.co/nlpaueb/sec-bert-num): We replace every number token with a [NUM] pseudo-token, handling all numeric expressions in a uniform manner and disallowing their fragmentation.
* SEC-BERT-SHAPE (this model): We replace numbers with pseudo-tokens that represent each number's shape, so numeric expressions (of known shapes) are no longer fragmented<br>
(e.g. '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]').

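The shape mapping can be sketched with a small helper; the function name `to_shape` is illustrative (the model itself relies on a fixed vocabulary of predefined shape pseudo-tokens, as described below):

```python
import re

def to_shape(token):
    # Illustrative helper: map a numeric token to its shape pseudo-token,
    # e.g. '53.2' -> '[XX.X]'; non-numeric tokens pass through unchanged.
    if re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
        return '[' + re.sub(r'\d', 'X', token) + ']'
    return token

print(to_shape('53.2'))      # [XX.X]
print(to_shape('40,200.5'))  # [XX,XXX.X]
print(to_shape('sales'))     # sales
```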
## Pre-training corpus

The model was pre-trained on 260,773 10-K filings from 1993-2019, publicly available at the <a href="https://www.sec.gov/">U.S. Securities and Exchange Commission (SEC)</a>.

## Pre-training details

* We created a new vocabulary of 30k subwords by training a [BertWordPieceTokenizer](https://github.com/huggingface/tokenizers) from scratch on the pre-training corpus.
* We trained BERT using the official code provided in [Google BERT's GitHub repository](https://github.com/google-research/bert).
* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint into the desired format, so that the model can be loaded in two lines of code by both PyTorch and TF2 users.
* We release a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 and an initial learning rate of 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from the [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!

## Load Pretrained Model

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-shape")
model = AutoModel.from_pretrained("nlpaueb/sec-bert-shape")
```

In order to use SEC-BERT-SHAPE, you have to pre-process texts, replacing every numerical token with the corresponding shape pseudo-token from a list of 214 predefined shape pseudo-tokens. If the numerical token does not correspond to any shape pseudo-token, we replace it with the [NUM] pseudo-token.
Below is an example of how you can pre-process a simple sentence. This approach is quite simple; feel free to modify it as you see fit.

```python
import re
import spacy
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-shape")
spacy_tokenizer = spacy.load("en_core_web_sm")

sentence = "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
tokens = [t.text for t in spacy_tokenizer(sentence)]

processed_sentence = []
for token in tokens:
    if re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
        shape = '[' + re.sub(r'\d', 'X', token) + ']'
        if shape in tokenizer.additional_special_tokens:
            processed_sentence.append(shape)
        else:
            processed_sentence.append('[NUM]')
    else:
        processed_sentence.append(token)

tokenized_sentence = tokenizer.tokenize(' '.join(processed_sentence))
print(tokenized_sentence)
"""
['total', 'net', 'sales', 'decreased', '[X]', '%', 'or', '$', '[X.X]', 'billion', 'during', '[XXXX]', 'compared', 'to', '[XXXX]', '.']
"""
```
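The same normalization applies when querying the model as a masked language model (as in the tables below): numbers in the context around [MASK] are replaced by their shape pseudo-tokens, while the [MASK] token itself is left untouched. A minimal sketch, using an illustrative subset of shape pseudo-tokens in place of the tokenizer's real `additional_special_tokens`:

```python
import re

# Illustrative subset of the 214 shape pseudo-tokens; the real list
# comes from tokenizer.additional_special_tokens.
SHAPE_TOKENS = {'[X]', '[X.X]', '[XX.X]', '[XXXX]'}

def normalize(tokens):
    out = []
    for token in tokens:
        # Keep [MASK] and non-numeric tokens as-is.
        if token == '[MASK]' or not re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
            out.append(token)
            continue
        shape = '[' + re.sub(r'\d', 'X', token) + ']'
        out.append(shape if shape in SHAPE_TOKENS else '[NUM]')
    return ' '.join(out)

query = normalize("Total net sales [MASK] 2 % or $ 5.4 billion during 2019 compared to 2018 .".split())
print(query)
# Total net sales [MASK] [X] % or $ [X.X] billion during [XXXX] compared to [XXXX] .
```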

## Use SEC-BERT variants as Language Models

| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales [MASK] 2% or $5.4 billion during 2019 compared to 2018. | decreased

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | increased (0.221), were (0.131), are (0.103), rose (0.075), of (0.058)
| **SEC-BERT-BASE** | increased (0.678), decreased (0.282), declined (0.017), grew (0.016), rose (0.004)
| **SEC-BERT-NUM** | increased (0.753), decreased (0.211), grew (0.019), declined (0.010), rose (0.006)
| **SEC-BERT-SHAPE** | increased (0.747), decreased (0.214), grew (0.021), declined (0.013), rose (0.002)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 [MASK] during 2019 compared to 2018. | billion

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | billion (0.841), million (0.097), trillion (0.028), ##m (0.015), ##bn (0.006)
| **SEC-BERT-BASE** | million (0.972), billion (0.028), millions (0.000), ##million (0.000), m (0.000)
| **SEC-BERT-NUM** | million (0.974), billion (0.012), , (0.010), thousand (0.003), m (0.000)
| **SEC-BERT-SHAPE** | million (0.978), billion (0.021), % (0.000), , (0.000), millions (0.000)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased [MASK]% or $5.4 billion during 2019 compared to 2018. | 2

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 20 (0.031), 10 (0.030), 6 (0.029), 4 (0.027), 30 (0.027)
| **SEC-BERT-BASE** | 13 (0.045), 12 (0.040), 11 (0.040), 14 (0.035), 10 (0.035)
| **SEC-BERT-NUM** | [NUM] (1.000), one (0.000), five (0.000), three (0.000), seven (0.000)
| **SEC-BERT-SHAPE** | [XX] (0.316), [XX.X] (0.253), [X.X] (0.237), [X] (0.188), [X.XX] (0.002)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2[MASK] or $5.4 billion during 2019 compared to 2018. | %

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | % (0.795), percent (0.174), ##fold (0.009), billion (0.004), times (0.004)
| **SEC-BERT-BASE** | % (0.924), percent (0.076), points (0.000), , (0.000), times (0.000)
| **SEC-BERT-NUM** | % (0.882), percent (0.118), million (0.000), units (0.000), bps (0.000)
| **SEC-BERT-SHAPE** | % (0.961), percent (0.039), bps (0.000), , (0.000), bcf (0.000)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $[MASK] billion during 2019 compared to 2018. | 5.4

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 1 (0.074), 4 (0.045), 3 (0.044), 2 (0.037), 5 (0.034)
| **SEC-BERT-BASE** | 1 (0.218), 2 (0.136), 3 (0.078), 4 (0.066), 5 (0.048)
| **SEC-BERT-NUM** | [NUM] (1.000), l (0.000), 1 (0.000), - (0.000), 30 (0.000)
| **SEC-BERT-SHAPE** | [X.X] (0.787), [X.XX] (0.095), [XX.X] (0.049), [X.XXX] (0.046), [X] (0.013)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during [MASK] compared to 2018. | 2019

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.485), 2018 (0.169), 2016 (0.164), 2015 (0.070), 2014 (0.022)
| **SEC-BERT-BASE** | 2019 (0.990), 2017 (0.007), 2018 (0.003), 2020 (0.000), 2015 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), as (0.000), fiscal (0.000), year (0.000), when (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), as (0.000), year (0.000), periods (0.000), , (0.000)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during 2019 compared to [MASK]. | 2018

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.100), 2016 (0.097), above (0.054), inflation (0.050), previously (0.037)
| **SEC-BERT-BASE** | 2018 (0.999), 2019 (0.000), 2017 (0.000), 2016 (0.000), 2014 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), year (0.000), last (0.000), sales (0.000), fiscal (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), year (0.000), sales (0.000), prior (0.000), years (0.000)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company [MASK] $67.1 billion of its common stock and paid dividend equivalents of $14.1 billion. | repurchased

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | held (0.229), sold (0.192), acquired (0.172), owned (0.052), traded (0.033)
| **SEC-BERT-BASE** | repurchased (0.913), issued (0.036), purchased (0.029), redeemed (0.010), sold (0.003)
| **SEC-BERT-NUM** | repurchased (0.917), purchased (0.054), reacquired (0.013), issued (0.005), acquired (0.003)
| **SEC-BERT-SHAPE** | repurchased (0.902), purchased (0.068), issued (0.010), reacquired (0.008), redeemed (0.006)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common [MASK] and paid dividend equivalents of $14.1 billion. | stock

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | stock (0.835), assets (0.039), equity (0.025), debt (0.021), bonds (0.017)
| **SEC-BERT-BASE** | stock (0.857), shares (0.135), equity (0.004), units (0.002), securities (0.000)
| **SEC-BERT-NUM** | stock (0.842), shares (0.157), equity (0.000), securities (0.000), units (0.000)
| **SEC-BERT-SHAPE** | stock (0.888), shares (0.109), equity (0.001), securities (0.001), stocks (0.000)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid [MASK] equivalents of $14.1 billion. | dividend

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | cash (0.276), net (0.128), annual (0.083), the (0.040), debt (0.027)
| **SEC-BERT-BASE** | dividend (0.890), cash (0.018), dividends (0.016), share (0.013), tax (0.010)
| **SEC-BERT-NUM** | dividend (0.735), cash (0.115), share (0.087), tax (0.025), stock (0.013)
| **SEC-BERT-SHAPE** | dividend (0.655), cash (0.248), dividends (0.042), share (0.019), out (0.003)


| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid dividend [MASK] of $14.1 billion. | equivalents

| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | revenue (0.085), earnings (0.078), rates (0.065), amounts (0.064), proceeds (0.062)
| **SEC-BERT-BASE** | payments (0.790), distributions (0.087), equivalents (0.068), cash (0.013), amounts (0.004)
| **SEC-BERT-NUM** | payments (0.845), equivalents (0.097), distributions (0.024), increases (0.005), dividends (0.004)
| **SEC-BERT-SHAPE** | payments (0.784), equivalents (0.093), distributions (0.043), dividends (0.015), requirements (0.009)

## About Us

[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.

The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.

The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.

[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)