Committed by pszemraj (verified)
Commit e31b98d · 0 parent(s)

Super-squash branch 'main' using huggingface_hub


Co-authored-by: autoevaluator <autoevaluator@users.noreply.huggingface.co>
Co-authored-by: SFconvertbot <SFconvertbot@users.noreply.huggingface.co>
Co-authored-by: peter szemraj <pszemraj@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,296 @@
+ ---
+ language:
+ - en
+ license:
+ - bsd-3-clause
+ - apache-2.0
+ library_name: transformers
+ tags:
+ - long document summary
+ - book summary
+ - booksum
+ datasets:
+ - kmfoda/booksum
+ metrics:
+ - rouge
+ pipeline_tag: summarization
+ widget:
+ - text: large earthquakes along a given fault segment do not occur at random intervals
+ because it takes time to accumulate the strain energy for the rupture. The rates
+ at which tectonic plates move and accumulate strain at their boundaries are approximately
+ uniform. Therefore, in first approximation, one may expect that large ruptures
+ of the same fault segment will occur at approximately constant time intervals.
+ If subsequent main shocks have different amounts of slip across the fault, then
+ the recurrence time may vary, and the basic idea of periodic mainshocks must be
+ modified. For great plate boundary ruptures the length and slip often vary by
+ a factor of 2. Along the southern segment of the San Andreas fault the recurrence
+ interval is 145 years with variations of several decades. The smaller the standard
+ deviation of the average recurrence interval, the more specific could be the long
+ term prediction of a future mainshock.
+ example_title: earthquakes
+ - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
+ are fed into a neural network that predicts values in the reconstructed domain.
+ Then, this domain is mapped to the sensor domain where sensor measurements are
+ available as supervision. Class and Section Problems Addressed Generalization
+ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
+ Representations (Section 3) Computation & memory efficiency, representation capacity,
+ editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
+ 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
+ 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
+ in the neural field toolbox each addresses problems that arise in learning, inference,
+ and control. (Section 3). We can supervise reconstruction via differentiable forward
+ maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
+ Section 4) With appropriate network architecture choices, we can overcome neural
+ network spectral biases (blurriness) and efficiently compute derivatives and integrals
+ (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
+ and to achieve editable representations (Section 6). Collectively, these classes
+ constitute a ''toolbox'' of techniques to help solve problems with neural fields
+ There are three components in a conditional neural field: (1) An encoder or inference
+ function € that outputs the conditioning latent variable 2 given an observation
+ 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
+ a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
+ parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
+ most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
+ the inverse conditional probability to find the most probable 0 given Z: arg-
+ max P(Olz). We discuss different encoding schemes with different optimality guarantees
+ (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
+ mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
+ a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
+ prior over the sur- face in its reconstruction domain to generalize to the partial
+ observations. A neural network expresses a prior via the function space of its
+ architecture and parameters 0, and generalization is influenced by the inductive
+ bias of this function space (Section 5).'
+ example_title: scientific paper
+ - text: 'Is a else or outside the cob and tree written being of early client rope
+ and you have is for good reasons. On to the ocean in Orange for time. By''s the
+ aggregate we can bed it yet. Why this please pick up on a sort is do and also
+ M Getoi''s nerocos and do rain become you to let so is his brother is made in
+ use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
+ Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
+ be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
+ As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
+ task for this class and you might have already seen me in the first lecture where
+ I made a quick appearance. I''m also going to give the tortillas in the last third
+ of this course. So to give you a little bit about me, I''m a old student here
+ with better Bulman and my research centres on casual inference applied to biomedical
+ disasters, so that could be genomics or that could be hospital data. If any of
+ you is interested in writing a bachelor thesis, a semester paper may be mastathesis
+ about this topic feel for reach out to me. you have my name on models and my email
+ address you can find in the directory I''d Be very happy to talk about it. you
+ do not need to be sure about it, we can just have a chat. So with that said, let''s
+ get on with the lecture. There''s an exciting topic today I''m going to start
+ by sharing some slides with you and later on during the lecture we''ll move to
+ the paper. So bear with me for a few seconds. Well, the projector is starting
+ up. Okay, so let''s get started. Today''s topic is a very important one. It''s
+ about a technique which really forms one of the fundamentals of data science,
+ machine learning, and any sort of modern statistics. It''s called cross validation.
+ I know you really want to understand this topic I Want you to understand this
+ and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
+ cross validation. So to set the stage for this, I Want to introduce you to the
+ validation problem in computational statistics. So the problem is the following:
+ You trained a model on available data. You fitted your model, but you know the
+ training data you got could always have been different and some data from the
+ environment. Maybe it''s a random process. You do not really know what it is,
+ but you know that somebody else who gets a different batch of data from the same
+ environment they would get slightly different training data and you do not care
+ that your method performs as well. On this training data. you want to to perform
+ well on other data that you have not seen other data from the same environment.
+ So in other words, the validation problem is you want to quantify the performance
+ of your model on data that you have not seen. So how is this even possible? How
+ could you possibly measure the performance on data that you do not know The solution
+ to? This is the following realization is that given that you have a bunch of data,
+ you were in charge. You get to control how much that your model sees. It works
+ in the following way: You can hide data firms model. Let''s say you have a training
+ data set which is a bunch of doubtless so X eyes are the features those are typically
+ hide and national vector. It''s got more than one dimension for sure. And the
+ why why eyes. Those are the labels for supervised learning. As you''ve seen before,
+ it''s the same set up as we have in regression. And so you have this training
+ data and now you choose that you only use some of those data to fit your model.
+ You''re not going to use everything, you only use some of it the other part you
+ hide from your model. And then you can use this hidden data to do validation from
+ the point of you of your model. This hidden data is complete by unseen. In other
+ words, we solve our problem of validation.'
+ example_title: transcribed audio - lecture
+ - text: 'Transformer-based models have shown to be very useful for many NLP tasks.
+ However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
+ & memory complexity (where nn is sequence length). Hence, it''s computationally
+ very expensive to apply transformer-based models on long sequences n > 512n>512.
+ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
+ try to remedy this problem by approximating the full attention matrix. You can
+ checkout 🤗''s recent blog post in case you are unfamiliar with these models.
+
+ BigBird (introduced in paper) is one of such recent models to address this issue.
+ BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
+ attention) and can handle sequences up to a length of 4096 at a much lower computational
+ cost compared to BERT. It has achieved SOTA on various tasks involving very long
+ sequences such as long documents summarization, question-answering with long contexts.
+
+ BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
+ post is to give the reader an in-depth understanding of big bird implementation
+ & ease one''s life in using BigBird with 🤗Transformers. But, before going into
+ more depth, it is important to remember that the BigBird''s attention is an approximation
+ of BERT''s full attention and therefore does not strive to be better than BERT''s
+ full attention, but rather to be more efficient. It simply allows to apply transformer-based
+ models to much longer sequences since BERT''s quadratic memory requirement quickly
+ becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
+ would be preferred over block sparse attention (which we are going to discuss
+ in this post).
+
+ If you wonder why we need more compute when working with longer sequences, this
+ blog post is just right for you!
+
+ Some of the main questions one might have when working with standard BERT-like
+ attention include:
+
+ Do all tokens really have to attend to all other tokens? Why not compute attention
+ only over important tokens? How to decide what tokens are important? How to attend
+ to just a few tokens in a very efficient way? In this blog post, we will try to
+ answer those questions.
+
+ What tokens should be attended to? We will give a practical example of how attention
+ works by considering the sentence ''BigBird is now available in HuggingFace for
+ extractive question answering''. In BERT-like attention, every word would simply
+ attend to all other tokens.
+
+ Let''s think about a sensible choice of key tokens that a queried token actually
+ only should attend to by writing some pseudo-code. Will will assume that the token
+ available is queried and build a sensible list of key tokens to attend to.
+
+ >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
+ ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
+ ''question'', ''answering'']
+
+ >>> # further let''s assume, we''re trying to understand the representation of
+ ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
+ empty `set` and fill up the tokens of our interest as we proceed in this section.
+ >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
+ to attend Nearby tokens should be important because, in a sentence (sequence of
+ words), the current word is highly dependent on neighboring past & future tokens.
+ This intuition is the idea behind the concept of sliding attention.'
+ example_title: bigbird blog intro
+ - text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
+ The humour is extremely subtle, and without a solid grasp of theoretical physics
+ most of the jokes will go over a typical viewer''s head. There''s also Rick''s
+ nihilistic outlook, which is deftly woven into his characterisation- his personal
+ philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
+ understand this stuff; they have the intellectual capacity to truly appreciate
+ the depths of these jokes, to realise that they''re not just funny- they say something
+ deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
+ of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
+ catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
+ Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
+ addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
+ wit unfolds itself on their television screens. What fools.. how I pity them.
+ 😂
+
+ And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
+ It''s for the ladies'' eyes only- and even then they have to demonstrate that
+ they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
+ kid 😎'
+ example_title: Richard & Mortimer
+ - text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey
+ building, and the tallest structure in Paris. Its base is square, measuring 125
+ metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed
+ the Washington Monument to become the tallest man-made structure in the world,
+ a title it held for 41 years until the Chrysler Building in New York City was
+ finished in 1930. It was the first structure to reach a height of 300 metres.
+ Due to the addition of a broadcasting aerial at the top of the tower in 1957,
+ it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
+ the Eiffel Tower is the second tallest free-standing structure in France after
+ the Millau Viaduct.
+ example_title: eiffel
+ parameters:
+ max_length: 64
+ min_length: 8
+ no_repeat_ngram_size: 3
+ early_stopping: true
+ repetition_penalty: 3.5
+ encoder_no_repeat_ngram_size: 4
+ num_beams: 2
+ model-index:
+ - name: pszemraj/led-large-book-summary-continued
+ results:
+ - task:
+ type: summarization
+ name: Summarization
+ dataset:
+ name: kmfoda/booksum
+ type: kmfoda/booksum
+ config: kmfoda--booksum
+ split: test
+ metrics:
+ - type: rouge
+ value: 31.2367
+ name: ROUGE-1
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWI3NzQwMTUxOWRkOGVmZGYwZTkyODIxZmRhM2Y5N2FjYmM2MWEyMDNiN2JmODc3ODExNTAwZjhhZDJkNzNiYyIsInZlcnNpb24iOjF9.EYEvooI7WG94OinI4p5sNiuM1MAFVSYeb2ehv2lGe-B-qR1yvPVBBr7J3iI5UFegZsYciCLA6VRFUe8eQ8KNAg
+ - type: rouge
+ value: 5.0148
+ name: ROUGE-2
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzMxYjIzMWY2MTNkODczZWEzOGEzNjYxNzZjMTc0N2U3NmFhMWM5NWFiMzBjZDEwNTFkYjhhMGMwMjliY2JjOSIsInZlcnNpb24iOjF9.DmIc7iNjo5nm_T-uWehMCbcWjgY_WNGdRkiUXdzv96uFIRiVIoW03UspkGfzvjEiKRoa7OM403XZxNXuCjVJCQ
+ - type: rouge
+ value: 15.7724
+ name: ROUGE-L
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDUzNzNkYjUxMjE1MzZjMDhkNWE2MmZlMTg0OGM1NDc2M2JlZDJmNDI3M2YyZGM2NmY1ZDZlOWYxMzcyYmExZCIsInZlcnNpb24iOjF9.CVjivCusq1J_tiktqQ-pnsH6iOWdYrf5rwt9wlGoCgw4boXzDVivtHpe0MWlJ5L-XFY75SnrMXeunCBGOwONBQ
+ - type: rouge
+ value: 28.494
+ name: ROUGE-LSUM
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTY0MjI3NDNkYzI5ZjA1Nzg5MmE0MzY3OTZkM2U2ZWZkMDBjZjQzMjdjN2Q3Y2NiZjIwNzI1OWJhMzhjYzg4NiIsInZlcnNpb24iOjF9.A0iwWEti-OPFbi9TEpnEpC0rPCLP3Gw3Ns23Lz8e_zi4B_vlGrVW7weofzO8cuGVoC9kS-aJk2a5VGdXYh5KBw
+ - type: loss
+ value: 4.777158260345459
+ name: loss
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWZkNjdhNGNkNDUyYWNlNDgyNzkxNDdkNTZlOGQ0MmQ3ZGVjYjgwZTk2M2E4NjAwNWZkNGEzMTU2ZWFjMmFmMCIsInZlcnNpb24iOjF9.TTEWfYmpM4VPKn1Jukkwadj6C3HASvzTMJeTLHCHqd5Vr7s0X0PcIKvnyEVycwywFanfrgIg4Pyn0G_IVeYcBg
+ - type: gen_len
+ value: 154.1908
+ name: gen_len
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmI3YjZkNTZmMzNjMzMzODlhODFmNWFlNjNmODI0ZjE2ZWNjMzcxMWUyMGMzNzY2MDIzZWIwYTMxODk3M2Q3YiIsInZlcnNpb24iOjF9.nyUANcwiu-sb3vXMFIdzvdDPTBBhJOEQmdu25XSXRgwNSfugKDydAoHy2tdo9ZE8r32xxYDPoutER22APV4PCA
+ ---
+
+ # led-large-book-summary: continued
+
+ A further fine-tuned checkpoint, created to explore whether additional training improves on the default model.
+
+ ## Details
+
+ This model is a version of [pszemraj/led-large-book-summary](https://huggingface.co/pszemraj/led-large-book-summary) fine-tuned for two additional epochs.
+
+ ## Usage
+
+ It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding). If you'd like, the `textsum` utility package abstracts most of this away:
+
+ ```bash
+ pip install -U textsum
+ ```
+
+ ```python
+ from textsum.summarize import Summarizer
+
+ model_name = "pszemraj/led-large-book-summary-continued"
+ summarizer = Summarizer(model_name)  # GPU auto-detected
+ text = "put the text you don't want to read here"
+ summary = summarizer.summarize_string(text)
+ print(summary)
+ ```
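
If you prefer to call `transformers` directly rather than `textsum`, a minimal sketch along these lines should also work; the generation settings below simply mirror the widget defaults above and are illustrative rather than tuned values:

```python
# Minimal sketch: summarize with the transformers pipeline directly.
# The generation kwargs mirror the widget defaults in the model card above.
import torch
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/led-large-book-summary-continued",
    device=0 if torch.cuda.is_available() else -1,
)

long_text = "put the text you don't want to read here"

result = summarizer(
    long_text,
    min_length=8,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.5,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```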
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the sketch after this list for an approximate `Seq2SeqTrainingArguments` equivalent):
+ - learning_rate: 3e-05
+ - train_batch_size: 4
+ - eval_batch_size: 2
+ - seed: 8191
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.01
+ - num_epochs: 2.0
+ - mixed_precision_training: Native AMP
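
As a rough illustration only (the actual training script is not part of this repository), these settings correspond to a `Seq2SeqTrainingArguments` configuration along these lines, where batch sizes are per device and the total train batch size of 64 comes from 4 × 16 gradient accumulation:

```python
# Hedged sketch: approximate Seq2SeqTrainingArguments matching the list above.
# The output_dir is hypothetical; Adam betas/epsilon are the library defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./led-large-book-summary-continued",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # 4 * 16 = 64 effective train batch size
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    seed=8191,
    fp16=True,  # "Native AMP" mixed precision; requires a CUDA device
)
```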
config.json ADDED
@@ -0,0 +1,69 @@
+ {
+ "_name_or_path": "pszemraj/led-large-book-summary-booksum-1024-output-r2",
+ "_num_labels": 3,
+ "activation_dropout": 0.0,
+ "activation_function": "gelu",
+ "architectures": [
+ "LEDForConditionalGeneration"
+ ],
+ "attention_dropout": 0.0,
+ "attention_window": [
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024,
+ 1024
+ ],
+ "bos_token_id": 0,
+ "classif_dropout": 0.0,
+ "classifier_dropout": 0.0,
+ "d_model": 1024,
+ "decoder_attention_heads": 16,
+ "decoder_ffn_dim": 4096,
+ "decoder_layerdrop": 0.0,
+ "decoder_layers": 12,
+ "decoder_start_token_id": 2,
+ "dropout": 0.1,
+ "early_stopping": true,
+ "encoder_attention_heads": 16,
+ "encoder_ffn_dim": 4096,
+ "encoder_layerdrop": 0.0,
+ "encoder_layers": 12,
+ "eos_token_id": 2,
+ "id2label": {
+ "0": "LABEL_0",
+ "1": "LABEL_1",
+ "2": "LABEL_2"
+ },
+ "init_std": 0.02,
+ "is_encoder_decoder": true,
+ "label2id": {
+ "LABEL_0": 0,
+ "LABEL_1": 1,
+ "LABEL_2": 2
+ },
+ "length_penalty": 0.8,
+ "max_decoder_position_embeddings": 1024,
+ "max_encoder_position_embeddings": 16384,
+ "max_length": 1024,
+ "min_length": 8,
+ "model_type": "led",
+ "no_repeat_ngram_size": 3,
+ "num_beams": 4,
+ "num_hidden_layers": 12,
+ "output_past": false,
+ "pad_token_id": 1,
+ "prefix": " ",
+ "repetition_penalty": 3.5,
+ "torch_dtype": "float32",
+ "transformers_version": "4.27.4",
+ "use_cache": true,
+ "vocab_size": 50265
+ }
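
The values above encode the model's practical limits: a 16384-token encoder context, a 1024-token decoder, and a 1024-token local attention window per layer. A quick, illustrative way to inspect them:

```python
# Illustrative: inspect the LED config values shown above.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("pszemraj/led-large-book-summary-continued")
print(cfg.max_encoder_position_embeddings)  # 16384 -> maximum input tokens
print(cfg.max_decoder_position_embeddings)  # 1024  -> maximum output tokens
print(cfg.attention_window[:3])             # per-layer local attention window (1024)
```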
generation_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+ "bos_token_id": 0,
+ "decoder_start_token_id": 2,
+ "early_stopping": true,
+ "eos_token_id": 2,
+ "length_penalty": 0.8,
+ "max_length": 1024,
+ "min_length": 8,
+ "no_repeat_ngram_size": 3,
+ "num_beams": 4,
+ "pad_token_id": 1,
+ "repetition_penalty": 3.5,
+ "transformers_version": "4.27.4",
+ "use_cache": false
+ }
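
`model.generate()` picks up these defaults automatically; they can also be loaded and overridden explicitly, for example (illustrative only):

```python
# Illustrative: load the generation defaults above and override a few of them.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("pszemraj/led-large-book-summary-continued")
print(gen_cfg.num_beams, gen_cfg.no_repeat_ngram_size)  # 4, 3

gen_cfg.max_length = 512      # shorter summaries than the 1024-token default
gen_cfg.length_penalty = 1.0  # less bias toward short outputs than 0.8
# then pass it along: model.generate(**inputs, generation_config=gen_cfg)
```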
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:712c3e5d1ae3c37deb68121bfa4877b69398db016ea08e3b8bb70a9ee8660854
+ size 1839478370
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:496fb4c5b4a9477dc2216caeb08939f1566678627cd2c319089ff399741c144a
+ size 1839600557
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+ "add_prefix_space": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "errors": "replace",
+ "mask_token": {
+ "__type": "AddedToken",
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "model_max_length": 16384,
+ "pad_token": {
+ "__type": "AddedToken",
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "special_tokens_map_file": "/root/.cache/huggingface/transformers/2ad921573d53ebf0c0450d63a211e61d8e328324e84830c365abff01f2d115f1.cb2244924ab24d706b02fd7fcedaea4531566537687a539ebb94db511fd122a0",
+ "tokenizer_class": "LEDTokenizer",
+ "trim_offsets": true,
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff