pszemraj, SFconvertbot, autoevaluator (HF Staff) committed
Commit c38347f · verified · 0 Parent(s):

Super-squash branch 'main' using huggingface_hub


Co-authored-by: SFconvertbot <SFconvertbot@users.noreply.huggingface.co>
Co-authored-by: autoevaluator <autoevaluator@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,28 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,263 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - summarization
+ - pegasus
+ datasets:
+ - kmfoda/booksum
+ metrics:
+ - rouge
+ widget:
+ - text: large earthquakes along a given fault segment do not occur at random intervals
+ because it takes time to accumulate the strain energy for the rupture. The rates
+ at which tectonic plates move and accumulate strain at their boundaries are approximately
+ uniform. Therefore, in first approximation, one may expect that large ruptures
+ of the same fault segment will occur at approximately constant time intervals.
+ If subsequent main shocks have different amounts of slip across the fault, then
+ the recurrence time may vary, and the basic idea of periodic mainshocks must be
+ modified. For great plate boundary ruptures the length and slip often vary by
+ a factor of 2. Along the southern segment of the San Andreas fault the recurrence
+ interval is 145 years with variations of several decades. The smaller the standard
+ deviation of the average recurrence interval, the more specific could be the long
+ term prediction of a future mainshock.
+ example_title: earthquakes
+ - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
+ are fed into a neural network that predicts values in the reconstructed domain.
+ Then, this domain is mapped to the sensor domain where sensor measurements are
+ available as supervision. Class and Section Problems Addressed Generalization
+ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
+ Representations (Section 3) Computation & memory efficiency, representation capacity,
+ editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
+ 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
+ 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
+ in the neural field toolbox each addresses problems that arise in learning, inference,
+ and control. (Section 3). We can supervise reconstruction via differentiable forward
+ maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
+ Section 4) With appropriate network architecture choices, we can overcome neural
+ network spectral biases (blurriness) and efficiently compute derivatives and integrals
+ (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
+ and to achieve editable representations (Section 6). Collectively, these classes
+ constitute a ''toolbox'' of techniques to help solve problems with neural fields
+ There are three components in a conditional neural field: (1) An encoder or inference
+ function € that outputs the conditioning latent variable 2 given an observation
+ 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
+ a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
+ parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
+ most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
+ the inverse conditional probability to find the most probable 0 given Z: arg-
+ max P(Olz). We discuss different encoding schemes with different optimality guarantees
+ (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
+ mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
+ a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
+ prior over the sur- face in its reconstruction domain to generalize to the partial
+ observations. A neural network expresses a prior via the function space of its
+ architecture and parameters 0, and generalization is influenced by the inductive
+ bias of this function space (Section 5).'
+ example_title: scientific paper
+ - text: ' the big variety of data coming from diverse sources is one of the key properties
+ of the big data phenomenon. It is, therefore, beneficial to understand how data
+ is generated in various environments and scenarios, before looking at what should
+ be done with this data and how to design the best possible architecture to accomplish
+ this The evolution of IT architectures, described in Chapter 2, means that the
+ data is no longer processed by a few big monolith systems, but rather by a group
+ of services In parallel to the processing layer, the underlying data storage has
+ also changed and became more distributed This, in turn, required a significant
+ paradigm shift as the traditional approach to transactions (ACID) could no longer
+ be supported. On top of this, cloud computing is becoming a major approach with
+ the benefits of reducing costs and providing on-demand scalability but at the
+ same time introducing concerns about privacy, data ownership, etc In the meantime
+ the Internet continues its exponential growth: Every day both structured and unstructured
+ data is published and available for processing: To achieve competitive advantage
+ companies have to relate their corporate resources to external services, e.g.
+ financial markets, weather forecasts, social media, etc While several of the sites
+ provide some sort of API to access the data in a more orderly fashion; countless
+ sources require advanced web mining and Natural Language Processing (NLP) processing
+ techniques: Advances in science push researchers to construct new instruments
+ for observing the universe O conducting experiments to understand even better
+ the laws of physics and other domains. Every year humans have at their disposal
+ new telescopes, space probes, particle accelerators, etc These instruments generate
+ huge streams of data, which need to be stored and analyzed. The constant drive
+ for efficiency in the industry motivates the introduction of new automation techniques
+ and process optimization: This could not be done without analyzing the precise
+ data that describe these processes. As more and more human tasks are automated,
+ machines provide rich data sets, which can be analyzed in real-time to drive efficiency
+ to new levels. Finally, it is now evident that the growth of the Internet of Things
+ is becoming a major source of data. More and more of the devices are equipped
+ with significant computational power and can generate a continuous data stream
+ from their sensors. In the subsequent sections of this chapter, we will look at
+ the domains described above to see what they generate in terms of data sets. We
+ will compare the volumes but will also look at what is characteristic and important
+ from their respective points of view. 3.1 The Internet is undoubtedly the largest
+ database ever created by humans. While several well described; cleaned, and structured
+ data sets have been made available through this medium, most of the resources
+ are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
+ several examples in the areas such as opinion mining, social media analysis, e-governance,
+ etc, clearly show the potential lying in these resources. Those who can successfully
+ mine and interpret the Internet data can gain unique insight and competitive advantage
+ in their business An important area of data analytics on the edge of corporate
+ IT and the Internet is Web Analytics.'
+ example_title: data science textbook
+ - text: 'Transformer-based models have shown to be very useful for many NLP tasks.
+ However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
+ & memory complexity (where nn is sequence length). Hence, it''s computationally
+ very expensive to apply transformer-based models on long sequences n > 512n>512.
+ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
+ try to remedy this problem by approximating the full attention matrix. You can
+ checkout 🤗''s recent blog post in case you are unfamiliar with these models.
+
+ BigBird (introduced in paper) is one of such recent models to address this issue.
+ BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
+ attention) and can handle sequences up to a length of 4096 at a much lower computational
+ cost compared to BERT. It has achieved SOTA on various tasks involving very long
+ sequences such as long documents summarization, question-answering with long contexts.
+
+ BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
+ post is to give the reader an in-depth understanding of big bird implementation
+ & ease one''s life in using BigBird with 🤗Transformers. But, before going into
+ more depth, it is important to remember that the BigBird''s attention is an approximation
+ of BERT''s full attention and therefore does not strive to be better than BERT''s
+ full attention, but rather to be more efficient. It simply allows to apply transformer-based
+ models to much longer sequences since BERT''s quadratic memory requirement quickly
+ becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
+ would be preferred over block sparse attention (which we are going to discuss
+ in this post).
+
+ If you wonder why we need more compute when working with longer sequences, this
+ blog post is just right for you!
+
+ Some of the main questions one might have when working with standard BERT-like
+ attention include:
+
+ Do all tokens really have to attend to all other tokens? Why not compute attention
+ only over important tokens? How to decide what tokens are important? How to attend
+ to just a few tokens in a very efficient way? In this blog post, we will try to
+ answer those questions.
+
+ What tokens should be attended to? We will give a practical example of how attention
+ works by considering the sentence ''BigBird is now available in HuggingFace for
+ extractive question answering''. In BERT-like attention, every word would simply
+ attend to all other tokens.
+
+ Let''s think about a sensible choice of key tokens that a queried token actually
+ only should attend to by writing some pseudo-code. Will will assume that the token
+ available is queried and build a sensible list of key tokens to attend to.
+
+ >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
+ ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
+ ''question'', ''answering'']
+
+ >>> # further let''s assume, we''re trying to understand the representation of
+ ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
+ empty `set` and fill up the tokens of our interest as we proceed in this section.
+ >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
+ to attend Nearby tokens should be important because, in a sentence (sequence of
+ words), the current word is highly dependent on neighboring past & future tokens.
+ This intuition is the idea behind the concept of sliding attention.'
+ example_title: bigbird blog intro
+ inference:
+ parameters:
+ max_length: 64
+ no_repeat_ngram_size: 2
+ encoder_no_repeat_ngram_size: 3
+ repetition_penalty: 2.4
+ length_penalty: 0.5
+ num_beams: 4
+ early_stopping: true
+ model-index:
+ - name: pszemraj/pegasus-large-summary-explain
+ results:
+ - task:
+ type: summarization
+ name: Summarization
+ dataset:
+ name: kmfoda/booksum
+ type: kmfoda/booksum
+ config: kmfoda--booksum
+ split: test
+ metrics:
+ - type: rouge
+ value: 29.1023
+ name: ROUGE-1
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTFhNjg4YTFlODU5MmVjNGVmNDRmMjQ4M2YyZGNmMWRlYjBhZmVhMTY3ZTUxNDkzNjY0OGVmNWJlNmY1OTkzNCIsInZlcnNpb24iOjF9.E_rVKqB7WEerLeRq6JIVTLZ1TgmsThFQJVKh11WH1qWa-cL3766psPWDKe8mK3lNkjmwbiDW0DZlDt4dm2ATCA
+ - type: rouge
+ value: 6.2441
+ name: ROUGE-2
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDVmZmFlOTgwN2Q3ZWRkZGVkMzU1ZDRkYzU1MWMzMTk1NDM5YTU0MzFjNDljNmZlY2I2NjZmZjcyYjBkZGExZCIsInZlcnNpb24iOjF9.QnuGoMWX8cq5_ukRtiaLRLau_F9XiCjg313GC7Iu1VGK8Kj_9lzU43377VsH0fBWooA1zJjtIK0UA-YpGQQOAA
+ - type: rouge
+ value: 14.7503
+ name: ROUGE-L
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzJhNzE0YjZiZWQ4NDE1Yjg3ZGJjY2ZmYWEwYzU5MTRhYWNiNTcyODU1NzM5NTZhNjNlNmYwNDVlYmZmYjkxOCIsInZlcnNpb24iOjF9.m5BLUMefXa1KivIIE9-gYKYq5aRRbfpQWazqzXxfCsqqp38Lt0ymk6OwXSlQyB_5oksNHIDFKpJX4wjYx2i7Bw
+ - type: rouge
+ value: 27.2375
+ name: ROUGE-LSUM
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTY1OTIxMzBkMGJiZmNiNjZjYmQ2MjUwMjBkYTg5Zjc1NjVlZjllNTg0MDM1NTdhZDJlZmIwOTczOGNkZDc5YyIsInZlcnNpb24iOjF9.bThI16mvqhEuGBhdao0w8j03vv9G9Quy-ITRZzalr41zOour9it4oxEPFCvmPf-nLCQkqgWKUDEzgr6Ww8qgBg
+ - type: loss
+ value: 2.979011058807373
+ name: loss
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGM0NzM3YTI4Njg4NDY0ZjQzNTZmYTIxYzcxNDBlNzAwNTAxNDE4MTZjYmZmNzYwODU0OWQ1ZjM5YjRmMmFkZiIsInZlcnNpb24iOjF9.EPEP53AoqHz0rjVGStJI2dM7ivxFmOj572I3llWdAoejm3zO1Iq5WDArYsqOse_oLxYCgcqPmNVc5IcLW9x7Dg
+ - type: gen_len
+ value: 467.269
+ name: gen_len
+ verified: true
+ verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjgzYzU2ZjkwN2RhNzJlZmQyZTBlYmUxMTZhNzg0ODMwMjA3OTUzNTIwOWFkZWVmNjVmMTJiZmZhNWFmY2UzZCIsInZlcnNpb24iOjF9.RW5tzk2fcc_m4bgaSopRDFhSR9R8hRaYKrstXH4X5iGP_Xwvhy5Q7-igd2ACnlxIfmtdTmMxLMsvHr5oAZEwDg
+ ---
+
+
+ # pszemraj/pegasus-large-summary-explain
+
+ This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [booksum](https://github.com/salesforce/booksum) dataset for four total epochs.
+
+ It achieves the following results on the evaluation set:
+ - eval_loss: 1.1193
+ - eval_runtime: 6.6754
+ - eval_samples_per_second: 27.714
+ - eval_steps_per_second: 1.798
+ - epoch: 3.0
+ - step: 900
+
+ A 1-epoch checkpoint can be found at [pszemraj/pegasus-large-book-summary](https://huggingface.co/pszemraj/pegasus-large-book-summary), which is where the second training session started from.
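+
+ A minimal usage sketch with the 🤗 Transformers `pipeline` (assuming `transformers` is installed); the generation settings below simply mirror the `inference` parameters in this card's metadata and are a starting point, not the only valid choice:
+
+ ```python
+ from transformers import pipeline
+
+ summarizer = pipeline(
+     "summarization",
+     model="pszemraj/pegasus-large-summary-explain",
+ )
+
+ long_text = "Put the chapter or article you want explained here."
+
+ result = summarizer(
+     long_text,
+     max_length=64,
+     no_repeat_ngram_size=2,
+     encoder_no_repeat_ngram_size=3,
+     repetition_penalty=2.4,
+     length_penalty=0.5,
+     num_beams=4,
+     early_stopping=True,
+ )
+ print(result[0]["summary_text"])
+ ```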
+
+ ## Model description
+
+ - After some initial tests, it was found that models trained on the [booksum](https://github.com/salesforce/booksum) dataset seem to inherit the SparkNotes-style explanatory tone of its summaries, so the user gets a shorter and easier-to-understand version of the text rather than one that is **just** more compact.
+ - Anecdotally, this quality is favourable for learning/comprehension, because summarization datasets that simply make the information more compact (*cough* arXiv) can be so dense that the time spent trying to _comprehend_ the summary can be about the same as just reading the original material.
+
+
+ ## Intended uses & limitations
+
+ - Standard Pegasus has a maximum input length of 1024 tokens, so during training the model only saw the first 1024 tokens of each chapter and learned to produce the chapter's summary from that. Keep this in mind when using the model: for inputs longer than 1024 tokens, information near the end may be excluded from the final summary, and the model will be biased towards information presented first; a quick length check is sketched below.
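+
+ A short sketch of how to check whether an input will be cut off, using the model's tokenizer (the variable names are illustrative):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("pszemraj/pegasus-large-summary-explain")
+
+ chapter_text = "Replace this with the text you want summarized."
+ n_tokens = len(tokenizer(chapter_text).input_ids)
+
+ # model_max_length is 1024 for this tokenizer; tokens beyond it are effectively ignored
+ if n_tokens > tokenizer.model_max_length:
+     print(f"{n_tokens} tokens: only the first {tokenizer.model_max_length} will shape the summary.")
+ ```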
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
+ - learning_rate: 4e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - distributed_type: multi-GPU
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 4
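+
+ For reference, a rough sketch of equivalent `Seq2SeqTrainingArguments`; the `output_dir` and anything not listed above are illustrative assumptions, not the published configuration:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="pegasus-large-summary-explain",  # assumed; not stated in the card
+     learning_rate=4e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     gradient_accumulation_steps=2,  # 2 x 16 = effective batch size of 32
+     num_train_epochs=4,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.03,
+     seed=42,
+     # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults
+ )
+ ```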
+
+ ### Framework versions
+
+ - Transformers 4.16.2
+ - Pytorch 1.10.2+cu113
+ - Datasets 1.18.3
+ - Tokenizers 0.11.0
config.json ADDED
@@ -0,0 +1,127 @@
+ {
+ "_name_or_path": "pszemraj/pegasus-large-book-summary",
+ "activation_dropout": 0.1,
+ "activation_function": "relu",
+ "add_bias_logits": false,
+ "add_final_layer_norm": true,
+ "architectures": [
+ "PegasusForConditionalGeneration"
+ ],
+ "attention_dropout": 0.1,
+ "bos_token_id": 0,
+ "classif_dropout": 0.0,
+ "classifier_dropout": 0.0,
+ "d_model": 1024,
+ "decoder_attention_heads": 16,
+ "decoder_ffn_dim": 4096,
+ "decoder_layerdrop": 0.0,
+ "decoder_layers": 16,
+ "decoder_start_token_id": 0,
+ "dropout": 0.1,
+ "early_stopping": true,
+ "encoder_attention_heads": 16,
+ "encoder_ffn_dim": 4096,
+ "encoder_layerdrop": 0.0,
+ "encoder_layers": 16,
+ "eos_token_id": 1,
+ "extra_pos_embeddings": 1,
+ "force_bos_token_to_be_generated": false,
+ "forced_eos_token_id": 1,
+ "gradient_checkpointing": false,
+ "id2label": {
+ "0": "LABEL_0",
+ "1": "LABEL_1",
+ "2": "LABEL_2"
+ },
+ "init_std": 0.02,
+ "is_encoder_decoder": true,
+ "label2id": {
+ "LABEL_0": 0,
+ "LABEL_1": 1,
+ "LABEL_2": 2
+ },
+ "length_penalty": 3.5,
+ "max_length": 512,
+ "max_position_embeddings": 1024,
+ "min_length": 32,
+ "model_type": "pegasus",
+ "no_repeat_ngram_size": 3,
+ "normalize_before": true,
+ "normalize_embedding": false,
+ "num_beams": 5,
+ "num_hidden_layers": 16,
+ "pad_token_id": 0,
+ "scale_embedding": true,
+ "static_position_embeddings": true,
+ "task_specific_params": {
+ "summarization_aeslc": {
+ "length_penalty": 0.6,
+ "max_length": 32,
+ "max_position_embeddings": 512
+ },
+ "summarization_arxiv": {
+ "length_penalty": 0.8,
+ "max_length": 256,
+ "max_position_embeddings": 1024
+ },
+ "summarization_big_patent": {
+ "length_penalty": 0.7,
+ "max_length": 256,
+ "max_position_embeddings": 1024
+ },
+ "summarization_billsum": {
+ "length_penalty": 0.6,
+ "max_length": 256,
+ "max_position_embeddings": 1024
+ },
+ "summarization_cnn_dailymail": {
+ "length_penalty": 0.8,
+ "max_length": 128,
+ "max_position_embeddings": 1024
+ },
+ "summarization_gigaword": {
+ "length_penalty": 0.6,
+ "max_length": 32,
+ "max_position_embeddings": 128
+ },
+ "summarization_large": {
+ "length_penalty": 0.8,
+ "max_length": 256,
+ "max_position_embeddings": 1024
+ },
+ "summarization_multi_news": {
+ "length_penalty": 0.8,
+ "max_length": 256,
+ "max_position_embeddings": 1024
+ },
+ "summarization_newsroom": {
+ "length_penalty": 0.8,
+ "max_length": 128,
+ "max_position_embeddings": 512
+ },
+ "summarization_pubmed": {
+ "length_penalty": 0.8,
+ "max_length": 256,
+ "max_position_embeddings": 1024
+ },
+ "summarization_reddit_tifu": {
+ "length_penalty": 0.6,
+ "max_length": 128,
+ "max_position_embeddings": 512
+ },
+ "summarization_wikihow": {
+ "length_penalty": 0.6,
+ "max_length": 256,
+ "max_position_embeddings": 512
+ },
+ "summarization_xsum": {
+ "length_penalty": 0.8,
+ "max_length": 64,
+ "max_position_embeddings": 512
+ }
+ },
+ "torch_dtype": "float32",
+ "transformers_version": "4.16.2",
+ "use_cache": false,
+ "vocab_size": 96103
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d14b4b1a9bbdc179564a007282f80eac0bbe8491e5f4b5f65ca26851da1f432
+ size 2275264008
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7e1cad802995443c74eaaf8ec20755e0d015a6b1e8a564b6b71d0b85a4f4419
+ size 2275268647
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask_2>", "additional_special_tokens": ["<mask_1>", "<unk_2>", "<unk_3>", "<unk_4>", "<unk_5>", "<unk_6>", "<unk_7>", "<unk_8>", "<unk_9>", "<unk_10>", "<unk_11>", "<unk_12>", "<unk_13>", "<unk_14>", "<unk_15>", "<unk_16>", "<unk_17>", "<unk_18>", "<unk_19>", "<unk_20>", "<unk_21>", "<unk_22>", "<unk_23>", "<unk_24>", "<unk_25>", "<unk_26>", "<unk_27>", "<unk_28>", "<unk_29>", "<unk_30>", "<unk_31>", "<unk_32>", "<unk_33>", "<unk_34>", "<unk_35>", "<unk_36>", "<unk_37>", "<unk_38>", "<unk_39>", "<unk_40>", "<unk_41>", "<unk_42>", "<unk_43>", "<unk_44>", "<unk_45>", "<unk_46>", "<unk_47>", "<unk_48>", "<unk_49>", "<unk_50>", "<unk_51>", "<unk_52>", "<unk_53>", "<unk_54>", "<unk_55>", "<unk_56>", "<unk_57>", "<unk_58>", "<unk_59>", "<unk_60>", "<unk_61>", "<unk_62>", "<unk_63>", "<unk_64>", "<unk_65>", "<unk_66>", "<unk_67>", "<unk_68>", "<unk_69>", "<unk_70>", "<unk_71>", "<unk_72>", "<unk_73>", "<unk_74>", "<unk_75>", "<unk_76>", "<unk_77>", "<unk_78>", "<unk_79>", "<unk_80>", "<unk_81>", "<unk_82>", "<unk_83>", "<unk_84>", "<unk_85>", "<unk_86>", "<unk_87>", "<unk_88>", "<unk_89>", "<unk_90>", "<unk_91>", "<unk_92>", "<unk_93>", "<unk_94>", "<unk_95>", "<unk_96>", "<unk_97>", "<unk_98>", "<unk_99>", "<unk_100>", "<unk_101>", "<unk_102>"]}
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0015189ef36359283fec8b93cf6d9ce51bca37eb1101defc68a53b394913b96c
+ size 1912529
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"pad_token": "<pad>", "eos_token": "</s>", "unk_token": "<unk>", "mask_token": "<mask_2>", "mask_token_sent": "<mask_1>", "offset": 103, "additional_special_tokens": ["<mask_1>", "<unk_2>", "<unk_3>", "<unk_4>", "<unk_5>", "<unk_6>", "<unk_7>", "<unk_8>", "<unk_9>", "<unk_10>", "<unk_11>", "<unk_12>", "<unk_13>", "<unk_14>", "<unk_15>", "<unk_16>", "<unk_17>", "<unk_18>", "<unk_19>", "<unk_20>", "<unk_21>", "<unk_22>", "<unk_23>", "<unk_24>", "<unk_25>", "<unk_26>", "<unk_27>", "<unk_28>", "<unk_29>", "<unk_30>", "<unk_31>", "<unk_32>", "<unk_33>", "<unk_34>", "<unk_35>", "<unk_36>", "<unk_37>", "<unk_38>", "<unk_39>", "<unk_40>", "<unk_41>", "<unk_42>", "<unk_43>", "<unk_44>", "<unk_45>", "<unk_46>", "<unk_47>", "<unk_48>", "<unk_49>", "<unk_50>", "<unk_51>", "<unk_52>", "<unk_53>", "<unk_54>", "<unk_55>", "<unk_56>", "<unk_57>", "<unk_58>", "<unk_59>", "<unk_60>", "<unk_61>", "<unk_62>", "<unk_63>", "<unk_64>", "<unk_65>", "<unk_66>", "<unk_67>", "<unk_68>", "<unk_69>", "<unk_70>", "<unk_71>", "<unk_72>", "<unk_73>", "<unk_74>", "<unk_75>", "<unk_76>", "<unk_77>", "<unk_78>", "<unk_79>", "<unk_80>", "<unk_81>", "<unk_82>", "<unk_83>", "<unk_84>", "<unk_85>", "<unk_86>", "<unk_87>", "<unk_88>", "<unk_89>", "<unk_90>", "<unk_91>", "<unk_92>", "<unk_93>", "<unk_94>", "<unk_95>", "<unk_96>", "<unk_97>", "<unk_98>", "<unk_99>", "<unk_100>", "<unk_101>", "<unk_102>"], "model_max_length": 1024, "special_tokens_map_file": null, "full_tokenizer_file": null, "name_or_path": "pszemraj/pegasus-large-book-summary", "sp_model_kwargs": {}, "tokenizer_class": "PegasusTokenizer"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95d737d31c01d3e5267f63e8a2c4c2f74120914f368a49bab29c8090e141b2ac
+ size 4207