---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/pythia-160m-GGUF
This is a quantized version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) created using llama.cpp.
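GGUF files can be run with llama.cpp or any of its bindings. Below is a minimal sketch using the `llama-cpp-python` bindings; the quantization filename is a placeholder and should be matched to a file actually present in this repo:

```python
# A minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) together with huggingface_hub.
# The filename below is hypothetical; check the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/pythia-160m-GGUF",
    filename="pythia-160m.Q4_K_M.gguf",  # placeholder quantization name
)

# Pythia is a base model, so use plain text completion, not a chat template.
out = llm("Hello, I am", max_tokens=32)
print(out["choices"][0]["text"])
```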
# Original Model Card

The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.

The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.

<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>

Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**

Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>

# Pythia-160M

## Model Details

- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
  for training procedure, config files, and details on how to use.
  [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
  details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
  Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
  Please read the existing *Pythia* documentation before asking about it in the
  EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).

<figure>

| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>

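For reference, the non-embedding parameter counts above follow directly from the layer count and model width. A quick sanity check, assuming standard GPT-NeoX-style blocks (fused QKV attention, a 4x MLP, biases throughout, two LayerNorms per layer, and one final LayerNorm); the breakdown is an assumption, not from the card itself:

```python
# Sanity check: reproduce the table's non-embedding parameter counts,
# assuming GPT-NeoX-style blocks with biases and per-layer LayerNorms.
def non_embedding_params(layers: int, d: int) -> int:
    attn = 4 * d * d + 4 * d   # QKV + output projection, with biases
    mlp = 8 * d * d + 5 * d    # 4x up- and down-projection, with biases
    norms = 2 * 2 * d          # two LayerNorms (weight + bias) per layer
    return layers * (attn + mlp + norms) + 2 * d  # plus final LayerNorm

assert non_embedding_params(6, 512) == 18_915_328     # Pythia-70M
assert non_embedding_params(12, 768) == 85_056_000    # Pythia-160M
assert non_embedding_params(24, 1024) == 302_311_424  # Pythia-410M
```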
## Uses and Limitations

### Intended Use

The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.

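The full list of branch names follows mechanically from that scheme; a small sketch:

```python
# Enumerate the 154 checkpoint branch names described above:
# step0, ten log-spaced steps (1, 2, 4, ..., 512), then every
# 1000 steps from step1000 through step143000.
steps = [0] + [2**i for i in range(10)] + list(range(1000, 144000, 1000))
branches = [f"step{s}" for s in steps]

assert len(branches) == 154
assert branches[-1] == "step143000"  # identical to the `main` branch
```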
You may also further fine-tune and adapt Pythia-160M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.

### Out-of-scope use

The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.

Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.

Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.

### Limitations and biases

The core functionality of a large language model is to take a string of text
and predict the next token. The token the model selects need not produce the
most “accurate” text. Never rely on Pythia-160M to produce factually accurate
output.

This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.

If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-160M.

### Quickstart

Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the model and tokenizer from a specific checkpoint branch (revision).
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).

## Training

### Training data

[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-160M.

### Training procedure

All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.

All *Pythia* models were trained for 143,000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).

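These figures are internally consistent, as a quick arithmetic check shows:

```python
# Consistency check of the training numbers above.
batch_tokens = 2_097_152  # 2M tokens per step
total_steps = 143_000
assert batch_tokens * total_steps == 299_892_736_000  # tokens seen in training
assert batch_tokens * 1_000 == 2_097_152_000          # tokens between checkpoints
```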
## Evaluations

All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.

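To run similar evaluations yourself, something like the following should work; note that the `simple_evaluate` entry point and the `hf` backend are assumptions about a recent lm-eval release (older versions used a `main.py` script instead):

```python
# A minimal sketch, assuming a recent lm-eval release (0.4+) that
# exposes simple_evaluate and the "hf" model backend.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai", "piqa"],
)
print(results["results"])
```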
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>

<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>

<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>

<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>

<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>

## Changelog

This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.

- All model sizes are now trained with a uniform batch size of 2M tokens.
  Previously, the models of size 160M, 410M, and 1.4B parameters were trained
  with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
  128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
  models of size 2.8B parameters or smaller had a learning rate (LR) schedule
  which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
  12B models all used an LR schedule which decayed to a minimum LR of 0. In
  the redone training runs, we rectified this inconsistency: all models were
  trained with an LR decaying to a minimum of 0.1× their maximum LR.

### Naming convention and parameter count

*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.

<figure style="width:32em">

| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
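
The gap between total and non-embedding parameters is accounted for by the embedding matrices. A small illustrative check, assuming untied input and output embeddings (as in GPT-NeoX), so that total = non-embedding + 2 × vocab × d_model; back-solving recovers the padded vocabulary size:

```python
# Illustrative check (not from the card): with untied input/output
# embeddings, total = non_embedding + 2 * vocab * d_model.
def padded_vocab(total: int, non_embedding: int, d_model: int) -> int:
    return (total - non_embedding) // (2 * d_model)

assert padded_vocab(70_426_624, 18_915_328, 512) == 50_304     # 70M
assert padded_vocab(162_322_944, 85_056_000, 768) == 50_304    # 160M
assert padded_vocab(405_334_016, 302_311_424, 1024) == 50_304  # 410M
# The largest models pad the vocabulary further, e.g.
# padded_vocab(11_846_072_320, 11_327_027_200, 5120) == 50_688 for 12B.
```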