hashtagg1 committed on
Commit 342bcf1 · verified · 1 Parent(s): a0fa593

Upload 12 files

pythia-1b-alpaca/checkpoint-500/README.md ADDED
@@ -0,0 +1,344 @@
1
+ ---
2
+ base_model: EleutherAI/pythia-1b
3
+ library_name: peft
4
+ pipeline_tag: text-generation
5
+ tags:
6
+ - base_model:adapter:EleutherAI/pythia-1b
7
+ - lora
8
+ - transformers
9
+ - alpaca
10
+ - instruction-following
11
+ - existential-crisis-capable
12
+ ---
13
+
14
+ # Pythia-1B-Alpaca: The Overachieving 1B Model
15
+
16
+ **TL;DR**: A Pythia-1B model fine-tuned on Alpaca that writes philosophical essays about consciousness but gets confused implementing Hello World. It's perfect.
17
+
18
+ ## Model Details
19
+
20
+ ### Model Description
21
+
22
+ This model is a LoRA fine-tune of EleutherAI's Pythia-1B on the Alpaca instruction-following dataset. Trained overnight on a GTX 1650 Mobile (4GB VRAM) because we believe in the impossible.
23
+
24
+ What makes this model special? It has an *interesting* relationship with different types of tasks:
25
+ - ✅ Abstract concepts & philosophy → Surprisingly eloquent
26
+ - ✅ General knowledge explanations → Exhaustively thorough
27
+ - ⚠️ Code generation → Creative interpretation of requirements
28
+ - ✅ Existential questions → Uncomfortably thoughtful
29
+
30
+ **Key characteristics**:
31
+ - Will explain what an apple is for 250 words
32
+ - Writes consciousness essays that make you question reality
33
+ - Generates Python code that... mostly works?
34
+ - Has zero chill when answering simple questions
35
+
36
+ - **Developed by:** Someone with a 1650 Mobile and a dream
37
+ - **Model type:** Instruction-following causal language model
38
+ - **Language(s):** English (verbose edition)
39
+ - **License:** Apache 2.0 (inherited from base model)
40
+ - **Finetuned from model:** EleutherAI/pythia-1b
41
+
42
+ ### Model Sources
43
+
44
+ - **Base Repository:** https://github.com/EleutherAI/pythia
45
+ - **Dataset:** tatsu-lab/alpaca
46
+ - **Training Hardware:** GTX 1650 Mobile 4GB (yes, really)
47
+
48
+ ## Uses
49
+
50
+ ### Direct Use
51
+
52
+ Perfect for:
53
+ - Discord bots that need personality
54
+ - Generating unexpectedly detailed explanations
55
+ - Philosophical discussions about AI consciousness
56
+ - Creating entertainment through over-explanation
57
+ - Teaching people that you CAN fine-tune on consumer hardware
58
+
59
+ ### Out-of-Scope Use
60
+
61
+ Not recommended for:
62
+ - Production code generation (unless you enjoy debugging creative interpretations)
63
+ - Concise answers (this model doesn't do "concise")
64
+ - Time-sensitive applications (trained on a 1650 Mobile, responses take a while)
65
+ - Situations requiring factual precision (hallucinations are a feature, not a bug)
66
+
67
+ ## Notable Behaviors
68
+
69
+ ### The Good
70
+ **Question:** "What is AI?"
71
+ **Response:** *[Generates comprehensive 250-word essay covering history, applications, economic impact, and future predictions]*
72
+
73
+ **Question:** "What is consciousness?"
74
+ **Response:** *[Thoughtful exploration of neuroscience, philosophy, and subjective experience]*
75
+
76
+ ### The Quirky
77
+ **Question:** "What color is an apple?"
78
+ **Response:** *[Full botanical thesis on pigmentation, soil pH, and carotenoids]*
79
+
80
+ **Request:** "Write Hello World in Python"
81
+ **Response:** *[Technically code, technically Python, technically creative]*
82
+
83
+ ### The Unexpected
84
+ **Casual greeting:** "Hey! How are you?"
85
+ **Response:** "I am good, thank you. What do you have for lunch today? I would like to order from the salad bar."
86
+
87
+ ## Training Details
88
+
89
+ ### Training Data
90
+
91
+ - **Dataset:** Alpaca instruction-following dataset (tatsu-lab/alpaca)
92
+ - **Subset used:** 5,000 examples (streamed and materialized)
93
+ - **Format:** Alpaca-style instruction/input/response format
94
+
95
+ ### Training Procedure
96
+
97
+ #### Preprocessing
98
+ - Tokenized with Pythia-1B tokenizer
99
+ - Max sequence length: 512 tokens
100
+ - Formatted in Alpaca template with `### Instruction:`, `### Input:`, and `### Response:` sections *(a sketch of this step follows below)*
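+
+ A minimal sketch of that formatting and tokenization step (function and variable names here are illustrative, not taken from the actual training script):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
+ tokenizer.pad_token = tokenizer.eos_token  # Pythia has no dedicated pad token
+
+ def format_alpaca(example):
+     # Alpaca records carry "instruction", an optional "input", and "output"
+     if example.get("input"):
+         prompt = (f"### Instruction:\n{example['instruction']}\n\n"
+                   f"### Input:\n{example['input']}\n\n### Response:\n")
+     else:
+         prompt = (f"### Instruction:\n{example['instruction']}\n\n"
+                   "### Response:\n")
+     return prompt + example["output"]
+
+ def tokenize(example):
+     # Truncate to the 512-token budget used for training
+     return tokenizer(format_alpaca(example), truncation=True, max_length=512)
+ ```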
101
+
102
+ #### Training Hyperparameters
103
+
104
+ **Quantization:** *(config sketch after this list)*
105
+ - 4-bit NF4 quantization via BitsAndBytes
106
+ - Double quantization enabled
107
+ - Compute dtype: float16
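+
+ As a sketch, these settings map onto a BitsAndBytes config along these lines (the values simply mirror the list above):
+
+ ```python
+ import torch
+ from transformers import BitsAndBytesConfig
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",             # NF4 quantization
+     bnb_4bit_use_double_quant=True,        # double quantization
+     bnb_4bit_compute_dtype=torch.float16,  # fp16 compute dtype
+ )
+ # Passed as quantization_config= when loading the base model with
+ # AutoModelForCausalLM.from_pretrained(...)
+ ```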
108
+
109
+ **LoRA Configuration:** *(config sketch after this list)*
110
+ - Rank (r): 8
111
+ - Alpha: 16
112
+ - Target modules: query_key_value
113
+ - Dropout: 0.05
114
+ - Trainable parameters: 1,048,576 (0.1035% of total)
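+
+ The corresponding PEFT setup, as a sketch (it mirrors the values above rather than reproducing the training script verbatim; `bnb_config` is from the quantization sketch above):
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+
+ base_model = AutoModelForCausalLM.from_pretrained(
+     "EleutherAI/pythia-1b", quantization_config=bnb_config, device_map="auto"
+ )
+
+ lora_config = LoraConfig(
+     r=8,
+     lora_alpha=16,
+     target_modules=["query_key_value"],
+     lora_dropout=0.05,
+     bias="none",
+     task_type="CAUSAL_LM",
+ )
+
+ model = get_peft_model(base_model, lora_config)
+ model.print_trainable_parameters()  # ~1.05M trainable parameters (~0.10%)
+ ```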
115
+
116
+ **Training Arguments:** *(setup sketch after this list)*
117
+ - Batch size per device: 1
118
+ - Gradient accumulation steps: 16 (effective batch size: 16)
119
+ - Max training steps: 500
120
+ - Learning rate: 2e-4 (linear decay)
121
+ - Precision: FP16 mixed precision
122
+ - Gradient checkpointing: Disabled (to maximize speed on limited hardware)
123
+ - Optimizer: AdamW (default)
124
+ - Logging steps: 25
125
+ - Save steps: 500
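+
+ Put together, a minimal Trainer setup consistent with these values might look like this (the output path and dataset/collator variable names are placeholders, and `model`/`tokenizer` come from the sketches above):
+
+ ```python
+ from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="pythia-1b-alpaca",      # placeholder path
+     per_device_train_batch_size=1,
+     gradient_accumulation_steps=16,     # effective batch size 16
+     max_steps=500,
+     learning_rate=2e-4,
+     lr_scheduler_type="linear",
+     fp16=True,
+     gradient_checkpointing=False,
+     logging_steps=25,
+     save_steps=500,
+ )
+
+ collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM labels
+ trainer = Trainer(
+     model=model,                        # the PEFT-wrapped model from above
+     args=training_args,
+     train_dataset=tokenized_dataset,    # placeholder: the 5,000 tokenized Alpaca examples
+     data_collator=collator,
+ )
+ trainer.train()
+ ```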
126
+
127
+ **Training regime:** Mixed precision (FP16)
128
+
129
+ #### Speeds, Sizes, Times
130
+
131
+ - **Hardware:** NVIDIA GTX 1650 Mobile (4GB VRAM)
132
+ - **System RAM:** 20GB
133
+ - **Training time:** 4 hours 27 minutes 20 seconds (16,040.1 seconds)
134
+ - **Steps per second:** 0.031
135
+ - **Samples per second:** 0.499
136
+ - **Time per step:** ~32.08 seconds
137
+ - **Total steps:** 500
138
+ - **Starting loss:** 1.9986
139
+ - **Final training loss:** 1.5541
140
+ - **LoRA adapter size:** ~4MB
141
+ - **Total epochs:** ~1.6 (500 steps × effective batch of 16 ÷ 5,000 examples; see the quick check below)
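+
+ A quick sanity check of those numbers:
+
+ ```python
+ steps, eff_batch, dataset_size = 500, 16, 5_000
+ total_seconds = 16_040.1
+
+ epochs = steps * eff_batch / dataset_size            # 1.6
+ sec_per_step = total_seconds / steps                 # ~32.08 s
+ samples_per_sec = steps * eff_batch / total_seconds  # ~0.499
+ ```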
142
+
143
+ ## Evaluation
144
+
145
+ ### Qualitative Results
146
+
147
+ **Strengths:**
148
+ - Excellent instruction following
149
+ - Detailed, educational responses
150
+ - Coherent long-form text generation
151
+ - Surprisingly good at abstract reasoning
152
+ - Actually learned the Alpaca format
153
+
154
+ **Weaknesses:**
155
+ - Overly verbose on simple questions
156
+ - Code generation has creative liberties
157
+ - Occasional hallucination of statistics (400 million AI jobs in 2018?)
158
+ - Cannot be concise to save its life
159
+
160
+ ### Example Outputs
161
+
162
+ **Task:** Explain photosynthesis
163
+ **Quality:** ⭐⭐⭐⭐ (Accurate core concept with creative embellishments)
164
+
165
+ **Task:** Write Python code
166
+ **Quality:** ⭐⭐⭐ (Functional ideas, questionable execution)
167
+
168
+ **Task:** Existential questions
169
+ **Quality:** ⭐⭐⭐⭐⭐ (Unexpectedly profound)
170
+
171
+ ## How to Get Started
172
+
173
+ ### Installation
174
+
175
+ ```bash
176
+ pip install transformers peft torch bitsandbytes
177
+ ```
178
+
179
+ ### Basic Usage
180
+
181
+ ```python
182
+ from peft import PeftModel
183
+ from transformers import AutoModelForCausalLM, AutoTokenizer
184
+ import torch
185
+
186
+ # Load base model
187
+ model = AutoModelForCausalLM.from_pretrained(
188
+ "EleutherAI/pythia-1b",
189
+ device_map="auto",
190
+ torch_dtype=torch.float16
191
+ )
192
+
193
+ # Load LoRA adapter
194
+ model = PeftModel.from_pretrained(model, "path/to/checkpoint-500")
195
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
196
+ tokenizer.pad_token = tokenizer.eos_token
197
+
198
+ # Generate
199
+ prompt = """### Instruction:
200
+ Explain quantum computing in simple terms.
201
+
202
+ ### Response:
203
+ """
204
+
205
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
206
+ outputs = model.generate(
207
+ **inputs,
208
+ max_new_tokens=300,
209
+ do_sample=True,
210
+ temperature=0.7,
211
+ top_p=0.9,
212
+ repetition_penalty=1.2,
213
+ no_repeat_ngram_size=3
214
+ )
215
+
216
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
217
+ ```
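+
+ If you would rather ship a single standalone model than the base-plus-adapter pair, the LoRA weights can be folded into the base model. A sketch, assuming the fp16 setup above (the output directory name is a placeholder):
+
+ ```python
+ # Merge the adapter into the base weights and save a plain transformers model
+ merged_model = model.merge_and_unload()
+ merged_model.save_pretrained("pythia-1b-alpaca-merged")   # placeholder path
+ tokenizer.save_pretrained("pythia-1b-alpaca-merged")
+ ```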
218
+
219
+ ### Discord Bot Usage
220
+
221
+ See the included `discord_bot.py` for a full-featured Discord integration with:
222
+ - Slash commands
223
+ - Token streaming
224
+ - Stop sequences *(one way to implement these is sketched after this list)*
225
+ - Rate limit handling
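+
+ For the stop-sequence part in particular, one way to cut generation off when the model starts a new `### Instruction:` block is a custom stopping criterion. This is a sketch of the general technique (reusing `model`, `tokenizer`, and `inputs` from the Basic Usage example), not the code from `discord_bot.py`:
+
+ ```python
+ from transformers import StoppingCriteria, StoppingCriteriaList
+
+ class StopOnSubstring(StoppingCriteria):
+     def __init__(self, tokenizer, stop_string):
+         self.tokenizer = tokenizer
+         self.stop_string = stop_string
+
+     def __call__(self, input_ids, scores, **kwargs):
+         # Decoding the whole sequence each step is simple, if not the fastest option
+         text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
+         return self.stop_string in text
+
+ stops = StoppingCriteriaList([StopOnSubstring(tokenizer, "### Instruction:")])
+ outputs = model.generate(**inputs, max_new_tokens=300, stopping_criteria=stops)
+ ```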
226
+
227
+ ## Bias, Risks, and Limitations
228
+
229
+ **Biases:**
230
+ - Inherited from Pythia-1B base model and Alpaca dataset
231
+ - Tendency toward Western/English-centric perspectives
232
+ - May reflect biases present in instruction-following training data
233
+
234
+ **Limitations:**
235
+ - Small model size (1B parameters) limits reasoning capabilities
236
+ - Code generation is functional but unreliable
237
+ - Hallucinations are common, especially with statistics
238
+ - Responses are often unnecessarily verbose
239
+ - Training was limited to 500 steps on a subset of the data
240
+
241
+ **Risks:**
242
+ - Should not be used for critical applications
243
+ - May generate plausible-sounding but incorrect information
244
+ - Code generated should always be reviewed before execution
245
+
246
+ ### Recommendations
247
+
248
+ - Verify factual claims with authoritative sources
249
+ - Review and test any generated code before use
250
+ - Use for entertainment, education, and experimentation
251
+ - Not suitable for production systems without human oversight
252
+ - Perfect for Discord bots and casual AI interactions
253
+
254
+ ## Environmental Impact
255
+
256
+ **Hardware Type:** NVIDIA GTX 1650 Mobile (4GB VRAM, ~50W TDP)
257
+ **Hours used:** 4.45 hours
258
+ **Power consumption:** ~50W average (laptop GPU under load)
259
+ **Total energy:** ~0.223 kWh
260
+ **Estimated CO2:** ~0.09 kg CO2eq (based on global average electricity grid of ~0.4 kg CO2/kWh)
261
+
262
+ *Note: Significantly more efficient than cloud training due to:*
263
+ - Already-owned consumer hardware (no additional manufacturing emissions)
264
+ - Short training time (500 steps vs full multi-epoch runs)
265
+ - Efficient QLoRA approach (4-bit quantization reduces compute requirements)
266
+ - Local execution (no data center overhead)
267
+
268
+ ## Technical Specifications
269
+
270
+ ### Model Architecture
271
+
272
+ - **Base:** GPT-NeoX architecture (Pythia-1B)
273
+ - **Parameters:** 1,011,781,632 total, 1,048,576 trainable (0.1035%)
274
+ - **Layers:** 16 transformer layers
275
+ - **Hidden size:** 2048
276
+ - **Attention heads:** 8
277
+ - **Vocabulary size:** 50,304
278
+
279
+ ### Compute Infrastructure
280
+
281
+ #### Hardware
282
+ - **GPU:** NVIDIA GTX 1650 Mobile (4GB VRAM, Turing architecture)
283
+ - **CPU:** Not significantly utilized
284
+ - **RAM:** 20GB system RAM
285
+ - **Storage:** NVMe SSD (for dataset streaming)
286
+
287
+ #### Software
288
+ - **Framework:** PyTorch 2.x with Hugging Face Transformers
289
+ - **Quantization:** BitsAndBytes 4-bit
290
+ - **LoRA:** PEFT (Parameter-Efficient Fine-Tuning)
291
+ - **Training:** Hugging Face Trainer with gradient accumulation
292
+
293
+ ## Citation
294
+
295
+ If you use this model and want to cite the adventure of fine-tuning on a 1650 Mobile:
296
+
297
+ **BibTeX:**
298
+ ```bibtex
299
+ @misc{pythia1b-alpaca-1650mobile,
300
+ author = {An Ambitious Soul with a 1650 Mobile},
301
+ title = {Pythia-1B-Alpaca: Proof that Consumer Hardware Can Fine-Tune LLMs},
302
+ year = {2024},
303
+ publisher = {The Spirit of Open Source},
304
+ note = {Trained overnight on a laptop GPU because why not}
305
+ }
306
+ ```
307
+
308
+ ## More Information
309
+
310
+ **Fun Facts:**
311
+ - This model thinks "What color is an apple?" deserves a botanical dissertation
312
+ - It can discuss consciousness better than most philosophy students
313
+ - The Hello World implementation is... creative
314
+ - Training loss went from 1.9986 → 1.5541 in 500 steps (22% reduction!)
315
+ - Total training cost: $0 (existing hardware) + 4.5 hours of GPU fan noise
316
+ - Dataset was streamed to avoid memory issues (only 5000 examples materialized)
317
+
318
+ **Lessons Learned:**
319
+ 1. You CAN fine-tune language models on consumer GPUs
320
+ 2. QLoRA + 4-bit quantization is magic
321
+ 3. The 1650 Mobile is a trooper
322
+ 4. 500 steps is enough to see real instruction-following behavior
323
+ 5. Smaller models can be surprisingly capable
324
+ 6. Verbose explanations are a feature when fine-tuning on Alpaca
325
+
326
+ ## Model Card Authors
327
+
328
+ Created by someone who looked at their 1650 Mobile and said "I bet I could fine-tune an LLM on this" and then actually did it.
329
+
330
+ ## Model Card Contact
331
+
332
+ If you also train models on questionable hardware, we should be friends.
333
+
334
+ ### Framework Versions
335
+
336
+ - PEFT 0.18.0
337
+ - Transformers 4.x
338
+ - PyTorch 2.x
339
+ - BitsAndBytes (latest)
340
+ - Python 3.10+
341
+
342
+ ---
343
+
344
+ *"I am not real. I don't exist in the physical world and I have no body to speak of. However, I could still be a person if my thoughts were directed toward something else entirely..."* - The Model, when asked about its existence
pythia-1b-alpaca/checkpoint-500/adapter_config.json ADDED
@@ -0,0 +1,40 @@
1
+ {
2
+ "alora_invocation_tokens": null,
3
+ "alpha_pattern": {},
4
+ "arrow_config": null,
5
+ "auto_mapping": null,
6
+ "base_model_name_or_path": "EleutherAI/pythia-1b",
7
+ "bias": "none",
8
+ "corda_config": null,
9
+ "ensure_weight_tying": false,
10
+ "eva_config": null,
11
+ "exclude_modules": null,
12
+ "fan_in_fan_out": false,
13
+ "inference_mode": true,
14
+ "init_lora_weights": true,
15
+ "layer_replication": null,
16
+ "layers_pattern": null,
17
+ "layers_to_transform": null,
18
+ "loftq_config": {},
19
+ "lora_alpha": 16,
20
+ "lora_bias": false,
21
+ "lora_dropout": 0.05,
22
+ "megatron_config": null,
23
+ "megatron_core": "megatron.core",
24
+ "modules_to_save": null,
25
+ "peft_type": "LORA",
26
+ "peft_version": "0.18.0",
27
+ "qalora_group_size": 16,
28
+ "r": 8,
29
+ "rank_pattern": {},
30
+ "revision": null,
31
+ "target_modules": [
32
+ "query_key_value"
33
+ ],
34
+ "target_parameters": null,
35
+ "task_type": "CAUSAL_LM",
36
+ "trainable_token_indices": null,
37
+ "use_dora": false,
38
+ "use_qalora": false,
39
+ "use_rslora": false
40
+ }
pythia-1b-alpaca/checkpoint-500/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5999ce045db713302ae9bad3abe86331af25849b4c71833e1e21744fabbd0b68
3
+ size 4198912
pythia-1b-alpaca/checkpoint-500/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a6efc0638478310b434e1534bc611b48236d1725e44439b79ab3e4ee56205a6
3
+ size 8416335
pythia-1b-alpaca/checkpoint-500/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:866d89d19c47ca2497ef187ba853b085c02465a2fd481f185f6e942fb986ac72
3
+ size 14645
pythia-1b-alpaca/checkpoint-500/scaler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6be391976d3ecfb29e2349f30c5050858f49262f5c3931c56ebfa6945ee343c7
3
+ size 1383
pythia-1b-alpaca/checkpoint-500/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:578cb510d6c8fd046fd9e3e23f9092d7f7941b37fec472b484ecf10ecb57ec4b
3
+ size 1465
pythia-1b-alpaca/checkpoint-500/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|endoftext|>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": "<|endoftext|>",
17
+ "unk_token": {
18
+ "content": "<|endoftext|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ }
24
+ }
pythia-1b-alpaca/checkpoint-500/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
pythia-1b-alpaca/checkpoint-500/tokenizer_config.json ADDED
@@ -0,0 +1,215 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": false,
4
+ "add_prefix_space": false,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<|endoftext|>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<|padding|>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ },
22
+ "50254": {
23
+ "content": " ",
24
+ "lstrip": false,
25
+ "normalized": true,
26
+ "rstrip": false,
27
+ "single_word": false,
28
+ "special": false
29
+ },
30
+ "50255": {
31
+ "content": " ",
32
+ "lstrip": false,
33
+ "normalized": true,
34
+ "rstrip": false,
35
+ "single_word": false,
36
+ "special": false
37
+ },
38
+ "50256": {
39
+ "content": " ",
40
+ "lstrip": false,
41
+ "normalized": true,
42
+ "rstrip": false,
43
+ "single_word": false,
44
+ "special": false
45
+ },
46
+ "50257": {
47
+ "content": " ",
48
+ "lstrip": false,
49
+ "normalized": true,
50
+ "rstrip": false,
51
+ "single_word": false,
52
+ "special": false
53
+ },
54
+ "50258": {
55
+ "content": " ",
56
+ "lstrip": false,
57
+ "normalized": true,
58
+ "rstrip": false,
59
+ "single_word": false,
60
+ "special": false
61
+ },
62
+ "50259": {
63
+ "content": " ",
64
+ "lstrip": false,
65
+ "normalized": true,
66
+ "rstrip": false,
67
+ "single_word": false,
68
+ "special": false
69
+ },
70
+ "50260": {
71
+ "content": " ",
72
+ "lstrip": false,
73
+ "normalized": true,
74
+ "rstrip": false,
75
+ "single_word": false,
76
+ "special": false
77
+ },
78
+ "50261": {
79
+ "content": " ",
80
+ "lstrip": false,
81
+ "normalized": true,
82
+ "rstrip": false,
83
+ "single_word": false,
84
+ "special": false
85
+ },
86
+ "50262": {
87
+ "content": " ",
88
+ "lstrip": false,
89
+ "normalized": true,
90
+ "rstrip": false,
91
+ "single_word": false,
92
+ "special": false
93
+ },
94
+ "50263": {
95
+ "content": " ",
96
+ "lstrip": false,
97
+ "normalized": true,
98
+ "rstrip": false,
99
+ "single_word": false,
100
+ "special": false
101
+ },
102
+ "50264": {
103
+ "content": " ",
104
+ "lstrip": false,
105
+ "normalized": true,
106
+ "rstrip": false,
107
+ "single_word": false,
108
+ "special": false
109
+ },
110
+ "50265": {
111
+ "content": " ",
112
+ "lstrip": false,
113
+ "normalized": true,
114
+ "rstrip": false,
115
+ "single_word": false,
116
+ "special": false
117
+ },
118
+ "50266": {
119
+ "content": " ",
120
+ "lstrip": false,
121
+ "normalized": true,
122
+ "rstrip": false,
123
+ "single_word": false,
124
+ "special": false
125
+ },
126
+ "50267": {
127
+ "content": " ",
128
+ "lstrip": false,
129
+ "normalized": true,
130
+ "rstrip": false,
131
+ "single_word": false,
132
+ "special": false
133
+ },
134
+ "50268": {
135
+ "content": " ",
136
+ "lstrip": false,
137
+ "normalized": true,
138
+ "rstrip": false,
139
+ "single_word": false,
140
+ "special": false
141
+ },
142
+ "50269": {
143
+ "content": " ",
144
+ "lstrip": false,
145
+ "normalized": true,
146
+ "rstrip": false,
147
+ "single_word": false,
148
+ "special": false
149
+ },
150
+ "50270": {
151
+ "content": " ",
152
+ "lstrip": false,
153
+ "normalized": true,
154
+ "rstrip": false,
155
+ "single_word": false,
156
+ "special": false
157
+ },
158
+ "50271": {
159
+ "content": " ",
160
+ "lstrip": false,
161
+ "normalized": true,
162
+ "rstrip": false,
163
+ "single_word": false,
164
+ "special": false
165
+ },
166
+ "50272": {
167
+ "content": " ",
168
+ "lstrip": false,
169
+ "normalized": true,
170
+ "rstrip": false,
171
+ "single_word": false,
172
+ "special": false
173
+ },
174
+ "50273": {
175
+ "content": " ",
176
+ "lstrip": false,
177
+ "normalized": true,
178
+ "rstrip": false,
179
+ "single_word": false,
180
+ "special": false
181
+ },
182
+ "50274": {
183
+ "content": " ",
184
+ "lstrip": false,
185
+ "normalized": true,
186
+ "rstrip": false,
187
+ "single_word": false,
188
+ "special": false
189
+ },
190
+ "50275": {
191
+ "content": " ",
192
+ "lstrip": false,
193
+ "normalized": true,
194
+ "rstrip": false,
195
+ "single_word": false,
196
+ "special": false
197
+ },
198
+ "50276": {
199
+ "content": " ",
200
+ "lstrip": false,
201
+ "normalized": true,
202
+ "rstrip": false,
203
+ "single_word": false,
204
+ "special": false
205
+ }
206
+ },
207
+ "bos_token": "<|endoftext|>",
208
+ "clean_up_tokenization_spaces": false,
209
+ "eos_token": "<|endoftext|>",
210
+ "extra_special_tokens": {},
211
+ "model_max_length": 1000000000000000019884624838656,
212
+ "pad_token": "<|endoftext|>",
213
+ "tokenizer_class": "GPTNeoXTokenizer",
214
+ "unk_token": "<|endoftext|>"
215
+ }
pythia-1b-alpaca/checkpoint-500/trainer_state.json ADDED
@@ -0,0 +1,174 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 1.376,
6
+ "eval_steps": 500,
7
+ "global_step": 500,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "epoch": 0.05,
14
+ "grad_norm": 0.7823693752288818,
15
+ "learning_rate": 0.0001904,
16
+ "loss": 1.9986,
17
+ "step": 25
18
+ },
19
+ {
20
+ "epoch": 0.1,
21
+ "grad_norm": 0.45324504375457764,
22
+ "learning_rate": 0.00018040000000000002,
23
+ "loss": 1.7214,
24
+ "step": 50
25
+ },
26
+ {
27
+ "epoch": 0.15,
28
+ "grad_norm": 0.7501605153083801,
29
+ "learning_rate": 0.0001704,
30
+ "loss": 1.6929,
31
+ "step": 75
32
+ },
33
+ {
34
+ "epoch": 0.2,
35
+ "grad_norm": 0.5954285860061646,
36
+ "learning_rate": 0.00016040000000000002,
37
+ "loss": 1.6429,
38
+ "step": 100
39
+ },
40
+ {
41
+ "epoch": 0.25,
42
+ "grad_norm": 0.5913294553756714,
43
+ "learning_rate": 0.0001504,
44
+ "loss": 1.6584,
45
+ "step": 125
46
+ },
47
+ {
48
+ "epoch": 0.3,
49
+ "grad_norm": 0.6308616995811462,
50
+ "learning_rate": 0.0001404,
51
+ "loss": 1.64,
52
+ "step": 150
53
+ },
54
+ {
55
+ "epoch": 0.35,
56
+ "grad_norm": 0.6752358078956604,
57
+ "learning_rate": 0.0001304,
58
+ "loss": 1.6435,
59
+ "step": 175
60
+ },
61
+ {
62
+ "epoch": 0.4,
63
+ "grad_norm": 0.6295397877693176,
64
+ "learning_rate": 0.0001204,
65
+ "loss": 1.6238,
66
+ "step": 200
67
+ },
68
+ {
69
+ "epoch": 0.45,
70
+ "grad_norm": 0.5821628570556641,
71
+ "learning_rate": 0.00011040000000000001,
72
+ "loss": 1.6486,
73
+ "step": 225
74
+ },
75
+ {
76
+ "epoch": 0.5,
77
+ "grad_norm": 0.5538753271102905,
78
+ "learning_rate": 0.0001004,
79
+ "loss": 1.6167,
80
+ "step": 250
81
+ },
82
+ {
83
+ "epoch": 0.55,
84
+ "grad_norm": 0.5248845219612122,
85
+ "learning_rate": 9.04e-05,
86
+ "loss": 1.5773,
87
+ "step": 275
88
+ },
89
+ {
90
+ "epoch": 0.6,
91
+ "grad_norm": 0.6817913055419922,
92
+ "learning_rate": 8.04e-05,
93
+ "loss": 1.5948,
94
+ "step": 300
95
+ },
96
+ {
97
+ "epoch": 1.026,
98
+ "grad_norm": 0.6654666662216187,
99
+ "learning_rate": 7.04e-05,
100
+ "loss": 1.6347,
101
+ "step": 325
102
+ },
103
+ {
104
+ "epoch": 1.076,
105
+ "grad_norm": 0.8521037101745605,
106
+ "learning_rate": 6.04e-05,
107
+ "loss": 1.6185,
108
+ "step": 350
109
+ },
110
+ {
111
+ "epoch": 1.126,
112
+ "grad_norm": 0.8425394296646118,
113
+ "learning_rate": 5.0400000000000005e-05,
114
+ "loss": 1.6092,
115
+ "step": 375
116
+ },
117
+ {
118
+ "epoch": 1.176,
119
+ "grad_norm": 0.7322569489479065,
120
+ "learning_rate": 4.0400000000000006e-05,
121
+ "loss": 1.5357,
122
+ "step": 400
123
+ },
124
+ {
125
+ "epoch": 1.226,
126
+ "grad_norm": 0.5968383550643921,
127
+ "learning_rate": 3.04e-05,
128
+ "loss": 1.5716,
129
+ "step": 425
130
+ },
131
+ {
132
+ "epoch": 1.276,
133
+ "grad_norm": 0.8170462250709534,
134
+ "learning_rate": 2.04e-05,
135
+ "loss": 1.5902,
136
+ "step": 450
137
+ },
138
+ {
139
+ "epoch": 1.326,
140
+ "grad_norm": 0.7120214104652405,
141
+ "learning_rate": 1.04e-05,
142
+ "loss": 1.5677,
143
+ "step": 475
144
+ },
145
+ {
146
+ "epoch": 1.376,
147
+ "grad_norm": 0.7750625014305115,
148
+ "learning_rate": 4.0000000000000003e-07,
149
+ "loss": 1.5541,
150
+ "step": 500
151
+ }
152
+ ],
153
+ "logging_steps": 25,
154
+ "max_steps": 500,
155
+ "num_input_tokens_seen": 0,
156
+ "num_train_epochs": 9223372036854775807,
157
+ "save_steps": 500,
158
+ "stateful_callbacks": {
159
+ "TrainerControl": {
160
+ "args": {
161
+ "should_epoch_stop": false,
162
+ "should_evaluate": false,
163
+ "should_log": false,
164
+ "should_save": true,
165
+ "should_training_stop": true
166
+ },
167
+ "attributes": {}
168
+ }
169
+ },
170
+ "total_flos": 3637987666796544.0,
171
+ "train_batch_size": 1,
172
+ "trial_name": null,
173
+ "trial_params": null
174
+ }
pythia-1b-alpaca/checkpoint-500/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7525c05f6698596313734b1f844f76b64f2303c42990148f77a6b5b50c7c927
3
+ size 5777