pszemraj committed (verified) · Commit 9583df7 · 0 parents

Super-squash branch 'main' using huggingface_hub

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,101 @@
+ ---
+ license: other
+ base_model: microsoft/phi-1_5
+ tags:
+ - bees
+ - honey
+ - bzz
+ metrics:
+ - accuracy
+ datasets:
+ - BEE-spoke-data/bees-internal
+ language:
+ - en
+ pipeline_tag: text-generation
+ ---
+
+ # phi-1bee5 🐝
+
+ > Where Code Meets Beekeeping: An Unbeelievable Synergy!
+
+ <a href="https://colab.research.google.com/gist/pszemraj/7ea68b3b71ee4e6c0729d2318f3f4158/we-bee-testing.ipynb">
+ <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+ </a>
+
+ Have you ever found yourself in the depths of a debugging session and thought, "I wish I could be basking in the glory of a blooming beehive right now"? Or maybe you've been donning your beekeeping suit, puffing on your smoker, and longed for the sweet aroma of freshly written code?
+
+ Well, brace yourselves, hive-minded humans and syntax-loving sapiens, for `phi-1bee5`, a groundbreaking transformer model that's here to disrupt your apiary and your IDE!
+
+ ## Details
+
+ This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the `BEE-spoke-data/bees-internal` dataset.
+
+ It achieves the following results on the evaluation set:
+ - Loss: 2.6982
+ - Accuracy: 0.4597
+
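+ (For reference, assuming the reported loss is the usual mean cross-entropy in nats, this corresponds to an evaluation perplexity of roughly exp(2.6982) ≈ 14.9.)
+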
+ ## Usage
+
+ Load the model:
+
+ ```python
+ # !pip install -U -q transformers accelerate einops
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ checkpoint = "BEE-spoke-data/phi-1bee5"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(
+     checkpoint,
+     device_map="auto",          # dispatch weights automatically (needs accelerate)
+     torch_dtype=torch.float16,  # half-precision weights; use float32 on CPU
+     trust_remote_code=True,
+ )
+ ```
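+
+ Note: `trust_remote_code=True` is required because the base model ships its custom `MixFormerSequentialForCausalLM` implementation (see the `auto_map` entries in `config.json` below), and `device_map="auto"` depends on the `accelerate` package.
+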
+ Run inference:
+
+ ```python
+ prompt = "Today was an amazing day because"
+ inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to(
+     model.device
+ )
+
+ outputs = model.generate(
+     **inputs, do_sample=True, max_new_tokens=128, epsilon_cutoff=7e-4
+ )
+ result = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
+ print(result)
+ # output will probably contain a story/info about bees
+ ```
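+
+ Here, `epsilon_cutoff` enables epsilon sampling, which excludes tokens whose conditional probability falls below the given threshold at each step; drop the argument (or tune the value) if you prefer plain sampling.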
+
+
+ ### Intended Uses:
+
+ 1. **Educational Edification**: Are you a coding novice with a budding interest in beekeeping? Or perhaps a seasoned developer whose curiosity has been piqued by the buzzing in your backyard? phi-1bee5 aims to serve as a fun, informative bridge between these two worlds.
+ 2. **Casual Queries**: This model can generate code examples and beekeeping tips. It's perfect for those late-night coding sessions when you feel like taking a virtual stroll through an apiary.
+ 3. **Academic & Research Insights**: Interested in interdisciplinary studies that explore the intersection of technology and ecology? phi-1bee5 might offer some amusing, if not entirely accurate, insights.
+
+ ### Limitations:
+
+ 1. **Not a beekeeping expert**: For the love of all things hexagonal, please do not use phi-1bee5 to make serious beekeeping decisions. While our model is well-read in the beekeeping literature, it lacks the practical experience and nuanced understanding that professional beekeepers possess.
+ 2. **Licensing**: This model is derived from a base model under the Microsoft Research License. Any use must comply with the terms of that license.
+ 3. **Infallibility**: Like any machine learning model, phi-1bee5 can make mistakes. Always double-check the code and bee facts before using them in production or in your hive.
+ 4. **Ethical Constraints**: This model may not be used for illegal or unethical activities, including but not limited to terrorism, harassment, or spreading disinformation.
+
+ ## Training procedure
+
+ While the full dataset is not yet complete, and is therefore not yet released for "safety reasons", you can check out a preliminary sample at [bees-v0](https://huggingface.co/datasets/BEE-spoke-data/bees-v0).
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a code sketch of these settings follows the list):
+ - learning_rate: 0.0001
+ - train_batch_size: 1
+ - eval_batch_size: 2
+ - gradient_accumulation_steps: 32
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.995) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 2.0
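+
+ A minimal sketch of these settings using the 🤗 `TrainingArguments` API — an illustration reconstructed from the list above, not the original training script; `output_dir` is a placeholder:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Hypothetical reconstruction of the hyperparameters listed above.
+ args = TrainingArguments(
+     output_dir="phi-1bee5",          # placeholder
+     learning_rate=1e-4,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=2,
+     gradient_accumulation_steps=32,  # 1 x 32 -> total train batch size 32
+     adam_beta1=0.9,
+     adam_beta2=0.995,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.03,
+     num_train_epochs=2.0,
+ )
+ ```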
added_tokens.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "\t\t": 50294,
+   "\t\t\t": 50293,
+   "\t\t\t\t": 50292,
+   "\t\t\t\t\t": 50291,
+   "\t\t\t\t\t\t": 50290,
+   "\t\t\t\t\t\t\t": 50289,
+   "\t\t\t\t\t\t\t\t": 50288,
+   "\t\t\t\t\t\t\t\t\t": 50287,
+   "  ": 50286,
+   "   ": 50285,
+   "    ": 50284,
+   "     ": 50283,
+   "      ": 50282,
+   "       ": 50281,
+   "        ": 50280,
+   "         ": 50279,
+   "          ": 50278,
+   "           ": 50277,
+   "            ": 50276,
+   "             ": 50275,
+   "              ": 50274,
+   "               ": 50273,
+   "                ": 50272,
+   "                 ": 50271,
+   "                  ": 50270,
+   "                   ": 50269,
+   "                    ": 50268,
+   "                     ": 50267,
+   "                      ": 50266,
+   "                       ": 50265,
+   "                        ": 50264,
+   "                         ": 50263,
+   "                          ": 50262,
+   "                           ": 50261,
+   "                            ": 50260,
+   "                             ": 50259,
+   "                              ": 50258,
+   "                               ": 50257
+ }
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "_name_or_path": "microsoft/phi-1_5",
+   "activation_function": "gelu_new",
+   "architecture": {
+     "block_cls": "parallel",
+     "mixer": {},
+     "mlp": {
+       "mlp_cls": "mlp"
+     }
+   },
+   "architectures": [
+     "MixFormerSequentialForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "microsoft/phi-1_5--configuration_mixformer_sequential.MixFormerSequentialConfig",
+     "AutoModelForCausalLM": "microsoft/phi-1_5--modeling_mixformer_sequential.MixFormerSequentialForCausalLM"
+   },
+   "embd_layer": "default",
+   "embd_pdrop": 0.0,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "mixformer-sequential",
+   "n_embd": 2048,
+   "n_head": 32,
+   "n_inner": null,
+   "n_layer": 24,
+   "n_positions": 2048,
+   "phyagi_version": "0.0.4.dev",
+   "resid_pdrop": 0.0,
+   "rotary_dim": 32,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.34.0.dev0",
+   "vocab_size": 51200
+ }
generation_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "_from_model_config": true,
+   "transformers_version": "4.34.0.dev0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e81ba22664e35156c7c3725615ee62e2f4bcc407043c6465fbd71a69e205f462
+ size 5673107168
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 2048,
+   "tokenizer_class": "CodeGenTokenizer",
+   "unk_token": "<|endoftext|>"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4308d51607094a4894d0e96ac0c58394eca48bd93bee3b762a22c6f291bbc7d
+ size 4664
vocab.json ADDED
The diff for this file is too large to render. See raw diff