Commit 0d6e557 (verified; parent 7b4e799) by RichardErkhov: uploaded readme (README.md, +73 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

phi-3-tiny-random - bnb 8bits
- Model creator: https://huggingface.co/yujiepan/
- Original model: https://huggingface.co/yujiepan/phi-3-tiny-random/

Original model description:
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
---

This model is randomly initialized, using the config from [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) but with a much smaller size.
Note that the model weights are stored in float16.

Code used to create the model:
```python
import os

import torch
import transformers
from huggingface_hub import create_repo, upload_folder

source_model_id = 'microsoft/Phi-3-mini-128k-instruct'
save_path = '/tmp/yujiepan/phi-3-tiny-random'
repo_id = 'yujiepan/phi-3-tiny-random'

# Shrink the Phi-3-mini config down to a tiny architecture.
config = transformers.AutoConfig.from_pretrained(
    source_model_id, trust_remote_code=True)
config.hidden_size = 16
config.intermediate_size = 32
config.num_attention_heads = 4
config.num_hidden_layers = 2
config.num_key_value_heads = 4
# The rope scaling factor lists must match the new head dimension
# (head_dim / 2 entries each).
config.rope_scaling['long_factor'] = [1.0299, 1.0499]
config.rope_scaling['short_factor'] = [1.05, 1.05]

# Randomly initialize a model from the shrunken config and save it in float16.
model = transformers.AutoModelForCausalLM.from_config(
    config, trust_remote_code=True)
model = model.to(torch.float16)
model.save_pretrained(save_path)

tokenizer = transformers.AutoTokenizer.from_pretrained(
    source_model_id, trust_remote_code=True)
tokenizer.save_pretrained(save_path)

# Smoke-test generation before uploading.
result = transformers.pipelines.pipeline(
    'text-generation',
    model=model.float(), tokenizer=tokenizer)('Hello')
print(result)

os.system(f'ls -alh {save_path}')
create_repo(repo_id, exist_ok=True)
upload_folder(repo_id=repo_id, folder_path=save_path)

from transformers import AutoProcessor
AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True).push_to_hub(repo_id)
```
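To see why this checkpoint is so small, the parameter count can be estimated from the shrunken config. A back-of-the-envelope sketch, assuming a Phi-3-style layout (fused `qkv_proj` and `gate_up_proj`, two RMSNorms per layer, untied `lm_head`) and the Phi-3-mini vocabulary size of roughly 32064:

```python
# Rough parameter-count estimate for the tiny config above.
# Assumptions: vocab ~32064 (Phi-3-mini tokenizer), fused qkv_proj and
# gate_up_proj as in the Phi-3 implementation, untied lm_head, no biases.
hidden = 16
intermediate = 32
layers = 2
heads = 4
kv_heads = 4
vocab = 32064

head_dim = hidden // heads
qkv = hidden * (heads + 2 * kv_heads) * head_dim  # fused qkv_proj
o_proj = hidden * hidden
gate_up = hidden * 2 * intermediate               # fused gate_up_proj
down = intermediate * hidden
norms = 2 * hidden                                # two RMSNorm weights per layer
per_layer = qkv + o_proj + gate_up + down + norms

embed = vocab * hidden
lm_head = vocab * hidden
total = embed + layers * per_layer + hidden + lm_head
print(total)  # roughly 1M parameters, i.e. about 2 MB in float16
```

Almost all of the ~1M parameters sit in the embedding and output head; the transformer layers themselves contribute only a few thousand each.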