Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

phi-2-pii-bbi - bnb 4bits
- Model creator: https://huggingface.co/dcipheranalytics/
- Original model: https://huggingface.co/dcipheranalytics/phi-2-pii-bbi/

Original model description:
---
language:
- en
---

# Definition

[phi-2] for [P]ersonally [I]dentifiable [I]nformation with a [B]anking, [B]anking, and [I]nsurance dataset

# How to use the model

## Load the model and tokenizer
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

torch.set_default_device("cuda")

model_name = "dcipheranalytics/phi-2-pii-bbi"

# 4-bit NF4 quantization with bfloat16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    quantization_config=quantization_config,
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
```
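
As a rough sanity check on what the 4-bit load buys you, here is a back-of-the-envelope sketch. It assumes phi-2's roughly 2.7B parameter count; the real footprint is somewhat higher because of quantization constants, the KV cache, and activations.

```python
# Back-of-the-envelope weight-memory estimate for the NF4 quantized load.
# phi-2 has ~2.7B parameters; NF4 stores ~4 bits (0.5 bytes) per weight.
params = 2.7e9
bytes_per_weight = 0.5
approx_gib = params * bytes_per_weight / 1024**3
print(f"~{approx_gib:.2f} GiB of weight memory")  # vs ~5 GiB at bfloat16
```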

## Call the generate method
```python
def generate(msg: str, max_new_tokens=300, temperature=0.3):
    # Wrap the user message in the chat template the model was fine-tuned on
    chat_template = "<|im_start|>user\n{msg}<|im_end|><|im_start|>assistant\n"
    prompt = chat_template.format(msg=msg)

    with torch.no_grad():
        token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
        output_ids = model.generate(
            token_ids.to(model.device),
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=temperature,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, dropping the trailing EOS token
    output = tokenizer.decode(output_ids[0][token_ids.size(1):-1]).strip()
    return output

instruction_template = "List the personally identifiable information in the given text below.\nText:########\n{text}\n########"
text_with_pii = "My passport number is 123456789."
generate(instruction_template.format(text=text_with_pii))
```
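
The prompt format can be inspected without loading the model. This standalone sketch just restates the two templates from the snippet above and prints the final string the tokenizer receives:

```python
# The chat wrapper and PII instruction from the snippet above, shown
# standalone so the assembled prompt string can be inspected directly.
chat_template = "<|im_start|>user\n{msg}<|im_end|><|im_start|>assistant\n"
instruction_template = (
    "List the personally identifiable information in the given text below."
    "\nText:########\n{text}\n########"
)

def build_prompt(text: str) -> str:
    # Instruction first, then the chat wrapper around the whole message
    return chat_template.format(msg=instruction_template.format(text=text))

print(build_prompt("My passport number is 123456789."))
```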

## Batch predictions
```python
from transformers import TextGenerationPipeline

def get_prompt(text):
    # Same instruction and chat wrapping as the single-example helper above
    instruction_template = "List the personally identifiable information in the given text below.\nText:########\n{text}\n########"
    msg = instruction_template.format(text=text)
    chat_template = "<|im_start|>user\n{msg}<|im_end|><|im_start|>assistant\n"
    prompt = chat_template.format(msg=msg)
    return prompt

generator = TextGenerationPipeline(
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.3,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

texts = [
    "My passport number is 123456789.",
    "My name is John Smith.",
]
prompts = list(map(get_prompt, texts))
outputs = generator(prompts, return_full_text=False, batch_size=2)
```

# Train Data

GPT-4-generated customer service conversations:
1. 100 unique banking topics, 8 examples each,
2. 100 new banking topics, 4 examples each,
3. 100 insurance topics, 4 examples each.
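
Taken together, the three batches above work out to:

```python
# Total training examples implied by the three batches above.
batch_sizes = [100 * 8, 100 * 4, 100 * 4]  # banking, new banking, insurance
total = sum(batch_sizes)
print(total)  # 1600 conversations
```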

# Evaluation Results

## Average
```
precision    0.836223
recall       0.781132
f1           0.801837
```
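
As a consistency check (a sketch, assuming the table reports averages over topics): the harmonic mean of the averaged precision and recall comes out near 0.808, slightly above the reported f1 of ~0.802, which is consistent with f1 having been computed per topic and then averaged rather than recomputed from the averaged precision and recall.

```python
# Harmonic mean of the reported average precision and recall.
precision, recall = 0.836223, 0.781132
f1_from_averages = 2 * precision * recall / (precision + recall)
print(round(f1_from_averages, 4))  # 0.8077, vs the reported 0.8018
```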

## Per topic
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ea400bb1d9c4ef71ebb962/wUfwR-dmmyxF4pCYoebCX.png)

## On the TAB test split
```
precision    0.506118
recall       0.350976
f1           0.391614
```