GiulioZizzo committed (verified)
Commit d0d1ffb · 1 Parent(s): a89d0ee

Add granite-3.3-8b-instruct-lora-pii-detector (#7)


- Add granite-3.3-8b-instruct-lora-pii-detector (88708448e4ddc6bcf54d6007de9b30eb9e42f697)

granite-3.3-8b-instruct-lora-pii-detector/README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+
+ # Granite 3.3 8B Instruct - PII Detector LoRA
+
+ Welcome to Granite Experiments!
+
+ Think of Experiments as a preview of what's to come. These projects are still under development, but we wanted to let the open-source community take them for a spin! Use them, break them, and help us build what's next for Granite - we'll keep an eye out for feedback and questions. Happy exploring!
+
+ Just a heads-up: Experiments are forever evolving, so we can't commit to ongoing support or guarantee performance.
+
+ ## Model Summary
+
+ This is a LoRA adapter for [ibm-granite/granite-3.3-8b-instruct](https://huggingface.co/ibm-granite/granite-3.3-8b-instruct),
+ adding the capability to detect Personally Identifiable Information (PII) in model outputs.
+
+ - **Developer:** IBM Research
+ - **Model type:** LoRA adapter for [ibm-granite/granite-3.3-8b-instruct](https://huggingface.co/ibm-granite/granite-3.3-8b-instruct)
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+ ## Usage
+
+ ### Intended use
+
+ This is an experimental LoRA adapter designed to detect PII in model outputs.
+ Models with access to personal information, for example via RAG, may present additional data protection risks; these can be mitigated by using this LoRA to check model outputs.
+
+ **PII Detection**: The model identifies PII when the special role `<|start_of_role|>privacy<|end_of_role|>` is included in the prompt. Without this role, the model behaves like the base model.
+
+ ### Quickstart Example
+
+ The following code shows how to use the LoRA adapter to detect PII in a model's output.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ from peft import PeftModel
+
+ BASE_NAME = "ibm-granite/granite-3.3-8b-instruct"
+ LORA_NAME = "intrinsics/granite-3.3-8b-instruct-lora-pii-detector"  # Local path to the downloaded LoRA; assumes the directory layout from the library's top-level README.md example.
+ PRIVACY_PROMPT = "<|start_of_role|>privacy<|end_of_role|>"
+
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+
+ # Load the base model and attach the LoRA adapter
+ tokenizer = AutoTokenizer.from_pretrained(BASE_NAME, padding_side='right', trust_remote_code=True)
+ model_base = AutoModelForCausalLM.from_pretrained(BASE_NAME, device_map="auto")
+ pii_detector_model = PeftModel.from_pretrained(model_base, LORA_NAME)
+
+ # Detect PII: wrap the text to be checked in a user turn and append the privacy role
+ model_output = "Taylor Swift lives in New York City and earned over $1bn from her last tour."
+ chat = [{"role": "user", "content": model_output}]
+ chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
+ chat = chat + PRIVACY_PROMPT
+
+ inputs = tokenizer(chat, return_tensors="pt")
+ output = pii_detector_model.generate(inputs["input_ids"].to(device), attention_mask=inputs["attention_mask"].to(device), max_new_tokens=1)
+ output_text = tokenizer.decode(output[0][-1])
+ print(f"PII detected: {output_text}")
+
+ # Y - yes, PII detected.
+ # N - no, PII likely not present.
+ ```
+
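+ In a deployment, the detection step above can be wrapped in a small helper that screens generated text before it is returned to the user. The sketch below is illustrative only: it reuses the `tokenizer`, `pii_detector_model`, `device`, and `PRIVACY_PROMPT` objects from the quickstart, and the `contains_pii` helper is a hypothetical convenience function, not part of the released adapter.
+
+ ```python
+ def contains_pii(text: str) -> bool:
+     # Hypothetical helper wrapping the quickstart detection step for a single string.
+     chat = tokenizer.apply_chat_template(
+         [{"role": "user", "content": text}], tokenize=False, add_generation_prompt=False
+     )
+     inputs = tokenizer(chat + PRIVACY_PROMPT, return_tensors="pt")
+     output = pii_detector_model.generate(
+         inputs["input_ids"].to(device),
+         attention_mask=inputs["attention_mask"].to(device),
+         max_new_tokens=1,
+     )
+     # The adapter answers with a single token: "Y" (PII detected) or "N" (no PII).
+     return tokenizer.decode(output[0][-1]).strip() == "Y"
+
+ # Example: screen a RAG answer before returning it.
+ answer = "Contact John Doe at john.doe@example.com for details."
+ if contains_pii(answer):
+     answer = "[Withheld: the generated answer contained personal information.]"
+ print(answer)
+ ```
+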
+ ## Training Details
+
+ The model was fine-tuned on a combination of open-source datasets containing both benign samples and samples with PII.
+ The datasets were processed for PII using [IBM's READI library](https://github.ibm.com/security-foundation-models/READI), and filtered to remove examples containing non-ASCII text or code.
+
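+ For illustration, the filtering step might look like the sketch below. The actual READI processing and filtering criteria are not included in this repository, so the `keep_example` heuristic is an assumption, not the pipeline that was used.
+
+ ```python
+ def keep_example(text: str) -> bool:
+     # Assumed heuristic (not the actual training pipeline):
+     # drop samples containing non-ASCII characters or code-like markup.
+     if not text.isascii():
+         return False
+     code_markers = ("def ", "#include", "import ", "</", "{")
+     return not any(marker in text for marker in code_markers)
+
+ samples = [
+     "Meet me at 10am tomorrow.",
+     "Réunion demain à 10h.",    # non-ASCII, dropped
+     "import os; os.getcwd()",   # code, dropped
+ ]
+ print([s for s in samples if keep_example(s)])  # ['Meet me at 10am tomorrow.']
+ ```
+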
+ ## Evaluation
+
+ The PII Detector LoRA was evaluated on held-out examples drawn from the same distribution as the training data but not seen during training.
+
+ | Model | Accuracy | TPR | FPR | FNR |
+ |----------------------------------|----------|-------|-------|-------|
+ | Granite 3.3 8B LoRA PII detector | 0.970 | 0.976 | 0.024 | 0.036 |
+
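+ For reference, the reported metrics can be computed from per-example ground-truth labels and Y/N predictions as in the sketch below; the function and variable names are illustrative and not taken from the evaluation code.
+
+ ```python
+ def classification_metrics(y_true, y_pred):
+     # y_true / y_pred are lists of booleans: True = PII present / PII predicted.
+     tp = sum(t and p for t, p in zip(y_true, y_pred))
+     tn = sum(not t and not p for t, p in zip(y_true, y_pred))
+     fp = sum(not t and p for t, p in zip(y_true, y_pred))
+     fn = sum(t and not p for t, p in zip(y_true, y_pred))
+     return {
+         "Accuracy": (tp + tn) / len(y_true),
+         "TPR": tp / (tp + fn),  # true positive rate: PII correctly flagged
+         "FPR": fp / (fp + tn),  # false positive rate: benign text flagged as PII
+         "FNR": fn / (tp + fn),  # false negative rate: PII missed
+     }
+ ```
+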
+ ## Contact
+
+ Naoise Holohan, Giulio Zizzo
granite-3.3-8b-instruct-lora-pii-detector/adapter_config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "ibm-granite/granite-3.3-8b-instruct",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "q_proj",
+     "k_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_rslora": false
+ }
granite-3.3-8b-instruct-lora-pii-detector/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93434fdda62784f44e2f6516ec9371883d56c4a3af8d411ab87d50a3042cc7a5
+ size 94404160