Jayeshbankoti committed
Commit 919e2c4 (verified)
1 Parent(s): 3271f23

Upload folder using huggingface_hub
README.md CHANGED

@@ -10,7 +10,7 @@ tags:
 library_name: peft
 ---
 
-# Harry Potter Multi-Persona QA Model (LoRA Adapter)
+# Harry Potter Multi-Persona QA Model V2 with 60K QA SFT (LoRA Adapter)
 
 This is a LoRA adapter for **google/gemma-2b** fine-tuned for Harry Potter universe question-answering with multiple character personas.
 
@@ -18,7 +18,7 @@ This is a LoRA adapter for **google/gemma-2b** fine-tuned for Harry Potter unive
 - **Hermione Granger**: Analytical, articulate, and deeply knowledgeable
 - **Harry Potter**: Courageous, emotional, and direct
 - **Severus Snape**: Cold, precise, and cutting
-- **Voldemort**: Grandiose, manipulative, and authoritative
+- **Voldemort**: Grandiose, manipulative, and authoritative
 - **David Goggins**: Raw, gritty, and brutally honest
 - **Donald Trump**: Boastful, blunt, and repetitive
 - **General**: Expert on Harry Potter universe
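Since this repository ships only a PEFT adapter, it has to be attached to the **google/gemma-2b** base model at load time. A minimal loading sketch with the standard `transformers` + `peft` APIs — the `adapter_repo` id below is a hypothetical placeholder, not taken from this commit:

```python
def load_persona_model(adapter_repo="Jayeshbankoti/harry-potter-persona-lora"):
    """Attach the LoRA adapter to the google/gemma-2b base model.

    NOTE: `adapter_repo` is a hypothetical placeholder -- substitute the
    actual Hub repo id this commit belongs to.
    """
    # Imports are local so the sketch can be read (and the function
    # defined) even without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
    base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
    # PeftModel wraps the frozen base model and injects the low-rank
    # weights into the layers listed in adapter_config.json's
    # target_modules.
    model = PeftModel.from_pretrained(base, adapter_repo)
    return tokenizer, model
```

Downloading `google/gemma-2b` requires accepting its license and authenticating to the Hub, so the call is left to the caller rather than run at import time.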
adapter_config.json CHANGED

@@ -24,13 +24,13 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
+        "k_proj",
         "gate_proj",
+        "down_proj",
         "up_proj",
         "o_proj",
-        "down_proj",
-        "v_proj",
         "q_proj",
-        "k_proj"
+        "v_proj"
     ],
     "task_type": "CAUSAL_LM",
     "trainable_token_indices": null,
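The `target_modules` list above names the attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) and MLP projections (`gate_proj`, `up_proj`, `down_proj`) that receive LoRA updates; this diff only reorders the list, the same seven modules are targeted before and after. A minimal NumPy sketch of what one such update does, with illustrative dimensions (not gemma-2b's): LoRA replaces `W @ x` with `W @ x + (alpha / r) * B @ A @ x`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not gemma-2b's real dimensions.
d_out, d_in, r, alpha = 64, 32, 8, 16

W = rng.normal(size=(d_out, d_in))  # frozen base weight (e.g. q_proj)
A = rng.normal(size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection, zero-init

def lora_forward(x):
    # Base projection plus the low-rank correction, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialised, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` are trained, so each adapted module adds `r * (d_in + d_out)` parameters instead of `d_in * d_out`, which is why the whole adapter checkpoint below is only ~314 MB.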
adapter_model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cbc6cede5d63a38c21ffbfa35265e45bc9bd0ab6e0e2b30379d1edda1e75c026
+oid sha256:f840361f07eb61471aa14253628f1bad50292ecb5bea12d934dd4a16cb52cc3d
 size 313820248
training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c205986a2dbec95f525e7bd1344013285dcc42e5059ebacfbcc86ca01fc6dd15
+oid sha256:5473c9f5f84e8b6754dc97843a027f6ae0857633c87ba116d7d9610bf934e1df
 size 5752