RockyBai committed on
Commit 88eb26c · verified · 1 Parent(s): 0f07609

Upload 7 files

Files changed (3):
  1. .gitattributes +1 -0
  2. README.md +34 -45
  3. tokenizer.json +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33   *.zip filter=lfs diff=lfs merge=lfs -text
34   *.zst filter=lfs diff=lfs merge=lfs -text
35   *tfevents* filter=lfs diff=lfs merge=lfs -text
36 + tokenizer.json filter=lfs diff=lfs merge=lfs -text
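The added rule routes `tokenizer.json` through the Git LFS filter, so git stores only a small pointer file while the real blob lives on the LFS server. A minimal sketch of producing the same rule without an LFS install (`git lfs track "tokenizer.json"` appends the identical line; the `/tmp/lfs-demo` path is just for illustration):

```python
from pathlib import Path

# Reproduce the .gitattributes rule this commit adds.
# `git lfs track "tokenizer.json"` would write the same line via the CLI.
attrs = Path("/tmp/lfs-demo/.gitattributes")  # demo path, not a real repo
attrs.parent.mkdir(parents=True, exist_ok=True)

rule = "tokenizer.json filter=lfs diff=lfs merge=lfs -text\n"
existing = attrs.read_text() if attrs.exists() else ""
if rule not in existing:  # avoid duplicate entries on re-run
    attrs.write_text(existing + rule)

print(attrs.read_text())
```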
README.md CHANGED
@@ -1,55 +1,44 @@
 1 - ---
 2 - language:
 3 - - en
 4 - license: apache-2.0
 5 - tags:
 6 - - peft
 7 - - lora
 8 - - adapter
 9 - - emotion-classification
10 - base_model: unsloth/Meta-Llama-3.1-8B-Instruct
11 - library_name: peft
12 - ---
13
14 - # Mirari - LoRA Emotion Classification Adapter
15
16 - This is a **LoRA (Low-Rank Adaptation)** adapter model for emotion classification.
17 -
18 - ## Model Type
19 - - - **Type:** LoRA Adapter
20 - - - **Base Model:** Meta-Llama-3.1-8B-Instruct (or similar)
21 - - - **Task:** Emotion Classification
22 - - - **Framework:** PEFT (Parameter-Efficient Fine-Tuning)
23 -
24 - ## Usage
25 -
26 - This is an adapter model and requires a base model to function.
27
28   ```python
29 - from transformers import AutoModelForCausalLM, AutoTokenizer
30 - from peft import PeftModel
31
32 - # Load base model (adjust to your actual base model)
33 - base_model_name = "unsloth/Meta-Llama-3.1-8B-Instruct"
34 - base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
35 - tokenizer = AutoTokenizer.from_pretrained(base_model_name)
36
37 - # Load LoRA adapter
38 - model = PeftModel.from_pretrained(base_model, "RockyBai/Mirari")
39
40   # Use the model
41 - text = "I am so happy today!"
42 - inputs = tokenizer(text, return_tensors="pt")
43 - outputs = model.generate(**inputs, max_new_tokens=50)
44 - print(tokenizer.decode(outputs[0]))
45   ```
46
47 - ## Training Details
48 -
49 - - - Adapter size: ~320MB
50 - - - Training method: LoRA (Low-Rank Adaptation)
51 - - - Task: Emotion classification
52 -
53 - ## Notes
54 -
55 - This model requires the base model to be loaded first, then this adapter is applied on top of it.

 1 + # Fine-Tuned Emotion Classification Model
 2
 3 + ## Model Information
 4 + - **Base Model**: unsloth/Meta-Llama-3.1-8B-Instruct
 5 + - **Training Method**: LoRA (Low-Rank Adaptation)
 6 + - **LoRA Rank**: 32
 7 + - **Training Samples**: 56,400
 8 + - **Datasets Used**: GoEmotions, Emotion, TweetEval
 9
10 + ## How to Load This Model
11
12   ```python
13 + from unsloth import FastLanguageModel
14
15 + # Load the fine-tuned model
16 + model, tokenizer = FastLanguageModel.from_pretrained(
17 +     model_name="emotion_model_finetuned",
18 +     max_seq_length=2048,
19 +     dtype=None,
20 +     load_in_4bit=True,
21 + )
22
23 + # Enable inference mode
24 + FastLanguageModel.for_inference(model)
25
26   # Use the model
27 + prompt = """<|im_start|>system
28 + You are a compassionate mental health support assistant.<|im_end|>
29 + <|im_start|>user
30 + I'm feeling anxious about tomorrow.<|im_end|>
31 + <|im_start|>assistant
32 + """
33 +
34 + inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
35 + outputs = model.generate(**inputs, max_new_tokens=128)
36 + response = tokenizer.decode(outputs[0], skip_special_tokens=True)
37 + print(response)
38   ```
39
40 + ## Files Included
41 + - `adapter_config.json` - LoRA adapter configuration
42 + - `adapter_model.safetensors` - Fine-tuned weights
43 + - `tokenizer.json` - Tokenizer files
44 + - `training_config.json` - Training hyperparameters
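The prompt in the updated README follows a ChatML-style chat template (`<|im_start|>role … <|im_end|>`). A minimal sketch of assembling that string programmatically; the helper name is invented for illustration and does not appear in the repo:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt like the one inlined in the README."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a compassionate mental health support assistant.",
    "I'm feeling anxious about tomorrow.",
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the assistant turn open so generation continues from there.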
 
 
 
 
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
3 + size 17209920
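What git actually stores for `tokenizer.json` is this three-line LFS pointer (`key value` pairs); the real ~17 MB blob lives on the LFS server. A minimal sketch of parsing such a pointer:

```python
# The pointer text as committed above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6b9e4e7fb171f92fd137b777cc2714bf87d11576700a1dcd7a399e7bbe39537b
size 17209920
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].split(":", 1)[1],  # drop the "sha256:" prefix
        "size": int(fields["size"]),
    }

info = parse_lfs_pointer(POINTER)
print(info["size"])  # → 17209920
```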